Learning Objectives
1. Describe mirror and canonical neurons, their locations in the brain, and their possible functions
2. Describe theory of mind (ToM) and its role in social cognition
3. Define the social brain
Overview
In the early 1990s, researchers working with monkeys identified a group of neurons in the frontal cortex that were activated when the monkeys made a particular movement with their hand or mouth. There was nothing surprising in that, since these neurons were located in what is considered a “motor” portion of the brain. What was surprising was that this same group of neurons was also activated in monkeys that were not performing the movement themselves but were watching other monkeys perform it. Hence these neurons were given the name “mirror neurons.” Mirror neurons are one possible neural basis for what may be called social intelligence or social cognition: the information processing necessary for successful interactions with conspecifics. The new field of social neuroscience studies the neural structures and processes that carry out the information processing required for adaptive interactions with others, the basis for all human social behavior. One of these capacities, labeled “theory of mind,” is the ability to perceive and understand the mental states of others, an ability essential to human social life. Autism, a pervasive developmental disorder, may result in part from impaired theory of mind and from abnormalities in a number of brain structures collectively referred to by some neuroscientists as the "social brain."
Are Mirror Neurons the Basis for Communication?
Mirror neurons were discovered in area F5 of the ventral premotor cortex of macaque monkeys by researchers at the University of Parma, Italy, in 1992. The researchers found that these neurons had some very distinctive characteristics: they fired not only when a monkey performed a voluntary gesture (for example, turned a handle to open a door), but also when a monkey watched another monkey perform this same action. The part of the brain where these mirror neurons are found in monkeys corresponds to the part of the human brain known as Broca’s area, which since the 19th century has been known to play an important role in language. In addition to their being located in a brain area associated with language in humans, two other things about mirror neurons have led many researchers to suggest that they may play a role in the evolution and learning of language: these neurons tell us about the intentions of the people around us, and they help us to imitate the movements of other people’s lips and tongues.
Another category of neurons present in area F5, canonical neurons, may, like mirror neurons, be involved in human language faculties. The special characteristic of canonical neurons is that they fire when an individual simply sees a graspable object. For example, if a monkey looks at a ball, the canonical neurons that fire are the same ones that will fire if the monkey decides to actually grasp the ball. In contrast, the monkey’s mirror neurons will not be activated by the mere sight of a ball, but only if the monkey either grasps the ball or sees another monkey do so.
In essence, mirror neurons react to visual stimuli that represent an interaction between a biological means of action (such as the hand or mouth) and an object. These neurons thus act as agents for recognizing purposive actions as opposed to simple movements.
Even in the case of canonical neurons, which can be activated by the sight of a graspable object in the absence of movement, the internal representation is that of a purposive action, and not just a simple movement of the hand or arm.
This is what has led some researchers to think that mirror neurons might help to explain the cognitive foundations of language, by providing the neural substrate for the human ability to understand the meaning of other people’s actions, which is the basis for all social relations. This system of correspondences between perceptions and actions would help us to infer other people’s mental states and to interpret their actions as intentional behaviors arising from these states. We can then easily imagine how this mechanism for interpreting gestural communication might have been applied to verbal communication as well.
The hypothesis advanced is that the motor system, through its mirror neurons, is involved in perceiving speech, and that through evolution, the “motor resonance” generated by the mirror neurons has been diverted (or exapted) from its original function to serve the needs of language. One has to be impressed by the economy of such a cognitive system, in which one individual understands what other individuals are doing (or saying) on the basis of the internal neural representation of his or her own motor capabilities.
The point is that intentional communication between two individuals differs from the simple cries of alarm by which animals signal danger to all members of their group indiscriminately. Intentional communication, in contrast, requires one individual who is transmitting information and a second who is paying attention to receive it. Among all the possible origins of language, the first form of intentional communication among humans may have arisen from the imitation of gestures and facial expressions. Thus mirror neurons may have played a role in sharing these common representations and, eventually, a common language.
Mirror neurons have been shown to have some other interesting characteristics as well. First of all, as noted above, they will be activated when you see someone else’s hand grasp an object, but not when you see a tool grasp the same object. The explanation is that unlike tools and other human artifacts, human body parts are represented in the motor and premotor areas of the brain’s frontal lobes. Second—and here is where the implications for language become really interesting—mirror neurons do not react to just any movements of the hand or mouth, but only to movements that are involved in goal-directed actions.
In other words, it is only when an action has a meaning that it activates the mirror neurons. Their response is thus associated with the expression of intentionality—that is, with the purpose of the observed gesture. For example, certain mirror neurons that are activated when a monkey manipulates an object will remain quiet when the monkey uses the same muscles, connected to the same neurons, to perform a similar action for a different purpose, such as to scratch itself or to pick an insect out of its fur.
It is becoming more and more apparent that the brain’s motor system does not only control movements but can also in a certain sense read the actions performed by other individuals. Mirror neurons may thus play a fundamental role in all human social behavior, including language.
Summary
Mirror neurons, discovered in area F5 of the frontal cortex of the macaque monkey, an area that corresponds to Broca's area in the left frontal lobe of humans, fire action potentials when a monkey performs a voluntary movement (for example, turning a handle to open a door), but also when a monkey watches another monkey perform the same action. Canonical neurons, another class of neurons in area F5, fire at the mere sight of a graspable object: if a monkey looks at a ball, the canonical neurons that fire are the same ones that will fire if the monkey decides to actually grasp the ball. By contrast, mirror neurons are not activated by the mere sight of a ball, but only when the monkey grasps the ball or sees another monkey do so. Mirror neurons do not react to just any movements of the hand or mouth, but only to movements that are involved in goal-directed actions; they thus act as agents for recognizing purposive actions as opposed to simple movements. The hypothesis is that the motor system, through its mirror neurons, is involved in perceiving speech and the intentions of others, and may thus play a fundamental role in all human social behavior.
Attributions
Are Mirror Neurons the Basis for Communication? by Bruno Dubuc, The Brain from Top to Bottom under a Copyleft license.
The Social Brain and Social Neuroscience
The brain mechanisms of social cognition and social intelligence help us understand other people and accomplish adaptive interactions with them. Deficits in these abilities due to damage in the "social brain" can lead to disorders such as autism.
Social Neuroscience
By Tiffany A. Ito and Jennifer T. Kubota
University of Colorado Boulder, University of Delaware
This module provides an overview of the new field of social neuroscience, which applies neuroscience methods and theories to understand how other people influence our thoughts, feelings, and behavior. The module reviews research measuring neural and hormonal responses to understand how we make judgments about other people and react to stress. Through these examples, it illustrates how social neuroscience addresses three different questions: (1) how our understanding of social behavior can be expanded when we consider neural and physiological responses, (2) what the actual biological systems are that implement social behavior (e.g., what specific brain areas are associated with specific social tasks), and (3) how biological systems are impacted by social processes.
Learning Objectives
• Define social neuroscience and describe its three major goals.
• Describe how measures of brain activity such as EEG and fMRI are used to make inferences about social processes.
• Discuss how social categorization occurs.
• Describe how simulation may be used to make inferences about others.
• Discuss the ways in which other people can cause stress and also protect us against stress.
Psychology has a long tradition of trying to better understand how we think and act. For example, in 1939 Heinrich Klüver and Paul Bucy removed (i.e., lesioned) the temporal lobes in some rhesus monkeys and observed the effect on behavior. Included in these lesions was a subcortical area of the brain called the amygdala. After surgery, the monkeys experienced profound behavioral changes, including loss of fear. These results provided initial evidence that the amygdala plays a role in emotional responses, a finding that has since been confirmed by subsequent studies (Phelps & LeDoux, 2005; Whalen & Phelps, 2009).
What Is Social Neuroscience?
Social neuroscience seeks to understand how we think about and act toward other people. More specifically, we can think of social neuroscience as an interdisciplinary field that uses a range of neuroscience measures to understand social behavior. As such, social neuroscience studies the same topics as social psychology, but does so from a multilevel perspective that includes the study of the brain and body. Figure 1 shows the scope of social neuroscience with respect to the older fields of social psychology and neuroscience. Although the field is relatively new – the term first appeared in 1992 (Cacioppo & Berntson, 1992) – it has grown rapidly, thanks to technological advances in brain science, and to the recognition that neural and physiological information are critical to understanding how we interact with other people.
Social neuroscience can be thought of as both a methodological approach (using measures of the brain and body to study social processes) and a theoretical orientation (seeing the benefits of integrating neuroscience into the study of social psychology). The overall approach in social neuroscience is to understand the psychological processes that underlie our social behavior. Because those psychological processes are intrapsychic phenomena that cannot be directly observed, social neuroscientists rely on a combination of measurable or observable neural and physiological responses as well as actual overt behavior to make inferences about psychological states (see Figure 1). Using this approach, social neuroscientists have been able to pursue three different types of questions: (1) What more can we learn about social behavior when we consider neural and physiological responses? (2) What are the actual biological systems that implement social behavior (e.g., what specific brain areas are associated with specific social tasks)? and (3) How are biological systems impacted by social processes?
How Automatically Do We Judge Other People?
Social categorization is the act of mentally classifying someone as belonging in a group. Why do we do this? It is an effective mental shortcut. Rather than effortfully thinking about every detail of every person we encounter, social categorization allows us to rely on information we already know about the person’s group. For example, by classifying your restaurant server as a man, you can quickly activate all the information you have stored about men and use it to guide your behavior. But this shortcut comes with potentially high costs. The stored group beliefs might not be very accurate, and even when they do accurately describe some group members, they are unlikely to be true for every member you encounter. In addition, many beliefs we associate with groups – called stereotypes – are negative. This means that relying on social categorization can often lead people to make negative assumptions about others.
The potential costs of social categorization make it important to understand how social categorization occurs. Is it rare or does it occur often? Is it something we can easily stop, or is it hard to override? One difficulty answering these questions is that people are not always consciously aware of what they are doing. In this case, we might not always realize when we are categorizing someone. Another concern is that even when people are aware of their behavior, they can be reluctant to accurately report it to an experimenter. In the case of social categorization, subjects might worry they will look bad if they accurately report classifying someone into a group associated with negative stereotypes. For instance, many racial groups are associated with some negative stereotypes, and subjects may worry that admitting to classifying someone into one of those groups means they believe and use those negative stereotypes.
Social neuroscience has been useful for studying how social categorization occurs without having to rely on self-report measures, instead measuring brain activity differences that occur when people encounter members of different social groups. Much of this work has been conducted using the electroencephalogram, or EEG. EEG is a measure of electrical activity generated by the brain’s neurons. Comparing this electrical activity at a given point in time against what a person is thinking and doing at that same time allows us to make inferences about brain activity associated with specific psychological states. One particularly nice feature of EEG is that it provides very precise timing information about when brain activity occurs. EEG is measured non-invasively with small electrodes that rest on the surface of the scalp. This is often done with a stretchy elastic cap, like the one shown in Figure 2, into which the small electrodes are sewn. Researchers simply pull the cap onto the subject’s head to get the electrodes into place; wearing it is similar to wearing a swim cap. The subject can then be asked to think about different topics or engage in different tasks as brain activity is measured.
To study social categorization, subjects have been shown pictures of people who belong to different social groups. Brain activity recorded from many individual trials (e.g., looking at lots of different Black individuals) is then averaged together to get an overall idea of how the brain responds when viewing individuals who belong to a particular social group. These studies suggest that social categorization is an automatic process – something that happens with little conscious awareness or control – especially for dimensions like gender, race, and age (Ito & Urland, 2003; Mouchetant-Rostaing & Giard, 2003). The studies specifically show that brain activity differs when subjects view members of different social groups (e.g., men versus women, Blacks versus Whites), suggesting that the group differences are being encoded and processed by the perceiver. One interesting finding is that these brain changes occur both when subjects are purposely asked to categorize the people into social groups (e.g., to judge whether the person is Black or White), and also when they are asked to do something that draws attention away from group classifications (e.g., making a personality judgment about the person) (Ito & Urland, 2005). This tells us that we do not have to intend to make group classifications in order for them to happen. It is also very interesting to consider how quickly the changes in brain responses occur. Brain activity is altered by viewing members of different groups within 200 milliseconds of seeing a person’s face. That is just two-tenths of a second. Such a fast response lends further support to the idea that social categorization occurs automatically and may not depend on conscious intention.
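To make the trial-averaging logic described above concrete, here is a minimal sketch in Python of how EEG epochs recorded while viewing faces from two social groups might be averaged into event-related potentials and compared in an early time window around 200 milliseconds. The numbers, channel layout, and 500 Hz sampling rate are simulated assumptions for illustration; none of it comes from the studies cited.

```python
import numpy as np

# Simulated data for illustration only: each entry is an
# (n_trials, n_timepoints) array of voltages (microvolts) from one
# electrode, time-locked to the onset of a face from one social group.
sampling_rate_hz = 500
rng = np.random.default_rng(0)
epochs = {
    "group_A_faces": rng.normal(0, 5, size=(80, 400)),
    "group_B_faces": rng.normal(0, 5, size=(80, 400)),
}

def event_related_potential(trials: np.ndarray) -> np.ndarray:
    """Average over many trials so random noise cancels and the
    stimulus-locked response (the ERP) remains."""
    return trials.mean(axis=0)

erps = {label: event_related_potential(data) for label, data in epochs.items()}

# Compare mean amplitude in an early window (150-250 ms after face onset),
# roughly the latency at which group-related differences have been reported.
window = slice(int(0.15 * sampling_rate_hz), int(0.25 * sampling_rate_hz))
for label, erp in erps.items():
    print(label, "mean amplitude 150-250 ms:", round(float(erp[window].mean()), 2), "µV")
```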
Overall, this research suggests that we engage in social categorization very frequently. In fact, it appears to happen automatically (i.e., without us consciously intending for it to happen) in most situations for dimensions like gender, age, and race. Since classifying someone into a group is the first step to activating a group stereotype, this research provides important information about how easily stereotypes can be activated. And because it is hard for people to accurately report on things that happen so quickly, this issue has been difficult to study using more traditional self-report measures. Using EEGs has, therefore, been helpful in providing interesting new insights into social behavior.
Do We Use Our Own Behavior to Help Us Understand Others?
Classifying someone into a social group then activating the associated stereotype is one way to make inferences about others. However, it is not the only method. Another strategy is to imagine what our own thoughts, feelings, and behaviors would be in a similar situation. Then we can use our simulated reaction as a best guess about how someone else will respond (Goldman, 2005). After all, we are experts in our own feelings, thoughts, and tendencies. It might be hard to know what other people are feeling and thinking, but we can always ask ourselves how we would feel and act if we were in their shoes.
There has been some debate about whether simulation is used to get into the minds of others (Carruthers & Smith, 1996; Gallese & Goldman, 1998). Social neuroscience research has addressed this question by looking at the brain areas used when people think about themselves and others. If the same brain areas are active for the two types of judgments, it lends support to the idea that the self may be used to make inferences about others via simulation.
We know that an area in the prefrontal cortex called the medial prefrontal cortex (mPFC) – located in the middle of the frontal lobe – is active when people think about themselves (Kelley, Macrae, Wyland, Caglar, Inati, & Heatherton, 2002). This conclusion comes from studies using functional magnetic resonance imaging, or fMRI. While EEG measures the brain’s electrical activity, fMRI measures changes in the oxygenation of blood flowing in the brain. When neurons become more active, blood flow to the area increases to bring more oxygen and glucose to the active cells. fMRI allows us to image these changes in oxygenation by placing people in an fMRI machine or scanner (Figure 3), which consists of large magnets that create strong magnetic fields. The magnets affect the alignment of the oxygen molecules within the blood (i.e., how they are tilted). As the oxygen molecules move in and out of alignment with the magnetic fields, their nuclei produce energy that can be detected with special sensors placed close to the head. Recording fMRI involves having the subject lie on a small bed that is then rolled into the scanner. While fMRI does require subjects to lie still within the small scanner and the large magnets involved are noisy, the scanning itself is safe and painless. As with EEG, the subject can then be asked to think about different topics or engage in different tasks as brain activity is measured. If we know what a person is thinking or doing when fMRI detects a blood flow increase to a particular brain area, we can infer that part of the brain is involved with the thought or action. fMRI is particularly useful for identifying which particular brain areas are active at a given point in time.
The conclusion that the mPFC is associated with the self comes from studies measuring fMRI while subjects think about themselves (e.g., saying whether traits are descriptive of themselves). Using this knowledge, other researchers have looked at whether the same brain area is active when people make inferences about others. Mitchell, Macrae, and Banaji (2005) showed subjects pictures of strangers and had them judge either how pleased the person was to have his or her picture taken or how symmetrical the face appeared. Judging whether someone is pleased about being photographed requires making an inference about someone’s internal feelings – we call this mentalizing. By contrast, facial symmetry judgments are based solely on physical appearances and do not involve mentalizing. A comparison of brain activity during the two types of judgments shows more activity in the mPFC when making the mental versus physical judgments, suggesting this brain area is involved when inferring the internal beliefs of others.
There are two other notable aspects of this study. First, mentalizing about others also increased activity in a variety of regions important for many aspects of social processing, including a region important in representing biological motion (superior temporal sulcus or STS), an area critical for emotional processing (amygdala), and a region also involved in thinking about the beliefs of others (temporal parietal junction, TPJ) (Gobbini & Haxby, 2007; Schultz, Imamizu, Kawato, & Frith, 2004) (Figure 4). This finding shows that a distributed and interacting set of brain areas is likely to be involved in social processing. Second, activity in the most ventral part of the mPFC (the part closer to the belly rather than toward the top of the head), which has been most consistently associated with thinking about the self, was particularly active when subjects mentalized about people they rated as similar to themselves. Simulation is thought to be most likely for similar others, so this finding lends support to the conclusion that we use simulation to mentalize about others. After all, if you encounter someone who has the same musical taste as you, you will probably assume you have other things in common with him. By contrast, if you learn that someone loves music that you hate, you might expect him to differ from you in other ways (Srivastava, Guglielmo, & Beer, 2010). Using a simulation of our own feelings and thoughts will be most accurate if we have reason to think the person’s internal experiences are like our own. Thus, we may be most likely to use simulation to make inferences about others if we think they are similar to us.
This research is a good example of how social neuroscience is revealing the functional neuroanatomy of social behavior. That is, it tells us which brain areas are involved with social behavior. The mPFC (as well as other areas such as the STS, amygdala, and TPJ) is involved in making judgments about the self and others. This research also provides new information about how inferences are made about others. Whereas some have doubted the widespread use of simulation as a means for making inferences about others, the activation of the mPFC when mentalizing about others, and the sensitivity of this activation to similarity between self and other, provides evidence that simulation occurs.
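The condition contrast at the heart of this kind of fMRI study (mentalizing versus physical judgments) can be illustrated with a toy analysis. The sketch below assumes we already have per-trial activity estimates from an mPFC region of interest; the values are simulated, and the simple Welch t statistic is only one of many ways such a contrast could be tested.

```python
import numpy as np

# Simulated per-trial BOLD estimates (arbitrary units) from an mPFC
# region of interest, one value per trial, for the two judgment types.
rng = np.random.default_rng(1)
mpfc_mentalizing = rng.normal(loc=0.6, scale=0.3, size=40)  # "how pleased is this person?"
mpfc_symmetry = rng.normal(loc=0.2, scale=0.3, size=40)     # "how symmetrical is this face?"

def welch_t(a: np.ndarray, b: np.ndarray) -> float:
    """Welch's t statistic for the difference between two condition means."""
    var_a = a.var(ddof=1) / len(a)
    var_b = b.var(ddof=1) / len(b)
    return float((a.mean() - b.mean()) / np.sqrt(var_a + var_b))

# The contrast of interest: is mPFC activity higher when mentalizing?
contrast = float(mpfc_mentalizing.mean() - mpfc_symmetry.mean())
print("mentalizing - symmetry contrast:", round(contrast, 3))
print("Welch t statistic:", round(welch_t(mpfc_mentalizing, mpfc_symmetry), 2))
```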
What Is the Cost of Social Stress?
Stress is an unfortunately frequent experience for many of us. Stress – which can be broadly defined as a threat or challenge to our well-being – can result from everyday events like a course exam or more extreme events such as experiencing a natural disaster. When faced with a stressor, sympathetic nervous system activity increases in order to prepare our body to respond to the challenge. This produces what Selye (1950) called a fight or flight response. The release of hormones, which act as messengers from one part of an organism (e.g., a cell or gland) to another part of the organism, is part of the stress response.
A small amount of stress can actually help us stay alert and active. In comparison, sustained stressors, or chronic stress, detrimentally affect our health and impair performance (Al’Absi, Hugdahl, & Lovallo, 2002; Black, 2002; Lazarus, 1974). This happens in part through the chronic secretion of stress-related hormones (e.g., Davidson, Pizzagalli, Nitschke, & Putnam, 2002; Dickerson, Gable, Irwin, Aziz, & Kemeny, 2009). In particular, stress activates the hypothalamic-pituitary-adrenal (HPA) axis to release cortisol (see Figure 5 for a discussion). Chronic stress, by way of increases in cortisol, impairs attention, memory, and self-control (Arnsten, 2009). Cortisol levels can be measured non-invasively in bodily fluids, including blood and saliva. Researchers often collect a cortisol sample before and after a potentially stressful task. In one common collection method, subjects place polymer swabs under their tongue for 1 to 2 minutes to soak up saliva. The saliva samples are then stored and analyzed later to determine the level of cortisol present at each time point.
Whereas early stress researchers studied the effects of physical stressors like loud noises, social neuroscientists have been instrumental in studying how our interactions with other people can cause stress. This question has been addressed through neuroendocrinology, or the study of how the brain and hormones act in concert to coordinate the physiology of the body. One contribution of this work has been in understanding the conditions under which other people can cause stress. In one study, Dickerson, Mycek, and Zaldivar (2008) asked undergraduates to deliver a speech either alone or to two other people. When the students gave the speech in front of others, there was a marked increase in cortisol compared with when they were asked to give a speech alone. This suggests that like chronic physical stress, everyday social stressors, like having your performance judged by others, induces a stress response. Interestingly, simply giving a speech in the same room with someone who is doing something else did not induce a stress response. This suggests that the mere presence of others is not stressful, but rather it is the potential for them to judge us that induces stress.
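A minimal sketch of the pre/post cortisol comparison just described, using invented salivary cortisol values (nmol/L) and hypothetical condition labels, shows how stress reactivity is typically summarized as the change from baseline averaged within each condition.

```python
# Invented example values (nmol/L): (baseline, post-task) cortisol
# for each participant in two hypothetical speech conditions.
pre_post_samples = {
    "speech_alone": [(8.1, 8.4), (7.5, 7.9), (9.0, 9.2)],
    "speech_audience": [(8.3, 13.1), (7.8, 12.4), (8.9, 14.0)],
}

for condition, samples in pre_post_samples.items():
    # Cortisol reactivity = post-task level minus baseline level.
    reactivity = [post - pre for pre, post in samples]
    mean_change = sum(reactivity) / len(reactivity)
    print(f"{condition}: mean cortisol increase = {mean_change:.2f} nmol/L")
```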
Worrying about what other people think of us is not the only source of social stress in our lives. Other research has shown that interacting with people who belong to different social groups than us – what social psychologists call outgroup members – can increase physiological stress responses. For example, cardiovascular responses associated with stress like contractility of the heart ventricles and the amount of blood pumped by the heart (what is called cardiac output) are increased when interacting with outgroup as compared with ingroup members (i.e., people who belong to the same social group we do) (Mendes, Blascovich, Lickel, & Hunter, 2002). This stress may derive from the expectation that interactions with dissimilar others will be uncomfortable (Stephan & Stephan, 1985) or concern about being judged as unfriendly and prejudiced if the interaction goes poorly (Plant & Devine, 2003).
The research just reviewed shows that events in our social lives can be stressful, but are social interactions always bad for us? No. In fact, while others can be the source of much stress, they are also a major buffer against stress. Research on social support shows that relying on a network of individuals in tough times gives us tools for dealing with stress and can ward off loneliness (Cacioppo & Patrick, 2008). For instance, people who report greater social support show a smaller increase in cortisol when performing a speech in front of two evaluators (Eisenberger, Taylor, Gable, Hilmert, & Lieberman, 2007).
What determines whether others will increase or decrease stress? What matters is the context of the social interaction. When it has potential to reflect badly on the self, social interaction can be stressful, but when it provides support and comfort, social interaction can protect us from the negative effects of stress. Using neuroendocrinology by measuring hormonal changes in the body has helped researchers better understand how social factors impact our body and ultimately our health.
Conclusions
Human beings are intensely social creatures – our lives are intertwined with other people and our health and well-being depend on others. Social neuroscience helps us to understand the critical function of how we make sense of and interact with other people. This module provides an introduction to what social neuroscience is and what we have already learned from it, but there is much still to understand. As we move forward, one exciting future direction will be to better understand how different parts of the brain and body interact to produce the numerous and complex patterns of social behavior that humans display. We hinted at some of this complexity when we reviewed research showing that while the mPFC is involved in mentalizing, other areas such as the STS, amygdala, and TPJ are as well. There are likely additional brain areas involved as well, interacting in ways we do not yet fully understand. These brain areas in turn control other aspects of the body to coordinate our responses during social interactions. Social neuroscience will continue to investigate these questions, revealing new information about how social processes occur, while also increasing our understanding of basic neural and physiological processes.
Outside Resources
Society for Social Neuroscience
http://www.s4sn.org
Video: See a demonstration of fMRI data being collected.
Video: See an example of EEG data being collected.
Video: View two tasks frequently used in the lab to create stress – giving a speech in front of strangers, and doing math computations out loud in front of others. Notice how some subjects show obvious signs of stress, but in some situations, cortisol changes suggest that even people who appear calm are experiencing a physiological response associated with stress.
Video: Watch a video used by Fritz Heider and Marianne Simmel in a landmark study on social perception published in 1944. Their goal was to investigate how we perceive other people, and they studied it by seeing how readily we apply people-like interpretations to non-social stimuli.
Discussion Questions
1. Categorizing someone as a member of a social group can activate group stereotypes. EEG research suggests that social categorization occurs quickly and often automatically. What does this tell us about the likelihood of stereotyping occurring? How can we use this information to develop ways to stop stereotyping from happening?
2. Watch this video, similar to what was used by Fritz Heider and Marianne Simmel in a landmark study on social perception published in 1944, and imagine telling a friend what happened in the video. http://intentionperception.org/wp-co...ider_Flash.swf. After watching the video, think about the following: Did you describe the motion of the objects solely in geometric terms (e.g., a large triangle moved from the left to the right), or did you describe the movements as actions of animate beings, maybe even of people (e.g., the circle goes into the house and shuts the door)? In the original research, 33 of 34 subjects described the action of the shapes using human terms. What does this tell us about our tendency to mentalize?
3. Consider the types of things you find stressful. How many of them are social in nature (e.g., are related to your interactions with other people)? Why do you think our social relations have such potential for stress? In what ways can social relations be beneficial and serve as a buffer for stress?
Vocabulary
Amygdala
A region located deep within the brain in the medial area (toward the center) of the temporal lobes (parallel to the ears). If you could draw a line through your eye sloping toward the back of your head and another line between your two ears, the amygdala would be located at the intersection of these lines. The amygdala is involved in detecting relevant stimuli in our environment and has been implicated in emotional responses.
Automatic process
When a thought, feeling, or behavior occurs with little or no mental effort. Typically, automatic processes are described as involuntary or spontaneous, often resulting from a great deal of practice or repetition.
Cortisol
A hormone made by the cortex (outer layer) of the adrenal glands. Cortisol helps the body maintain blood pressure and immune function. Cortisol increases when the body is under stress.
Electroencephalogram
A measure of electrical activity generated by the brain’s neurons.
Fight or flight response
The physiological response that occurs in response to a perceived threat, preparing the body for actions needed to deal with the threat.
Functional magnetic resonance imaging
A measure of changes in the oxygenation of blood flow as areas in the brain become active.
Functional neuroanatomy
Classifying how regions within the nervous system relate to psychology and behavior.
Hormones
Chemicals released by cells in the brain or body that affect cells in other parts of the brain or body.
Hypothalamic-pituitary-adrenal (HPA) axis
A system that involves the hypothalamus (within the brain), the pituitary gland (within the brain), and the adrenal glands (at the top of the kidneys). This system helps maintain homeostasis (keeping the body’s systems within normal ranges) by regulating digestion, immune function, mood, temperature, and energy use. Through this, the HPA regulates the body’s response to stress and injury.
Ingroup
A social group to which an individual identifies or belongs.
Lesions
Damage or tissue abnormality due, for example, to an injury, surgery, or a vascular problem.
Medial prefrontal cortex
An area of the brain located in the middle of the frontal lobes (at the front of the head), active when people mentalize about the self and others.
Mentalizing
The act of representing the mental states of oneself and others. Mentalizing allows humans to interpret the intentions, beliefs, and emotional states of others.
Neuroendocrinology
The study of how the brain and hormones act in concert to coordinate the physiology of the body.
Outgroup
A social group to which an individual does not identify or belong.
Simulation
Imaginary or real imitation of other people’s behavior or feelings.
Social categorization
The act of mentally classifying someone into a social group (e.g., as female, elderly, a librarian).
Social support
A subjective feeling of psychological or physical comfort provided by family, friends, and others.
Stereotypes
The beliefs or attributes we associate with a specific social group. Stereotyping refers to the act of assuming that because someone is a member of a particular group, he or she possesses the group’s attributes. For example, stereotyping occurs when we assume someone is unemotional just because he is a man, or particularly athletic just because she is African American.
Stress
A threat or challenge to our well-being. Stress can have both a psychological component, which consists of our subjective thoughts and feelings about being threatened or challenged, as well as a physiological component, which consists of our body’s response to the threat or challenge (see “fight or flight response”).
Superior temporal sulcus
The sulcus (a fissure in the surface of the brain) that separates the superior temporal gyrus from the middle temporal gyrus. Located in the temporal lobes (parallel to the ears), it is involved in perception of biological motion or the movement of animate objects.
Sympathetic nervous system
A branch of the autonomic nervous system that controls many of the body’s internal organs. Activity of the SNS generally mobilizes the body’s fight or flight response.
Temporal parietal junction
The area where the temporal lobes (parallel to the ears) and parietal lobes (at the top of the head toward the back) meet. This area is important in mentalizing and distinguishing between the self and others.
Attributions
Adapted by Kenneth A. Koenigshofer, PhD, from Ito, T. A. & Kubota, J. T. (2021). Social neuroscience. In R. Biswas-Diener & E. Diener (Eds), Noba textbook series: Psychology. Champaign, IL: DEF publishers. Retrieved from http://noba.to/qyekc5gf
Authors
• Tiffany A. Ito is a Professor of Psychology and Neuroscience at the University of Colorado Boulder. Her research integrates neuroscience methods and theories to better understand social processes, with a particular focus on aspects of stereotyping and prejudice.
• Jennifer Kubota is an assistant professor at the University of Delaware. She received her Ph.D. in social neuroscience from the University of Colorado Boulder. Her work focuses on the psychological and neural substrates of impression formation and their relation to decision-making.
Creative Commons License
Social Neuroscience by Tiffany A. Ito and Jennifer T. Kubota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Permissions beyond the scope of this license may be available in our Licensing Agreement.
Minor editing of this content by Kenneth A. Koenigshofer, PhD. Chaffey College.
Overview
We now continue our exploration of social cognition by examining a cognitive ability essential for successful human social interaction: theory of mind, a capacity that some scholars prefer to label “mentalizing” or “mindreading,” the ability to perceive and interpret other people’s behavior in terms of their mental states. This capacity may have evolved sometime in the last few million years. Theory of mind is thought to be a prerequisite for natural language acquisition, strategic social interaction, reflexive thought, and moral judgment. Humans need to understand minds in order to engage in the kinds of complex interactions that social communities (small and large) require, interactions that have given rise to the complex products of human cultural evolution. The capacity is severely limited in autism.
Theory of Mind
By Bertram F. Malle
Brown University
One of the most remarkable human capacities is to perceive and understand mental states. This capacity, often labeled “theory of mind,” consists of an array of psychological processes that play essential roles in human social life. We review some of these roles, examine what happens when the capacity is deficient, and explore the many processes that make up the capacity to understand minds.
Learning Objectives
• Explain what theory of mind is.
• Enumerate the many domains of social life in which theory of mind is critical.
• Describe some characteristics of how autistic individuals differ in their processing of others’ minds.
• Describe and explain some of the many concepts and processes that comprise the human understanding of minds.
• Have a basic understanding of how ordinary people explain unintentional and intentional behavior.
Introduction
One of the most fascinating human capacities is the ability to perceive and interpret other people’s behavior in terms of their mental states. Having an appreciation for the workings of another person’s mind is considered a prerequisite for natural language acquisition (Baldwin & Tomasello, 1998), strategic social interaction (Zhang, Hedden, & Chia, 2012), reflexive thought (Bogdan, 2000), and moral judgment (Guglielmo, Monroe, & Malle, 2009). This capacity develops from early beginnings in the first year of life to the adult’s fast and often effortless understanding of others’ thoughts, feelings, and intentions. And though we must speculate about its evolutionary origin, we do have indications that the capacity evolved sometime in the last few million years.
In this module we will focus on two questions: What is the role of understanding others’ minds in human social life? And what is known about the mental processes that underlie such understanding? For simplicity, we will label this understanding “theory of mind,” even though it is not literally a “theory” that people have about the mind; rather, it is a capacity that some scholars prefer to label “mentalizing” or “mindreading.” But we will go behind all these labels by breaking down the capacity into distinct components: the specific concepts and mental processes that underlie the human understanding of minds.
First, let’s get clear about the roles that this understanding plays in social life.
The Role of Theory of Mind in Social Life
Put yourself in this scene: You observe two people’s movements, one behind a large wooden object, the other reaching behind him and then holding a thin object in front of the other. Without a theory of mind you would neither understand what this movement stream meant nor be able to predict either person’s likely responses. With the capacity to interpret certain physical movements in terms of mental states, perceivers can parse this complex scene into intentional actions of reaching and giving (Baird & Baldwin, 2001); they can interpret the actions as instances of offering and trading; and with an appropriate cultural script, they know that all that was going on was a customer pulling out her credit card with the intention to pay the cashier behind the register. People’s theory of mind thus frames and interprets perceptions of human behavior in a particular way—as perceptions of agents who can act intentionally and who have desires, beliefs, and other mental states that guide their actions (Perner, 1991; Wellman, 1990).
Not only would social perceivers without a theory of mind be utterly lost in a simple payment interaction; without a theory of mind, there would probably be no such things as cashiers, credit cards, and payment (Tomasello, 2003). Plain and simple, humans need to understand minds in order to engage in the kinds of complex interactions that social communities (small and large) require. And it is these complex social interactions that have given rise, in human cultural evolution, to houses, cities, and nations; to books, money, and computers; to education, law, and science.
The list of social interactions that rely deeply on theory of mind is long; here are a few highlights.
• Teaching another person new actions or rules by taking into account what the learner knows or doesn’t know and how one might best make him understand.
• Learning the words of a language by monitoring what other people attend to and are trying to do when they use certain words.
• Figuring out our social standing by trying to guess what others think and feel about us.
• Sharing experiences by telling a friend how much we liked a movie or by showing her something beautiful.
• Collaborating on a task by signaling to one another that we share a goal and understand and trust the other’s intention to pursue this joint goal.
Autism and Theory of Mind
Another way of appreciating the enormous impact that theory of mind has on social interactions is to study what happens when the capacity is severely limited, as in the case of autism (Tager-Flusberg, 2007). In a fascinating discussion in which (high-functioning) autistic individuals talk about their difficulties with other people’s minds (Blackburn, Gottschewski, George, & L—, 2000), one person reports: “I know people’s faces down to the acne scars on the left corners of their chins . . . and how the hairs of their eyebrows curl. . . . The best I can do is start picking up bits of data during my encounter with them because there’s not much else I can do. . . . I’m not sure what kind of information about them I’m attempting to process.” What seems to be missing, as another person with autism remarks, is an “automatic processing of ‘people information.’” Some autistic people report that they perceive others “in a more analytical way.” This analytical mode of processing, however, is very tiresome and slow: “Given time I may be able to analyze someone in various ways, and seem to get good results, but may not pick up on certain aspects of an interaction until I am obsessing over it hours or days later” (Blackburn et al., 2000).
So what is this magical potion that allows most people to gain quick and automatic access to other people’s minds and to recognize the meaning underlying human behavior? Scientific research has accumulated a good deal of knowledge in the past few decades, and here is a synopsis of what we know.
The Mental Processes Underlying Theory of Mind
The first thing to note is that “theory of mind” is not a single thing. What underlies people’s capacity to recognize and understand mental states is a whole host of components—a toolbox, as it were, for many different but related tasks in the social world (Malle, 2008). Figure 1 shows some of the most important tools, organized in a way that reflects the complexity of involved processes: from simple and automatic on the bottom to complex and deliberate on the top. This organization also reflects development—from tools that infants master within the first 6–12 months to tools they need to acquire over the next 3–5 years. Strikingly, the organization also reflects evolution: monkeys have available the tools on the bottom; chimpanzees have available the tools at the second level; but only humans master the remaining tools above. Let’s look at a few of them in more detail.
Agents, Goals, and Intentionality
The agent category allows humans to identify those moving objects in the world that can act on their own. Features that even very young children take to be indicators of being an agent include being self-propelled, having eyes, and reacting systematically to the interaction partner’s behavior, such as following gaze or imitating (Johnson, 2000; Premack, 1990).
The process of recognizing goals builds on this agent category, because agents are characteristically directed toward goal objects, which means they seek out, track, and often physically contact said objects. Even before the end of their first year, infants recognize that humans reach toward an object they strive for even if that object changes location or if the path to the object contains obstacles (Gergely, Nádasdy, Csibra, & Bíró, 1995; Woodward, 1998). What it means to recognize goals, therefore, is to see the systematic and predictable relationship between a particular agent pursuing a particular object across various circumstances.
Through learning to recognize the many ways by which agents pursue goals, humans learn to pick out behaviors that are intentional. The concept of intentionality is more sophisticated than the goal concept. For one thing, human perceivers recognize that some behaviors can be unintentional even if they were goal-directed—such as when you unintentionally make a fool of yourself even though you had the earnest goal of impressing your date. To act intentionally you need, aside from a goal, the right kinds of beliefs about how to achieve the goal. Moreover, the adult concept of intentionality requires that an agent have the skill to perform the intentional action in question: If I am flipping a coin, trying to make it land on heads, and if I get it to land on heads on my first try, you would not judge my action of making it land on heads as intentional—you would say it was luck (Malle & Knobe, 1997).
Imitation, Synchrony, and Empathy
Imitation and empathy are two other basic capacities that aid the understanding of mind from childhood on (Meltzoff & Decety, 2003). Imitation is the human tendency to carefully observe others’ behaviors and do as they do—even if it is the first time the perceiver has seen this behavior. A subtle, automatic form of imitation is called mimicry, and when people mutually mimic one another they can reach a state of synchrony. Have you ever noticed when two people in conversation take on similar gestures, body positions, even tone of voice? They “synchronize” their behaviors by way of (largely) unconscious imitation. Such synchrony can happen even at very low levels, such as negative physiological arousal (Levenson & Ruef, 1992), though the famous claim of synchrony in women’s menstrual cycles is a myth (Yang & Schank, 2006). Interestingly, people who enjoy an interaction synchronize their behaviors more, and increased synchrony (even manipulated in an experiment) makes people enjoy their interaction more (Chartrand & Bargh, 1999). Some research findings suggest that synchronizing is made possible by brain mechanisms that tightly link perceptual information with motor information (when I see you move your arm, my arm-moving program is activated). In monkeys, highly specialized so-called mirror neurons fire both when the monkey sees a certain action and when it performs that same action (Rizzolatti, Fogassi, & Gallese, 2001). In humans, however, things are a bit more complex. In many everyday settings, people perceive uncountable behaviors and fortunately don’t copy all of them (just consider walking in a crowd—hundreds of your mirror neurons would fire in a blaze of confusion). Human imitation and mirroring is selective, triggering primarily actions that are relevant to the perceiver’s current state or aim.
Automatic empathy builds on imitation and synchrony in a clever way. If Bill is sad and expresses this emotion in his face and body, and if Elena watches or interacts with Bill, then she will subtly imitate his dejected behavior and, through well-practiced associations of certain behaviors and emotions, she will feel a little sad as well (Sonnby-Borgström, Jönsson, & Svensson, 2003). Thus, she empathizes with him—whether she wants to or not. Try it yourself. Type “sad human faces” into your Internet search engine and select images from your results. Look at 20 photos and pay careful attention to what happens to your face and to your mood. Do you feel almost a “pull” of some of your facial muscles? Do you feel a tinge of melancholy?
Joint Attention, Visual Perspective Taking
Going beyond the automatic, humans are capable of actively engaging with other people’s mental states, such as when they enter into situations of joint attention, like Marissa and Noah, who are each looking at an object and are both aware that each of them is looking at the object. This sounds more complicated than it really is. Just point to an object when a 3-year-old is around and notice how both the child and you check in with each other, ensuring that you are really jointly engaging with the object. Such shared engagement is critical for children to learn the meaning of objects—both their value (is it safe and rewarding to approach?) and the words that refer to them (what do you call this?). When I hold up my keyboard and show it to you, we are jointly attending to it, and if I then say it’s called “Tastatur” in German, you know that I am referring to the keyboard and not to the table on which it had been resting.
Another important capacity of engagement is visual perspective taking: You are sitting at a dinner table and advise another person on where the salt is—do you consider that it is to her left even though it is to your right? When we overcome our egocentric perspective this way, we imaginatively adopt the other person’s spatial viewpoint and determine how the world looks from their perspective. In fact, there is evidence that we mentally “rotate” toward the other’s spatial location, because the farther away the person sits (e.g., 60, 90, or 120 degrees away from you) the longer it takes to adopt the person’s perspective (Michelon & Zacks, 2006).
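The relationship just described, with response time growing as the other person's viewpoint is rotated farther from one's own, can be made concrete with a toy linear fit. The response times below are invented for illustration and are not data from Michelon and Zacks (2006).

```python
import numpy as np

# Invented mean response times (ms) for adopting a viewpoint rotated
# 60, 90, or 120 degrees away from the perceiver's own position.
angles_deg = np.array([60.0, 90.0, 120.0])
mean_rt_ms = np.array([950.0, 1100.0, 1260.0])

# Fit RT = slope * angle + intercept; a positive slope captures the
# extra time cost per degree of "mental rotation" toward the other's seat.
slope, intercept = np.polyfit(angles_deg, mean_rt_ms, deg=1)
print(f"estimated cost: {slope:.1f} ms per degree (intercept {intercept:.0f} ms)")
```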
Projection, Simulation (and the Specter of Egocentrism)
When imagining what it might be like to be in another person’s psychological position, humans have to go beyond mental rotation. One tool to understand the other’s thoughts or feelings is simulation—using one’s own mental states as a model for others’ mental states: “What would it feel like sitting across from the stern interrogator? I would feel scared . . .” An even simpler form of such modeling is the assumption that the other thinks, feels, wants what we do—which has been called the “like-me” assumption (Meltzoff, 2007) or the inclination toward social projection (Krueger, 2007). In a sense, this is an absence of perspective taking, because we assume that the other’s perspective equals our own. This can be an effective strategy if we share with the other person the same environment, background, knowledge, and goals, but it gets us into trouble when this presumed common ground is in reality lacking. Let’s say you know that Brianna doesn’t like Fred’s new curtains, but you hear her exclaim to Fred, “These are beautiful!” Now you have to predict whether Fred can figure out that Brianna was being sarcastic. It turns out that you will have a hard time suppressing your own knowledge in this case and you may overestimate how easy it is for Fred to spot the sarcasm (Keysar, 1994). Similarly, you will overestimate how visible that pimple is on your chin—even though it feels big and ugly to you, in reality very few people will ever notice it (Gilovich & Savitsky, 1999). So the next time when you spot a magnificent bird high up in the tree and you get impatient with your friend who just can’t see what is clearly obvious, remember: it’s obvious to you.
What all these examples show is that people use their own current state—of knowledge, concern, or perception—to grasp other people’s mental states. And though they often do so correctly, they also get things wrong at times. This is why couples counselors, political advisors, and Buddhists agree on at least one thing: we all need to try harder to recognize our egocentrism and actively take other people’s perspective—that is, grasp their actual mental states, even if (or especially when) they are different from our own.
Explicit Mental State Inference
The ability to truly take another person’s perspective requires that we separate what we want, feel, and know from what the other person is likely to want, feel, and know. To do so humans make use of a variety of information. For one thing, they rely on stored knowledge—both general knowledge (“Everybody would be nervous when threatened by a man with a gun”) and agent-specific knowledge (“Joe was fearless because he was trained in martial arts”). For another, they critically rely on perceived facts of the concrete situation—such as what is happening to the agent, the agent’s facial expressions and behaviors, and what the person saw or didn’t see.
This capacity of integrating multiple lines of information into a mental-state inference develops steadily within the first few years of life, and this process has led to a substantial body of research (Wellman, Cross, & Watson, 2001). The research began with a clever experiment by Wimmer and Perner (1983), who tested whether children can pass a false-belief test (see Figure 2). The child is shown a picture story of Sally, who puts her ball in a basket and leaves the room. While Sally is out of the room, Anne comes along and takes the ball from the basket and puts it inside a box. The child is then asked where Sally thinks the ball is located when she comes back to the room. Is she going to look first in the box or in the basket?
The right answer is that she will look in the basket, because that’s where she put it and thinks it is; but we have to infer this false belief against our own better knowledge that the ball is in the box. This is very difficult for children before the age of 4, and it usually takes some cognitive effort in adults (Epley, Morewedge, & Keysar, 2004).
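The logic of the false-belief test can be expressed in a short sketch: Sally's belief is updated only by events she witnesses, so it can come apart from where the ball really is. The scenario follows the description above; the code itself is only an illustration.

```python
def sally_anne_task():
    """Track where the ball is and what Sally believes about it."""
    ball_location = "basket"      # Sally puts her ball in the basket
    sally_belief = ball_location  # she saw herself do it
    sally_present = False         # Sally then leaves the room

    ball_location = "box"         # Anne moves the ball to the box
    if sally_present:             # Sally did NOT witness the move...
        sally_belief = ball_location  # ...so her belief stays unchanged

    return sally_belief, ball_location

belief, reality = sally_anne_task()
print(f"Sally will look in the {belief}; the ball is really in the {reality}.")
# Passing the test means answering with Sally's (false) belief,
# not with one's own knowledge of where the ball actually is.
```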
The challenge is clear: People are good at automatically relating to other people, using their own minds as a fitting model for others’ minds. But people need to recognize when to step out of their own perspective and truly represent the other person’s perspective—which may harbor very different thoughts, feelings, and intentions.
Tools in Summary
We have seen that the human understanding of other minds relies on many tools. People process such information as motion, faces, and gestures and categorize it into such concepts as agent, intentional action, or fear. They rely on relatively automatic psychological processes, such as imitation, joint attention, and projection. And they rely on more effortful processes, such as simulation and mental-state inference. These processes all link behavior that humans observe to mental states that humans infer. If we call this stunning capacity a “theory,” it is a theory of mind and behavior.
Folk Explanations of Behavior
Nowhere is this mind–behavior link clearer than in people’s explanations of behavior—when they try to understand why somebody acted or felt a certain way. People have a strong need to answer such “why” questions, from the trivial to the significant: why the neighbor’s teenage daughter is wearing a short skirt in the middle of winter; why the policeman is suddenly so friendly; why the murderer killed three people. The need to explain this last behavior seems puzzling, because typical benefits of explanation are absent: We do not need to predict or control the criminal’s behavior since we will never have anything to do with him. Nonetheless, we have an insatiable desire to understand, to find meaning in this person’s behavior—and in people’s behavior generally.
This makes evolutionary sense for at least two reasons. First, we are highly social creatures by virtue of our genetic evolution as a species, and success in the social group, including being valued by others in the group, was especially important to survival in our evolutionary past: if you "don't get" others and consequently aren't valued and liked by the group, its members are less likely to risk themselves to save you from a predator or to share food and other resources with you. A strong motivation to understand other people would therefore have been favored by natural selection. Second, a strong motivation to understand the behavior of others increases the chances of inferring their intentions and predicting their future behavior, allowing strategic preparation and greater success in social interactions, which is essential to human adaptation.
Older theories of how people explain and understand behavior suggested that people merely identify causes of the behavior (e.g., Kelley, 1967). That is true for most unintentional behaviors—tripping, having a headache, calling someone by the wrong name. But to explain intentional behaviors, people use a more sophisticated framework of interpretation, which follows directly from their concept of intentionality and the associated mental states they infer (Malle, 2004). We have already mentioned the complexity of people’s concept of intentionality; here it is in full (Malle & Knobe, 1997): For an agent to perform a behavior intentionally, she must have a desire for an outcome (what we had called a goal), beliefs about how a particular action leads to the outcome, and an intention to perform that action; if the agent then actually performs the action with awareness and skill, people take it to be an intentional action. To explain why the agent performed the action, humans try to make the inverse inference of what desire and what beliefs the agent had that led her to so act, and these inferred desires and beliefs are the reasons for which she acted. What was her reason for wearing a short skirt in the winter? “She wanted to annoy her mother.” What was the policeman’s reason for suddenly being so nice? “He thought he was speaking with an influential politician.” What was his reason for killing three people? In fact, with such extreme actions, people are often at a loss for an answer. If they do offer an answer, they frequently retreat to “causal history explanations” (Malle, 1999), which step outside the agent’s own reasoning and refer instead to more general background facts—for example, that he was mentally ill or a member of an extremist group. But people clearly prefer to explain others’ actions by referring to their beliefs and desires, the specific reasons for which they acted.
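The checklist structure of this folk concept can be made explicit with a small illustrative sketch. The function and argument names below are hypothetical conveniences, not part of Malle and Knobe's published materials; the sketch simply encodes the idea that an action is judged intentional only when all five components (desire, belief, intention, awareness, and skill) are present.

```python
# A hypothetical sketch of the five components of the folk concept of
# intentionality (Malle & Knobe, 1997), expressed as a simple checklist.

def judged_intentional(desire, belief, intention, awareness, skill):
    """People judge an action intentional only when all five components hold."""
    return all([desire, belief, intention, awareness, skill])

# A skilled, deliberate action performed with a goal and a plan:
print(judged_intentional(desire=True, belief=True, intention=True,
                         awareness=True, skill=True))    # True

# The same outward behavior produced by accident (no intention, no awareness):
print(judged_intentional(desire=True, belief=True, intention=False,
                         awareness=False, skill=True))   # False
```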
By relying on a theory of mind, explanations of behavior make meaningful what would otherwise be inexplicable motions—just like in our initial example of two persons passing some object between them. We recognize that the customer wanted to pay and that’s why she passed her credit card to the cashier, who in turn knew that he was given a credit card and swiped it. It all seems perfectly clear, almost trivial to us. But that is only because humans have a theory of mind and use it to retrieve the relevant knowledge, simulate the other people’s perspective, infer beliefs and desires, and explain what a given action means. Humans do this effortlessly and often accurately. Moreover, they do it within seconds or less. What’s so special about that? Well, it takes years for a child to develop this capacity, and it took our species a few million years to evolve it. That’s pretty special.
Outside Resources
Blog: On the debate about menstrual synchrony
http://blogs.scientificamerican.com/context-and-variation/2011/11/16/menstrual-synchrony/
Blog: On the debates over mirror neurons
http://blogs.scientificamerican.com/guest-blog/2012/11/06/whats-so-special-about-mirror-neurons/
Book: First and last chapters of Zunshine, L. (2006). Why we read fiction: Theory of mind and the novel. Columbus, OH: Ohio State University Press.
https://ohiostatepress.org/Books/Book PDFs/Zunshine Why.pdf
Movie: A movie that portrays the social difficulties of a person with autism: Adam (Fox Searchlight Pictures, 2009)
http://www.imdb.com/title/tt1185836/?ref_=fn_tt_tt_1
ToM and Autism TEDx Talks
https://www.ted.com/playlists/153/the_autism_spectrum
Video: TED talk on autism
http://www.ted.com/talks/temple_grandin_the_world_needs_all_kinds_of_minds.html
Video: TED talk on empathy
http://blog.ted.com/2011/04/18/a-radical-experiment-in-empathy-sam-richards-at-ted-com/
Video: TED talk on theory of mind and moral judgment
http://www.ted.com/talks/rebecca_saxe_how_brains_make_moral_judgments.html
Video: Test used by Baron Cohen (prior to the core study) to investigate whether autistic children had a theory of mind by using a false belief task.
Video: Theory of mind development
Discussion Questions
1. Recall a situation in which you tried to infer what a person was thinking or feeling but you just couldn’t figure it out, and recall another situation in which you tried the same but succeeded. Which tools were you able to use in the successful case that you didn’t or couldn’t use in the failed case?
2. Mindfulness training improves keen awareness of one’s own mental states. Look up a few such training programs (easily found online) and develop a similar training program to improve awareness of other people’s minds.
3. In the near future we will have robots that closely interact with people. Which theory of mind tools should a robot definitely have? Which ones are less important? Why?
4. Humans assume that everybody has the capacity to make choices and perform intentional actions. But in a sense, a choice is just a series of brain states, caused by previous brain states and states of the world, all governed by the physical laws of the universe. Is the concept of choice an illusion?
5. The capacity to understand others’ minds is intimately related to another unique human capacity: language. How might these two capacities have evolved? Together? One before the other? Which one?
Vocabulary
Automatic empathy
A social perceiver unwittingly taking on the internal state of another person, usually because of mimicking the person’s expressive behavior and thereby feeling the expressed emotion.
False-belief test
An experimental procedure that assesses whether a perceiver recognizes that another person has a false belief—a belief that contradicts reality.
Folk explanations of behavior
People’s natural explanations for why somebody did something, felt something, etc. (differing substantially for unintentional and intentional behaviors).
Intention
An agent’s mental state of committing to perform an action that the agent believes will bring about a desired outcome.
Intentionality
The quality of an agent’s performing a behavior intentionally—that is, with skill and awareness and executing an intention (which is in turn based on a desire and relevant beliefs).
Joint attention
Two people attending to the same object and being aware that they both are attending to it.
Mimicry
Copying others’ behavior, usually without awareness.
Mirror neurons
Neurons identified in monkey brains that fire both when the monkey performs a certain action and when it perceives another agent performing that action.
Projection
A social perceiver’s assumption that the other person wants, knows, or feels the same as the perceiver wants, knows, or feels.
Simulation
The process of representing the other person’s mental state.
Synchrony
Two people displaying the same behaviors or having the same internal states (typically because of mutual mimicry).
Theory of mind
The human capacity to understand minds, a capacity that is made up of a collection of concepts (e.g., agent, intentionality) and processes (e.g., goal detection, imitation, empathy, perspective taking).
Perspective taking
Can refer to visual perspective taking (perceiving something from another person’s spatial vantage point) or more generally to effortful mental state inference (trying to infer the other person’s thoughts, desires, emotions).
Attributions
Adapted by Kenneth A. Koenigshofer, Ph.D. from Malle, B. (2021). Theory of mind. In R. Biswas-Diener & E. Diener (Eds), Noba textbook series: Psychology. Champaign, IL: DEF publishers. Retrieved from http://noba.to/a8wpytg3
Authors
• Bertram Malle, Professor at Brown University, was trained in psychology, philosophy, and linguistics in Austria and at Stanford University. He received an SESP Dissertation award, an NSF CAREER award, and was president of the Society of Philosophy and Psychology. His research focuses on social cognition, moral judgment, and more recently human-robot interaction.
Creative Commons License
Theory of Mind by Bertram Malle is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Permissions beyond the scope of this license may be available in our Licensing Agreement.
Paragraph on evolutionary rationale for motivation to understand others in "Folk Explanations of Behavior" added by Kenneth A. Koenigshofer, PhD. Chaffey College.
Overview
We now continue our exploration of social cognition by examining a spectrum of disorders of social processing and the brain mechanisms involved. Here we encounter social neuroscience once again. Autism is in the category of pervasive developmental disorders, which includes Asperger's disorder, childhood disintegrative disorder, autistic disorder, and pervasive developmental disorder - not otherwise specified. These disorders, together, are labeled autism spectrum disorder (ASD). ASD is defined by the presence of profound difficulties in social interactions and communication combined with the presence of repetitive or restricted interests, cognitions, and behaviors. The social brain is a set of interconnected neuroanatomical structures that process social information, enabling the recognition of other individuals and the evaluation of their mental states. The social brain is hypothesized to consist of the amygdala, the orbital frontal cortex (OFC), fusiform gyrus (FG), and the posterior superior temporal sulcus (STS) region, among other structures. Because autism is a developmental disorder, it is particularly important to diagnose and treat ASD early in life.
Autism: Insights from the Study of the Social Brain
By
Kevin A. Pelphrey, Yale University
People with autism spectrum disorder (ASD) suffer from a profound social disability. Social neuroscience is the study of the parts of the brain that support social interactions or the “social brain.” This module provides an overview of ASD and focuses on understanding how social brain dysfunction leads to ASD. Our increasing understanding of the social brain and its dysfunction in ASD will allow us to better identify the genes that cause ASD and will help us to create and pick out treatments to better match individuals. Because social brain systems emerge in infancy, social neuroscience can help us to figure out how to diagnose ASD even before the symptoms of ASD are clearly present. This is a hopeful time because social brain systems remain malleable well into adulthood and thus open to creative new interventions that are informed by state-of-the-art science.
Learning Objectives
• Know the basic symptoms of ASD.
• Distinguish components of the social brain and understand their dysfunction in ASD.
• Appreciate how social neuroscience may facilitate the diagnosis and treatment of ASD.
Defining Autism Spectrum Disorder
Autism Spectrum Disorder (ASD) is a developmental disorder that usually emerges in the first three years and persists throughout the individual’s life. Though the key symptoms of ASD fall into three general categories (see below), each person with ASD exhibits symptoms in these domains in different ways and to varying degrees. This phenotypic heterogeneity reflects the high degree of variability in the genes underlying ASD (Geschwind & Levitt, 2007). Though we have identified genetic differences associated with individual cases of ASD, each accounts for only a small number of the actual cases, suggesting that no single genetic cause will apply in the majority of people with ASD. There is currently no biological test for ASD.
Autism is in the category of pervasive developmental disorders, which includes Asperger's disorder, childhood disintegrative disorder, autistic disorder, and pervasive developmental disorder - not otherwise specified. These disorders, together, are labeled autism spectrum disorder (ASD). ASD is defined by the presence of profound difficulties in social interactions and communication combined with the presence of repetitive or restricted interests, cognitions and behaviors. The diagnostic process involves a combination of parental report and clinical observation. Children with significant impairments across the social/communication domain who also exhibit repetitive behaviors can qualify for the ASD diagnosis. There is wide variability in the precise symptom profile an individual may exhibit.
Since Kanner first described ASD in 1943, important commonalities in symptom presentation have been used to compile criteria for the diagnosis of ASD. These diagnostic criteria have evolved during the past 70 years and continue to evolve (e.g., see the recent changes to the diagnostic criteria on the American Psychiatric Association’s website, http://www.dsm5.org/), yet impaired social functioning remains a required symptom for an ASD diagnosis. Deficits in social functioning are present in varying degrees for simple behaviors such as eye contact, and complex behaviors like navigating the give and take of a group conversation for individuals of all functioning levels (i.e. high or low IQ). Moreover, difficulties with social information processing occur in both visual (e.g., Pelphrey et al., 2002) and auditory (e.g., Dawson, Meltzoff, Osterling, Rinaldi, & Brown, 1998) sensory modalities.
Consider the results of an eye tracking study in which Pelphrey and colleagues (2002) observed that individuals with autism did not make use of the eyes when judging facial expressions of emotion (see right panels of Figure 1). While repetitive behaviors or language deficits are seen in other disorders (e.g., obsessive-compulsive disorder and specific language impairment, respectively), basic social deficits of this nature are unique to ASD. Onset of the social deficits appears to precede difficulties in other domains (Osterling, Dawson, & Munson, 2002) and may emerge as early as 6 months of age (Maestro et al., 2002).
Defining the Social Brain
Within the past few decades, research has elucidated specific brain circuits that support the perception of humans and other species. This social perception refers to “the initial stages in the processing of information that culminates in the accurate analysis of the dispositions and intentions of other individuals” (Allison, Puce, & McCarthy, 2000). Basic social perception is a critical building block for more sophisticated social behaviors, such as thinking about the motives and emotions of others. Brothers (1990) first suggested the notion of a social brain, a set of interconnected neuroanatomical structures that process social information, enabling the recognition of other individuals and the evaluation of their mental states (e.g., intentions, dispositions, desires, and beliefs).
The social brain is hypothesized to consist of the amygdala, the orbital frontal cortex (OFC), fusiform gyrus (FG), and the posterior superior temporal sulcus (STS) region, among other structures. Though all areas work in coordination to support social processing, each appears to serve a distinct role. The amygdala helps us to recognize the emotional states of others (e.g., Morris et al., 1996) and to experience and regulate our own emotions (e.g., LeDoux, 1992). The OFC supports the "reward" feelings we have when we are around other people (e.g., Rolls, 2000). The FG, located on the bottom surface of the temporal lobes, detects faces and supports face recognition (e.g., Puce, Allison, Asgari, Gore, & McCarthy, 1996). The posterior STS region recognizes biological motion, including eye, hand, and other body movements, and helps to interpret and predict the actions and intentions of others (e.g., Pelphrey, Morris, Michelich, Allison, & McCarthy, 2005).
Current Understanding of Social Perception in ASD
The social brain is of great research interest because the social difficulties characteristic of ASD are thought to relate closely to the functioning of this brain network. Functional magnetic resonance imaging (fMRI) and event-related potentials (ERP) are complementary brain imaging methods used to study activity in the brain across the lifespan. Each method measures a distinct facet of brain activity and contributes unique information to our understanding of brain function.
FMRI uses powerful magnets to measure the levels of oxygen within the brain, which vary according to changes in neural activity. As the neurons in specific brain regions “work harder”, they require more oxygen. FMRI detects the brain regions that exhibit a relative increase in blood flow (and oxygen levels) while people listen to or view social stimuli in the MRI scanner. The areas of the brain most crucial for different social processes are thus identified, with spatial information being accurate to the millimeter.
In contrast, ERP provides direct measurements of the firing of groups of neurons in the cortex. Non-invasive sensors on the scalp record the small electrical currents created by this neuronal activity while the subject views stimuli or listens to specific kinds of information. While fMRI provides information about where brain activity occurs, ERP specifies when by detailing the timing of processing at the millisecond pace at which it unfolds.
ERP and fMRI are complementary, with fMRI providing excellent spatial resolution and ERP offering outstanding temporal resolution. Together, this information is critical to understanding the nature of social perception in ASD. To date, the most thoroughly investigated areas of the social brain in ASD are the superior temporal sulcus (STS), which underlies the perception and interpretation of biological motion, and the fusiform gyrus (FG), which supports face perception. Heightened sensitivity to biological motion (for humans, motion such as walking) serves an essential role in the development of humans and other highly social species. Emerging in the first days of life, the ability to detect biological motion helps to orient vulnerable young to critical sources of sustenance, support, and learning, and develops independent of visual experience with biological motion (e.g., Simion, Regolin, & Bulf, 2008). This inborn “life detector” serves as a foundation for the subsequent development of more complex social behaviors (Johnson, 2006).
From very early in life, children with ASD display reduced sensitivity to biological motion (Klin, Lin, Gorrindo, Ramsay, & Jones, 2009). Individuals with ASD have reduced activity in the STS during biological motion perception. In contrast, people at increased genetic risk for ASD who do not develop symptoms of the disorder (i.e., unaffected siblings of individuals with ASD) show increased activity in this region, which is hypothesized to be a compensatory mechanism that offsets genetic vulnerability (Kaiser et al., 2010).
In typical development, preferential attention to faces and the ability to recognize individual faces emerge in the first days of life (e.g., Goren, Sarty, & Wu, 1975). The special way in which the brain responds to faces usually emerges by three months of age (e.g., de Haan, Johnson, & Halit, 2003) and continues throughout the lifespan (e.g., Bentin et al., 1996). Children with ASD, however, tend to show decreased attention to human faces by six to 12 months (Osterling & Dawson, 1994). Children with ASD also show reduced activity in the FG when viewing faces (e.g., Schultz et al., 2000). Slowed processing of faces (McPartland, Dawson, Webb, Panagiotides, & Carver, 2004) is a characteristic of people with ASD that is shared by parents of children with ASD (Dawson, Webb, & McPartland, 2005) and infants at increased risk for developing ASD because of having a sibling with ASD (McCleery, Akshoomoff, Dobkins, & Carver, 2009). Behavioral and attentional differences in face perception and recognition are evident in children and adults with ASD as well (e.g., Hobson, 1986).
Exploring Diversity in ASD
Because of the limited quality of the behavioral methods used to diagnose ASD and current clinical diagnostic practice, which permits similar diagnoses despite distinct symptom profiles (McPartland, Webb, Keehn, & Dawson, 2011), it is possible that the group of children currently referred to as having ASD may actually represent different syndromes with distinct causes. Examination of the social brain may well reveal diagnostically meaningful subgroups of children with ASD. Measurements of the “where” and “when” of brain activity during social processing tasks provide reliable sources of the detailed information needed to profile children with ASD with greater accuracy. These profiles, in turn, may help to inform treatment of ASD by helping us to match specific treatments to specific profiles.
The integration of imaging methods is critical for this endeavor. Using face perception as an example, the combination of fMRI and ERP could identify who, of those individuals with ASD, shows anomalies in the FG and then determine the stage of information processing at which these impairments occur. Because different processing stages often reflect discrete cognitive processes, this level of understanding could encourage treatments that address specific processing deficits at the neural level.
For example, differences observed in the early processing stages might reflect problems with low-level visual perception, while later differences would indicate problems with higher-order processes, such as emotion recognition. These same principles can be applied to the broader network of social brain regions and, combined with measures of behavioral functioning, could offer a comprehensive profile of brain-behavior performance for a given individual. A fundamental goal for this kind of subgroup approach is to improve the ability to tailor treatments to the individual.
Another objective is to improve the power of other scientific tools. Most studies of individuals with ASD compare groups of individuals, for example, individuals with ASD compared to typically developing peers. However, studies have also attempted to compare children across the autism spectrum by grouping them according to differential diagnosis (e.g., Asperger’s disorder versus autistic disorder) or by other behavioral or cognitive characteristics (e.g., cognitively able versus intellectually disabled, or anxious versus non-anxious). Yet the power of a scientific study to detect these kinds of significant, meaningful individual differences is only as strong as the accuracy of the factor used to define the compared groups.
The identification of distinct subgroups within the autism spectrum according to information about the brain would allow for a more accurate and detailed exposition of the individual differences seen in those with ASD. This is especially critical for the success of investigations into the genetic basis of ASD. As mentioned before, the genes discovered thus far account for only a small portion of ASD cases. If meaningful, quantitative distinctions among individuals with ASD are identified, a more focused examination into the genetic causes specific to each subgroup could then be pursued. Moreover, distinct findings from neuroimaging, or biomarkers, can help guide genetic research. Endophenotypes, or characteristics that are not immediately available to observation but that reflect an underlying genetic liability for disease, expose the most basic components of a complex psychiatric disorder and are more stable across the lifespan than observable behavior (Gottesman & Shields, 1973). By describing the key characteristics of ASD in these objective ways, neuroimaging research will facilitate identification of genetic contributions to ASD.
Atypical Brain Development Before the Emergence of Atypical Behavior
Because autism is a developmental disorder, it is particularly important to diagnose and treat ASD early in life. Early deficits in attention to biological motion, for instance, derail subsequent experiences in attending to higher level social information, thereby driving development toward more severe dysfunction and stimulating deficits in additional domains of functioning, such as language development. The lack of reliable predictors of the condition during the first year of life has been a major impediment to the effective treatment of ASD. Without early predictors, and in the absence of a firm diagnosis until behavioral symptoms emerge, treatment is often delayed for two or more years, eclipsing a crucial period in which intervention may be particularly successful in ameliorating some of the social and communicative impairments seen in ASD.
In response to the great need for sensitive (able to identify subtle cases) and specific (able to distinguish autism from other disorders) early indicators of ASD, such as biomarkers, many research groups from around the world have been studying patterns of infant development using prospective longitudinal studies of infant siblings of children with ASD and a comparison group of infant siblings without familial risks. Such designs gather longitudinal information about developmental trajectories across the first three years of life for both groups followed by clinical diagnosis at approximately 36 months.
These studies are problematic in that many of the social features of autism do not emerge in typical development until after 12 months of age, and it is not certain that these symptoms will manifest during the limited periods of observation involved in clinical evaluations or in pediatricians’ offices. Moreover, across development, but especially during infancy, behavior is widely variable and often unreliable, and at present, behavioral observation is the only means to detect symptoms of ASD and to confirm a diagnosis. This is quite problematic because, even highly sophisticated behavioral methods, such as eye tracking (see Figure 1), do not necessarily reveal reliable differences in infants with ASD (Ozonoff et al., 2010). However, measuring the brain activity associated with social perception can detect differences that do not appear in behavior until much later. The identification of biomarkers utilizing the imaging methods we have described offers promise for earlier detection of atypical social development.
ERP measures of brain response predict subsequent development of autism in infants as young as six months old who showed normal patterns of visual fixation (as measured by eye tracking) (Elsabbagh et al., 2012). This suggests the great promise of brain imaging for earlier recognition of ASD. With earlier detection, treatments could move from addressing existing symptoms to preventing their emergence by altering the course of abnormal brain development and steering it toward normality.
Hope for Improved Outcomes
The brain imaging research described above offers hope for the future of ASD treatment. Many of the functions of the social brain demonstrate significant plasticity, meaning that their functioning can be affected by experience over time. In contrast to theories that suggest difficulty processing complex information or communicating across large expanses of cortex (Minshew & Williams, 2007), this malleability of the social brain is a positive prognosticator for the development of treatment. The brains of people with ASD are not wired to process social information optimally. But this does not mean that these systems are irretrievably broken. Given the observed plasticity of the social brain, remediation of these difficulties may be possible with appropriate and timely intervention.
Outside Resources
Web: American Psychiatric Association’s website for the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders
http://www.dsm5.org
Web: Autism Science Foundation - organization supporting autism research by providing funding and other assistance to scientists and organizations conducting, facilitating, publicizing and disseminating autism research. The organization also provides information about autism to the general public and serves to increase awareness of autism spectrum disorders and the needs of individuals and families affected by autism.
http://www.autismsciencefoundation.org/
Web: Autism Speaks - Autism science and advocacy organization
http://www.autismspeaks.org/
Discussion Questions
1. How can neuroimaging inform our understanding of the causes of autism?
2. What are the ways in which neuroimaging, including fMRI and ERP, may benefit efforts to diagnosis and treat autism?
3. How can an understanding of the social brain help us to understand ASD?
4. What are the core symptoms of ASD, and why is the social brain of particular interest?
5. What are some of the components of the social brain, and what functions do they serve?
Vocabulary
Endophenotypes
A characteristic that reflects a genetic liability for disease and a more basic component of a complex clinical presentation. Endophenotypes are less developmentally malleable than overt behavior.
Event-related potentials (ERP)
Measures the firing of groups of neurons in the cortex. As a person views or listens to specific types of information, neuronal activity creates small electrical currents that can be recorded from non-invasive sensors placed on the scalp. ERP provides excellent information about the timing of processing, clarifying brain activity at the millisecond pace at which it unfolds.
Functional magnetic resonance imaging (fMRI)
Entails the use of powerful magnets to measure the levels of oxygen within the brain that vary with changes in neural activity. That is, as the neurons in specific brain regions “work harder” when performing a specific task, they require more oxygen. By having people listen to or view social percepts in an MRI scanner, fMRI specifies the brain regions that evidence a relative increase in blood flow. In this way, fMRI provides excellent spatial information, pinpointing with millimeter accuracy, the brain regions most critical for different social processes.
Social brain
The set of neuroanatomical structures that allows us to understand the actions and intentions of other people.
Attributions
Adapted by Kenneth A. Koenigshofer, PhD, from Pelphrey, K. A. (2021). Autism: insights from the study of the social brain. In R. Biswas-Diener & E. Diener (Eds), Noba textbook series: Psychology. Champaign, IL: DEF publishers. Retrieved from http://noba.to/yqdepwgt
Authors
• Dr. Kevin Pelphrey is the Harris Professor in the Yale Child Study Center and director of the Center for Translational Developmental Neuroscience and Center for Excellence in Autism Research and Treatment. His research focuses on the application of cognitive neuroscience and genetics to understanding the systems biology of neurodevelopmental disorders.
Creative Commons License
Autism: Insights from the Study of the Social Brain by Kevin A. Pelphrey is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Permissions beyond the scope of this license may be available in our Licensing Agreement. | textbooks/socialsci/Psychology/Biological_Psychology/Biopsychology_(OERI)_-_DRAFT_for_Review/18%3A_Supplemental_Content/18.14%3A_Chapter_14-_Mirror_Neurons_Theory_of_Mind_Social_Cognition_and_Neuroscience_of_Its_Disorders.txt |
Learning Objectives
1. Explain the first evidence that the FOXP2 gene was involved in human speech
2. Describe evidence that FOXP2 is involved in vocalization in other species
3. Describe the single nucleotide mutation on the FOXP2 gene, and the resulting alteration of the FOXP2 protein, which causes language difficulties
4. Describe the general location of the FOXP2 gene in humans by identifying the chromosome where it is found
5. Identify the differences between the human FOXP2 gene and the same gene in representative mammals and non-human primates
Overview
FOXP2 was the first gene discovered to be essential for human speech. It is expressed in many areas of the brain, including the basal ganglia and inferior frontal cortex, where it is essential for brain maturation and speech and language development (Enard et al., 2002). FOXP2 is found in many vertebrates, ranging from mouse to alligator, and is involved in the vocalizations of many animals. For example, it is found in songbirds and is important in their production of birdsong. In bats, FOXP2 is involved in echolocation. The gene has been highly conserved in mammals and few differences in the gene exist among mammal species. The protein that the gene produces is nearly identical in mice and primates.
The FOXP2 Gene
The FOXP2 (short for “forkhead box P2”) gene is the first gene that scientists ever associated with the human ability to speak (Nudel & Newbury, 2013). FOXP2 was first discovered in the KE family (the medical designation for this British family), about half of whose members had specific language impairments. Towards the end of the 1980s, seven children of the family attended a special educational needs unit at a primary school in London. The head of the special needs unit discovered that the family had had a speech disorder for three generations. Of the 30 family members, about half suffer from severe language deficiency, some are affected mildly, and a few are unaffected (Watkins, et al., 1999). The lower half of their faces shows rigidity, and most cannot fully pronounce words. Many of them have severe stuttering and limited vocabulary. In particular, they have difficulty with consonants and often omit them, saying, for example, "boon" for "spoon", "able" for "table", and "bu" for "blue". Linguistic deficiency is also noted in written language, both in reading and writing. They are characterized by lower nonverbal IQ in addition to their language difficulties. The first scientific report on the family's disorder, by Hurst, et al. (1990), showed that 16 family members were affected by a severe abnormality, though their hearing was normal and some had normal intelligence, and that the condition was genetically inherited and autosomal dominant.
Figure \(1\): FOXP2 protein and DNA. Forkhead box protein P2 (FOXP2) is a protein that, in humans, is encoded by the FOXP2 gene. FOXP2 is a member of the forkhead box family of transcription factors, proteins that regulate gene expression by binding to DNA. It has multiple effects and is expressed in the brain, heart, lungs and digestive system. Ribbon diagram of forkhead box P2 (FOXP2) protein. (Image from Wikimedia Commons; File:FOXP2 (2as5).png; https://commons.wikimedia.org/wiki/F...XP2_(2as5).png; by SWISS-MODEL, based on 2as5 from PDB; licensed under the Creative Commons Attribution-Share Alike 4.0 International license).
Using positron emission tomography (PET) and magnetic resonance imaging (MRI), Vargha-Khadem, et al. (1998) found that some brain regions were underactive (compared to baseline levels) in the KE family members and that some were overactive, when compared to normal people. The underactive regions included motor neurons that control face and mouth regions. The areas that were overactive included Broca's area, the speech center (which might indicate that the speech center is having to work much harder than in normal people in order to produce even faulty speech).
Fisher, et al. (1998) mapped the gene to a region on the long arm of chromosome 7 (7q31). The chromosomal region (locus) was named SPCH1 (for speech-and-language-disorder-1), and it contains 70 genes. Using the location of the gene responsible for a similar speech disorder in a boy, designated CS, from an unrelated family, researchers discovered in 2001 that the main gene responsible for the speech impairment in both the KE family and CS was FOXP2, and that this gene plays a major role in the origin and development of language (Lai, et al., 2001). Mutations in the gene result in speech and language problems (Vargha-Khadem & Liegeois, 2007), as seen in the KE family.
The exact problems caused by mutations in this gene remain hard to identify. This is not surprising when you consider the family of genes to which this one belongs. The FOX family of genes are transcription factors, which means that they produce proteins that can regulate the expression of a number of other genes by binding directly to their DNA. (The binding ability of these particular proteins comes from their forked shape, from which the gene family gets its name.) The FOXP2 gene would appear to play an important role in orchestrating the establishment of the neural pathways during embryonic development, some of which are required for normal vocalizations in several species studied, including humans. Surprisingly, this gene is extremely well preserved phylogenetically (across species): the protein that it produces is almost identical in mice and in primates, which are separated by some 130 million years of evolution.
The protein that the FOXP2 gene produces in humans differs by only two or three amino acids from the protein that it produces in other species. It is very likely these two or three amino acids make the difference between animals that cannot speak and humans who can. Moreover, the mutations that caused this difference are estimated to have occurred between 100,000 and 200,000 years ago, roughly the time that articulate language may have first emerged in human beings.
By performing a detailed analysis of the defective FOXP2 gene sequence in several members of the KE family, scientists were also able to identify the precise site of the mutation that caused this gene to malfunction in these individuals. This mutation occurs in exon 14 of the FOXP2 gene, where a guanine nucleotide is replaced by an adenine. As it happens, the part of the gene where this mutation occurs is precisely the one that codes for the “forkhead” portion of the protein—the part that binds to the DNA of other genes. This change in a single nucleotide on the FOXP2 gene has a direct impact on this protein, causing the amino acid arginine to be replaced with a histidine.
In the hundreds of normal subjects tested, the protein produced by FOXP2 always has an arginine at this particular site, while in the members of the KE family who suffered from specific language impairments, it always had a histidine. Hence there is not a shadow of a doubt about the mutation that causes this disorder. That said, it is still amazing to think that the mutation of a single one of the 2,500 nucleic bases in the FOXP2 gene is sufficient to impair language!
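As a rough illustration of how a single base change can switch one amino acid for another, consider the sketch below. The specific codon shown (CGT) is an assumption made for the example, since the text identifies only the guanine-to-adenine change and the resulting arginine-to-histidine substitution; only a fragment of the standard codon table is included.

```python
# A hedged illustration of how a single-base (G -> A) change can switch an
# arginine codon to a histidine codon, as described for the KE family mutation.
# The codon CGT is illustrative; the text does not specify the exact codon.

codon_table = {
    "CGT": "Arg", "CGC": "Arg", "CGA": "Arg", "CGG": "Arg",
    "CAT": "His", "CAC": "His",
    # remaining codons of the standard table omitted for brevity
}

normal_codon = "CGT"                                       # codes for arginine
mutant_codon = normal_codon[0] + "A" + normal_codon[2]     # G -> A at one position

print(normal_codon, "->", codon_table[normal_codon])   # CGT -> Arg
print(mutant_codon, "->", codon_table[mutant_codon])   # CAT -> His
```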
FOXP2 is also required for proper lung development, not only in humans but in other species as well. Additional research showed that knockout mice with only one functional copy of the FOXP2 gene have significantly reduced vocalizations as pups (Shu et al., 2005). Knockout mice with no functional copies of FOXP2 are runted, display abnormalities in brain regions such as the Purkinje layer, and die an average of 21 days after birth from inadequate lung development.
Figure \(2\): The FOXP2 gene is located on the long (q) arm of chromosome 7 at position 31. More precisely, the FOXP2 gene is located from base pair 114,086,309 to base pair 114,693,771 on chromosome 7 (Image and caption from Wikimedia Commons; File:FOXP2 location.png; https://commons.wikimedia.org/wiki/F...2_location.png; by U.S. National Library of Medicine; in the public domain).
Evolution
The FOXP2 Gene is Found in Many Diverse Species
The FOXP2 gene is highly conserved in mammals (Enard et al., 2002). Conserved, in the evolutionary context, means that a trait or gene and its variants are found in a range of diverse species over long periods of evolutionary time suggesting that once it appeared it was retained as new species diverged from original ancestral species. The human gene differs from that in non-human primates by the substitution of just two amino acids, a threonine to asparagine substitution at position 303 (T303N) and an asparagine to serine substitution at position 325 (N325S) (Preuss, 2012). In mice it differs from that of humans by three substitutions, and in zebra finch by seven amino acids (Enard et al., 2002; Haesler et al., 2004; Teramitsu et al., 2004). One of the two amino acid differences between human and chimps also arose independently in carnivores and bats (Li et al., 2007; Shu et al., 2007). Similar FOXP2 proteins can be found in songbirds, fish, and reptiles such as alligators (Webb et al., 2005; Scharff & Haesler, 2005).
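The kind of comparison behind statements such as "differs by two amino acids" can be illustrated with a short sketch. The sequence fragments below are invented placeholders rather than real FOXP2 sequence; the point is only the position-by-position comparison that, applied to the full-length proteins, yields counts like the T303N and N325S substitutions described above.

```python
# A minimal sketch of how amino acid differences between two species' versions
# of a protein are counted. The fragments are hypothetical, NOT real FOXP2
# sequence; only the counting logic is the point.

def differences(seq_a, seq_b):
    """Return (position, residue_a, residue_b) for every mismatched site."""
    return [(i + 1, a, b) for i, (a, b) in enumerate(zip(seq_a, seq_b)) if a != b]

chimp_fragment = "MTSNQAL"   # hypothetical fragment
human_fragment = "MTSNQSL"   # differs at one illustrative position

for pos, a, b in differences(chimp_fragment, human_fragment):
    print(f"position {pos}: {a} -> {b}")
# Applied to full-length sequences, the same comparison would report the
# handful of substitutions that distinguish the species, as described in the text.
```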
Figure \(3\): Mouse song system anatomy and syllable types. (A) Proposed anatomy of the rudimentary mouse forebrain vocal communication circuit based on Arriaga et al. (2012). Not shown are other connected brainstem regions, the amygdala, and insula. (B) Comparison with human, based on Arriaga et al. (2012) and Pfenning et al. (2014). (C) Comparison with songbird. (D) Sonograms of examples syllables of the four syllable categories quantified from a C57 male mouse USV song, labeled according to pitch jumps. Anatomical abbreviations: ADSt, anterior dorsal striatum; Amb, nucleus ambiguous; ASt, anterior striatum; aT, anterior thalamus; Av, nucleus avalanche; HVC, a letter-based name; LArea X, lateral Area X; LMO, lateral mesopallium oval nucleus; LMAN, lateral magnocellular nucleus of the nidopallium; LMC, laryngeal motor cortex; LSC, laryngeal somatosensory cortex; M1, primary motor cortex; M2, secondary motor cortex; NIf, interfacial nucleus of the nidopallium; PAG, periaqueductal gray; RA, robust nucleus of the arcopallium; T, thalamus; VL, ventral lateral nucleus of the thalamus; XIIts, 12th vocal motor nucleus, tracheosyringeal part (Figure and caption from Chabout et al., 2016).
The FOXP2 Gene Found in Neanderthal Fossils
DNA sampling from Homo neanderthalensis bones indicates that their FOXP2 gene is a little different though largely similar to those of Homo sapiens (i.e. humans) (Krause et al., 2005; Zimmer, 2016). Previous genetic analysis had suggested that the H. sapiens FOXP2 gene became fixed in the population around 125,000 years ago (Benítez-Burraco et al., 2008). Some researchers consider the Neanderthal findings to indicate that the gene instead swept through the population over 260,000 years ago, before our most recent common ancestor with the Neanderthals (Benítez-Burraco et al., 2008). Other researchers offer alternative explanations for how the H. sapiens version would have appeared in Neanderthals living 43,000 years ago (Benítez-Burraco et al., 2008).
Positive Selection for FOXP2
According to a 2002 study, the FOXP2 gene showed indications of recent positive selection (Enard, et al., 2002; Toda et al., 1992). Some researchers have speculated that this positive selection was crucial for the evolution of language in humans (Enard, et al., 2002).
The figure below (Figure \(4\)) shows three types of selection. The top graph illustrates directional selection, the pattern produced when positive selection favors one extreme of a trait. Positive selection is also known as Darwinian selection and is the type of selection that Darwin envisioned as the primary mechanism of evolution (for more on the types of selection, see Module 3.5). The original population distribution, before selection, is shown in red. The population distribution after selection is shown in blue. In directional selection, the population values for the trait move toward one extreme of the trait. Through positive selection, a new adaptive trait can sweep through a population. However, "though advantageous mutations are of great interest, they are difficult to detect and analyze because of the fact that neutral and deleterious mutations predominate them in frequency" (Thomas Lab, University of Washington).
Figure \(4\): These charts depict the different types of genetic selection. On each graph, the x-axis variable is the type of phenotypic trait and the y-axis variable is the number of organisms. Group A is the original population and Group B is the population after selection. Graph 1 shows directional selection, in which a single extreme phenotype is favored. Graph 2 depicts stabilizing selection, where the intermediate phenotype is favored over the extreme traits. Graph 3 shows disruptive selection, in which the extreme phenotypes are favored over the intermediate (Image and caption from Wikimedia Commons; File:Genetic Distribution.svg; https://commons.wikimedia.org/wiki/F...stribution.svg; by Ealbert17; licensed under the Creative Commons Attribution-Share Alike 4.0 International license).
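A toy simulation can make directional selection concrete. In the sketch below, the trait distribution and the survival threshold are arbitrary assumptions chosen only to show how selection that favors one extreme shifts the population mean, as in the first graph of the figure.

```python
# A toy simulation of directional selection, assuming a normally distributed
# trait and simple truncation selection. All numbers are arbitrary; the point
# is only that the population mean shifts toward one extreme.

import random

random.seed(1)
population = [random.gauss(0.0, 1.0) for _ in range(10_000)]   # Group A (before selection)

# Individuals with higher trait values are more likely to survive and reproduce.
survivors = [x for x in population if x > 0.5]                 # Group B (after selection)

mean = lambda xs: sum(xs) / len(xs)
print(f"mean before selection: {mean(population):.2f}")   # close to 0
print(f"mean after selection:  {mean(survivors):.2f}")    # shifted toward the favored extreme
```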
Recent Positive Selection for FOXP2 Disputed
Others, however, were unable to find a clear association between species with learned vocalizations and similar mutations in FOXP2 (Webb et al., 2005; Scharff & Haesler, 2005). A 2018 analysis of a large sample of globally distributed genomes confirmed there was no evidence of positive selection, suggesting that the original signal of positive selection may be driven by sample composition (Atkinson et al., 2018; Williams, 2020). Insertion of both human mutations into mice, whose version of FOXP2 otherwise differs from the human and chimpanzee versions in only one additional base pair, causes changes in vocalizations as well as other behavioral changes, such as a reduction in exploratory tendencies, and a decrease in maze learning time. A reduction in dopamine levels and changes in the morphology of certain nerve cells are also observed (Enard et al., 2009).
Figure \(5\): Human FOXP2 gene and evolutionary conservation is shown in a multiple alignment (at bottom of figure) in this image from the UCSC Genome Browser. Note that conservation tends to cluster around coding regions (exons).
Migration of Human Groups and Genetic and Linguistic Relationships
Around the end of World War II, the Italian population geneticist Luca Cavalli-Sforza began constructing genealogical trees that established relationships among populations throughout the world. By cross-tabulating data on several dozen genes, Cavalli-Sforza established a relationship between American Indians and Asians. This finding is consistent with the most common theory about how the New World was populated: by peoples who crossed from Siberia to Alaska when the Bering Strait was frozen over during the last great Ice Age, some 30,000 years ago.
Cavalli-Sforza’s findings assumed even greater importance when he correlated them with analogous studies on languages. When Cavalli-Sforza compared the genealogical trees established by geneticists with those established by linguists, with just a few exceptions, he found that the people who speak each of the 15 major families of languages are genetically related as well. The explanation for this remarkable concordance is that when a population migrates to a new territory, it takes its genes along as well as its language.
But there have been many criticisms of Cavalli-Sforza’s approach, and in particular his way of defining a population. In the works of Cavalli-Sforza and his followers, the first step is to define a population, by linguistic criteria among others. Correlations are then established between these populations and their languages, which seems like a dangerously circular approach. It has also been noted that these studies are more convincing on a small scale or a large scale, but far less so on an intermediate one. The reason is that it is easier to distinguish Inuit from Bantu, for example, than to differentiate the various populations that speak Bantu languages.
In addition, the DNA samples used in many studies come from blood banks, and the accompanying records may be biased or false, because for various reasons, when people give blood, they may report their ethnicity as different from what it actually is. This means that when careful controls are applied to both the linguistic and the genetic data, errors of interpretation may come to light.
Summary
The FOXP2 gene is found widely in the animal kingdom and the human version of the gene differs little from that in other species, yet the human variant appears to have made language possible. Mutation of a single one of the 2,500 nucleic bases in the FOXP2 gene is sufficient to impair language. Though many researchers believe that the FOXP2 gene played an important role in the evolution of human language, evidence for positive selection for it is still controversial.
Attributions
Contributed by Kenneth A. Koenigshofer, PhD. adapted from Genes that are essential for speech by Bruno Dubuc under a Copyleft license.
Some text and images from Wikipedia, KE Family, and FOXP2, retrieved September 4, 2021. Text for section, Evolution, from Wikipedia, FOXP2. | textbooks/socialsci/Psychology/Biological_Psychology/Biopsychology_(OERI)_-_DRAFT_for_Review/18%3A_Supplemental_Content/18.15%3A_Chapter_15-_A_Gene_Essential_for_Speech.txt |
Learning Objectives
1. Distinguish between continuity and discontinuity theories of the origins of language; provide examples
2. Distinguish between innate and cultural theories of the origins of language; provide examples
3. Define polygenism and monogenism
4. Define "spandrel" and "exaptation"
5. Contrast Chomsky's views with Deacon's views on the origins of language
6. Briefly describe the evidence related to whether Neanderthals had spoken language
Overview
In this section, we review a number of ideas about the origins of human language. Two prominent approaches are continuity and discontinuity theories. Continuity theorists assume that language evolved gradually from earlier forms of communication in non-human animals and hominids; estimates of when it emerged range from as recently as 40,000 years ago, according to some theorists, to more than 2 million years ago in Homo habilis, according to others. Discontinuity theorists believe that human language is so unique that it must have appeared relatively quickly in human evolution without being derived from any form of animal communication. Some theories propose that language is mostly innate, determined primarily by genes, while others hypothesize that human language has cultural origins, resulting from learning in social interaction, with only limited and general contributions from genetics.
Evolution of Language
From Primate Origins to a Language Ready Human Brain
The origin of language (spoken, signed, and written) and its relationship to human evolution are complex subjects requiring inferences from the fossil record, archeological evidence, contemporary language similarities and differences, studies of language acquisition, and comparisons between human language and communication in other animals (particularly other primates).
Language clearly depends upon the human brain having acquired features that made it capable of the production and understanding of vocal symbols. However, how human language and a language-ready brain evolved is not known. Nevertheless, most theorists assume that language must have evolved from earlier, more primitive forms of communication.
One integrative approach to the complex issue of the evolutionary origins of human language and a brain capable of human language comes from "comparative neuroprimatology." This is the study of the brains, behaviors and communication systems of monkeys, apes and humans in order to investigate "the biological and cultural evolution of the human language-ready brain" (Arbib et al., 2018, p. 371). The brains of many animals, including non-human primates, show left lateralization of vocalization in the brain, just as the dominant hemisphere for language in most humans is the left. This suggests a long evolutionary history of human language from earlier forms of vocalization.
Theoretical Approaches to the Origins of Language
Approaches to the origin of language can be sub-divided according to some underlying assumptions (Ulbaek, 1998):
• "Continuity theories" work from the assumption that language exhibits so much complexity that it could not have developed from nothing in its final form; therefore it must have evolved from earlier pre-linguistic systems among humans' primate ancestors.
• "Discontinuity theories" take the opposite approach—that language is such a unique trait that it cannot be compared to anything found among non-humans, and that it must have appeared fairly suddenly during the course of human evolution.
• Innate theories: some theories consider language mostly as an innate faculty—largely genetically encoded.
• Cultural theories: other theories regard language as a mainly cultural system—learned through social interaction.
Continuity Approaches
A majority of linguistic scholars believe continuity-based theories, but they vary in how they hypothesize language development. Among those who consider language as mostly innate, some—notably, Steven Pinker (Pinker & Bloom, 1990)—avoid speculating about specific precursors in nonhuman primates, stressing simply that the language faculty must have evolved in the usual gradual way (Pinker, 1994) as Darwin proposed for most traits. Others in this intellectual camp—notably Ib Ulbæk (1998)—hold that language evolved not from primate communication but from primate cognition, which is significantly more complex.
Those who consider language as learned socially, such as Michael Tomasello, propose that it developed from the cognitively controlled aspects of primate communication, these being mostly gestural as opposed to vocal (Pika & Mitani, 2006; Tomasello, 1996). Regarding the vocal precursors of human language, many continuity theorists hypothesize that language evolved from early human capacities for song (Dunn, et al., 2011; Vaneechoutte, 2014).
Discontinuity Approaches
Noam Chomsky, at the Massachusetts Institute of Technology, a proponent of discontinuity theory, argues that a single chance mutation occurred in one individual in the order of 100,000 years ago, installing the language faculty (a hypothetical component of the mind-brain) in "perfect" or "near-perfect" form (Chomsky, 1996). However, it seems unlikely that this would produce adaptive advantage unless a sufficiently large number of others also had similar capacities for communication at the same time.
When Did Language Develop?
Clearly, there are many theories about the origins of language, and the dates cited for its first appearance vary greatly from one author to another. They range from the time of Cro-Magnon man, about 40,000 years ago, to the time of Homo habilis, about 2 million years back.
The hypothesis that language dates as far back as the time of Homo habilis is supported by the resilience of tool cultures in Homo habilis and later hominid species. Toolmaking techniques (see module on Material Culture) must be passed from generation to generation to be sustained over long spans of time. This can be accomplished by imitation (younger members of the group learning by watching more skillful and experienced tool makers) or by verbal instruction or by a combination of both. Homo habilis "retained their tool cultures despite many climate change cycles at the timescales of centuries to millennia each, [suggesting that Homo habilis and later] species had sufficiently developed language abilities [including grammar] to verbally describe complete procedures" (Model, 2010, p. 7) for toolmaking. Research with non-human primates shows that toolmaking skills based on imitation alone, without verbal instruction, are lost under environmental changes like the changes in climate referred to above. "Chimpanzees, macaques and capuchin monkeys are all known to lose tool techniques under such circumstances" (Model, 2010, p. 7). Many experts contend that the resilience of tool culture in Homo habilis supports the view that language existed in these early human ancestors.
Monogenism or Polygenism?
Regardless of how and when language emerged, another question arises immediately: did it do so once, or many times? In other words, do all languages have a common origin, a proto-language that gave rise to all the rest, or did several different dialects emerge, at various places in the world?
Those who argue for the multiple origins, or polygenism, of language say that the first modern humans did not yet share a fully developed faculty of speech, and that only after they dispersed through migration did actual languages develop independently among various groups of Homo sapiens.
The proponents of polygenism base their arguments on events and behaviors that would have had little chance of occurring without spoken language, such as great migrations that would have required major planning and organizing efforts. From this premise, the polygenists have deduced, for example, that the peoples who left Africa and arrived in Australia about 60,000 years ago must have spoken a complex language before those who migrated to the Middle East.
The alternative view, the theory of monogenism, proposes that all languages have a common origin: a single proto-language that arose once, in one location, and from which all the world's languages subsequently developed.
Monogenists were greatly influenced by Merritt Ruhlen’s On the Origin of Languages, which posited the existence of a single proto-language over 50,000 years ago. Ruhlen’s work was based, among other things, on analyses of population genetics that showed a high correlation between the genetic diversification of human populations and the diversification of the languages that they spoke. But other studies have shown that the correspondences between genetic classifications of populations and genealogical classifications of languages are more uncertain than was once believed. The fact remains that even though Ruhlen’s work has been questioned on linguistic grounds, many people still endorse the key idea in his book: that all languages had a common origin. Among these proponents of monogenism, there are two major schools of thought.
Two Major Views within Monogenism
A Chance Mutation and Spandrels
Following Chomsky (see above), the first major view starts from the premise that the human species as we know it arose from an unlikely genetic mutation that occurred about 100,000 years ago, in which certain of the brain’s circuits were reorganized. This reorganization gave rise to the human “language instinct,” thus paving the way for the explosive growth in all the cognitive abilities that the powerful communication tool of language provides. On this view, language is an innate component of human brain organization, which includes a “universal grammar” that all humans inherit. This universal grammar is therefore a species-specific human trait and is expressed in similarities among the grammars of all the world's languages. It organizes and guides language learning regardless of which human language is being acquired. This view makes it hard to imagine any intermediate form of language that could function without all the grammatical structures found in languages today.
This discontinuity view of the origins of language has been criticized by some experts as anti-evolutionist, but several renowned scholars of evolution have ideas consistent with the uniqueness of human language and with discontinuity views of its origins. For example, the paleoanthropologist Ian Tattersall writes that Homo sapiens sapiens “is not simply an improved version of its ancestors—it’s a new [development], qualitatively distinct from them.” For Tattersall and many other scientists, the mechanism that gave rise to language involved the relatively sudden combination of pre-existing elements that had not been selected specifically to produce this attribute but that, together, made it possible. On this view, characteristics that evolved for other purposes make a new capability, such as language, possible; language itself was therefore not, at least initially, a direct product of natural selection but rather emerged from other evolved capabilities.
This type of evolutionary mechanism is thought to have come into play many times in the course of evolution; the Harvard paleontologist Stephen Jay Gould calls it exaptation. Exaptation means that a trait, feature, or structure that evolved for one function takes on a different function; for example, feathers originally evolved to keep ancestral birds warm, but then in later descendants became essential for flight.
Stephen Jay Gould calls the features that result from exaptation, such as language, “spandrels.” In evolutionary biology, a spandrel is a phenotypic trait that is a byproduct of the evolution of some other characteristic, rather than a direct product of adaptive selection. These ideas were brought into biology by Stephen Jay Gould and Richard Lewontin in a 1979 scientific paper in which they sought to temper the influence of adaptationism, the view that sees most organismal traits as adaptive products of natural selection. Gould and Lewontin argued that chance and other non-selection factors played a larger role in evolution than adaptationists claimed. They believed that many traits that evolved for one purpose become "recruited" to perform other, unrelated functions during the course of evolution. These recruited "spandrels" are examples of exaptation.
Like Noam Chomsky, Stephen Jay Gould believed that human language is so different from anything else in the animal kingdom that he could not see how it might have developed from ancestral cries or gestures, but he did imagine its having emerged as a side effect of the explosive growth of human cognitive abilities.
A More Adaptationist View
The second major school of monogenism posits a concept of the evolution of Homo sapiens in which language developed from cognitive faculties that were already well established, and that once it was present in some earlier form, it was then naturally selected for. In this view, the birth of language was triggered not by a random mutation (as the first view states), but simply by the availability of an increasingly powerful cognitive tool. Bit by bit, by natural selection, those groups of hominids who developed an articulate language that let them discuss past and imaginary events would thereby have gradually supplanted those groups that as yet had only a proto-language. The emphasis here on natural selection for language ability makes this approach more of an adaptationist view.
This second school of monogenism is identified with the linguist and psychologist Steven Pinker, who believes that language may very well have been the target that evolution was "aiming for" (this phrase is not to be taken literally, since evolution has no purpose or goal but happens automatically; see Chapter 3; natural selection selected for language ability only in the sense that those who were better at language survived and left more offspring). Pinker argues that the brain has a general capacity for language—a concept often associated with connectionist theory in cognitive science (see module on connectionist networks in the chapter on learning and memory). Pinker invokes the Baldwin effect, for example, as a major evolutionary force that could have led to modern language (see discussion of the Baldwin effect below). The ability to learn language would therefore have become a target of natural selection, thus permitting the selection of language-acquisition devices that were genetically pre-wired into the brain’s circuits.
This theory of monogenism favored by Steven Pinker also implies intermediate forms of language that eventually led to our own. For example, Derek Bickerton, a linguist renowned for his work on the evolution of language, suggests that human language abilities evolved in two stages. In the first, humans would have used a proto-language of symbolic representations that took the concrete form of vocal and/or gestural signs. This stage might have lasted nearly 2 million years. Then, about 50,000 years ago, humans would have developed a more formal syntax that let them exchange ideas with significantly more precision and clarity. With syntax, people could not only label things (“leopard paw print”, “danger”, etc.), but also join several labels together to express even more meaning (“When you see a leopard paw print, watch out!”).
Thus, if symbolic representations, already present in the proto-languages, made the construction of the first mental models of reality possible, it was the emergence of syntax that gave human language the great richness that it has today. To give some idea of how the transition from symbolic representations to syntax may have occurred, Bickerton cites the example of the pidgin languages of the colonial period. These rudimentary languages were developed by people of different cultural origins who needed to communicate. Though the pidgin languages themselves had no grammar at all, when they were learned by a second generation, they became what are known as creoles: new, grammatical languages derived from multiple mother tongues.
Another important scholar of the origins of language, anthropologist Terrence Deacon, takes exception to the primacy of grammar, believing instead that the essential feature of language is its use of symbols. According to Deacon, the so-called symbols that some authors say animals use are actually only indexes. He says that people who try to teach language to chimpanzees always ensure that the things designated by the words or icons being taught are present in the animal’s environment, which makes these words or icons mere indexes. Deacon associates this inferior level of language, based on signs and icons, with that used by children in their earliest years. By contrast, says Deacon, articulate adult language depends on the specificity of the symbols, which in turn depends on the logical connections that each symbol in a language has with the others. For Deacon, it is this network of relationships, far more than the mere occurrence of arbitrary signs, that characterizes the symbols used by human beings.
Deacon therefore thinks that we must try to understand the evolution of language not in terms of innate grammatical functions, but rather in terms of the manipulation of symbols and of relationships among symbols. There is certainly a human predisposition for language, but this predisposition would be the result of the co-evolution of the brain and of language. What is innate, according to Deacon, is a set of mental abilities that give us certain natural tendencies, which are expressed in the same universal language structures. Thus Deacon offers a different concept from Chomsky, who associates the origins of universal grammar with a language-specific innovation in the brain.
Deacon sees this co-evolution of the brain and language as being rooted in the complexity of humans’ social lives, which involved not only a high degree of co-operation between the men and women of a community to acquire resources, but also exclusive monogamous relationships to ensure proper care for very young children who were greatly dependent on adults. This highly explosive mixture is not found in any other species (the great apes, for example, gather their food individually). To ensure the stability of the group, rituals and restrictions were required: in other words, abstractions that could be comprehended only if the individuals involved could understand and use symbols.
Universal Human Language Circuitry
The first changes in the neurons of the left hemisphere that accompanied the development of language faculties during hominization may have occurred about 100,000 years ago, or even earlier. But the truly explosive growth in these faculties most likely began with the evolution of the angular gyrus, about 50,000 years ago (see modules 14.10 and 14.11).
Together, the angular and supramarginal gyri constitute a multimodal associative area that receives auditory, visual, and somatosensory inputs. The neurons in this area are thus very well positioned to process the phonological and semantic aspects of language that enable us to identify and categorize objects.
The language areas of the brain are distinct from the circuits responsible for auditory perception of the words we hear or visual perception of the words we read. The auditory cortex lets us recognize sounds, an essential prerequisite for understanding language. The visual cortex, which lets us consciously see the outside world, is also crucial for language, because it enables us to read words and to recognize objects as the first step in identifying them by a name.
Scientists believe that articulate language as we now know it must have already appeared 50,000 or 60,000 years ago, because it was then that the various human ethnic groups became differentiated. But all these groups still retain the ability to learn any language spoken anywhere in the world. Thus a Polish or Chinese immigrant to New York City ends up speaking with a New York accent, and vice versa, which just goes to show that all of us have inherited the same linguistic potential.
Pidgin
A pidgin is a language created spontaneously from a mixture of several languages, so that people who speak those different languages can communicate. The people who develop a pidgin language agree on a limited vocabulary and employ only a rudimentary grammar. For example, in Franco-Vietnamese pidgin, this results in sentences such as “Moi faim. Moi tasse. Lui aver permission repos. Demain moi retour campagne.” [Me hunger. Me lie down. He have permission rest. Tomorrow me return country.]
The first documented pidgin, the Lingua Franca, was used by Mediterranean merchants in the Middle Ages. Another well known pidgin was developed from a mixture of Chinese, English, and Portuguese to facilitate trade in Canton, China during the 18th and 19th centuries. Another classic example is the pidgin developed by slaves in the Caribbean, whose cultural origins were too diverse for their own languages to survive after their forced transplantation.
Children who grow up together and learn a pidgin tend to spontaneously impose a grammatical structure on it to create a creole: a true language whose vocabulary comes from other languages. But this does not happen with all pidgins, and some are lost or become obsolete.
According to researchers such as Derek Bickerton, people who find themselves in the particular circumstances described above revert to an older form of communication, what Bickerton calls a proto-language, of which pidgin would be the modern manifestation.
Baldwin Effect
In 1896, American psychologist James Mark Baldwin proposed an evolutionary mechanism that soon came to be known as the “Baldwin effect”. It is a process whereby a behavior that originally had to be learned can eventually become innate, that is, fixed in the genetic programming of the species concerned (Sznajder, et al., 2012). The effectiveness of the learning plays a key role in the Baldwin effect, which distinguishes it from Lamarckian inheritance of acquired characteristics.
The idea behind the Baldwin effect is that individuals who are able to learn a given kind of behavior more effectively may over the course of their lives acquire advantages that individuals whose brains are less plastic will not. Natural selection will therefore tend to favor the faster learners until, at some point in evolution, the behavior no longer needs to be learned at all: it will have become instinctive.
It should be noted that the Baldwin effect assumes that the environment remains relatively stable: if the environment changed too much, plasticity would remain an important adaptive factor and there would be no selection pressure to make the behavior innate. But if the environment remains stable for a long time, natural selection may favor a mutation that makes the behavior innate and hence more robust and efficient.
The Baldwin effect, as an evolutionary mechanism that targets learning abilities, has been successfully simulated with many computer programs. Many scientists believe that it may have played a decisive role in the evolution of language; nevertheless, the existence of the effect is controversial and many evolutionary biologists dismiss the idea (French & Messinger, 1994).
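To see how the effect could work in principle, it helps to look at the kind of computer simulation referred to above. The sketch below is loosely modeled on Hinton and Nowlan's well-known 1987 simulation of the Baldwin effect; the genome length, population size, allele proportions, and fitness formula are illustrative choices, not values taken from any published study.

```python
import random

# A minimal, illustrative sketch of the Baldwin effect, loosely based on
# Hinton and Nowlan's (1987) simulation. All parameter values and allele
# proportions are arbitrary choices for demonstration only.
GENOME_LEN = 20         # genes per individual
POP_SIZE = 500          # individuals per generation
LEARNING_TRIALS = 1000  # lifetime learning attempts per individual
GENERATIONS = 30

def random_genome():
    # '1' = innately correct, '0' = innately wrong, '?' = plastic (learnable)
    return random.choices(['1', '0', '?'], weights=[0.35, 0.15, 0.5], k=GENOME_LEN)

def fitness(genome):
    # An innately wrong allele can never be corrected by learning.
    if '0' in genome:
        return 1.0
    # Chance of guessing all plastic genes correctly on a single learning trial.
    p_guess = 0.5 ** genome.count('?')
    for trial in range(LEARNING_TRIALS):
        if random.random() < p_guess:
            # Learners that find the correct configuration sooner keep more
            # trials "in hand" and receive proportionally higher fitness.
            return 1.0 + 19.0 * (LEARNING_TRIALS - trial) / LEARNING_TRIALS
    return 1.0

def next_generation(population):
    weights = [fitness(g) for g in population]
    offspring = []
    for _ in range(POP_SIZE):
        mum, dad = random.choices(population, weights=weights, k=2)
        cut = random.randrange(GENOME_LEN)        # single-point crossover
        offspring.append(mum[:cut] + dad[cut:])
    return offspring

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population = next_generation(population)
    innate = sum(g.count('1') for g in population) / (POP_SIZE * GENOME_LEN)
    print(f"generation {generation:2d}: innately correct alleles = {innate:.2f}")
```

Run for a few dozen generations, the proportion of innately correct alleles typically rises while the plastic, learnable positions shrink: selection first favors good learners, then gradually makes the learned solution innate, which is the signature of the Baldwin effect.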
Neanderthal man
It was long believed that Neanderthals could not communicate verbally as modern humans do—that they must have had some primitive form of language, but could not produce the complete range of sounds of human speech. According to a hypothesis advanced by the American linguist Philip Lieberman, Neanderthals’ larynxes had not descended as low as those of Homo sapiens, so they would have had a great deal of difficulty in pronouncing the three main vowels present in the majority of the world’s languages (ee as in “beet”, oo as in “boot” and a as in “aha!”).
However, some authors argue that to speak a rudimentary language, one need not master all of the vowels, so long as the language has a sufficient number of consonants.
Moreover, recent research has raised questions about Lieberman’s hypothesis. Many researchers find it hard to believe that Neanderthals, who produced sophisticated tools, adorned their bodies with bracelets and necklaces, buried their dead, and produced works of art, had little or no ability to communicate verbally.
Some authors even believe that the skull on which Lieberman based his work was not truly representative of Neanderthal man. Contrary to his findings, reconstructions of other Neanderthal skulls have shown that their base would have allowed the existence of a vocal tract very similar to that of modern humans. For example, the discovery in 1989 of the 60,000-year-old skull of a male Neanderthal with a hyoid bone (the bone that supports the larynx) even led some researchers to say that he had probably been able to speak.
One thing is certain: Neanderthals disappeared about 28,000 years ago, leaving the Earth to their rivals, Homo sapiens sapiens, who had everything they needed to use an articulate symbolic language with elaborate syntax. We should not discount the possibility that Neanderthals also had developed the ability to speak and that language may have played a significant role in their lives as well.
Summary
It is assumed by most theorists that language must have evolved from earlier, more primitive forms of communication such as the pre-linguistic systems used by our primate ancestors. Some theorists, such as Steven Pinker, consider language to be mostly innate. Explosive growth of language capacities in humans, according to some theorists, began about 50,000 years ago with the evolution of the angular gyrus, a multimodal area of cortex. It is speculated that human language must have evolved at least 50,000 to 60,000 years ago, when human ethnic groups differentiated, yet all groups retained the ability to learn with ease any human language to which they are exposed in their early experience.
Attributions
Adapted by Kenneth A. Koenigshofer, PhD., from The origins of language by Bruno Dubuc, The Brain from Top to Bottom, under a Copyleft license. Some text also adapted from: Model, E. P. (2010, 2020). Origin of Language; usilacs.org. http://usilacs.org/wp-content/upload...-Wikipedia.pdf; licensed under the Creative Commons Attribution-ShareAlike License; retrieved 4/2/2022.
When we study biological psychology, we are interested in the biological processes that shape how our brains create our minds, thereby generating who we are and what we do – our sense of self and our behaviour. In this introductory section of the textbook, which consists of a single chapter, we will explore three distinct aspects of biological psychology.
We begin with a brief survey of the ways in which our understanding of the relationship between the mind and our physical body, especially the brain, has changed over time. Almost all biological psychologists now take a broadly materialistic view which assumes that the mind, once seen as a quite separate entity from the body, is simply another aspect of the physical functioning of our brain.
We then explore the methods used to investigate the relationship between brain function and behaviour. Although contemporary neuroscientists have developed techniques that permit us to monitor, and potentially interfere with, brain function in ways that were unimaginable only twenty or thirty years ago, there are still fundamental limitations which it is important to understand.
All scientific study is subject to ethical constraints. Psychology as a discipline has developed a strong research ethics code and in the final section of the chapter we explore how this is reflected in studies that use either human or non-human animals.
There is also a brief postscript which introduces three key concepts from biology (cells, inheritance and evolution) that you may find helpful.
Learning Objectives
By the end of this section you will be able to:
• briefly describe the way in which our understanding of the relationship between brain and behaviour has evolved in the last two millennia
• understand how experimental approaches to investigating the relationship between brain and behaviour can be used
• appreciate some of the limitations of techniques that use either correlational techniques or experimental manipulations
• reflect on how the broad ethical principles that underpin psychological research apply in the context of biological psychology.
01: Background to Biological Psychology
Research at the interface of biology and psychology is amongst the most active in the whole of science. It seeks to answer some of the ‘big’ questions that have fascinated our species for thousands of years. What is the relationship between brain and mind? What does it mean to be conscious? Developments in our understanding of conditions such as depression or anxiety that can severely impact on our quality of life give hope for the development of more effective treatments.
One reason for rapid advances in biological psychology has been the technological progress that has provided tools to study brain processes in ways that were unimaginable just a few decades ago. It is easy to forget that it was only 130 years ago, in the late nineteenth century, that Santiago Ramón y Cajal, Camillo Golgi and others were describing the detailed structures of nerve cells in the brain.
In addition to his skill in describing the detailed structure of the nervous system, Cajal made crucial theoretical advances and, for that reason, is often referred to as ‘the father of neuroscience’. He was the first to argue clearly that transmission of the nerve impulse is in one direction only and that individual neurons communicate at specialised structures called synapses. The detailed mechanisms that underlie communication in the nervous system – the action potential and synaptic transmission – were not elucidated until the 1950s. At about the same time, records of the firing of single cells in the nervous system, especially within those parts of the brain that receive sensory stimuli, began to suggest the way in which information was coded and processed.
At that time the experimental methods for investigating the contribution of particular brain structures to aspects of behaviour were crude. They essentially involved permanently destroying (‘lesioning’) a few square millimetres of brain tissue containing several hundred thousand nerve cells, and then examining the effects on behaviour. Since then, there have been remarkable advances in our ability to study brain mechanisms and their relationship to behaviour. It is now possible to record the activity of many cells simultaneously. There are techniques that permit modulation of the activity of groups of nerve cells with identified functions for a short period of time before allowing them to return to normal functioning. As you will read in later chapters of this book, this has made possible a much greater understanding of the workings of the brain under normal circumstances, and the ways in which its functioning may be disturbed in different disease states.
As you read through this book it may seem as though the brain is a terribly ‘tidy’ organ. That, at least, is the impression that you might easily get as you look at the drawings and pictures that illustrate the text. But it is worth reflecting on how it feels to encounter a soft, jelly-like, living brain in practice. Here are a couple of sentences from the neurosurgeon Henry Marsh, describing the way in which he uses a small suction device to gradually approach, and then remove, a tumour located towards the centre of his patient’s brain. This particular tumour was located in the pineal gland, a structure with a long and fascinating history in neuroscience. He writes:
I look down my operating microscope, feeling my way downwards through the soft white substance of the brain, searching for the tumour. The idea that my sucker is moving through thought itself, through emotion and reason, that memories, dreams and reflections should consist of jelly, is simply too strange to understand. (Do No Harm: Stories of Life, Death and Brain Surgery. Weidenfeld & Nicolson, 2014).
Mind and brain: a historical context
Heart or brain as the basis for thought and emotion?
During much of recorded human history, there was uncertainty about whether the heart or the brain was responsible for organising our behaviour. The early debates on this topic are best recorded in the works of the Greek philosophers, because, to some extent at least, complete texts are available in a way that is not true for most other ancient human civilisations. In the 700 years from about the 5th century BCE to the 2nd century CE, the Greeks put forward two contrasting views.
One group of philosophers, of which Aristotle (~350 BCE) is the best known, held that it was the chest, most likely the heart, that was crucial in organising behaviour and thought.
The heart was the seat of the mind, and the brain had no important role other than, perhaps, to cool the blood. In threatening or exciting situations it is changes in heart function that we become consciously aware of, while we have no conscious access to the changes in brain activation that are occurring at the same time. So it is hardly surprising that the heart was ascribed ‘emotional’ and ‘cognitive’ functions, and that it still features in our communication and language in a way that the brain does not. Apologies can be ‘heartfelt’; the universal emoji for affection and love is a heart; Shakespeare has Macbeth conclude that ‘False face must hide what the false heart doth know’ (Macbeth 1.7.82) as he resolves to murder King Duncan and take the Scottish throne for himself.
In fact, there was a grain of truth in these ideas. Experiments conducted by Sarah Garfinkel and others in the last few years have demonstrated that pressure receptors in the arteries that lead from the heart become active on each heart contraction and can influence the processing of threat-related stimuli (Garfinkel and Critchley, 2016). It isn’t just the heart that can affect the way in which emotionally salient stimuli are processed by humans – abnormal stomach rhythms enhance the avoidance of visual stimuli, such as faeces, rotting meat, or sour milk, that elicit disgust (Nord et al., 2021). It is interesting that William James, whose Principles of Psychology (1890) is regarded as one of the foundational texts of modern psychology, put forward a similar idea in a short paper published in the 1880s (James 1884).
Another group of early Greek writers gave the brain a much more important role, of which Hippocrates and Plato (4th century BCE) and Galen (2nd century CE) are best known. They developed the idea that, since the brain was physically connected by nerves to the sense organs and muscles, it was also most likely the location of the physical connection with the mind. Galen was a physician whose training was in Alexandria. He moved to Rome and frequently treated the injuries of gladiators. He accepted Hippocrates’ argument about the importance of the brain in processing sensory information and generating behavioural responses. The immediate loss of consciousness that could be produced by a head injury confirmed his view. He also performed experiments that demonstrated the function of individual nerves; he showed that cutting the laryngeal nerve, which runs from the brain to the muscles of the larynx, would stop an animal from vocalising. He rejected another earlier belief that the lungs might be the seat of thought, suggesting that they simply acted as a bellows to drive air through the larynx and produce sounds.
In subsequent centuries Galen’s insights were forgotten in much of western Europe, but remained very influential in the Islamic world. Ibn Sina (still sometimes known as Avicenna, the Latinised version of his name) was born in 980 CE, and is generally acknowledged as the greatest of the physicians of the Persian Golden Age. He refined and corrected many of Galen’s ideas. He provided a detailed description of the effects of a stroke on behaviour, and correctly surmised that strokes might either result from blockages or bursts in the circulation of blood to the brain. He studied patients suffering from a disorder similar to severe depression that, at the time, was called the ‘love disorder’. When treating someone with this ‘disorder’ he used changes in the heart’s pulse rate as a way of identifying the names of individuals that were especially significant to them. He also, as might be expected of a physician, had a detailed knowledge of the effects of plant-derived drugs such as opium, the dried latex from poppy seed heads of which the active component is morphine (Heydari et al., 2013). He knew that it was especially valuable in treating pain, but had serious side effects including suppression of breathing and, with long term use, addiction (see Chapters 6 and 15).
Mind and body
In the intellectual ferment of post-Reformation Europe, the relationship between the mind and body, especially in the context of what we can know with certainty to be true, became a subject of great controversy.
René Descartes was a French philosopher whose most influential work in this area, the Discourse on Method, was published in 1637. He reached the conclusion that we can never be certain that our reasoning about the external world is correct, nor can we be sure that most of our own experiences are not simply dreams. However, he argued, the one thing we can be certain of is that, to use Bryan Magee’s free translation, I am consciously aware, therefore I know that I must exist: ‘Si je pense, donc je suis’ in the original French, or famously ‘Cogito, ergo sum’ in the later Latin translation (Magee, 1987).
This became the basis for Descartes’ argument for dualism. However, he also recognised that mind and body had to interact in some way. In his book De homine [About humans], written in 1633 but not published until after his death, he suggested that this might happen through the pineal gland, as the only non-paired structure within the brain. He also thought that muscle action might be produced through some kind of pneumatic mechanism involving the movement of ‘animal spirits’ from the fluid-filled ventricles within the brain to the muscles. In this scheme, the pineal gland acted as a kind of valve between the mind and the brain. We now know that the pineal gland is in fact an endocrine gland that is important in regulating sleep patterns.
Descartes also described simple reflexes, such as the withdrawal of limb from heat or fire, as occurring via the spinal cord, and not involving the pineal gland (Descartes [1662], 1998). Although many of Descartes’ ideas about the way in which the body functioned were incorrect, he was a materialist, in the sense that he viewed the body as a mechanism.
Descartes’ legend to this drawing (Figure 1.6) reads:
For example, if the fire A is close to the foot B, the small particles of fire, which as you know move very swiftly, are able to move as well the part of the skin which they touch on the foot. In this way, by pulling at the little thread cc, which you see attached there, they at the same instant open e, which is the entry for the pore d, which is where this small thread terminates; just as, by pulling one end of a cord, you ring a bell which hangs at the other end…. Now when the entry of the pore, or the little tube, de, has thus been opened, the animal spirits flow into it from the cavity F, and through it they are carried partly into the muscles which serve to pull the foot back from the fire, partly into those which serve to turn the eyes and the head to look at it, and partly into those which serve to move the hands forward and to turn the whole body for its defense. (de Homine, 1662)
The common feature of both the heart- and early brain-centred ideas about the relationship between the body and behaviour was of a fluid-based mechanism that translated intentions into behaviour. In the brain-centred view, the fluid in the ventricles (Descartes’ ‘animal spirits’) connected to the muscles by nerves had this function, whereas in the heart-centred account, that role was taken by the blood.
Electrical phenomena were documented as long ago as 2600 BCE in Egypt, but it was not until the eighteenth century that they became a subject of serious scientific enquiry. In 1733 Stephen Hales suggested that the mechanism envisaged by Descartes might be electrical, rather than fluid, in nature, although this idea remained very controversial.
By 1791 Luigi Galvani had confirmed that electrical stimulation of a frog sciatic nerve could produce contractions in the muscles of a dissected frog leg to which it was connected. He announced that he had demonstrated ‘the electric nature of animal spirits’.
In the 1850s Hermann von Helmholtz used the same frog nerve/muscle preparation to estimate that the speed of conduction in the frog sciatic nerve was about 30 metres per second. This disproved earlier suggestions that the nerve impulse might have a velocity as fast as, or even faster than, the speed of light! During the course of the twentieth century, gradual progress was made in understanding exactly how information is transmitted both within and between individual nerve cells. You can read more about this in Chapters 4 and 5.
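Helmholtz’s approach was to stimulate the nerve at two points at different distances from the muscle and compare the delay before the muscle twitched: the extra distance divided by the extra delay gives the conduction velocity, and any fixed delays at the nerve–muscle junction cancel out. A minimal worked example is below; the distances and latencies are illustrative values chosen to give roughly the figure he reported, not his actual measurements.

```python
# Illustrative calculation of nerve conduction velocity using Helmholtz's
# two-point method. The distances and latencies below are made-up values
# chosen only to give a result of roughly 30 metres per second.
distance_far = 0.060   # stimulation site 60 mm from the muscle (metres)
distance_near = 0.030  # stimulation site 30 mm from the muscle (metres)
latency_far = 0.0030   # time from stimulation to muscle twitch (seconds)
latency_near = 0.0020

# Fixed delays at the nerve-muscle junction and within the muscle itself
# cancel out when we take the difference between the two measurements.
velocity = (distance_far - distance_near) / (latency_far - latency_near)
print(f"Estimated conduction velocity: {velocity:.0f} m/s")  # about 30 m/s
```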
At about the same time that Galvani and Helmholtz were uncovering the mechanisms of electrical conduction in nerves, there was also great interest in the extent to which different cognitive functions might be localised within the brain. Franz Gall and Johann Spurzheim, working during the first half of the nineteenth century, suggested that small brain areas, mainly located in the cortex, were responsible for different cognitive functions. They also believed that the extent to which individuals excelled in particular areas could be discovered by carefully examining the shape of the skull. Spurzheim coined the term phrenology to describe this technique. The idea became very popular in the early 1800s, but subsequently fell out of favour.
Gall and Spurzheim disagreed spectacularly after they began to work and publish independently. After one book by Spurzheim appeared, Gall wrote: ‘Mr. Spurzheim’s work is 361 pages long, of which he has copied 246 pages from me. …. Others have already accused him of plagiarism; it is at the least very ingenious to have made a book by cutting with scissors’ (Whitaker & Jarema, 2017).
In the 1860s, studies by physicians such as Paul Broca, working in Paris, began to correlate the loss of particular functions, such as language, with damage to specific areas of the brain. They could only determine this by dissections performed after death, whereas today techniques such as computed tomography (CT) and magnetic resonance imaging (MRI) scans reveal structural changes in the living brain. Studies of this kind could indicate that a particular area was necessary for that ability but did not indicate that they were the only areas of importance. The activation of different brain areas while humans perform both simple and complex cognitive tasks including speech can be measured using functional magnetic resonance imaging (fMRI) while the participant lies in the brain scanner.
Broca also noted that loss of spoken language was almost always associated with a lesion in the left cortex and often associated with weakness in the right limbs – the first clear example of cerebral lateralisation. Although he is usually credited with that discovery, Marc Dax, another neurologist working in the 1830s, appears to have made the same observation as Broca, although his data were not published until after his death in 1865 – just a few months before Broca’s own more comprehensive publication. Although it was assumed until fairly recently that brain lateralisation was uniquely human and associated with language there is now convincing evidence that it is widespread amongst vertebrates and has an ancient evolutionary origin (Vallortigara & Rogers, 2020).
Broca, like so many of the physicians and scientists discussed earlier, was a man of very wide interests. He published comparative studies of brain structure in different vertebrates and used them to support Charles Darwin’s ideas on evolution. He has been criticised for potentially racist views (Gould, 2006). He believed that different human races might represent different species and that they could be distinguished through anatomical differences in brain size and the ratios of limb measurements. Modern genetic studies demonstrate that living humans are a single species, although there is also evidence that early in our evolutionary history our species interbred with other early, and now extinct, humans including Neanderthals and Denisovans (Bergström et al., 2020).
Positivism and the study of behaviour
In the latter part of the nineteenth century, psychologists often relied on introspection for their primary data. But the French philosopher Auguste Comte, who led the positivist movement, argued that the social sciences should adopt the same approach as physical scientists and rely solely on empirical observation. In the study of behaviour this meant relying on recording behaviour either in the laboratory or in the field. North American psychologists studying conditioning and animal learning, including John Watson (of ‘Little Albert’ fame – see below) and Burrhus Skinner, took this approach. Nikolaas Tinbergen and Konrad Lorenz used a similar framework as they developed the discipline of ethology in Europe.
Behaviorism in North America
Watson, in an article published in Psychological Review in 1913, suggested that ‘Psychology, as the behaviorist views it, is a purely objective, experimental branch of natural science which needs introspection as little as do the sciences of chemistry and physics’ (Watson, 1913). Watson had a varied scientific career, starting with studies on the neural basis of learning. He was particularly interested in the idea that neurons had to be myelinated in order to support learning. He then spent a year carrying out field studies on sooty and noddy terns using an experimental approach to investigate nest site and egg recognition. This was followed by experimental studies on conditioning using rats, which emphasised the role of simple stimulus-response relationships in behaviour. He resisted any consideration of more complex cognitive processes because, to him, they risked returning to a dualist separation of mind and body.
Towards the end of his short scientific career, he performed the infamous ‘Little Albert’ experiment in which he conditioned a nine-month-old baby, Albert B., to fear a white rat (Watson & Rayner, 1920). Initially, the baby showed no fear of the rat, but the experimenters then made a loud and unexpected sound (a hammer hitting a steel bar) each time the baby reached out towards the rat. After several pairings, Albert would cry if the rat were presented, but continued to play with wooden blocks that he was provided with in the same context. He became upset when presented with a rabbit, though to a lesser extent than with the rat. This is an experiment that would almost certainly not be approved under the ethical codes used in psychology today (see ‘Ethical Issues in Biological Psychology’ later in this chapter).
This type of procedure is now often referred to as Pavlovian conditioning, named after the Nobel prize winning Russian physiologist Ivan Pavlov. His experiments used dogs and measured salivation in response to the presentation of raw meat. The dogs were conditioned by pairing the sound of a ticking metronome (not, as often stated, a bell) with the availability of the meat. Subsequently the sound of the metronome alone was enough to elicit salivation. Although Pavlov’s name is the one associated with the phenomenon, it was already well known by his time. The French physiologist Magendie described a similar observation in humans as early as 1836.
Skinner set out an even more radical approach in his book The Behavior of Organisms (1938), arguing that cognitive or physiological levels of explanation are unnecessary to understand behaviour. In the Preface added to the 1966 edition, he wrote the following:
The simplest contingencies involve at least three terms – stimulus, response, and reinforcer – and at least one other variable (the deprivation associated with the reinforcer) is implied. This is very much more than input and output, and when all relevant variables are thus taken into account, there is no need to appeal to an inner apparatus, whether mental, physiological, or conceptual. The contingencies are quite enough to account for attending, remembering, learning, forgetting, generalizing, abstracting, and many other so-called cognitive processes.
Yet this approach had already been criticised. Donald Hebb, in The Organization of Behavior, published in 1949, argued for a close relationship between the study of psychology and physiology. In the opening paragraphs of his book he contrasted his own approach with that of Skinner, saying:
A vigorous movement has appeared both in psychology and psychiatry to be rid of ‘physiologising’ that is, to stop using physiological hypotheses. This point of view has been clearly and effectively put by Skinner (1938), …… The present book is written in profound disagreement with such a program for psychology. (Hebb, 1949, page xiv of the Introduction).
Today Hebb is best remembered for the suggestion that learning involves information about two separate events converging on a single nerve cell, with the connections being strengthened in such a way as to support either Pavlovian or operant conditioning. His hypothesis, which he acknowledged as having its origin in the writings of Tanzi and others some fifty years before, is often remembered by Carla Shatz’s mnemonic ‘Cells that fire together, wire together’. Today, that phrase is most often associated with the phenomenon of long-term potentiation (LTP), which acts as an important model for the neural mechanisms involved in learning and memory.
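Hebb’s idea is often summarised as a simple learning rule: the strength of a connection grows in proportion to the correlated activity of the cells on either side of it. The snippet below is a minimal sketch of such a rule; the learning rate and activity patterns are arbitrary illustrative values, not a model of any particular experiment.

```python
# A minimal Hebbian learning rule: a connection weight grows only when the
# presynaptic and postsynaptic cells are active together. The learning rate
# and activity patterns are arbitrary illustrative values.
learning_rate = 0.1
weight = 0.0

# Each pair is (presynaptic activity, postsynaptic activity) on one trial.
trials = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1), (0, 0)]

for pre, post in trials:
    weight += learning_rate * pre * post  # increases only on co-activity
    print(f"pre={pre} post={post} -> weight={weight:.1f}")
```

Only the trials on which both cells are active increase the weight, which is the essence of the ‘fire together, wire together’ slogan.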
There were other areas of psychology, especially the study of sensation and perception, where the radical approach of the behaviourists never took a full hold. Helmholtz, whose early work on neural conduction was so important, also made ground breaking contributions to the study of auditory and especially visual perception. He emphasised the importance of unconscious inferences in the way in which visual information is interpreted. All perceptual processing involves a mix of ‘bottom-up’ factors that derive from the sensory input and ‘top-down’ factors which involve our memories and experience of similar sensory input in the past. Alternative, more descriptive, terms for ‘bottom-up’ and ‘top-down’ are data driven and concept driven. You will learn much more about these processes in modules that explore cognitive psychology.
The development of ethology in Europe
Although the study of animal learning during the mid twentieth century was a dominant paradigm in biological psychology in North America, this was much less true in Europe. Here, an alternative approach evolved.
Konrad Lorenz and Nikolaas Tinbergen emphasised the detailed study of animal behaviour, often in a field rather than a laboratory setting. Both were fascinated by natural history when young. Lorenz was especially taken by the phenomenon of imprinting. It was first described in birds, such as chickens and geese, that leave the nest shortly after hatching and follow their parents. In a classic experiment he divided a clutch of newly-hatched greylag geese into two. One group was exposed to their mother, the other to him. After several days the young goslings were mixed together. When he and the mother goose walked in different directions the group of youngsters divided into two, depending on the individual on whom they had originally imprinted. The term imprinting is now often used much more generally for learning that occurs early in life but continues to influence behaviour into adulthood. Lorenz remains a controversial figure because of his association with Nazism during World War II.
Tinbergen began his scientific career in Holland, performing experiments that revealed how insects use landmarks to locate a burrow. They were reminiscent of Watson’s earlier studies with terns, though much better designed. During World War II he was held hostage but survived and subsequently moved to Oxford University in the late 1940s. He is best remembered for an article written in 1963 on the aims of ethology, which will be the basis of the next section of this chapter.
Lorenz and Tinbergen, like Watson and Skinner, were interested in explaining behaviour in its own terms rather than exploring underlying brain or physiological mechanisms, and in this sense they were both adherents of the positivist approach championed by Auguste Comte for all of the social sciences. Tinbergen made this point very clearly in his book The Study of Instinct (1951). He acknowledged that the behaviour of many other animals can resemble that of a human experiencing an intense emotion (as Darwin had pointed out in The Expression of the Emotions in Man and Animals in 1872) but went on to say that ‘because subjective phenomena cannot be observed objectively in animals, it is idle to claim or deny their existence’ (Tinbergen, 1951, p.4).
A cognitive perspective in animal learning and ethology
By the second half of the twentieth century, psychologists interested in human behaviour were becoming increasingly dissatisfied with the limitations of the behaviorist approach. Psychologists working on animal learning also realised that some of the phenomena that could be observed in their experiments could only be explained by postulating intervening cognitive mechanisms. Tony Dickinson, in his short text Contemporary Animal Learning Theory, published in 1980, used the example of sensory preconditioning. Dickinson was one of the initiators of the so-called ‘cognitive revolution’ in animal learning.
In a standard Pavlovian procedure, rats are initially trained to press a bar for small pellets of sweetened food. Then they are exposed to an initially neutral stimulus – a light in the wall of their cage – immediately prior to receiving a mild foot shock. After several such pairings the rats are exposed to the light alone – there is no shock. Nevertheless, the rats ‘freeze’ (remain immobile) and pause bar-pressing for the food reward for a few seconds before returning to their normal behaviour.
The critical modification in sensory preconditioning is to expose the rats to pairings of a light with a second neutral stimulus, in this case a sound, prior to the main conditioning task. Since nothing happened after these initial pairings, the rats rapidly came to ignore them. Their ongoing behaviour, in this case bar-pressing for the food pellets, was not changed. Now the rats were conditioned to associate the light and shock in the standard way. Following conditioning they were exposed either to the light or to the tone. Rats exposed to the tone paused in a similar way to rats that were exposed to the light, despite never having experienced that sound preceding a shock. So, despite little or no change in their behaviour during those initial light-tone pairings, they clearly had learned something. Learning does not have to involve any overt behavioural change.
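One way to make ‘behaviourally silent’ learning concrete is with a toy associative model. The sketch below is not Dickinson’s model, nor any published theory; the associative structure, parameter values and the simple ‘fear’ read-out are invented purely to illustrate how a tone that has never been paired with shock can still come to elicit a response through its earlier association with the light.

```python
# A toy associative-chain sketch of sensory preconditioning. This is not a
# published model; the structure, parameters and 'fear' read-out are invented
# purely to illustrate behaviourally silent learning.
assoc = {}            # association strengths between ordered pairs of events
LEARNING_RATE = 0.3

def pair(a, b, trials):
    # Strengthen a symmetric association between two co-occurring events.
    for _ in range(trials):
        for key in ((a, b), (b, a)):
            old = assoc.get(key, 0.0)
            assoc[key] = old + LEARNING_RATE * (1.0 - old)

def fear(stimulus):
    # Conditioned response = direct association with shock, or an indirect
    # route chained through another stimulus the cue is associated with.
    direct = assoc.get((stimulus, 'shock'), 0.0)
    indirect = max(
        assoc.get((stimulus, other), 0.0) * assoc.get((other, 'shock'), 0.0)
        for other in ('light', 'tone') if other != stimulus
    )
    return max(direct, indirect)

pair('light', 'tone', trials=8)   # phase 1: neutral pairings; behaviour unchanged
pair('light', 'shock', trials=8)  # phase 2: standard light-shock conditioning

print(f"response to the light: {fear('light'):.2f}")  # strong, direct association
print(f"response to the tone:  {fear('tone'):.2f}")   # present, via the tone-light-shock chain
```

The light–tone pairings in phase 1 change nothing in behaviour at the time they occur, yet they determine what happens at test.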
Dickinson set out the implications in the following way:
As we shall see, sensory preconditioning is but one of many examples of behaviourally silent learning, all of which provide difficulties for any view that equates learning with a change in behaviour. Something must change during learning and I shall argue that this change is best characterised as a modification of some internal structure. Whether or not we shall be able at some time to identify the neurophysiological substrate of these cognitive structures is an open question. It is clear, however, that we cannot do so at present. (Dickinson, 1980, p.5)
That was 1980!
As I write this, in the 2020s, it is possible to give an optimistic answer to Dickinson’s final query. Modern techniques from neuroscience can identify the brain structures and changes in small groups of neurons that support learning and memory. In one recent example, Eisuke Koya and his colleagues, who work in the School of Psychology at the University of Sussex, describe a simple learning task in which mice are exposed to a short series of clicks (the ‘to-be-conditioned’ stimulus) followed by an opportunity to drink a small quantity of a sucrose solution. After being exposed to such pairings over a period of a few days, they learn to approach the sucrose delivery port as soon as the clicks begin. The mice used in these experiments were genetically modified in such a way that nerve cells in the frontal cortex that were activated during the conditioning trials could be made to glow with a green fluorescence. The research established that small, stable groups of nerve cells (‘neuronal ensembles’) were activated from one day to the next. The researchers were also able to manipulate these cells in such a way as to produce an abnormal change in their activity during subsequent tests in which the animals were exposed to the conditioned stimulus. Approach to the sucrose delivery port was disrupted when this was done, suggesting that the activation of these cells contributed in an important way to the learnt behaviour of the mouse (Brebner et al., 2020). Experiments of this kind are approaching the goal of identifying the neural structures and mechanisms that support learning and memory, and demonstrate how psychologists and neuroscientists can collaborate to tackle the fundamental problems of biological psychology.
Field studies of non-human primates
Researchers in the field of animal learning were not the only ones who found it difficult to explain their experimental results without invoking cognitive processes that could not be directly observed. Ethologists faced a similar challenge.
Jane Goodall began her fieldwork on chimpanzees in the early 1960s and, as she got to know and was accepted by the troop that she was studying at Gombe Stream in Tanzania, gathered evidence about their rich social and emotional lives and the use of tools that revolutionised studies of primate behaviour. To the consternation of her colleagues at Cambridge, she also gave names to the chimpanzees rather than the numbers which would supposedly lead to more objective study.
At about the same time, Alison Jolly was beginning her studies of lemur behaviour in Madagascar. She argued that the major driving force in the evolution of primate cognition was the demands of living in complex and long-lasting social groups (Jolly, 1966).
Field studies carried out since then have demonstrated long-lasting social relationships in primate species such as baboons and vervet monkeys. Calls that the animals make in the context of aggressive encounters are interpreted in terms of an animal’s prior knowledge of social hierarchies within their group and their prior behaviour. For example, female chacma baboons, studied by Dorothy Cheney and Robert Seyfarth, have a call, the ‘reconciliatory grunt’, that is given just after an aggressive encounter to indicate a peaceful conclusion between the two individuals. When the call was played to a female who had just been involved in a mutual grooming session with another female, she behaved in a way that implied that the call must be directed at someone else and not her. However, if she had been involved in an aggressive encounter with that same female a little earlier, she behaved in a way that implied that the grunt had been directed at her (Engh et al., 2006).
Field studies of other long lived mammals, including elephants and dolphins, suggest that they also have complex social networks and sophisticated cognitive abilities.
Two features stand out at the end of this very short historical survey of ideas about the relationship between the brain and behaviour. The first is that, among contemporary psychologists and neuroscientists, there is an almost universal acceptance of some form of materialism. In other words, all of the complexity of our behaviour, including relatively less well understood areas such as consciousness, is a consequence of physical mechanisms operating in our bodies, and primarily in the nervous system. The second is that, despite a hiatus that lasted for at least the first half of the twentieth century, the study of phenomena such as emotion and consciousness is no longer seen as ‘off limits’ for scientific study. One challenge for modern neuroscience is to understand how the nervous system builds, uses and attaches emotional weight to internal representations of aspects of the external world.
What kinds of questions does Biological Psychology ask?
Since biological psychology is concerned with both behaviour and relevant physiological and brain mechanisms, it will often start with some interesting behavioural observations or experimental data. Once the behaviour of interest has been adequately documented, it is time to ask some questions. The ethologist Niko Tinbergen suggested that there were four broad kinds of questions that might be of interest (Tinbergen 1963). The first two have a timescale within the life cycle of an individual animal and are concerned with:
(i) the underlying causes of changes in behaviour, such as brain mechanisms or hormonal changes, and
(ii) the development of behaviour, for example as an individual matures to adulthood.
These are sometimes referred to as the proximate causes of the behaviour. The remaining two questions are set in a much broader time frame and can be thought of as the ultimate causes (Bateson & Laland, 2013). They are concerned with:
(iii) the evolutionary relationships between patterns of behaviour in different species, and
(iv) the advantages of particular patterns of behaviour in the context of natural selection.
A couple of examples should illustrate how these questions differ from one another and yet still address the question of why a particular behaviour occurs in the form that it does.
Example 1: Bird song
The chaffinch (Fringilla coelebs) is a common European songbird. In the early Spring, adult males have a striking breeding plumage and their bills darken. At the same time the birds begin to perch in conspicuous places within a small territory that they defend from other males and sing. The females do not sing, although they use a variety of other calls. So, why do male chaffinches sing?
Causes and mechanisms. The changes in bill colour and onset of singing are associated with the increase in day length in the Spring. Experimental studies in other songbird species, such as those by Fernando Nottebohm in canaries, have shown that there is an increase in the secretion of testosterone at this time, and that experimental administration of this steroid hormone produces the same changes in appearance and behaviour (Nottebohm, 2002). Further studies have demonstrated that there are changes in the bird’s brain at this time.
The most surprising result was a clear demonstration that new neurons are formed through cell division in the areas of the brain involved in song. In the mid 1980s, when these studies were performed, the consensus was that neurons were never added to a mature vertebrate brain. Nottebohm’s work forced a re-examination of this idea. The methods that his lab used were repeated in other species, and demonstrated that the same thing could also happen in mammals, including humans. In the years since these ground-breaking studies, a good deal more has been learned about circuits within the songbird brain that support song production.
Development. You do not need to be an experienced birdwatcher to recognise the song of a chaffinch. It consists of several short trills followed by a characteristic terminal flourish. A sonagram of the song shows its loudness (upper panel) and the changes in frequency over time; a recording of the song can be viewed and played online at https://openpress.sussex.ac.uk/introductiontobiologicalpsychology/?p=532#oembed-1
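A sonagram of this kind is just a spectrogram: a plot of how much sound energy the song contains at each frequency, moment by moment. A minimal sketch of how one could be computed in Python, assuming a hypothetical recording saved as chaffinch.wav, is shown below.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

# 'chaffinch.wav' is a hypothetical single-song recording used for illustration.
rate, song = wavfile.read('chaffinch.wav')
if song.ndim > 1:                     # keep one channel if the file is stereo
    song = song[:, 0]

freqs, times, power = spectrogram(song, fs=rate, nperseg=512)

plt.pcolormesh(times, freqs / 1000, 10 * np.log10(power + 1e-12), shading='auto')
plt.xlabel('Time (s)')
plt.ylabel('Frequency (kHz)')
plt.title('Chaffinch song: trills followed by a terminal flourish')
plt.show()
```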
However, although it is difficult to mistake the song of a chaffinch for that of a different species, the song has unexpected complexity. An individual may sing several subtly different song types and there are also clear differences in the song over the different parts of the species’ widespread distribution through Europe. This raises several questions about the way in which a young chaffinch develops its song repertoire. If a nestling is reared in an acoustically isolated environment, it develops a highly abnormal song lacking the detailed structure typical of a chaffinch. However, if a bird reared in the same way is exposed to tape-recorded song, then it accurately copies the song types and incorporates them into its repertoire. Of course there will be a considerable gap between hearing the song as a summer fledgling and then singing it in the following spring, so this is another example of behaviourally silent learning. In the wild, things turn out to be much more complex. A chaffinch may acquire some song types in the first summer, but additional ones may also be learnt from neighbours in the following spring as they set up territories. It is clear that one early, popular idea about the development of song types is incorrect: they are not exclusively learnt from a bird’s father (Riebel et al., 2015).
Evolutionary relationships. Birds, like mammals, reptiles and amphibians, are vertebrates. Although there is tremendous variety in their appearance and behaviour, there are common features, such as the presence of a backbone. There are also strong similarities in broad aspects of brain structure and functioning. Songbirds are one of several groups of living birds and there are some 5000 different species. They all have a well-developed syrinx (the rough equivalent of the human larynx) which has a complex musculature that allows the bird to sing. There is tremendous variation from one species to another, from the complex, extended song of thrushes such as the blackbird and nightingale, to the much simpler song of the chaffinch. Songbirds evolved about 45 million years ago, and birds diverged from the vertebrate line that also gave rise to mammals some 320 million years ago. Our common ancestor probably resembled a small lizard. Present day lizards can show some degree of behavioural flexibility and social learning, so there are interesting questions about the extent to which some of the more advanced cognitive abilities of birds and mammals evolved independently or build on components already present in that common ancestor.
Function or current utility. Song is energetically expensive at a time of the year when food is not at its most abundant. It can also be dangerous: a sparrowhawk appearing over the top of the hedge from which a chaffinch is singing may make that chaffinch its next meal! It follows that song must also potentially enhance the biological fitness of the bird in some way. There are at least two factors at play here. A male has to attract a female to nest in his territory and mate with her. His song also advertises to other males that he holds, or is attempting to hold, that space, and may be a prelude to fighting over ownership of the best areas. There is also experimental evidence that characteristics of the song, particularly the complexity of the final trill, may be attractive to females and lead them to prefer one male over another. Although the evidence is not completely convincing for chaffinches, this idea would fit with findings from a variety of other species. The croak of a frog, the roar of a red deer stag or the colours of a peacock’s tail may act as signals about the quality of the individual making the display, and may have the advantage of being hard to falsify – so-called ‘honest signals’. However, the precise way in which such signals evolve remains unclear (Penn & Számadó, 2020; Smith, 1991).
Example 2: Anxiety and fear
Tinbergen’s general approach can be productive in thinking about any aspect of behaviour. In humans anxiety or fear is an unpleasant emotional experience that may come in many forms including panic and phobias of various kinds. The emotion of fear is often evoked by quite specific threat-related stimuli – perhaps a snake (snake phobia) or wide open spaces (agoraphobia). In the same way as for bird song, we can ask the question ‘why?’ and break it down into queries about either proximate or ultimate causes.
Causes and mechanisms. The underlying physiological and brain mechanisms are well studied. They include increases in heart rate, release of hormones such as adrenalin, and activation of a specific brain network that includes the amygdala. One part of the amygdala, the central nucleus, is responsible for activating these different physiological changes in a coordinated manner (LeDoux, 2012). An understanding of these types of mechanisms has clinical relevance. Drugs that act selectively on these threat-processing circuits may have value as treatments for anxiety. Indeed, benzodiazepines such as diazepam (Valium) are still widely used in this way and are known to have especially potent effects in the amygdala. An important, but still unanswered, question is how these physiological responses relate to the conscious feeling of fear.
Development. We also know a good deal about the way in which fear may develop during an individual’s lifespan. Simple conditioning may often play a role, and there is also good evidence that species as varied as rodents and primates are more likely to become fearful of, and avoid, some types of object rather than others. In social species observational learning may also be important. Studies of young rhesus monkeys illustrate these points. A rhesus infant will initially show little avoidance or fear of model snakes or flowers. However, if it is allowed to watch an edited video in which an adult rhesus appears to respond fearfully to either the flowers or a snake, it develops fear responses to the snake but not to the flower (Cook & Mineka, 1990). This suggests an innate tendency to become fearful of some kinds of object, such as a snake, that can potentiate the effects of observational learning. In a similar way many rodent species will avoid odours associated with potential predators such as a fox or cat without having any previous experience of those animals. However, especially when young, those responses may be amplified if they observe an adult responding strongly to the same stimulus.
Evolutionary relationships. Comparative studies of the specific behaviour patterns associated with fear and the underlying physiological and brain mechanisms suggest that they have been conserved through vertebrate evolution. Charles Darwin, in the Expression of the Emotions in Man and Animals, provided some of the first really detailed behavioural descriptions of facial expressions associated with fear and especially emphasised their role in communication. Detailed comparisons of the neural circuitry in mammals, birds, amphibians and reptiles suggest that the amygdala, and especially its connections to the autonomic nervous system which activate the hormonal and other physiological responses to fear-evoking stimuli, are conserved through the entire vertebrate evolutionary line and must therefore have originated at least 400 million years ago. So it is not surprising that one of the responses of a Fijian ground frog to the presence of a potential predator (a cane toad) is an increase in the stress hormone corticosterone as well as a behavioural response, in this case immobility, that reduces the likelihood of being eaten. Exposure to a stressful situation in humans produces the same hormonal response, although it is cortisol, which is almost structurally identical to corticosterone, that is released.
Function or current utility. Questions of function, or current utility, can be thought about at multiple levels. It is clear that fear, or the perception of threat-related stimuli, can be a powerful driver of learning. As we saw earlier in this chapter, previously neutral stimuli that predict threat or danger come to evoke the same responses as the threat itself (Pavlovian conditioning). In the natural environment such responses are likely to enhance biological fitness. However, in addition to thinking about the likely function of fear systems in a rather global manner, it is also possible to analyse the individual behavioural elements that make up a fear response. One such element, described by Darwin (1872) and also recognised in the later studies of Paul Ekman, is that the eyebrows are raised, which results in the sclera (white) of the eye becoming much more obvious (Jack et al., 2014 includes a video example). The original function of this response may simply have been to widen the field of view but, especially in primates, it can now also serve as a way of communicating fear within a social group. It is likely that behavioural responses frequently gain additional functions during evolution, perhaps even making the original function irrelevant. This is the reason that the term ‘current utility’ is often preferred to function (Bateson & Laland, 2013). If a functional hypothesis is to be tested experimentally, it will always be current utility that is assessed. When a particular characteristic or feature acquires additional functions in this way it is sometimes described as an exaptation rather than an adaptation.
Scientific strategies in Biological Psychology
The first phase of any scientific investigation is likely to be descriptive. In biological psychology, this is the point at which the influence of an ethological approach is most obvious. It is easiest to describe how this phase proceeds by using some specific examples. Once the behaviour of interest has been clearly characterised, it is often time to collect some empirical data. This will often involve either collecting behavioural and physiological data and correlating them together, or taking a more experimental approach in which environmental or physiological variables are deliberately manipulated. A combination of these approaches will begin to elucidate the way in which neural processes influence behaviour and, in turn, are influenced by the consequences of that behaviour.
Describing behaviour: facial expressions and individual variation during conditioning
Many mammals, including rodents and primates (humans among them), make distinctive facial expressions as they try out potential food sources. Humans will lick their lips as they eat something sweet and gooey. A food or drink that is unexpectedly sour (like pure lemon juice) or bitter (perhaps mature leaves of kale or some other member of the cabbage family) might elicit a gaping response in which the mouth opens wide and, in more extreme cases, saliva may drip out of the mouth.
These kinds of response can be observed in quite young babies. Indeed, as any parent is likely to know, they are really common as an infant transitions from breast feeding to solid foods. It may seem surprising, but very similar responses can be observed in rats or mice as they drink sweet, sour or bitter solutions. This demonstrates that these responses are likely to have been conserved over relatively long periods of evolutionary time. They may serve a dual function. A response like gaping will help to remove something that may be toxic from the mouth – bitterness is often a signal that a plant contains harmful toxins. However, it is also likely that, at least in some species, the ‘current utility’ of these expressions includes a communicative function in species that feed in social groups. This would be another example of an exaptation (i.e. an additional function that becomes adaptive later).
Detailed measurement of these facial responses forms the basis of the so-called ‘taste reactivity task’ in which controlled amounts of solutions with different taste properties are infused into the mouth of a rat or mouse and the facial expressions quantified (Berridge, 2019). The task was initially devised to investigate the role of different brain structures in taste processing. A little later some detailed studies with a variety of different solutions revealed that the extent to which they evoked ingestive (‘nice’) and aversive (‘nasty’) responses could, at least to some extent, vary independently. It then became clear that there were drug and brain manipulations that had no effect on the facial expressions evoked by a liked, or rewarding, sweet solution. However those same manipulations did reduce the extent to which an animal would be prepared to work (e.g. press a lever, perhaps several times) in order to gain access to that solution. The important implication was that the extent to which something is ‘liked’, measured by facial expressions, may depend on different factors to those that affect whether it is ‘wanted’, measured by effort to obtain that reward. Although this distinction first arose in the investigation of what might seem an obscure corner of biological psychology, it has, as you will read in Chapter 15, Addiction, become an important theoretical idea that helps explain some otherwise puzzling features of addiction to drugs, food and other rewarding stimuli.
One feature of animal behaviour, humans included, that seems irritating at first is that there is often substantial variation between individuals exposed to the same experimental manipulation. It can be tempting to ignore it, or to choose measures that at least minimise it. But this can be a mistake, as this example from Pavlovian conditioning demonstrates.
As a group of rats learns the association between the illumination of light and the delivery of a food pellet they retrieve the pellet more rapidly and the averaged behaviour of the group generates a smooth ‘learning’ curve. However, careful observation of individuals reveals several things. First, the changes in individual animals are much more discontinuous – almost as though at some point, individual rats ‘get’ the task, but at different times during the training process! The behaviour of individuals also varies in other, potentially interesting, ways. Some rats approach the light when it illuminates, rearing up to investigate it, and continue to do so even when they subsequently approach the place where the pellet is delivered. Other rats move immediately to the location where the food pellet will be delivered, apparently taking no further interest in the light. The behaviour of the former group is referred to as ‘sign tracking’ and the latter as ‘goal tracking’. It turns out that sign trackers and goal trackers differ in other interesting ways. For example, sign-tracking rats show greater impulsivity in other tasks and acquire alcohol self administration more readily. The same kinds of distinction may also show up in human behaviour and predict vulnerability to drug addiction and relapse.
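The contrast between smooth group averages and abrupt individual learning can be made concrete with a minimal simulation. This is only a sketch: the latencies, ‘insight’ trials and noise levels below are invented for illustration, not taken from any real experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rats, n_trials = 12, 40

# Each rat 'gets' the task abruptly, but at a different, randomly chosen trial.
switch_trial = rng.integers(5, 30, size=n_rats)

# Pellet-retrieval latency (s): slow (~20 s) before the switch, fast (~2 s) after,
# plus a little trial-to-trial noise.
trials = np.arange(n_trials)
latency = np.where(trials[None, :] < switch_trial[:, None], 20.0, 2.0)
latency += rng.normal(0.0, 1.0, size=latency.shape)

group_mean = latency.mean(axis=0)
print(np.round(group_mean, 1))   # the average declines smoothly and gradually,
                                 # even though every individual row is step-like
```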
Although the descriptive phase is likely to begin any serious scientific investigation, once interesting observations have been made they are likely to raise the kinds of questions that were discussed in the last section. What are the brain and physiological mechanisms that underlie the behaviour? How do the behaviour patterns develop through the lifespan? And so forth. In working out how to tackle such questions there are a number of potential ways forward. They typically use a combination of two general strategies.
Investigational strategies: correlational approaches, and experimental manipulations
One strategy is to observe changes in behaviour, typically in a carefully controlled test situation or using a clearly defined set of behaviour patterns when doing fieldwork, while measuring changes in physiological and brain function that are likely to be relevant. Then, using appropriate statistical techniques, the changes in behaviour and physiology can be correlated together. The second strategy is to deliberately manipulate a test situation in order to determine the extent to which an imposed change in physiology or brain function leads to a change in behaviour, or a change in behaviour leads to a physiological change.
The issue with the first correlational strategy is that, although the results may suggest that there is some type of causal relationship between behaviour and physiology, they don’t clarify the nature of the relationship. Behaviour A may cause changes in physiological or brain variable B. But equally, the physiology may influence the behaviour. Finally, it may be that there is no direct mechanistic linkage between behaviour and physiology. Instead, some third variable is independently affecting both. A further complication is that there may be important feedback loops that influence the outcome. So, although correlational studies can be very useful, they do have limitations when it comes to their interpretation. The second strategy, involving deliberate experimental manipulation, has a better chance of determining the direction of causation. But it may raise other problems. Deliberately interfering with the functioning of a complex biological system may lead it to respond in unpredictable ways and may also be ethically problematic.
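As a sketch of the correlational strategy, the snippet below computes the correlation between two invented sets of paired measurements – a hormone level and a count of aggressive interactions. Even a strong, statistically significant coefficient is compatible with every one of the causal possibilities just described.

```python
from scipy import stats

# Invented paired measurements for eight males (for illustration only).
hormone    = [2.1, 3.4, 4.0, 4.8, 5.5, 6.1, 7.2, 8.0]   # e.g. ng/ml
aggression = [1,   2,   2,   4,   5,   5,   8,   9]     # e.g. threats per hour

r, p = stats.pearsonr(hormone, aggression)
print(f"r = {r:.2f}, p = {p:.3f}")
# A large r does not tell us whether hormone drives behaviour, behaviour drives
# hormone, or a third variable (such as day length) is driving both.
```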
Let’s see how this works in practice by considering the behaviour of red deer during the autumn rutting season. At this time, a successful male deer may establish a ‘harem’ of female deer and defend them against other males. His behaviour makes it more likely that only he will have the opportunity to mate with them, and hence that the resulting offspring would enhance the representation of his genes in the next generation. Females also gain from being in the harem by gaining some protection from harassment by other males, which provides the opportunity to feed in a more uninterrupted way. They may choose the harem of a particular male on the basis of his perceived fitness.
It would first be possible to correlate the gradual increase in aggressive behaviour during the red deer rut with the increasing blood testosterone that occurs during the autumn. It might be tempting to conclude that the increased testosterone level causes the increased level of aggressive behaviour directed at other males. However there are at least two, and perhaps more, possibilities that would need to be excluded first.
First, it is possible that the changes in behaviour and increased testosterone level are triggered independently by some third factor. One alternative would be that decreasing day length acts as that trigger. The day length signal might be detected within the pineal gland (Descartes’ supposed valve from the soul to the body!) and independently trigger both the hormonal and behavioural changes.
A second possibility is that the behavioural changes actually trigger the hormonal change. This isn’t a completely implausible suggestion. Such effects have been documented in a number of mammals, including human (male) tennis players, where testosterone levels increase after a match that they have just won. In the same way, testosterone secretion might be sensitive to whether aggressive encounters between the male deer are won or lost.
An experimental approach can be used to overcome the difficulty in deciding what causes what. In the case of the role of testosterone in aggression, one possible strategy is to remove the source of testosterone and determine whether aggressive behaviour continues. In fact this has been common practice for millennia in managing farmed animals. An intact bull may be very aggressive, but castrated male cattle (bullocks) are typically much less so. In experimental animals the specific role of testosterone could be demonstrated by administering the hormone to a castrated animal and showing that aggressive behaviour returns to the expected level. Experiments of exactly this kind have been performed on red deer, and indicate that testosterone does indeed restore rutting behaviour when administered to a castrated male in the autumn. However, the effect of the hormone on rutting behaviour is absent when the same treatment is given in the spring, although there is some increase in aggressive behaviour (Lincoln et al., 1972). So, factors like day length and hormone level interact in a more complex way than might be expected.
A similar experimental approach can be taken in relation to the contribution that particular brain structures or identified groups of neurons may make to a specific behaviour pattern. Suppose that we have already discovered, perhaps by making recordings of neural activity, that neurons in a particular structure (let’s call it area ‘X’) become more active while an animal is feeding. How can we demonstrate that those neurons are important in actually generating the behaviour rather than, for example, just responding to the consequences of eating food? In other words, how can we determine that activation of area X causes feeding, as opposed to feeding causing activation of area X? If stimulation of the nerve cells within area X leads to the animal, when not hungry, beginning to feed on a highly palatable food, then you might assume that demonstrates their critical role – authors reporting such an experiment will often state that the cells in this area are ‘sufficient’ to generate feeding. However, this doesn’t show that those same cells are always active in the many other experimental situations in which that animal might begin to feed. In the same way, suppose an animal fails to eat when the cells in that same area X are inactivated. That finding doesn’t demonstrate that these cells are always ‘necessary’ for feeding to occur. For example, if the original test situation involved eating a palatable food when already sated, the animal might still eat when those same cells were inactivated, provided it had been deprived of food for a few hours (Yoshihara & Yoshihara, 2018). It is also possible that inactivation of those cells might interfere with other types of behaviour, suggesting that they have no unique importance in relation to feeding. This highlights the moral that demonstrating causation, and especially the direction of causation, is rarely easy!
In studies where only correlations have been measured it is tempting for those presenting the research to slip from initially saying that an association has been observed to then discussing the results in terms of a causal mechanism that hasn’t been fully demonstrated. This is something to watch for when critically assessing research publications. It often happens in the discussion of results based on fieldwork when an experimental approach may be much more challenging. A combination of correlational and experimental approaches can also be taken in studying questions about the development of behaviour.
This combination of approaches can also be taken in the study of human behaviour and its relationship to particular brain or other physiological changes. However, the experimental approach can be limited by the more significant ethical concerns that may arise. Clinical data have been used since the time of Galen, Ibn Sina, Broca and others to correlate the brain damage that occurs after strokes, or other forms of injury, with changes in behaviour. Classic studies include the patient Tan, studied by Broca, in whom loss of the ability to speak was linked to damage in the left frontal cortex; and Phineas Gage, whose damage to the prefrontal cortex was associated with more widespread changes in behaviour. This last example also provides a cautionary tale. There is an unresolved dispute as to how substantial and permanent Gage’s changes in behaviour actually were, despite the typical clarity of textbook presentations (Macmillan, 2000).
Ethical issues in Biological Psychology
One consequence of our growing appreciation of the potentially rich cognitive and emotional lives of animal species other than humans has been a concern about the ethical position of their use in experimental or observational studies. Psychology as a discipline has also become much more concerned to treat human participants in an ethical manner, ruling out work such as the Little Albert study that we discussed earlier.
What does it mean to behave in a morally or ethically good way? Philosophers have debated this subject since the dawn of recorded human history. One rational approach, which synthesises some of the different possible approaches discussed by the psychologist Steven Pinker, is to ‘only do to others what you would be happy to have done to you’ (Pinker, 2021, p. 66 et seq.). It is rational in the sense that it combines personal self-interest in a social environment but also survives the change in perspective that comes with being the giver or receiver of a particular action. It is also the basis of the moral codes promoted by many of the world’s religions. This type of philosophical approach is consistent with the Code of Human Research Ethics put forward by the British Psychological Society (BPS Code of Human Research Ethics – The British Psychological Society, 2021), which is based on four fundamental principles:
1. Respect for the autonomy and dignity of persons
2. Scientific value
3. Social responsibility
4. Maximising benefit and minimising harm
The code goes on to explain how these principles underpin the ways in which experiments are designed, participants treated, and results disseminated. The last three principles also apply in a relatively straightforward way to research that involves non-human animal species. Applying the first principle is more complex and partly dependent on the moral status that humans give to non-humans.
Philosophers take a variety of positions on morality that are often contradictory. However, two important approaches are utilitarianism, which developed from the writings of the English philosopher Jeremy Bentham in the nineteenth century, and a rights-based approach. Bentham, using concepts developed by the French philosopher Helvétius a century earlier, suggested that good actions are ones which maximise happiness and minimise distress: the ‘greatest felicity’ principle. While he recognised that non-human animals might experience pain and distress, he also regarded that as of lesser importance than the wellbeing of humans.
In the twentieth century writers such as Peter Singer and Richard Ryder rejected this ‘human-centred’ approach, terming it ‘speciesism’, by analogy with racism. But nevertheless they accepted, with this modification, the utilitarian approach of Bentham. Within this framework some use of non-human animals may be ethically justified, though every effort must be made to enhance benefits and especially to reduce any negative impact on the lives of the animals used during the course of the studies.
In contrast, Tom Regan rejected the view that benefits and dis-benefits could be added together using some form of utilitarian arithmetic to decide whether an action was acceptable or not. Instead, he asserted that at least some non-human animals have the same rights as a human to life and freedom from distress. They are, to use his phrase, ‘subjects of a life’. Regan discussed a number of attributes that might help in deciding whether a particular group of animals meets this criterion. Regan’s approach would rule out the use of many species of animal in science, agriculture and many other contexts.
The first UK law designed to protect non-human animals used in scientific research was the Cruelty to Animals Act 1876. It was replaced by the Animals (Scientific Procedures) Act 1986 in line with the EU Directive 86/609/EEC (now replaced by Directive 2010/63/EU). The legislation takes a broadly utilitarian approach to judging whether a proposed set of experiments is, or is not, ethical. In other words, there is an attempt to weigh up the potential benefits of the work, in terms of advancing scientific knowledge or the clinical treatment of human and animal disease, against the dis-benefits in terms of impact on the welfare of the animals that will be used in that research. However, it incorporates elements of a more rights-based approach, in that experiments involving the use of great apes (chimpanzees, gorillas and their relatives) are prohibited. With the exception of cephalopods, the current legislation does not cover invertebrates. However, there is increasing evidence that sentience may be more widely distributed than previously appreciated, especially amongst decapods (crabs and lobsters), so it would not be surprising if the legal definition of a protected species were widened in the future.
At a practical level William Russell and Rex Burch suggested (Russell & Burch, 1959) that there are three important considerations when designing an experiment that might, potentially, use non-humans:
1. Replacement – could the use of non-human animals be replaced either by the ethical use of human participants or by the use of a non-animal technique – perhaps based on cultured cells?
2. Reduction – could the use of animals be minimised in such a way that the results would remain statistically valid? (A sketch of how this is assessed follows this list.)
3. Refinement – could the experimental programme be modified in a way that eliminated, or at least minimised, any pain or distress, perhaps by using positive rewards (e.g. palatable food) rather than punishment (e.g. electric shock) to motivate performance in a particular test situation?
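In practice, reduction is often addressed with a prospective power analysis: estimating the smallest group size that still gives a reasonable chance of detecting an effect of the expected size. A minimal sketch, assuming a simple two-group comparison and using the statsmodels library (the effect size, significance level and power below are conventional, illustrative choices):

```python
from statsmodels.stats.power import TTestIndPower

# Assumptions for illustration: a large expected effect (Cohen's d = 0.8),
# the conventional 5% significance level, and 80% power.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.8, alpha=0.05, power=0.8)
print(f"approximately {n_per_group:.0f} animals per group")   # ~26 per group
```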
In the UK, these practical ideas have to be addressed before experimental work is approved in the context of a broader cost-benefit analysis. The proposal has then to be considered by an ethics committee which includes lay members before it receives approval from the relevant government department.
Key Takeaways
• There was an increasing realisation from the period 200 BCE to 1700 CE that the brain was critical for overt behaviour, cognition and emotion. The contributions of Galen, Ibn Sina and Descartes are of particular note.
• During the 17th – 19th centuries it became clear that there was a considerable degree of localisation of function within the nervous system. Older fluid-based notions of information flow within the nervous system were gradually replaced by an understanding that it depended on electrical and chemical phenomena.
• During the first 70 years of the 20th century the mechanisms underlying nerve impulse conduction and synaptic transmission were clarified. However, the strong influence of the positivist movement within psychology and ethology devalued the investigation of cognition and emotion.
• In the latter part of the 20th and early part of the 21st century, there has been increasing integration of neuroscientific techniques into behavioural studies. During this period it was also accepted that aspects of cognition and emotion that cannot be directly observed are proper subjects for study in biological psychology.
• The investigation of any aspect of behaviour may involve the study of (i) causation and mechanisms, (ii) development, (iii) evolution and (iv) function or current utility. These are Tinbergen’s ‘four questions’.
• In the study of causation and development a mix of correlational and experimental strategies can be used. They each have their own advantages and pitfalls.
• Ethical considerations are important in any study of biological psychology. They are based on the strong principle-based ethical code shared by all psychologists, and on the specific principles of replacement, reduction and refinement.
Postscript: three basic concepts from biology
Cells
The first descriptions of cells were made in the middle of the seventeenth century by Antonie van Leeuwenhoek and Robert Hooke. Only in the middle of the nineteenth century was it accepted that all living organisms are made up of one or more cells as their basic organisational unit. In 1855 Rudolf Virchow proposed that all cells arise from pre-existing cells by cell division. Each cell contains the different types of organelles that allow it to function. Critical amongst these, at least in all multicellular organisms, is the nucleus, which contains the chromosomes made up of DNA which, in its detailed chemical structure, encodes all the information that the cell needs to reproduce. The cell also has mitochondria that provide it with energy and a membrane that bounds it, as well as many other types of organelle.
Cells are grouped into tissues, composed of one or more different cell types, and different tissues are combined together to form organs, such as heart, lungs and brain, formed of many different types of tissue.
Inheritance
When a cell divides, the daughter cell needs the necessary information to duplicate the functions of its parent. The plant breeding experiments of Gregor Mendel in the mid-1800s, when combined with the detailed studies of Thomas Hunt Morgan in the early twentieth century on the ways in which chromosomes replicate during cell division, suggested that DNA, which made up much of their substance, must be the molecule that had that function. In the 1950s James Watson, Francis Crick, Rosalind Franklin and others showed that the ordering of the four nucleotide bases along the double-helical chain that makes up DNA provided a code which could be translated into the ordering of amino acids in proteins. Proteins are fundamental to cell function – they act as enzymes, catalysing the chemical transformations that build a cell’s structure and generate energy, and they provide the mechanisms that allow communication from one cell to another within complex organisms made of potentially billions of cells. A change in a specific base within the DNA sequence is one form of mutation (a so-called single nucleotide polymorphism – SNP) that can lead to a change in the structure and functioning of a protein. Larger scale mutations may involve the loss or duplication of parts of a chromosome and are often associated with more substantial changes in body structure, functioning or behaviour.
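The way in which the ordering of bases specifies a protein, and how a single-nucleotide change can alter it, can be illustrated with a toy translation. The codon table below is deliberately tiny (the real genetic code has 64 entries) and the sequences are invented, although the Glu-to-Val substitution shown is the same kind of change as the one underlying sickle-cell haemoglobin.

```python
# A tiny, partial codon table (DNA coding strand); the real table has 64 entries.
codon_table = {
    'ATG': 'Met', 'GAG': 'Glu', 'GTG': 'Val', 'AAA': 'Lys', 'TAA': 'STOP',
}

def translate(dna):
    """Read a DNA sequence three bases at a time and return the amino acid chain."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = codon_table.get(dna[i:i + 3], '?')
        if amino_acid == 'STOP':
            break
        protein.append(amino_acid)
    return protein

original = 'ATGGAGAAATAA'   # Met-Glu-Lys
mutant   = 'ATGGTGAAATAA'   # a single A->T change in the second codon: Met-Val-Lys
print(translate(original), translate(mutant))
```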
Evolution by natural selection
The idea that one type (or species) of animal can give rise to another as a result of gradual change from one generation to another recurs in the writings of Islamic, Chinese and Greek philosophers, although some Greek writers – such as Plato and his pupil Aristotle – took the opposite view, holding that the form of individual species was fixed and unchanging. But it was Charles Darwin who gave the first clear account of the role of natural selection in evolutionary change. His theory proposed three essential postulates:
• that individuals differ from one to another;
• that this variation may be inherited;
• and that some individuals, as a result of this variation, leave greater numbers of offspring.
The consequence is that the characteristics of more successful individuals will become more common in the population. In this way a population, perhaps subject to different environmental pressures over the areas in which it is found, may gradually split into two, and evolve mechanisms that make cross-breeding less likely so as to preserve those inherited differences which adapt them to their differing environments.
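These three postulates amount to an algorithm, and a toy simulation shows how selection follows from them. In the sketch below a single heritable trait makes individuals slightly more likely to leave offspring; all of the parameter values are arbitrary choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
population = rng.normal(0.0, 1.0, size=200)      # postulate 1: individuals vary

for generation in range(30):
    # Postulate 3: higher trait values give slightly more offspring on average.
    fitness = np.exp(0.2 * population)
    parents = rng.choice(population, size=200, p=fitness / fitness.sum())
    # Postulate 2: offspring inherit the parental value, plus a small 'mutation'.
    population = parents + rng.normal(0.0, 0.1, size=200)

print(f"mean trait value after selection: {population.mean():.2f}")  # drifts upwards
```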
Darwin’s ideas were very controversial and opposed by many public figures and religious leaders as well as by other scientists. Ironically, Virchow – who had been on the right side of the argument in relation to the way in which new cells arise by division – was one of Darwin’s most vociferous opponents in Germany. He regarded Darwin’s ideas as an attack on the moral basis of society. Outstanding scientists can be right in one area, but hopelessly wrong in others!
Plan for the remainder of this textbook
The following chapters of this text cover the broad topic of biological psychology. They begin with an account of the structure of the brain and nervous system, and the functioning of the cellular units that make it up. The discussion then moves to the way in which the nervous system processes and interprets diverse types of sensory information, and the motor mechanisms that generate an observable behavioural output. In addition to a focus on the understanding of ‘normal’ behaviour, you will read about the extent to which clinical conditions, such as anxiety and depression, may be accompanied by specific changes in brain function and the way in which clinically useful drugs may affect both the function of individual neurones and larger scale brain circuits. Future editions of this book will also include chapters that discuss topics such as motivation, emotion and the neural mechanisms that underpin learning and memory.
References
Bateson, P., & Laland, K. N. (2013). Tinbergen’s four questions: An appreciation and an update. Trends in Ecology & Evolution, 28(12), 712–718. https://doi.org/10.1016/j.tree.2013.09.013.
Bergström, A., McCarthy, S. A., Hui, R., Almarri, M. A., Ayub, Q., Danecek, P., Chen, Y., Felkel, S., Hallast, P., Kamm, J., Blanché, H., Deleuze, J.-F., Cann, H., Mallick, S., Reich, D., Sandhu, M. S., Skoglund, P., Scally, A., Xue, Y., … Tyler-Smith, C. (2020). Insights into human genetic variation and population history from 929 diverse genomes. Science, 367(6484), eaay5012. https://doi.org/10.1126/science.aay5012
Berridge, K. C. (2019). Affective valence in the brain: Modules or modes? Nature Reviews Neuroscience, 20(4), 225–234. https://doi.org/10.1038/s41583-019-0122-8.
Brebner, L. S., Ziminski, J. J., Margetts-Smith, G., Sieburg, M. C., Reeve, H. M., Nowotny, T., Hirrlinger, J., Heintz, T. G., Lagnado, L., Kato, S., Kobayashi, K., Ramsey, L. A., Hall, C. N., Crombag, H. S., & Koya, E. (2020). The Emergence of a Stable Neuronal Ensemble from a Wider Pool of Activated Neurons in the Dorsal Medial Prefrontal Cortex during Appetitive Learning in Mice. Journal of Neuroscience, 40(2), 395–410. https://doi.org/10.1523/JNEUROSCI.1496-19.2019.
Cook, M., & Mineka, S. (1990). Selective associations in the observational conditioning of fear in rhesus monkeys. Journal of Experimental Psychology. Animal Behavior Processes, 16(4), 372–389.
Darwin, C. (1872). The Expression of the Emotions in Man and Animals. John Murray.
Descartes, R. (1998). Descartes: The World and Other Writings (S. Gaukroger, Ed.). Cambridge University Press. https://doi.org/10.1017/CBO9780511605727
Dickinson, A. (1980). Contemporary Animal Learning Theory. Cambridge University Press.
Engh, A. L., Hoffmeier, R. R., Cheney, D. L., & Seyfarth, R. M. (2006). Who, me? Can baboons infer the target of vocalizations? Animal Behaviour, 71(2), 381–387. https://doi.org/10.1016/j.anbehav.2005.05.009
Garfinkel, S. N., & Critchley, H. D. (2016). Threat and the Body: How the Heart Supports Fear Processing. Trends in Cognitive Sciences, 20(1), 34–46. https://doi.org/10.1016/j.tics.2015.10.005
Goodall, J. (2017, January 20). Remembering My Mentor: Robert Hinde. Jane Goodall’s Good for All News. https://news.janegoodall.org/2017/01/20/remembering-my-mentor-robert-hinde/
Gould, S. J. (2006). The Mismeasure of Man. W. W. Norton & Company.
Hebb, D. O. (1949). The Organization of Behavior. Chapman & Hall.
Heydari, M., Hashem Hashempur, M., & Zargaran, A. (2013). Medicinal aspects of opium as described in Avicenna’s Canon of Medicine. Acta Medico-Historica Adriatica : AMHA, 11(1), 101–112.
Jack, R. E., Garrod, O. G. B., & Schyns, P. G. (2014). Dynamic Facial Expressions of Emotion Transmit an Evolving Hierarchy of Signals over Time. Current Biology, 24(2), 187–192. https://doi.org/10.1016/j.cub.2013.11.064
James, W. (1884). What is an emotion? Mind, os-IX(34), 188–205.
Jolly, A. (1966). Lemur Social Behavior and Primate Intelligence. Science, 153(3735), 501–506. https://doi.org/10.1126/science.153.3735.501
LeDoux, J. (2012). Rethinking the emotional brain. Neuron, 73(4), 653–676. https://doi.org/10.1016/j.neuron.2012.02.004
Lincoln, G. A., Guinness, F., & Short, R. V. (1972). The way in which testosterone controls the social and sexual behavior of the red deer stag (Cervus elaphus). Hormones and Behavior, 3(4), 375–396. https://doi.org/10.1016/0018-506X(72)90027-X
Macmillan, M. (2000). Restoring Phineas Gage: A 150th Retrospective. Journal of the History of the Neurosciences, 9(1), 46–66. https://doi.org/10.1076/0964-704X(200004)9:1;1-2;FT046
Magee, B. (1987). The great philosophers: An introduction to Western philosophy. Oxford University Press.
Marsh, H. (2014). Do No Harm: Stories of Life, Death and Brain Surgery. Hachette UK.
Narayan, E. J., Cockrem, J. F., & Hero, J.-M. (2013). Sight of a Predator Induces a Corticosterone Stress Response and Generates Fear in an Amphibian. PLOS ONE, 8(8), e73564. https://doi.org/10.1371/journal.pone.0073564
Nord, C. L., Dalmaijer, E. S., Armstrong, T., Baker, K., & Dalgleish, T. (2021). A Causal Role for Gastric Rhythm in Human Disgust Avoidance. Current Biology, 31(3), 629-634.e3. https://doi.org/10.1016/j.cub.2020.10.087
Nottebohm, F. (2002). Neuronal replacement in adult brain. Brain Research Bulletin, 57(6), 737–749. https://doi.org/10.1016/S0361-9230(02)00750-5
Penn, D. J., & Számadó, S. (2020). The Handicap Principle: How an erroneous hypothesis became a scientific principle. Biological Reviews, 95(1), 267–290. https://doi.org/10.1111/brv.12563
Riebel, K., Lachlan, R. F., & Slater, P. J. B. (2015). Learning and Cultural Transmission in Chaffinch Song. In M. Naguib, H. J. Brockmann, J. C. Mitani, L. W. Simmons, L. Barrett, S. Healy, & P. J. B. Slater (Eds.), Advances in the Study of Behavior (Vol. 47, pp. 181–227). Academic Press. https://doi.org/10.1016/bs.asb.2015.01.001
Russell, W. M. S., Burch, R. L., & Hume, C. W. (1992). The Principles of Humane Experimental Technique. Universities Federation for Animal Welfare.
Skinner, B. F. (1938). The Behavior of Organisms: An Experimental Analysis. Appleton-Century-Crofts.
Skinner, B. F. (1988). Preface to The Behavior of Organisms. Journal of the Experimental Analysis of Behavior, 50(2), 355–358. https://doi.org/10.1901/jeab.1988.50-355
Smith, J. M. (1991). Theories of sexual selection. Trends in Ecology & Evolution, 6(5), 146–151. https://doi.org/10.1016/0169-5347(91)90055-3
Swanson, L. W., Newman, E., Araque, A., & Dubinsky, J. M. (2017). The Beautiful Brain: The Drawings of Santiago Ramon y Cajal. Abrams.
The Deer Year | Isle of Rum Red Deer Project. (n.d.). Retrieved 30 August 2022, from https://rumdeer.bio.ed.ac.uk/deer-year
Tinbergen, N. (1951). The Study of Instinct. Oxford University Press.
Tinbergen, N. (1963). On aims and methods of Ethology. Zeitschrift Für Tierpsychologie, 20(4), 410–433. https://doi.org/10.1111/j.1439-0310.1963.tb01161.x
Vallortigara, G., & Rogers, L. J. (2020). A function for the bicameral mind. Cortex, 124, 274–285. https://doi.org/10.1016/j.cortex.2019.11.018
Watson, J. B. (1913). Psychology as the behaviorist views it. Psychological Review, 20(2), 158–177. https://doi.org/10.1037/h0074428
Watson, J. B., & Rayner, R. (1920). Conditioned emotional reactions. Journal of Experimental Psychology, 3(1), 1–14. https://doi.org/10.1037/h0069608
Whitaker, H., & Jarema, G. (2017). The split between Gall and Spurzheim (1813–1818). Journal of the History of the Neurosciences, 26(2), 216–223. https://doi.org/10.1080/0964704X.2016.1204807
Yoshihara, M., & Yoshihara, M. (2018). ‘Necessary and sufficient’ in biology is not necessarily necessary – confusions and erroneous conclusions resulting from misapplied logic in the field of biology, especially neuroscience. Journal of Neurogenetics, 32(2), 53–64. https://doi.org/10.1080/01677063.2018.1468443
About the Author
Pete Clifton is Professor of Psychology at the University of Sussex. He was the founding Head of the School of Psychology, holding that position from June 2009 to July 2014. He is a Chartered Psychologist and Fellow of the British Psychological Society. | textbooks/socialsci/Psychology/Biological_Psychology/Introduction_to_Biological_Psychology_(Hall_Ed.)/01%3A_Background_to_Biological_Psychology/1.01%3A_Introduction_to_biological_psychology.txt |
In the next few chapters, we’ll explore how information is received and processed by the brain, leading to the generation of behaviour. The first step in this process is to learn how the nervous system is organised, in terms of the cells and structures that it contains.
02: Organisation of the nervous system
Learning Objectives
By the end of this chapter, you will:
• understand the organisation and main components of the nervous system
• have a sense of how information flows through the nervous system.
What does the brain do?
All our thoughts and actions arise from biological structures and processes that work together to enable us to exist successfully and interact with the world, performing behaviours that keep us fed, watered and safe. In this chapter we will learn about the different parts of the nervous system that orchestrate these behaviours.
First, however, it is worth considering, on a very general level, what our brains and the nervous system do. They take in information from the outside world, and our bodies, and work out what is the best thing to do next. They then cause changes in our bodies to enable that thing to happen, whether that’s running away from a lion, catching a ball, or going to sleep.
In this chapter, we are going to explore the structures, circuits and cells of the nervous system, in order to understand broadly how information flows into, through, and from the brain. You will learn a bit about how these structures and cells generate behaviours and internal responses that allow us to successfully adapt to and interact with what’s going on around and inside us. You’ll learn much more about this in following chapters of the book.
The nervous system as a computer
The nervous system is the network of neurons and supporting cells, termed glia, that do this job of detecting something, transmitting that information, integrating it with other information, and sending an instruction to other parts of our body to do something about it. In other words, our nervous system is like a computer. It takes an input, performs a computation on that input (using the programs running on that computer – these determine what computation is performed), and generates an output. In fact, every part of the nervous system does this same ‘input – computation – output’ job, but using different inputs, running different programs and generating different outputs. The whole nervous system might detect visual information that a lion is coming, compute that it would be a good idea to run away, and generate patterns of muscle contractions in your legs to make you run. On a microscopic scale, a single nerve cell, or neuron, might receive inputs carrying information about light falling on your retina in different locations, and integrate that information to conclude and output the information that the light falling on the retina was forming a vertical line. The program run by a given cell or structure in the nervous system is determined by how that cell or structure is connected to other cells or structures, and the biological rules that govern those connections and how they change over time. We’ll learn much more about that throughout the next few chapters of this book.
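The ‘input – computation – output’ idea can be made concrete with a toy model neuron. In the sketch below, a model cell receives activity from a 3×3 patch of ‘retinal’ inputs, weights the central column positively and the flanks negatively, sums the result, and gives an all-or-nothing output only when the input looks like a vertical line. The weights and threshold are invented for illustration and are far simpler than anything found in a real visual system.

```python
import numpy as np

# Input: activity of a 3x3 patch of 'retinal' cells (1 = light, 0 = dark).
vertical_line = np.array([[0, 1, 0],
                          [0, 1, 0],
                          [0, 1, 0]])

# Computation: weight the central column positively and the flanks negatively.
weights = np.array([[-1, 1, -1],
                    [-1, 1, -1],
                    [-1, 1, -1]])

def model_neuron(patch, threshold=2):
    """Weighted sum of the inputs followed by an all-or-nothing output."""
    drive = np.sum(weights * patch)
    return 1 if drive > threshold else 0      # output: 'fire' or stay silent

print(model_neuron(vertical_line))      # 1 - responds to a vertical line
print(model_neuron(vertical_line.T))    # 0 - ignores a horizontal line
```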
First of all though, we need to learn our way around the nervous system. This chapter gives you an introduction to the anatomy of the nervous system. It should help you understand the organisation of the nervous system as well as introduce the function of some of its major components. You’ll learn much more about how these structures perform their functions in later chapters.
Parts of the nervous system
The nervous system is made up of the central nervous system and the peripheral nervous system (Figure 2.1).
The central nervous system (CNS) comprises the brain and the spinal cord, while the peripheral nervous system (PNS) is the network of neurons and nerves that lie outside these two structures and connect the CNS with the rest of the body. It includes most of the cranial nerves, which connect to the brain, as well as the spinal nerves that take information to and from the spinal cord. The PNS provides the input to the CNS, which computes what to do with that information and sends outputs back to the body, via the PNS. Symmetry around the midline is a general feature of nervous system organisation.
The peripheral nervous system
The PNS can be subdivided into two parts: the somatic and autonomic nervous systems. The autonomic nervous system can then be subdivided into three further divisions: the sympathetic, parasympathetic and enteric nervous systems.
The somatic nervous system
The somatic nervous system deals with interactions with the external environment: sensing the outside world via sensory neurons, and sending signals via motor neurons to control skeletal muscles to generate movements and behaviours that interact with that world. Many of these behaviours are voluntary, and are initiated by complex decision-making processes in the brain. You hear a voice calling, you interpret the language, and turn towards the sound of your name. The somatic nervous system can also generate involuntary movements, however, via reflexes, in which a sensory input activates a motor response without voluntary control. The simplest of these reflexes involve only a single sensory neuron activating a single motor neuron. An example is the muscle stretch reflex, in which sensory neurons detect stretch in a muscle, causing motor neurons to activate the same muscle to contract it more and counter the stretch. So if you lean to one side, stretching core postural muscles, this reflex contracts those muscles, keeping you stable. Or if someone adds a heavy weight to something you’re carrying, stretching your arm muscles, those muscles then contract so you don’t drop the load. Even these simplest reflexes involve information transfer from PNS to CNS, as the connection, or synapse, between these two neurons occurs in the spinal cord – part of the CNS.
Indeed, there are no neurons that exist wholly in the peripheral somatic nervous system: somatic sensory neurons synapse for the first time in the CNS, while somatic motor neurons’ cell bodies are found in the CNS, with their axons leaving the CNS to innervate muscles.
These afferents (carrying sensory information inwards to the CNS) and efferents (carrying motor information outwards from the CNS) form cranial nerves and spinal nerves. (Nerves are just bundles of axons – the long projections that each neuron has to carry electrical impulses.) Cranial nerves innervate the head, carrying sensory information (including smell, taste and hearing) and motor commands for the facial muscles directly to and from the brain. Spinal nerves carry information to and from the skin and skeletal muscles to the spinal cord. There are 31 pairs of spinal nerves, which carry sensory and motor information from specific parts of the body into the spinal cord. The region of skin innervated by afferents from a given spinal nerve is called a dermatome, while the muscles contacted by efferents from a single nerve are called a myotome.
The sensory and motor parts of the nerve split apart at the spinal cord. Sensory afferents enter the dorsal root of the spinal cord, their cell bodies forming the dorsal root ganglion just outside the spinal cord. Motor neurons exit the spinal cord from the ventral root (Figure 2.3) before synapsing at neuromuscular junctions on skeletal muscle where they release acetylcholine to initiate muscle contraction (see the section Interacting with the world).
The autonomic nervous system
In contrast with the more voluntary control mediated by the somatic nervous system, the autonomic nervous system mediates interactions with the body’s internal environment, for example regulating heart rate. These interactions are broadly involuntary reflexes, though modulated by the brain, and some of this regulation can be consciously done, for example people can train themselves to exert control over their heart rate. As in the somatic nervous system, sensory neurons provide information about the internal organs to the CNS, and motor neurons produce effects on the internal organs, often by modulating the tone of smooth muscle, for example to change blood vessel diameters. Outside the autonomic nervous system, non-neuronal pathways can also send information about the internal body state to the brain. For example, neurons in a brain region called the hypothalamus can detect increases in blood temperature, activating brain circuits that can then cause autonomic nervous system activation and increase blood vessel dilation in the skin as well as sweat gland activation.
The sympathetic, parasympathetic and enteric nervous systems
Now let’s consider the different divisions of the autonomic nervous system:
The enteric nervous system is a large mesh of neurons which is embedded in the wall of the gastrointestinal system, from the oesophagus to the anus, and regulates motility and secretion of hormones. In humans, it contains around 500 million neurons, 0.5% of the number found in the brain and 5 times more than are found in the spinal cord. It can function without input from the brain, though can also be regulated by descending input.
The sympathetic and parasympathetic divisions of the autonomic nervous system are often thought of as the ‘fight-and-flight’ and ‘rest-and-digest’ systems, respectively, as they generate motor responses that broadly promote action or relaxation. For example, sympathetic activation increases heart rate, and increases blood flow to the brain, heart and skeletal muscles, priming the body for action. Conversely, activation of the parasympathetic nervous system reduces heart rate and blood flow to the brain, heart and skeletal muscles, instead directing blood flow to the gut and stimulating digestion.
While it is useful to think of the distinction between ‘fight and flight’ and ‘rest and digest’ functions of the two systems, the body doesn’t switch in a binary manner between one or the other being active but rather the body’s state depends on the balance of activity of the two systems at any one time. Furthermore, this balance is not uniform across the body, as is apparent from the need to independently regulate different organs, for example to control heart rate and bladder release.
The sympathetic preganglionic neurons leave the thoracic and lumbar spinal cord to synapse in either the sympathetic chain ganglia just outside the spinal cord, or in the prevertebral ganglia including the solar plexus or mesenteric ganglia within the abdomen. Preganglionic neurons use acetylcholine as their neurotransmitter. Postganglionic neurons – the motor neurons of the sympathetic division – use noradrenaline as their neurotransmitter, and often travel along the same nerves as the somatic nervous system [NB: Noradrenaline is called norepinephrine in the US, and adrenaline is called epinephrine].
Parasympathetic neurons leave the CNS via cranial nerves or via sacral regions of the spinal cord. These neurons synapse in ganglia that are generally very close to the organs to be contacted, so parasympathetic preganglionic neurons are much longer than the postganglionic neurons. The vast majority of parasympathetic fibres form the vagus nerve, which innervates most of the organs in the thorax and abdomen. Both pre and post-ganglionic parasympathetic neurons use acetylcholine as a neurotransmitter.
Key Takeaways: Peripheral Nervous System
• The PNS delivers sensory information to the CNS and sends instructions from the central nervous system to control motor outputs
• The PNS is made up of the somatic and autonomic nervous systems, dealing with interactions with the external and internal environments, respectively
• The autonomic nervous system comprises the enteric nervous system in the gut and the sympathetic, and parasympathetic divisions which have broadly opposing effects on our internal organs
• Sensory neurons synapse first in the CNS. Somatic motor neurons exit the CNS and release acetylcholine onto skeletal muscles, whereas autonomic neurons synapse onto motor neurons at ganglia outside the CNS
• Acetylcholine is the neurotransmitter released by preganglionic neurons and parasympathetic motor neurons, while noradrenaline is released by sympathetic motor neurons.
The Central Nervous System
Compass directions
The CNS comprises the brain and spinal cord. They, particularly the brain, are complex 3D structures, so before we explore them, it’s useful to consider the language we can use to describe what exactly we are looking at and where different parts are located with respect to other regions.
We can look at the surface of the brain from different angles. In humans, if we look from the front, we are looking at the anterior surface, or from the back we are looking at the posterior surface. If we look at the top, we are looking at the superior surface or from below, the inferior surface. These words can also be used to describe relative positions of things within the brain too (e.g. visual cortex is posterior to auditory cortex).
To confuse matters, however, the front of the brain can also be referred to as rostral (meaning towards the nose), the back as caudal (meaning towards the tail), the top as dorsal (towards the back) and the bottom as ventral (towards the stomach).
In the brain these terms don’t really make sense – dorsal regions are towards the top, not towards the back of the head. They make much more sense in the spinal cord – dorsal spinal cord really is towards the back, not the top. The reason for the confusion is that humans walk upright so our brain is angled relative to the spinal cord.
In most animals (e.g. think of mice), the brain continues in a straight line from the spinal cord, so the dorsal brain is aligned with the back of the animal. In humans, however, the top of our head points in a different direction to our back. All in all, this means we have lots of words we can use to describe whether we’re looking at the front, back, top or bottom of the brain.
We don’t just want to look at the brain from the outside surfaces, though, but to see inside at the many structures within. To do that we can virtually or physically slice through it, creating sagittal, coronal or horizontal/transverse slices. In doing so, we can notice that symmetry is a general feature of CNS organisation: the left and right halves of the brain and spinal cord are symmetrical around a midline. We can describe structures’ locations with respect to this midline as being medial (closer to the midline) or lateral (closer to the side), as well as describing their anterior-posterior/rostral-caudal and superior-inferior/dorsal-ventral dimensions.
Fig 2.8. Anatomical slices allow us to visualise inside the brain
The spinal cord
The spinal cord can be divided into segments, each of which connects to a pair of sensory and motor nerves. Towards the head are 8 cervical segments, below which are 12 thoracic segments, 5 lumbar segments and 5 sacral segments (Figure 2.4).
The spinal cord carries afferent, somatosensory (touch) information up to the brain and efferent (motor) information to the muscles of the body. It comprises grey matter (neuronal cell bodies and short range connections) around a central canal, containing cerebrospinal fluid, surrounded by a number of white matter tracts containing myelinated and unmyelinated axons forming connections to other regions. Both the grey and white matter are organised. For example, the dorsal column of the white matter contains axons from somatosensory neurons whose cell bodies are located in the dorsal root ganglia, while the lateral corticospinal tract contains axons from motor neurons from the cerebral cortex which control voluntary movement of the limbs. The grey matter is where connections between different neurons form and contains cell bodies, dendrites and synapses as well as axons. Spinal cord grey matter can be divided into three ‘horns’ (Figure 2.9), the dorsal horn containing neurons carrying sensory information, the lateral horn largely containing sympathetic motor neurons, and the ventral horn containing cells conveying motor information.
The brain
The brain itself is made up of the brainstem, the cerebellum and the forebrain.
The brainstem
A lot of the volume of the brainstem contains white matter tracts carrying information up to the rest of the brain or down to the spinal cord, as well as to and from the cranial nerves that carry sensory and motor information for the face and neck. Within the medulla, the pyramids, so called for their shape, are prominent white matter bundles that carry descending motor axons to the spinal cord. On the ventral surface of the pons is a wide, bridge-like mass of transverse fibres (running perpendicular to the axis of the brainstem and spinal cord), from which the pons derives its name (pons is Latin for bridge). These fibres connect the brainstem to the cerebellum.
Nestled within the numerous white matter tracts passing through the brainstem are a number of grey matter nuclei (clusters of neuronal cell bodies). These nuclei include motor or sensory cranial nerve nuclei, containing cell bodies of neurons that project into or receive connections from the cranial nerves, as well as the dorsal column nuclei where many neurons carrying touch information from the spinal cord form their first connections. Also within the brainstem are nuclei that produce neuromodulators that are released over relatively large regions of forebrain and are involved in regulating arousal, attention, mood, movement, motivation and memory. Many of these neuromodulators are well known and will make several appearances elsewhere in this book: dopamine is produced by neurons in the ventral tegmental area and substantia nigra pars compacta of the midbrain, serotonin by neurons in the raphe nuclei that extend from the medulla to the midbrain, noradrenaline by neurons of the locus coeruleus and the medial reticular zone of the pons, and acetylcholine by neurons of the pedunculopontine nucleus in the pons (as well as in the basal forebrain). Other brainstem nuclei, particularly in the medulla, regulate key functions for sustaining life, including breathing, heart rate, swallowing and consciousness. Brainstem damage can therefore be life-threatening, and can occur due to brain swelling compressing the brainstem against the skull.
Cerebellum
The cerebellum, or ‘little brain’, lies inferior to the occipital and temporal lobes of the cerebral cortex, and posterior to the pons.
Its cells are organised in clear layers – that is, it has a laminar structure – with a distinct connectivity that has made it a very interesting structure for neuroscientists to study. It receives inputs via ‘mossy fibres’ from nuclei in the pons, which in turn receive information from wide areas of the cerebral cortex, containing sensory and other information. These mossy fibres then synapse onto granule cells – small neurons which are packed together to form the granule cell layer. The human brain contains 50 billion granule cells – about 3/4 of the total number of neurons in the brain. The axons of these granule cells rise vertically from the granule cell layer into the molecular layer, where each splits in two, sending branches in opposite directions to form a T shape. The axons of different granule cells are aligned parallel with each other, and are termed parallel fibres.
These parallel fibres synapse onto the highly branched, flat dendritic trees of Purkinje cells, the output cell of the cerebellum which sends connections to the deep cerebellar nuclei from which information is sent to the thalamus and onto the cerebral cortex. Purkinje cells also receive synaptic input from ‘climbing fibres’, axons of cells that originate in the medulla and carry information from across the brain, particularly information about ongoing motor processes.
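The connectivity just described can be summarised as a small directed graph. The sketch below is purely illustrative and not part of the original text: the region names follow the description above, many cell types and connections are omitted, and the helper function is hypothetical.

```python
# Illustrative sketch only: the cerebellar connectivity described above,
# represented as a directed graph of 'source -> targets'.
cerebellar_circuit = {
    "pontine nuclei (mossy fibres)": ["granule cells"],
    "granule cells (parallel fibres)": ["Purkinje cells"],
    "medulla (climbing fibres)": ["Purkinje cells"],
    "Purkinje cells": ["deep cerebellar nuclei"],
    "deep cerebellar nuclei": ["thalamus"],
    "thalamus": ["cerebral cortex"],
}

def downstream(circuit, start):
    """Follow the first listed connection from `start` until the chain ends."""
    path, current = [start], start
    while circuit.get(current):
        current = circuit[current][0]
        path.append(current)
    return path

print(" -> ".join(downstream(cerebellar_circuit, "granule cells (parallel fibres)")))
# granule cells (parallel fibres) -> Purkinje cells -> deep cerebellar nuclei
# -> thalamus -> cerebral cortex
```

Tracing the chain from the granule cells recovers the main feed-forward route out of the cerebellum described in the text.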
Neuroscientists have been able to interrogate and understand how this highly organised circuitry mediates some of the key functions of the cerebellum (see section X). The cerebellum is important for bringing together diverse sensory information and using this to guide motor behaviours, making it important for balance and motor learning, such as learning to ride a bicycle, or your fingers learning to play a new tune on the piano. However, while the sensory-motor functions of the cerebellum are the best understood, it also receives many different sorts of information from across the brain. Functional imaging studies and other experiments have demonstrated cerebellar involvement in processes as diverse as language comprehension, autobiographical memory and attention.
Forebrain
The forebrain comprises the diencephalon and, surrounding it, the cerebrum – two cerebral hemispheres, each containing an outer layer, the cerebral cortex, and subcortical structures such as the hippocampus, basal ganglia and amygdala. The two cerebral hemispheres are connected to each other via the corpus callosum, a very large white matter tract, with many other white matter tracts connecting different parts of the forebrain to each other.
Diencephalon
Extending from the midbrain, the diencephalon’s major components are the thalamus and the hypothalamus.
The thalamus is an information hub, relaying ascending and descending information from widespread brain areas. It is organised into functionally specialised nuclei which process information of certain modalities. For example, the dorsal lateral geniculate nucleus of the thalamus receives visual information from the optic nerve and sends projections to primary visual cortex, while the medial geniculate nucleus receives auditory information from the inferior colliculus and projects to auditory cortex. Rather than simply relaying information in one direction, however, a key feature of thalamic processing is that nuclei also receive descending information from the cortex, forming circuits termed thalamocortical loops (or corticothalamic loops). These thalamocortical loops are not limited to sensory processing: they also link higher order and motor areas with the thalamus, and can include additional structures in their circuitry.
For example, the anterior thalamus plays an important role in memory, receiving information from the hippocampus, the mammillary bodies of the hypothalamus and the cerebral cortex, and projecting to cingulate cortex. Thalamocortical loops that incorporate the striatum and other nuclei of the basal ganglia are also important for motor control and motivated behaviour, via the ventrolateral, mediodorsal and anterior thalamic nuclei. These loops, and others like them that we will hear about in other circuits, demonstrate that the ‘input-computation-output’ function of the nervous system is not simply in one direction – instead, outputs from a given region often feed back to the structures providing inputs to that region, as well as being sent on to ‘downstream’ brain areas.
The hypothalamus is located below the thalamus, above the pituitary gland. It consists of around 22 nuclei and is highly connected to the brainstem, the amygdala and the hippocampus. The hypothalamus is involved in regulation of many homeostatic processes, such as the control of eating and drinking, temperature regulation and circadian rhythms, as well as emotion and memory processing and sexual behaviour. Some hypothalamic nuclei are sexually dimorphic, being structurally and functionally different in males and females. The hypothalamus can effect changes on the body’s physiology both by its projections via the brainstem to the autonomic nervous system, and by regulating hormone release via its connections with the adjacent pituitary gland. It is also involved in motivated behaviours such as defensive freezing or flight behaviours.
Cerebral cortex
Richly folded in humans, to maximise its surface area, the cerebral cortex is the outermost layer of the forebrain.
These folds form characteristic sulci (grooves) and gyri (ridges), the largest of which separate the cerebral cortex into 4 lobes, the frontal, temporal, parietal and occipital lobes. Most cerebral cortex is neocortex (new cortex) with 6 layers of neurons, containing different densities and types of neurons. Layer 1 has very few cell bodies and mostly contains the tips of dendrites and axons. Layers 2 and 3 contain cell bodies of neurons that receive and send projections to nearby cortical regions. Ascending inputs to the cerebral cortex from the thalamus arrive in layer 4, while cells that send descending projections to other brain areas are found in layers 5 and 6. These layers are different thicknesses across the cortex, varying with the function of that area. Sensory cortices receive lots of afferent inputs so have a thick layer 4, while motor regions send a lot of efferents to downstream regions, so have thick layers 5 and 6. There is more connectivity vertically through these layers than horizontally, so neurons in the same vertical ‘column’ of cortex tend to have the same response properties (i.e. they are all activated by the same sort of stimulus).
By studying the cytoarchitecture, or organisation of the cell layers across the cortex, in 1909 a German anatomist called Korbinian Brodmann divided the cerebral cortex into 52 different areas, now called Brodmann areas. Many of these have subsequently been subdivided into smaller regions. The different cellular organisation of these regions indicates differences in the circuitry and information processing within the region. Indeed many Brodmann areas have been shown to correspond with different functional specialisations. Area 17 is primary visual cortex, for example, and area 4 is primary motor cortex. Generally the functions of different cortical regions can be categorised as being sensory, motor or associative. Sensory information first enters primary sensory cortices, with further processing occurring in secondary sensory areas. Where multimodal information is processed within an area (e.g. both auditory and visual), that area is considered association cortex. These areas are important for a multitude of functions from understanding and generating language to spatial processing, abstract thinking, planning and memory. Conversely, primary motor cortex contains motor neurons that send their axons to the spinal cord to execute voluntary movements, while secondary or premotor areas project to primary motor cortex and help select or coordinate movements.
These functional areas as defined by cytoarchitecture can be further functionally subdivided. For example, primary sensory cortices are topographically organised: adjacent parts of the skin are represented by adjacent bits of somatosensory cortex – somatotopic organisation, and adjacent regions of the retina are represented by adjacent parts of primary visual cortex – retinotopic organisation. These representations can be even further subdivided into columns processing different stimuli (orientation of visual stimuli, for example).
While most brain areas are structurally symmetrical, there is lateralisation of some functions that are subserved by the cerebral cortex. Firstly, as mentioned above, sensory input is generally processed on the side of the brain opposite to the side of the body where it is received (i.e. left somatosensory cortex processes stimuli applied to the right side of the body). Secondly, lesion studies reveal lateralisation in the function of information streams through each side of the brain. Lesions to the left hemisphere visual processing streams result in deficits in perceiving fine details, while lesions to the right hemisphere impair perception of the wider field of view, or ‘big picture’. Finally, language production and comprehension are typically localised to the left hemisphere, particularly in right-handed people.
Neurons in the cerebral cortex project to a multitude of different brain areas as well as to the spinal cord, but the most common projection target for cortical neurons is other cortical neurons, and most of these projections are to nearby neurons. 80% of intracortical projections are to neurons in the same area, while most connections between areas are also to nearby areas. Only 5% of intracortical connections are long range, to more distant cortical regions or across the corpus callosum (transcallosal), between the two hemispheres. These connections contribute to the formation of several large-scale brain networks that underpin different aspects of perceptual and cognitive function (see later).
Basal ganglia
The basal ganglia are a group of subcortical nuclei (i.e. they lie beneath the cerebral cortex). They comprise the dorsal striatum, which is made up of the caudate nucleus and the putamen; the ventral striatum, or nucleus accumbens; and the external and internal segments of the globus pallidus. Two other components of the basal ganglia circuitry are actually not in the cerebrum: the subthalamic nucleus is within the diencephalon and the substantia nigra is in the midbrain.
Information flows from widespread areas of the cerebral cortex, through the basal ganglia and thalamus, and back to cerebral cortex. These cortico-basal ganglia-thalamo-cortical loops are important for selecting motor actions, e.g. starting and stopping behaviours, and for aspects of motivated behaviour i.e. selecting actions based on whether they are likely to result in something good or bad happening to the individual. They include excitatory and inhibitory pathways, the balance of which is important for inhibiting or initiating motor outputs (see Interacting with the world).
Disruptions to this circuitry can cause an imbalance between these facilitatory and inhibitory effects, as is seen in a number of neurological conditions including Parkinson’s disease and Huntington’s disease, as well as schizophrenia, Tourette’s syndrome, obsessive-compulsive disorder and addiction (see Dysfunctions of the nervous system).
Hippocampus
The hippocampus is an important structure involved in episodic memory, spatial processing and contextual learning.
Fig. 2.19. The hippocampus
It has a distinctly different laminar structure to the cerebral cortex. It is allocortex, with fewer cell layers than neocortex, with a densely packed layer of pyramidal cell bodies, above and below which are layers with only dendritic and axonal processes and much sparser inhibitory neurons. It is formed of two interlinked U-shaped folds, the dentate gyrus and the hippocampus ‘proper’, curved together into an elegant 3D shape, like a seahorse (hippocampus) or a ram’s horn (Cornu ammonis) from which the hippocampal subfields CA1, CA2 and CA3 are named. The hippocampus receives most of its inputs from cortical or subcortical regions via the entorhinal cortex and most of its outputs are sent to cortical or subcortical regions via the subiculum. The fornix, another important output pathway, also connects the hippocampus, via CA3, to the mammillary bodies of the diencephalon.
Like the cerebellum, the hippocampus has a well-characterised neuronal circuitry that has provided useful insights into how it fulfils its function. From the entorhinal cortex, inputs via the perforant path synapse on granule cells in the dentate gyrus, which themselves send mossy fibres to CA3. CA3 pyramidal neurons send Schaffer collaterals (axon branches) to CA2 (which is small) and CA1, as well as recurrent collaterals – axon branches that synapse back onto CA3 cells. Changes, during learning, in the strength of connections in these subregions are thought to be important for particular aspects of memory formation and retrieval, particularly the ability to remember more of an event or stimulus when exposed to only part of it (pattern completion), and to remember events or stimuli as distinct from each other (pattern separation).
Damage to the hippocampus produces memory deficits and occurs early in Alzheimer’s disease. Selective hippocampal damage and associated amnesia can also occur when the brain is deprived of oxygen, for example during birth. Because of its recurrent collateral connectivity, whereby excitatory neurons can excite themselves, the hippocampus is also a common focus of epileptic activity, where neuronal circuits are over activated, producing seizures.
Amygdala
The amygdala, named for its almond shape, sits adjacent to the hippocampus beneath the cerebral cortex within the temporal lobe (see Fig. 2.19).
It is important for processing of emotions and for the impact of emotions on learning and has been shown to be particularly involved in fear learning. It is made up of a complex of different nuclei, including the basolateral, corticomedial and centromedial nuclei. It receives inputs from wide regions of sensory and prefrontal cortex and from the hippocampus, as well as visceral information from brainstem nuclei, making it able to integrate information about the state of the body with contextual information. Its efferents go to cerebral cortex, particularly prefrontal and cingulate cortex, hippocampus, ventral striatum, thalamus and hypothalamus. This connectivity allows it to produce emotional responses appropriate to a given context; for example, when a stimulus appears that is associated with punishment, a fear response can be produced by altering hormone release via the hypothalamus, triggering freezing behaviours and activation of the sympathetic nervous system via the brainstem.
People with lesions to the amygdala display a reduction in emotional behaviour and a placidness or ‘flatness of affect’, and show reduced learning about emotional or frightening stimuli or situations.
Key Takeaways: Central Nervous System
• The CNS is made up of the brain and spinal cord
• We can describe where we are in the CNS using the compass directions lateral & medial, anterior & posterior, superior & inferior, dorsal & ventral and rostral & caudal
• The spinal cord carries information between the brain and the periphery, as well as performing some information processing in its central grey matter
• The brainstem contains many white matter tracts, nuclei that relay motor and sensory information, and neurons that deal with automatic regulatory functions such as control of heart rate and breathing
• The cerebellum is a laminar structure with a distinct circuitry that supports sensory-motor learning
• The thalamus is an information hub that transfers information to the brain from the periphery and vice versa as well as participating in lots of thalamocortical loops with widespread cortical regions
• The hypothalamus contains lots of nuclei that have important roles in control of homeostasis such as thirst and appetite regulation and is also involved in memory
• The cerebral cortex has 6 layers which are different thicknesses in different functional regions. The cerebral cortex connects to subcortical regions but most often neurons connect to other cortical neurons
• The basal ganglia are subcortical nuclei that are important for initiating and selecting motor behaviours and for motivated behaviour
• The hippocampus is an important structure for memory and has a distinct circuitry that supports its role in forming associations
• The amygdala is intimately connected with the cortex, hippocampus and brainstem and is important for emotional learning and behaviours.
Non-neuronal brain structures
In addition to these various brain regions, a number of other structures of the brain’s anatomy are important to appreciate in order to understand the overall physiology of this organ.
Cerebrospinal fluid and meninges
Within the brain and spinal cord are a series of connected spaces containing cerebrospinal fluid (CSF). The central canal of the spinal cord extends into the brainstem before expanding in the region of the pons to form the fourth ventricle. At the top of the fourth ventricle another narrow channel, the cerebral aqueduct, connects to another broader chamber, the third ventricle, at the level of the diencephalon. From the third ventricle another two narrow channels connect to the large lateral ventricles that extend deep into the cerebrum. The CSF that fills the canals and ventricles is made by ependymal cells that line the ventricles in a specialised membrane structure called the choroid plexus. These ependymal cells surround capillaries of the vascular system and produce CSF by filtering the blood. CSF is similar in content to blood plasma, being mostly water but containing ions and glucose, though it has less protein content than plasma.
CSF flows from the fourth ventricle into the space between the membranes or meninges that cover the brain. There are three meninges: the pia is a delicate membrane that directly covers the brain and spinal cord. Above the pia is a fluid-filled space, then the arachnoid membrane, so called for its web-like strands that connect to the pia through this subarachnoid space. Above the arachnoid is the tough outer membrane – the dura – which supports large blood vessels that drain cerebral blood towards the heart. CSF flows from the ventricles into the subarachnoid space then circulates around the brain, before draining into veins in the dural sinuses. CSF functions as a shock absorber for the brain, cushioning the brain from damage during knocks to the head, as well as clearing metabolic waste products from the brain into the blood.
Vasculature
The brain is an energetically demanding organ. It is only 2% of the body’s mass, but uses 20% of its energy when the body is resting (i.e. when muscles are not active). It relies on a constant supply of oxygen and glucose in the blood to sustain neurons, with disruption of blood flow to the brain leading to a loss of consciousness within 10 seconds. To deliver a constant flow of oxygenated blood, the brain has a complex and tightly regulated vasculature that directs blood to the most active brain regions. Four arteries feed the brain with oxygenated blood, forming a circle – the circle of Willis – that ensures that a reduction of blood flow to one artery can be compensated for by redistribution of flow from the others. Several large arteries branch off the circle of Willis to perfuse different regions of the brain. Branches of these major arteries form smaller and smaller arteries and arterioles that pass through the subarachnoid space before diving into the brain, where they branch yet further to form a dense capillary network.
There are a mind-blowing 1 to 2 metres of capillaries in every cubic millimetre of brain tissue. These capillaries are less than 10 microns in diameter, though, so they only take up about 2% of the brain volume. This means that, in cerebral cortex, each neuron is only around 10-20 microns from its nearest capillary. This dense vascular network can therefore supply oxygen and glucose very close to active neurons.
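As a rough sanity check (not from the text), these figures hang together: using assumed values of 1.5 m of capillary per mm³ and a 4 µm capillary diameter, a back-of-the-envelope calculation gives a capillary volume fraction of about 2% and a maximum neuron-to-capillary distance of roughly 13 µm.

```python
import math

# Back-of-the-envelope check of the numbers above (illustrative values only).
length_density_mm_per_mm3 = 1500.0   # ~1.5 m of capillary per mm^3 of tissue
diameter_mm = 4e-3                   # ~4 micron capillary lumen (assumed)

# Fraction of tissue volume occupied by capillaries = length density x cross-sectional area
volume_fraction = length_density_mm_per_mm3 * math.pi * (diameter_mm / 2) ** 2
print(f"capillary volume fraction ~ {volume_fraction:.1%}")   # ~1.9%

# Treating the capillaries as a grid of parallel tubes, typical spacing ~ sqrt(1 / length density),
# so the furthest a cell can sit from a capillary is roughly half that spacing.
spacing_um = math.sqrt(1.0 / length_density_mm_per_mm3) * 1000
print(f"typical spacing ~ {spacing_um:.0f} um, max distance ~ {spacing_um / 2:.0f} um")  # ~26 um, ~13 um
```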
Specialised mechanisms allow the blood supply to be fine-tuned to the brain regions that need it most. Cells called smooth muscle cells or pericytes in blood vessel walls can dilate or constrict to regulate blood flowing through the vessel. Active neurons and astrocytes produce molecules that relax smooth muscle cells and pericytes on local arterioles and capillaries, dilating the vessels and increasing blood flow to these regions of increased activity. In fact this increase in blood flow usually supplies more oxygen than is needed, so that blood oxygen levels increase in active brain regions. This increase in blood oxygen gives rise to the BOLD (blood oxygen level dependent) signal that can be detected using magnetic resonance imaging and is often used as a surrogate for neuronal activity in experiments studying the function of different brain regions.
Another specialised feature of the brain’s vascular system is the blood-brain barrier (BBB). The endothelial cells lining blood vessels in the brain are very tightly joined together and express relatively few transporter proteins that allow molecules to be transported from the blood into the brain. This means that it is harder for molecules and cells to access the brain from the blood, protecting the brain from circulating toxins or immune cells.
The brain’s vasculature can be impacted in several neurological conditions, leading to alterations in brain function. In ischaemic or haemorrhagic stroke, there is a reduction in the blood supplied to the brain due to either a blockage or leakage in blood vessels feeding the brain. This reduces the oxygen available to the brain region fed by the lesioned vessel, damaging the neurons in that region and causing corresponding functional deficits. In other diseases there is a less severe decrease in blood flow. In Alzheimer’s disease there is a decrease in brain blood flow many years before symptoms develop. It is not yet known how this decrease in blood flow is linked to the development of Alzheimer’s disease, but a decreased ability to clear toxic proteins from the brain, an increase in BBB permeability, or a chronic lack of oxygen supply may all be factors.
Key Takeaways: Non-neuronal brain structures
• The ventricles are cavities within the brain that contain CSF
• The brain is surrounded by 3 layers of meninges
• CSF flows between the ventricles and the subarachnoid space within the meninges
• The brain’s blood vessels are specialised, to direct blood flow to active brain regions and to regulate the degree to which molecules and cells in the blood can access the brain.
Overall in this chapter, you have learnt general principles about information flow in the brain, as well as some of the major neuronal and non-neuronal structures that mediate this transfer of information. Next we will explore the signalling and non-signalling cells of the nervous system to understand how they support information flow and computation.
References
Iturria-Medina, Y., Sotero, R., Toussaint, P. et al. Early role of vascular dysregulation on late-onset Alzheimer’s disease based on multifactorial data-driven analysis. Nat Commun 7, 11934 (2016). https://doi.org/10.1038/ncomms11934
About the Author
Dr Catherine Hall is a member of the Sussex Neuroscience Steering Committee, the University Senate, convenes the core first year module “Psychobiology” and lectures on topics relating to basic neuroscience, neurovascular function and dementia.
Learning Objectives
By the end of this chapter, you will be aware of:
• the main cells in the nervous system
• what these cells look like
• what their functions are.
In each region of the central and peripheral nervous systems are specialised cells that perform or support the fundamental function of the nervous system – the detection of information about the world, integration with information about the internal body state and past experience, and the generation of an appropriate behaviour. These cells can broadly be classified as either neurons or glia.
Neurons are the cells that perform the signalling and information processing. They detect inputs, integrate information, and send signals to other cells, be they other neurons (forming neuronal circuits) or non-neural cells (such as muscles or endocrine cells), to produce a behavioural or physiological effect. Glia or glial cells play numerous supporting roles for the neurons. This support was originally thought to be structural – the term ‘glia’ is derived from the Greek for ‘glue’ – but is now appreciated to be highly complex, involving dynamic communication between glia, neurons and other cell types, and is able to modulate neuronal communication. In addition to neurons and glia, neural tissue contains a large number of vascular cells. As we saw in Chapter X, brain tissue is densely vascularised in order to provide sufficient metabolites for energy-hungry neurons to function correctly.
Neurons
The basic form of a neuron is shown in Fig 2.24. It has a cell body, or soma, with branching processes called dendrites and a thin process called an axon. The axon can also be branched, forming axon collaterals. We talked in the last chapter (Exploring the brain) about the general function of the brain being to take in information, perform a computation on that information to work out what to do next, then to produce an output. As mentioned above, this same ‘input-computation-output’ function is also performed by individual neurons.
The dendrites, or dendritic tree, are where most inputs to the cell are received. These inputs are integrated across the dendritic tree and soma before the cell ‘decides’ whether the inputs are strong enough to trigger an electrical output signal down the axon (the action potential, see Chapter 5: Neuronal transmission). The site of this decision is the start of the axon, where it leaves the soma (the end furthest from the terminals), termed the axon initial segment. The action potential travels along the axon to the axon terminal. The axon terminal is very close to, but not touching, a dendrite of another cell. This junction, including the tiny gap between the two cells, is specialised for passing messages between them and is called a synapse. At the synapse, action potentials cause release of a chemical messenger – a neurotransmitter – which transmits the signal across the gap to the next cell in the circuit.
Neuron morphology affects computation
All neurons have this basic morphology, but nevertheless come in a multitude of shapes and sizes.
Most neurons are multipolar neurons, with a branched dendritic tree and a single axon. Some neurons, particularly sensory neurons (e.g. in the retina), are bipolar having a single dendrite coming out from one end of the soma, and a single axon from the other (though these may be branched near their ends). Pseudounipolar neurons have a single process, classified as an axon, which receives inputs at one end and releases transmitter at the other end. These different shapes and sizes alter how neurons perform computations.
Because the job of a neuron is to add up all its inputs and decide whether to fire an output action potential, the number of these inputs and where they are located affects how this summation happens.
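A deliberately crude way to picture this is the textbook ‘weighted sum and threshold’ caricature of a neuron, sketched below with made-up numbers. Real dendritic integration is far richer, but the sketch captures the idea that both many weak inputs and a few strong ones can push a cell past its firing threshold.

```python
# A deliberately simplified sketch of neuronal summation: weighted inputs are
# added up and compared to a threshold (illustrative numbers, not physiology).
def neuron_fires(inputs, weights, threshold=1.0):
    total = sum(i * w for i, w in zip(inputs, weights))
    return total >= threshold

# Many weak inputs (e.g. single synapses) can sum to the same effect as
# one strong input (e.g. an input that makes many synapses onto the cell).
weak = neuron_fires(inputs=[1, 1, 1, 1], weights=[0.3, 0.3, 0.3, 0.3])  # total 1.2 -> fires
strong = neuron_fires(inputs=[1], weights=[1.5])                        # total 1.5 -> fires
print(weak, strong)
```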
For example, in the cerebellum (Figure 2.12), Purkinje cells receive inputs from granule cell axons and from climbing fibres, axons of neurons that originate in the medulla.
The climbing fibres are very branched and make lots of connections (synapses) to each Purkinje cell, while the granule cells’ axons, called parallel fibres, are simple and form only a single synapse to each Purkinje cell. This means that the connection between a single granule cell and a Purkinje cell is weaker than the connection between the climbing fibre neuron and the Purkinje cell.
Another way that neuron morphology can affect the computations it undertakes can be seen if we zoom in on the dendrites (Figure 2.27).
Some dendrites are covered with small protrusions, called dendritic spines, whereas others are smooth. Synapses can form onto the head of a spine or onto its neck, and this means that some inputs can ‘gate’ the effect of other inputs, altering their impact on the neuron.
Different classes of neurons
There are many different ways in which neurons can be classified and subdivided, depending on what aspect of neuronal function is being focussed on. As we have seen above, we can define neurons by their morphology, and morphology can also be used to further classify the multitude of multipolar neurons. For example, pyramidal cells have a characteristic pyramid-shaped soma, a long dendrite pointing upwards (an apical dendrite), tufty basal dendrites, and an axon that often forms several collaterals. Purkinje cells of the cerebellum have a round soma, a flat highly branched dendritic tree at the top of the soma and a single long axon. Granule cells have a small cell body, a simple dendritic tree and an axon that splits in two. Chandelier cells have a highly branched axon arbour that forms distinctive ‘candle-like’ connections with the axon initial segments of many other neurons.
We can also define neurons by their effect on other neurons, being excitatory or inhibitory, depending on whether they make the neurons they connect to more or less likely to fire an action potential (more of this in Chapter 5: Neuronal transmission).
Of the examples given above, pyramidal cells and granule cells are excitatory and Purkinje cells and chandelier cells are inhibitory. Neurons can also be classified by the type of neurotransmitter they release: glutamatergic neurons release glutamate, GABAergic neurons release GABA (gamma aminobutyric acid), dopaminergic neurons release dopamine, and so on. As we will see later, these categories broadly overlap: glutamatergic neurons are excitatory, because glutamate excites cells, and GABAergic neurons are inhibitory, because GABA inhibits cells. However, other neurotransmitters such as dopamine can have different downstream effects depending on which receptor proteins the postsynaptic cell expresses at the synapse.
Neurons can also be classified based on their connectivity and role in a circuit, but this can get complicated! Neurons that project a long way to a different brain region are termed principal neurons, while those that project locally are termed interneurons. Principal neurons are often excitatory, but not always (for example Purkinje cells output information from the cerebellum and are inhibitory). However, in some brain areas it is hard to decide whether a cell should be termed an interneuron or not. Is it helpful to call neocortical pyramidal cells that project to far cortical areas ‘principal cells’ but very similar cells that project to the next cortical column ‘interneurons’? Are cerebellar granule cells interneurons because they project within the cerebellum, though they project to a distinct cell layer? Instead, the term ‘interneuron’ is only commonly used for inhibitory cells, referred to as inhibitory interneurons. A chandelier cell is an example of an inhibitory interneuron. Excitatory cells in local circuits are instead usually referred to by other features, e.g. location and morphology (e.g. a Layer 5 neocortical pyramidal cell as distinct from a Layer 2/3 pyramidal cell in the example given above).
Glia
There are five main types of glial cells: astrocytes, oligodendrocytes, Schwann cells, microglia and ependymal cells.
Astrocytes contact both neurons and blood vessels with their many fine processes. At synapses, they help regulate the extracellular environment, for example by taking up neurotransmitters and excess potassium ions. Astrocytes can also release many substances onto neurons and other cells (e.g. ATP, lactate, glucose), modulating their activity and providing metabolic support. Specialised astrocyte processes, termed ‘end feet’, communicate with local blood vessels, altering local blood flow and taking up glucose from the blood. These end feet surround blood vessels, forming part of the BBB, and others extend to the surface of the brain, forming a thin layer just under the pia. This barrier of astrocyte end feet is termed the glia limitans, and stops (or regulates) molecules and cells from entering or leaving the nervous tissue. Astrocytes also react to damage to brain tissue, becoming ‘activated’ and expressing different molecules when they are exposed to infection. They can form a scar around sites of damage. While this can be helpful, it also causes problems if cells remain activated for a long time, and the scar tissue that forms can stop neurons from making new connections through it.
Oligodendrocytes and Schwann Cells perform similar roles in the CNS and PNS respectively. Both cells wrap layers of a fatty substance called myelin around neuronal axons. Oligodendrocytes do so by sending multiple myelinating processes to nearby axons, whereas Schwann cells in the PNS each have only one myelinating process. These layers of myelin insulate axons allowing action potentials to be conducted more quickly and robustly (see Chapter 5: Neuronal transmission).
Being so closely associated with axons, oligodendrocytes and Schwann cells also provide support to axons by releasing some molecules and taking up others, regulating the extracellular environment around axons in a similar manner to the role of astrocytes at synapses. In multiple sclerosis, the body’s immune cells seem to attack oligodendrocytes, leading to demyelination of axons. This impairs signalling in the axons that were ensheathed by the damaged cells, causing a variety of neurological problems, depending on which axons are affected and what signals they were carrying.
Microglia are small cells which play an important role in repairing damaged brain tissue. Unlike in the rest of the body, immune cells cannot readily enter the brain from the blood because they are blocked by the BBB. Instead, microglia act as the brain’s resident immune cells. Their processes constantly extend and retract, surveying the brain for signs of damage or infection. When they find a site of damage, they ‘activate’ and migrate to this region, forming a barrier between healthy and damaged tissue and removing debris of dying cells. Microglia are also often associated with synapses and blood vessels, and are increasingly appreciated to have important roles not just in controlling damage, but in shaping normal brain function as well, regulating synaptic transmission and signalling to blood vessels. Like astrocytes, while microglial responses to damage are usually thought to be beneficial, when they are activated for a long period of time they can themselves be harmful. This may happen in diseases such as Alzheimer’s, and after a stroke.
The last type of glial cell is the ependymal cell, which we discussed briefly in the last chapter. These cells line the ventricles and produce CSF.
Vascular cells
We heard in the previous chapter that the brain contains a dense vascular network to provide a constant, tightly regulated supply of energy (mainly oxygen and glucose) to the brain. The main cells that make up the blood vessels are endothelial cells, which form the vessel wall next to the blood, and vascular mural cells: smooth muscle cells on larger vessels (arteries and arterioles) and pericytes on smaller vessels (capillaries), which wrap around endothelial cells (Figure 2.33).
As we heard earlier, endothelial cells form tight junctions with adjacent endothelial cells and with pericytes, forming the BBB. A major function of endothelial cells is to regulate entry of cells and molecules across the BBB, as well as to clear waste molecules from the brain into the blood. They regulate the entry of small molecules by expressing different transporter proteins which allow certain molecules to cross the BBB into the brain. To allow immune cells into the brain, endothelial cells express proteins that stick to immune cells in the blood which then crawl between or through the endothelial cells.
Another function of endothelial cells is control of blood flow. Endothelial cells can respond to signals in the blood or the brain to produce molecules that contract or dilate smooth muscle cells or pericytes, altering the diameter of blood vessels and changing blood flow.
Smooth muscle cells are ring-shaped cells that wrap around the vessel, while pericytes have a distinct cell body and processes that extend along and around the blood vessel. In addition to responding to signals from endothelial cells, these smooth muscle cells and pericytes can also constrict and dilate in response to signals from neurons and astrocytes, changing blood flow to match alterations in neuronal activity. Pericytes are also important for stabilising newly formed blood vessels and work together with endothelial cells to control the BBB.
The brain is full!
All these cells with their complex structures are often shown in cartoons with lots of space between them. In reality, however, the different cells’ processes are closely intertwined and crammed together, taking up almost all the available space. We can see this experimentally by looking at 3D reconstructions from electron micrographs – images of sequential slivers of a very small bit of tissue taken with an electron microscope.
In these image sequences, structures can be labelled and traced in each image, and then assembled to get a reconstruction of the different cells in a small volume of tissue. From such images we can see in superb detail how different structures connect, e.g. which dendrites an axon contacts. However, tracking and reconstructing every process in even a small volume is very computationally expensive, and it’s not yet possible to do this for even a whole cortical column, never mind a whole brain or brain region.
Key Takeaways: Cells of the nervous system
Neurons all have a soma, axon and dendrites but come in lots of shapes and sizes, and produce different neurotransmitter molecules, meaning they can perform lots of different computations.
There are 5 different types of glia in the nervous system (astrocytes, oligodendrocytes, Schwann cells, microglia and ependymal cells), which have many roles, including:
• controlling the extracellular environment for neurons (astrocytes, oligodendrocytes, Schwann cells, microglia)
• providing physical and metabolic support (astrocytes, oligodendrocytes, Schwann cells)
• insulating axons to allow fast neuronal transmission (oligodendrocytes, Schwann cells)
• detecting and combatting infection and tissue damage (microglia, astrocytes)
• contacting and communicating with blood vessels (astrocytes, microglia)
• producing CSF (ependymal cells)
Endothelial cells, smooth muscle cells and pericytes form a dense network of blood vessels and control the brain’s energy supply, as well as what cells and molecules can go between the blood and the brain tissue.
About the Author
Dr Catherine Hall is a member of the Sussex Neuroscience Steering Committee, the University Senate, convenes the core first year module “Psychobiology” and lectures on topics relating to basic neuroscience, neurovascular function and dementia.
Having learnt how the nervous system is made up of cells and structures that receive, process and pass on information to select and generate behaviours, we are now going to learn how neurons actually perform this key function, by interrogating the mechanisms by which they receive, integrate and generate signals in order to communicate with each other.
03: Neuronal communication
Learning objectives
By the end of this chapter, you will:
• understand common electrical terms and how they relate to electrical signalling by neurons
• understand the ionic basis of the membrane potential.
Key electricity concepts
Neurons signal electrically.
They receive inputs from other cells, sum up all these inputs, and generate an electrical impulse, called an action potential, which they send along their axon. Neurons are not the only cells that use electricity to function: muscle cells also use electrical signals to contract.
We will be going into some detail to understand how neurons are able to use electricity to signal in this manner, but first it’s worth going over some key electricity concepts.
• Electrical currents are flows of charged particles. In an electrical circuit in a torch, where a battery powers a lamp (Figure 3.1a), the charged particles are negatively charged electrons flowing in a wire. In your body the charged particles are ions – such as the sodium ion, Na+.
• Charged particles flow because they are repelled by similar charges and attracted by opposite charges, i.e. positively charged particles attract negatively charged particles, while negatively charged particles repel other negatively charged particles, and positively charged particles repel other positively charged particles.
• Charged particles only flow if they can pass through the substance that they are in. Electrons can only flow around an electrical circuit when the circuit is complete. If the circuit is broken by opening a switch, because the electrons can’t easily pass through air, they can’t flow any more and the torch lamp will go off. The ability of a material to let electricity flow through it is termed conductance. The inverse of conductance is resistance – a measure of how much a material resists the flow of electricity.
• Voltage is a measure of how much potential there is for charged particles to flow and is a measure of stored electrical energy. This electrical potential is analogous to storing water high up in a water tower. Because of gravity, the water has lots of potential to flow, but it cannot do this until a tap is opened. When a tap is turned on water flows out of the pipe (Figure 3.1b). A battery works like a water tower to store electrical energy. Batteries have a positive and a negative pole. When a circuit is connected, electrons are repelled from the negative pole towards the positive pole of the battery.
The current flowing in a circuit is related to the voltage across the circuit and the conductance or resistance of the wires making up the circuit, according to Ohm’s Law.
Ohm’s Law
Current is proportional to voltage and conductance, and inversely proportional to resistance:
Current = Voltage x Conductance
OR
Current = Voltage / Resistance
You can come back and look at Ohm’s law later when you start thinking about currents flowing in neurons.
As you can see, if resistance goes up, and the voltage stays the same, the current (flow of charged particles) will decrease. Conversely when the resistance goes down, the current will increase. In the water pipe analogy (Figure 3.1b), high resistance is like having narrow pipes. If the hole in the middle of the pipe is tiny, you won’t get very much water squirting out, never mind how big the water pressure is, but if you make the hole bigger (increasing the conductance or reducing the resistance), a lot of water will flow out of the hole. As the water tank empties, however, the water pressure (voltage) decreases, and the water flow (current) will reduce.
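A quick numerical illustration of Ohm’s law (illustrative values only, not drawn from the text): holding the voltage fixed and halving the resistance doubles the current, just as widening the hole in the pipe increases the water flow.

```python
# Ohm's law: current = voltage / resistance (equivalently, voltage x conductance).
# Illustrative values only; units here are volts, ohms and amps.
voltage = 1.5                        # e.g. a 1.5 V battery
for resistance in (10.0, 5.0, 2.5):  # halving the resistance each time...
    current = voltage / resistance
    conductance = 1.0 / resistance
    print(f"R = {resistance:>4} ohm, G = {conductance:.2f} S, I = {current:.2f} A")
# ...doubles the current: 0.15 A, 0.30 A, 0.60 A
```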
Nerves conduct electricity more slowly than wires
Electrical currents in the body are not exactly the same as electrical currents in a wire.
In 1849, Hermann von Helmholtz measured the speed that electricity flows in a frog’s sciatic (leg) nerve, by stimulating it electrically at one end and measuring the electrical signal at the other end. He found that the speed that electricity flowed (or was ‘conducted’) down a nerve was 30-40 m/s, around a million times slower than electricity travels through a wire.
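To get a feel for what 30-40 m/s means in practice, a rough calculation (assumed nerve length, not from the text) shows how long an impulse would take to travel a nerve roughly a metre long, such as one running from the spinal cord to the foot.

```python
# Rough illustration (assumed numbers): time for a nerve impulse to travel
# a long human nerve at the conduction speeds Helmholtz measured.
nerve_length_m = 1.0          # e.g. roughly spinal cord to foot
for speed_m_per_s in (30.0, 40.0):
    time_ms = nerve_length_m / speed_m_per_s * 1000
    print(f"at {speed_m_per_s:.0f} m/s: ~{time_ms:.0f} ms")
# ~33 ms and ~25 ms - tens of milliseconds, rather than the near-instant
# transmission in a metal wire.
```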
So why is electrical signalling in nerves so much slower than in wires? In a wire, electrons (small negatively charged particles) travel along the wire, and they can do this very quickly in materials like metals that conduct electricity well. In nerves, however, the charged particles are ions, not electrons. They are positively (or sometimes negatively) charged particles that are much bigger than electrons, and they don’t move down the nerve like electrons do. Instead, during a nerve impulse – termed an action potential – positively charged ions move into the neuronal axon from the outside. When positive ions move into the cell, the inside of the cell becomes more positive.
This little bit of the axon becoming more positive triggers positive ion movements into the next little bit of the axon, which also becomes positive, triggering the ion movements across the next bit of axon, and so on, like a Mexican wave of a positive potential flowing along the nerve. On balance there has still been an electrical signal that’s moved from one end of the axon to the other, but it has got there more slowly than if electrons had just travelled along the wire.
A short history of electrophysiology
The importance of electricity in animating our bodies – a step, in a way, towards generating behaviours – was discovered in the late 18th century by Lucia and Luigi Galvani.
In a laboratory in their home, the couple discovered that electricity applied to a frog’s leg made the muscle twitch. The frog’s leg muscle also twitched when it was connected to the nerve with a material that conducts electricity. They concluded that an ‘animal electricity’ is generated by the body to contract muscles.
The study of how electricity is generated and used by the body is now termed electrophysiology. Animal electricity was further studied and made (in)famous by the Galvanis’ nephew, Giovanni Aldini, who performed public demonstrations of animal electricity on the bodies of executed prisoners as well as oxen’s heads. Tales of these demonstrations of ‘Galvanism’ inspired the young Mary Shelley to write Frankenstein, in which the monster is animated using electricity.
In the mid-nineteenth century, with the development of tools to measure electrical currents, the German physiologist Emil du Bois-Reymond was able to measure the change in current that occurs in nerves and muscles when activated – what we now term the ‘action potential’ – while Hermann von Helmholtz was able to measure the speed of conduction of electrical transmission down a nerve.
Further technological developments allowed Julius Bernstein, who had worked with both du Bois-Reymond and Helmholtz, to record the time course of the action potential for the first time in 1868. He showed that the action potential was about 1 ms in duration and that, at its peak, the voltage rises above zero. Bernstein also measured the resting membrane potential as being around -60 mV, building on ideas developed by Walther Nernst, who proposed that the resting membrane potential is set by the potassium conductance of the membrane. Charles Ernest Overton added to this the concept that sodium and potassium exchange is critical for the excitability of cells.
The ionic basis of the action potential was fully elucidated between 1939 and 1952 by Alan Hodgkin and Andrew Huxley, who used the squid giant axon to make the first intracellular recordings of the action potential. They developed the use of the voltage clamp, which uses a feedback amplifier to hold a cell’s voltage at a set level. The feedback amplifier does this by detecting small changes in voltage and injecting current to reverse these changes so that the voltage across the membrane does not change. This injected current is opposite to that flowing across a cell’s membrane – if positive charge is flowing into the cell, it depolarises the cell (makes it more positive) and the amplifier will inject negative charge to counter this depolarisation. Conversely, if positive charge leaves the cell, the cell would become more negative (hyperpolarised), so the amplifier will inject positive charge to counter the outward positive current and keep the voltage across the membrane constant. Therefore, the amount of current injected by the amplifier can be used to work out what currents are actually flowing across the cell’s membrane. Using voltage clamp, Hodgkin and Huxley were able to dissect the inward and outward currents and subsequently mathematically model the properties of sodium and potassium fluxes to accurately reproduce the action potential. These models were subsequently found to match the gating properties of voltage-gated sodium and potassium channels. You’ll learn all about this in the next chapter.
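The feedback logic of the voltage clamp can be caricatured in a few lines of code. The sketch below is not a realistic model (a single compartment, arbitrary units, made-up gain and currents): it simply shows that the amplifier ends up injecting a current equal and opposite to whatever current crosses the membrane, which is why the injected current can be used to read off the membrane current.

```python
# A toy sketch of the voltage-clamp idea (not a realistic model).
# The membrane voltage changes when charge flows; the amplifier injects whatever
# current is needed to hold the voltage at the command value, so the injected
# current mirrors (with opposite sign) the current crossing the membrane.
v_command = -65.0          # mV, the voltage we want to clamp the cell at
v_membrane = -65.0         # mV, current membrane potential
capacitance = 1.0          # arbitrary units
gain = 100.0               # high feedback gain
dt = 0.01

membrane_current = 2.0     # pretend an inward (depolarising) current starts flowing

for step in range(1000):
    injected = gain * (v_command - v_membrane)             # feedback: oppose any drift
    v_membrane += dt * (membrane_current + injected) / capacitance

injected = gain * (v_command - v_membrane)
print(f"clamped voltage ~ {v_membrane:.2f} mV, injected current ~ {injected:.2f}")
# The injected current settles near -2.0: equal and opposite to the membrane current.
```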
The development of patch clamping by Erwin Neher and Bert Sakmann in the 1970s and early 1980s enabled recording of very small current changes, including from single ion channels. Patch clamping involves using a glass microelectrode with a very small tip, that can be placed against a cell membrane. Applying suction tightly seals the electrode tip onto the cell so that current can only flow from the electrode across the attached membrane, reducing noise and allowing the properties of single ion channels to be studied. If the electrode is pulled away from the cell, a little patch of membrane remains on the electrode, forming an inside out patch. Different drugs can then be applied to the bath to see how they change the activity of the ion channels in this tiny patch of membrane. Alternatively, when the electrode is attached to the cell, the membrane patch attached to the electrode can be ruptured by applying increased suction. This “whole cell” configuration allows membrane currents from the whole cell to be studied. Pulling the electrode away from the cell at this point can form an “outside-out” patch. These different adaptations of patch clamp electrophysiology are key tools in the study of electrical properties of neuronal signalling today.
Overall, electrophysiology has generated a wealth of knowledge about how electrical signals are integrated and generated by neurons, how different ion channels contribute to these signals, and how ion channelopathies (dysfunction of ion channels) contribute to disease. For example, Dravet syndrome is a severe familial epilepsy caused by mutations in the SCN1A gene. This gene encodes a sodium channel that is mostly found in inhibitory interneurons. Because the mutation impairs the function of these sodium channels, interneurons are less able to fire action potentials, so the excitatory neurons they would normally inhibit become overactive, causing seizures.
How do cells such as neurons signal electrically?
Cells signal electrically by controlling how ions cross their membranes, changing the voltage across the cell membrane. This voltage change across the cell membrane is the electrical signal. The most common ions that move across the cell membrane to cause this voltage change are sodium ions (Na+), potassium ions (K+), chloride ions (Cl-) and calcium ions (Ca2+). These ions carry different charges – sodium and potassium ions each have a single positive charge, chloride has a single negative charge, and calcium ions carry two positive charges. Positively charged ions are called cations, while negatively charged ions are called anions. The amount of charge carried by an ion is its valence.
In addition to carrying different charges, ions are different sizes.
Some ion flow, or flux, happens at rest, and other flux happens during signalling.
In this section, we’ll consider what’s going on at rest, and in the next section, examine what happens to make an electrical signal.
The plasma membrane around a cell is made of a phospholipid bilayer
Cells are surrounded by a plasma membrane that keeps the inside separate from the outside. This membrane is made of molecules called phospholipids. Phospholipids have three main parts: two fatty tails that are hydrophobic (meaning they ‘fear water’) and a head that is hydrophilic (meaning it ‘loves water’). Water molecules are slightly charged, with positively charged and negatively charged zones (they are dipoles). This means that charged particles are attracted to water, whereas uncharged, oily particles do not mix with it. The phospholipid head carries a negatively charged phosphate group, so it is attracted to water, while the uncharged fatty tails are repelled by water but will happily mix with other uncharged tails. This means that the phospholipid molecules line up to form a bilayer (two parallel layers of phospholipid molecules) with their fatty tails next to each other on the inside of the membrane, and the hydrophilic heads facing the watery inside and outside of the cell.
Small uncharged molecules such as oxygen and carbon dioxide can diffuse across the membrane, but because the interior of the membrane is uncharged and hydrophobic, ions and other charged or polar particles cannot cross it (water itself crosses only very slowly). This means the inside of the cell is kept separate from the outside, and the intracellular fluid, or cytosol, inside the cell can have a different composition from the fluid outside the cell – the extracellular fluid.
Components of intracellular vs extracellular fluid
Intracellular (ICF) and extracellular fluids (ECF) are made up of different substances (Figure 3.10). Both are mostly water, but the concentration of ions and other substances is very different. Of particular note, there is a higher concentration of potassium ions inside the cell compared to outside the cell (~130 mM inside vs. ~4 mM outside), and a high concentration of sodium ions outside the cell compared to the inside of the cell (~145 mM outside vs. ~15 mM inside). There are also more chloride and calcium ions outside the cell than inside the cell. ICF also contains more protein and a higher concentration of organic anions than ECF.
Ion channels and transporters allow substances to cross the plasma membrane
If the cell membrane was just made up of the phospholipid bilayer and nothing else, then no ions would ever be able to cross the membrane, and no electrical signalling would be possible. However, lots of proteins are embedded in the lipid membrane. Some of these are transporter proteins that can shuttle specific molecules across the membrane (Figure 3.11). For example, glucose is brought into the cell via glucose transporters.
As well as transporters, ion channels are also proteins that are embedded in the plasma membrane (Figure 3.12).
These proteins form a pore in their centre which essentially makes a hole in the membrane. They can be open all the time (leak channels) or opened by different triggers, such as voltage changes (voltage-gated ion channels) or binding of different molecules (ligand-gated ion channels). Many of these ion channels are selective, i.e. they only let certain ions through. Examples of selective ion channels include potassium leak channels or voltage-gated sodium, potassium or calcium channels. This ion selectivity means that cells can control ion fluxes across their membranes by opening certain ion channels.
The resting membrane potential
At rest, in the absence of any neuronal signalling activity, it turns out that one type of ion channel – the potassium leak channel – is open. This means that, at rest, potassium ions (K+) can leak out of the cell. Because of the K+ concentration gradient across the membrane – i.e. because there are more K+ ions inside the cell – as they wiggle and jiggle and randomly move about, some ions will find these holes in the membrane and pass through them to exit the cell (Figure 3.13).
Once some positively charged K+ ions have left the cell, however, that leaves an imbalance of positive and negative charges on the inside of the cell. The inside of the cell is now more negatively charged compared to the outside of the cell. There is now a voltage, or potential difference across the cell (Figure 3.14), which we could think of as an electrical gradient.
But K+ ions are positively charged, so they are attracted to negative charges and repelled by positive charges. So once there is a potential difference or electrical gradient across the cell’s membrane, the K+ ions are repelled by the positive charge outside of the cell, and attracted to the negative charge inside of the cell. For K+ ions, the electrical gradient therefore works in the opposite direction to the concentration gradient. The concentration gradient of K+, with high concentrations inside the cell and low concentrations outside the cell, tends to make K+ ions leave the cell, while the electrical gradient tends to make K+ ions enter the cell.
We call the combination of the effect of the electrical and the concentration gradient an electrochemical gradient. This movement of K+ ions out of the cell through leak channels is the main driver of the resting membrane potential of the cell – the voltage difference across its membrane at rest – which is around -70 mV in neurons.
Equilibrium potentials
K+ ions will leave the cell down the concentration gradient until the electrical gradient is so negative that K+ ions are stopped from leaving. At this point K+ is in equilibrium – the number of ions leaving because of the concentration gradient is the same as the number entering due to the electrical gradient, so there is no net movement of K+ across the membrane. The voltage difference across the cell at which this equilibrium is reached is called the equilibrium potential for a given ion. It is dictated by the concentration difference across the membrane and the charge of the ion. We can consider different ions and how their electrochemical gradients shape the equilibrium potential for each.
EK = -80 mV
As we saw above, because K+ is positively charged and is at a higher concentration inside the cell, it tends to leave the cell when channels permeable to K+ are opened in the cell membrane. Positive K+ ions leaving the cell make the cell’s membrane potential (the electrical gradient or voltage difference across the cell’s membrane) more negative. The membrane becomes more and more negative until it reaches the equilibrium potential for K+, when there is no longer any net flux (flow) of K+. Therefore the equilibrium potential for K+ is negative. For most cells, it is around -80 mV. This is often written as EK (the equilibrium potential for K+) = -80 mV.
ENa = +62 mV
There is more sodium (Na+) outside the cell than inside the cell, so if ion channels that are permeable to Na+ open in the membrane, sodium will tend to enter down its concentration gradient. Na+ is positively charged, so initially it is also attracted by the negative potential on the inside of the cell. Na+ entry makes the inside of the cell more positive, though, until enough Na+ has entered to make the inside of the cell so positive that it repels further Na+ entry – i.e. it reaches equilibrium. This happens at around +62 mV. Therefore the equilibrium potential for Na+ (ENa; the electrical potential across the cell at which there is no net flux of Na+ ions) is +62 mV.
ECl = -65 mV
There is more chloride (Cl-) outside the cell than inside the cell, so when ion channels that are permeable to Cl- open in the membrane, Cl- tends to enter the cell. As Cl- is negatively charged, its entry makes the cell’s membrane potential more negative, until it reaches equilibrium, being sufficiently negative to repel further Cl- entry. This happens at around -65 mV, so ECl = -65 mV.
The Nernst Equation
We can mathematically calculate the equilibrium potential for different ions using the Nernst Equation.
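In its commonly used form, the Nernst equation for an ion X is:

\[
E_{Eq} = \frac{RT}{zF}\,\ln\left(\frac{[X]_{out}}{[X]_{in}}\right)
\]

where R is the gas constant, F is the Faraday constant, T is the absolute temperature and z is the charge (valence) of the ion. At body temperature, RT/F works out at roughly 27 mV.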
This equation might look complicated, but if we break it down we can see that it just relates the concentration gradient across the membrane of an ion X ([X]out/[X]in, where [X] is the concentration of the ion of interest) and the charge or valence on that ion (z) to the equilibrium potential (EEq). R and F are just constants, and T is the temperature, which is constant inside the body, so we can ignore R, F, and T here, as they will always stay the same.
Logarithms and the Nernst Equation
To understand the Nernst Equation fully, we also need to understand what ‘ln’ means. This is an instruction meaning ‘take the natural log of the number inside the brackets’. (In this case, that number is the ratio of the outside and inside concentrations.) The log of a number is the power to which a base number has to be raised to equal the original number. The base number can be anything. In the first example below, we are using base 10, but in the case of the natural logarithm (ln) the base is a specific mathematical constant called ‘e’, or Euler’s number. It’s roughly 2.71828.
Example 1: Logarithms to the base 10:
y is the power to which 10 must be raised to equal x, so y is the log to the base 10 (log10) of x.
y = log10(x)
10^y = x
Example 2: Logarithms to the base e (natural logarithms):
In the equations below, y is the power to which e must be raised to equal x, so y is the natural log (ln, also written loge) of x.
y = ln(x)
e^y = x
More about logarithms and powers
Raising a base greater than 1 to a positive power gives a number bigger than 1, whereas raising it to a negative power gives a number between 0 and 1 (x^-1 means the same as 1/x; x^-2 means the same as 1/x^2). Conversely, the log of a number between 0 and 1 is negative, and the log of a number over 1 is positive. For example (using 10 as the base, not e, to make the sums clearer):
10^2 = 100; log10(100) = 2
10^-2 = 1/100 = 0.01; log10(0.01) = -2
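If you want to check these relationships for yourself, the short snippet below (purely illustrative) evaluates the same examples using Python’s math module:

```python
import math

# Base-10 logarithms: log10(x) is the power to which 10 must be raised to give x.
print(math.log10(100))    # 2.0, because 10^2 = 100
print(math.log10(0.01))   # -2.0, because 10^-2 = 0.01

# Natural logarithms use Euler's number e (roughly 2.71828) as the base.
print(math.e)             # 2.718281828...
print(math.log(math.e))   # 1.0, because e^1 = e

# Ratios below 1 give negative logs and ratios above 1 give positive logs;
# this is why EK comes out negative and ENa positive in the Nernst equation.
print(math.log(4 / 130))    # negative (K+: more inside than outside)
print(math.log(145 / 15))   # positive (Na+: more outside than inside)
```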
You can find more background on powers and logarithms here. We can now look at K+, Na+ and Cl- and see how their equilibrium potentials come out of this equation, even just by broadly considering the charge of the ion and whether there is more of an ion on the inside or outside of the cell.
EK = -80 mV
There is more K+ inside the cell than outside the cell, so ([K+]out/[K+]in) < 1. The natural log of a number less than 1 is negative. We then divide this by the charge, which is +1 for K+. Therefore the equilibrium potential, EK, is negative.
ENa = +62 mV
There is more Na+ outside the cell than inside the cell, so ([Na+]out/[Na+]in) > 1. The natural log of a number greater than 1 is positive, and this is divided by the charge of +1 for Na+. Therefore the equilibrium potential, ENa, is positive.
ECl = -65 mV
There is more Cl- outside the cell than inside the cell, so ([Cl-]out/[Cl-]in) > 1. The natural log of a number greater than 1 is positive, but this is then divided by the charge of -1 for Cl-. The equilibrium potential, ECl, is therefore negative.
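As a worked example, the sketch below plugs approximate concentrations into the Nernst equation at body temperature. The chloride concentrations are assumed typical values (they were not listed above), and because the equilibrium potentials quoted in this chapter are rounded, the computed numbers come out close to, but not exactly at, those figures.

```python
import math

R = 8.314     # gas constant, J/(mol*K)
T = 310.0     # body temperature, K (37 degrees C)
F = 96485.0   # Faraday constant, C/mol

def nernst(z, conc_out, conc_in):
    """Equilibrium potential (mV) for an ion of valence z."""
    return (R * T) / (z * F) * math.log(conc_out / conc_in) * 1000.0

# Approximate concentrations in mM; the chloride values are assumed typical figures.
print(f"EK  = {nernst(+1, 4, 130):6.1f} mV")    # about -93 mV with these concentrations
print(f"ENa = {nernst(+1, 145, 15):6.1f} mV")   # about +61 mV
print(f"ECl = {nernst(-1, 110, 10):6.1f} mV")   # about -64 mV
```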
The membrane potential
We have discussed that when the cell is at rest, potassium leak channels are open and this drives the resting membrane potential to be negative, at around -70 mV. But we also just saw the equilibrium potential for potassium is -80 mV. If the resting membrane potential is set by potassium flux through leak channels, why is the resting membrane potential not the same as EK? The answer is that at rest the membrane is actually also a tiny bit permeable to Na+ as a small number of sodium channels are also open. This pushes the resting membrane potential a tiny bit away from the equilibrium potential for potassium towards the equilibrium potential for sodium. The resting membrane potential is closest to EK as the membrane is most permeable to K+ ions (more K+ channels are open), but is a bit more positive than EK because of the small amount of permeability at rest to Na+ ions.
In fact, at any point during neuronal signalling or at rest, the membrane potential is set by the electrochemical gradients to different ions and the relative permeability of the membrane to these ions. Cells control their membrane potentials by opening and closing ion channels in the membrane to alter the permeability to different ions, which then flow down their electrochemical gradients into or out of the cell. When sodium channels open, for example, the permeability to Na+ increases and Na+ ions enter the cell, driving the membrane potential to more positive potentials towards the equilibrium potential for Na+, ENa. When sodium channels close, the permeability to Na+ decreases again and, as the membrane is now more permeable to K+ than Na+, the membrane potential will again become more negative, returning to the resting membrane potential. There are lots of types of ion channels that are selective for different ions and have different gating properties, i.e. they are opened and closed by different stimuli, such as changes in membrane voltage, and binding of specific molecules. We’ll discuss these more in later chapters.
The Goldman-Hodgkin-Katz equation
The Goldman-Hodgkin-Katz equation allows the membrane potential of the cell (Em) to be calculated from the permeabilities of the membrane to different ions (pK, pNa and pCl for K+, Na+ and Cl-, respectively) and their concentration gradients (Figure 3.19). As for the Nernst equation (see Box, Figure 3.18), R and F are constants, and T is temperature, so RT/F can be considered unchanging. During neuronal signalling, the permeability of the membrane to different ions changes, and the membrane potential is weighted in favour of the equilibrium potential of the ion with the greatest permeability at that moment. Note that the Cl- concentration gradient is expressed the other way round from those of K+ and Na+, to account for the fact that Cl- carries a negative rather than a positive charge.
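In its commonly quoted form, the Goldman-Hodgkin-Katz equation is:

\[
E_m = \frac{RT}{F}\,\ln\left(\frac{p_K[K^+]_{out} + p_{Na}[Na^+]_{out} + p_{Cl}[Cl^-]_{in}}{p_K[K^+]_{in} + p_{Na}[Na^+]_{in} + p_{Cl}[Cl^-]_{out}}\right)
\]

The short sketch below evaluates it in Python using the same concentrations as in the Nernst example above. The relative permeability values are assumed, illustrative numbers chosen to represent a resting membrane that is far more permeable to K+ than to Na+, not measured values.

```python
import math

R, T, F = 8.314, 310.0, 96485.0   # gas constant, body temperature (K), Faraday constant

def ghk(p_k, p_na, p_cl, K_out, K_in, Na_out, Na_in, Cl_out, Cl_in):
    """Membrane potential (mV) from the GHK equation; p values are relative permeabilities."""
    numerator = p_k * K_out + p_na * Na_out + p_cl * Cl_in
    denominator = p_k * K_in + p_na * Na_in + p_cl * Cl_out
    return (R * T / F) * math.log(numerator / denominator) * 1000.0

# Illustrative resting permeability ratios (assumed, not measured): K+ dominates.
print(ghk(1.0, 0.03, 0.45, 4, 130, 145, 15, 110, 10))   # about -70 mV
```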
The sodium-potassium ATPase
We have seen above that during rest and neuronal signalling, ions flow through ion channels down their electrochemical gradients, altering the membrane potential. This flow of ions down their electrochemical gradients does not require any energy. The membrane potential is controlled by changing the permeability of the membrane to different ions, and not by changing the concentration gradients between the inside and outside of the cell. Very few ions need to flow to change the membrane potential of a cell, which means that the concentrations of ions inside and outside the cell do not change very much over the short term. However, because the membrane potential does not sit at the equilibrium potential for any ion, even at rest there is a net K+ flux out of the cell and a net Na+ flux into the cell.
Over the longer term, however, these fluxes would dissipate the ionic concentration gradients if cells did not have a mechanism to continually pump ions back to where they came from. The pump that does this really important job is the sodium-potassium pump, or Na+/K+ ATPase. This is a protein that sits in the plasma membrane and pumps sodium out of the cell and potassium back into the cell. Because this pumping occurs against the ions’ electrochemical gradients, it requires energy in the form of ATP to pump the ions back and maintain their concentration gradients. The Na+/K+ ATPase removes a phosphate group from ATP to form ADP, releasing energy that changes the shape (or conformation) of the Na+/K+ ATPase, enabling it to move 3 Na+ ions out of the cell and 2 K+ ions into the cell for every ATP molecule used. Because 3 Na+ ions are removed for every 2 K+ ions brought into the cell, the Na+/K+ ATPase is electrogenic, causing a net export of positive charge. This contributes a little to the negative resting membrane potential, but by far the strongest effect the Na+/K+ ATPase has on the resting membrane potential is to maintain the potassium concentration gradient, so that the equilibrium potential for potassium is maintained. Because there are ion fluxes even at rest, the Na+/K+ ATPase is always at work, but its activity increases when neurons are signalling and more ions need to be pumped back.
Maintaining ion concentration gradients is so important for sustaining neuronal activity that the Na+/K+ ATPase is the single most energy-consuming process in the brain, consuming over half of all the energy it uses. As the brain is a very energetically expensive organ, using 20% of the body’s energy at rest, despite comprising only 2% of the body’s mass, the Na+/K+ ATPase alone uses over 10% of the energy used by the whole body – quite staggering given there are over 20,000 different types of proteins in our bodies at any one time!
Key Takeaways
• Electrical signalling in neurons (and other cells) works because they have ion channels that allow specific ions to flow across neuronal membranes and change the membrane potential of the cell.
• The membrane potential of the cell is determined by the concentration gradient of ions across its membrane, and the permeability of its membrane to those ions.
• Ions flow down their electrochemical gradients, which doesn’t need any energy, but energy in the form of ATP must be used up to fuel the Na+/K+ ATPase which pumps ions back up their electrochemical gradients to maintain their concentration gradients across the membrane.
About the Author
Dr Catherine Hall is a member of the Sussex Neuroscience Steering Committee, the University Senate, convenes the core first year module “Psychobiology” and lectures on topics relating to basic neuroscience, neurovascular function and dementia.
Learning Objectives
By the end of this chapter, you will understand:
• that neurons signal electrically within each cell and chemically between cells
• the ionic basis of the action potential and how it is conducted
• the processes involved in synaptic transmission
• how neurons integrate information at synapses.
In the last chapter, we learnt about electrical signalling in the brain and how electrochemical gradients and ion channels allow neurons to set their membrane potential. In this chapter we will learn how these processes generate the signals within and between neurons that form the basis for the information processing in the brain.
Signals are transmitted electrically within neurons and chemically between neurons, at synapses. Electrical signals within neurons take the form of action potentials and synaptic potentials. We can talk of electrical signals in cells as producing a positive change in the membrane potential – termed depolarisation, or a negative change in the membrane potential, termed hyperpolarisation.
Action potentials
An action potential is a brief electrical signal that is conducted from the axon hillock where the neuron’s soma joins the axon, along the axon to the axon terminals. It can be measured from electrodes placed in or near a neuron connected to a voltmeter (Figure 3.21). This electrical signal is a rapid, localised change in the membrane voltage which transiently changes from the negative resting membrane potential to a positive membrane potential. A positive shift in the membrane potential like this is termed depolarisation. The membrane then rapidly (within 1 ms) becomes negative again – it repolarises – and then shifts even more negative, becoming hyperpolarised before returning to the resting membrane potential less than 5 ms after it first depolarised (Figure 3.22). This transient voltage change then spreads like a wave down the axon with a conduction velocity of between 1 and 100 m/s.
The action potential is caused by opening and closing of voltage-gated ion channels
What is happening within the axon to cause these changes in membrane voltage?
As discussed above, the way in which neurons generally alter their membrane potentials is by changing their membrane permeability to different ions by opening and closing ion channels, and that is exactly what is happening during the action potential. The ion channels that open and close to form the action potential are voltage-gated ion channels. As their name suggests, these channels open or close depending on the voltage across the membrane. There are many different types of voltage-gated ion channels, which differ in their thresholds for activation – the voltages at which they open and close – as well as their selectivity for ions. When they open, ions flow down their electrochemical gradients towards their equilibrium potentials.
Voltage-gated sodium and potassium channels open to depolarise then hyperpolarise the membrane
The upstroke of the action potential (when the voltage depolarises rapidly; Figure 3.23) is caused by the opening of voltage-gated sodium channels that have a threshold for opening of -55 mV. When the membrane of the neuron depolarises to -55 mV, these voltage-gated sodium channels start to open. Sodium ions flood into the cell, depolarising the membrane and opening even more sodium channels, causing a very rapid depolarisation of the membrane. This regenerative, positive-feedback activation of sodium channels makes the action potential an all-or-nothing event (it either happens, or it does not). If the threshold is reached, sodium channels open, the depolarisation accelerates and an action potential occurs (or ‘fires’). If the threshold is not reached, sodium channels do not open and no action potential will fire. Furthermore, the action potential is always the same size and is not graded by the size of the incoming depolarisation.
If the sodium channels stayed open, the membrane potential would stabilise at the equilibrium potential for sodium (ENa), at +62 mV, but instead the voltage reaches only around +40 mV before repolarising again, so the membrane is depolarised for less than 1 ms. The depolarisation is so brief for two reasons. Firstly, the voltage-gated sodium channels rapidly inactivate, closing the channel and preventing further Na+ influx into the cell. Secondly, a second type of voltage-gated channel activates: the voltage-gated potassium channel. Some of these voltage-gated potassium channels activate at the same threshold as the sodium channels but more slowly, and others activate at a more positive voltage (around +30 mV). Both these factors mean that opening of voltage-gated potassium channels is delayed relative to the Na+ influx. When these channels do open, however, K+ leaves the cell, causing the membrane to become more negative again and producing the falling phase or downward stroke of the action potential (Fig. 3.23).
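To see how the interplay between these two conductances generates the waveform just described, the sketch below implements the classic Hodgkin-Huxley equations for the squid giant axon, using the standard published parameter values. Note that these are squid-axon numbers, so the exact thresholds and potentials differ a little from the mammalian figures quoted in this chapter, and the code is a generic textbook implementation rather than anything taken from this book.

```python
import math

# Classic Hodgkin-Huxley squid giant axon parameters (standard published values).
C_m = 1.0                             # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3     # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4   # reversal potentials, mV

# Voltage-dependent rate constants for the gating variables m, h (Na+) and n (K+).
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * math.exp(-(V + 65.0) / 80.0)

V = -65.0  # start at rest (mV)
m = alpha_m(V) / (alpha_m(V) + beta_m(V))   # gating variables at their resting steady states
h = alpha_h(V) / (alpha_h(V) + beta_h(V))
n = alpha_n(V) / (alpha_n(V) + beta_n(V))

dt = 0.01   # time step, ms
peak = V
for step in range(int(20.0 / dt)):              # simulate 20 ms
    t = step * dt
    I_stim = 10.0 if t >= 2.0 else 0.0          # depolarising current from t = 2 ms, uA/cm^2

    I_Na = g_Na * m**3 * h * (V - E_Na)         # fast, inactivating sodium current (upstroke)
    I_K  = g_K * n**4 * (V - E_K)               # delayed potassium current (downstroke)
    I_L  = g_L * (V - E_L)                      # leak current

    V += dt * (I_stim - I_Na - I_K - I_L) / C_m
    m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
    peak = max(peak, V)

print(f"Peak of the action potential: {peak:.1f} mV")   # overshoots 0 mV but stays below E_Na
```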
Increased K+ permeability causes an afterhyperpolarisation
Many voltage-gated potassium channels switch off quite slowly after the membrane potential falls below their threshold voltage. This means that after the membrane potential has repolarised, reaching the resting membrane potential, there are still some voltage-gated potassium channels open, in addition to the potassium leak channels that are always open. Because the membrane is now more permeable to K+ than at rest, the membrane potential hyperpolarises below the resting membrane potential, getting even nearer to the equilibrium potential for K+, EK. This hyperpolarised phase is termed the afterhyperpolarisation. Then as the voltage-gated potassium channels close, the permeability of the membrane for potassium returns to normal and the membrane potential depolarises slightly back to the resting membrane potential.
Sodium channel inactivation causes the refractory period for action potential firing
The opening and closing of voltage-gated sodium and potassium channels at different threshold voltages, and the inactivation of sodium channels, occur because gates in the proteins move to open and close the pore region in the centre of the channel that allows ions to flow across the membrane (Figure 3.24). At the resting membrane potential, voltage-gated sodium and potassium channels both have a conformation, or shape, in which part of the protein blocks the ion channel’s pore (i.e. it is as if a closed gate blocks the pore). When the threshold voltage is reached, the shape of the ion channel protein changes slightly so that this gate opens to let ions through. This gate opens quickly in voltage-gated sodium channels but more slowly, or at more depolarised potentials, in voltage-gated potassium channels, so during the rising phase of the action potential only the sodium channel gates are open. After a very short time, however, an inactivation gate on the intracellular side of the voltage-gated sodium channel swings shut, blocking the pore from the inside and stopping any further Na+ flux. As the voltage-gated potassium channels open, during the falling phase of the action potential, voltage-gated sodium channels are inactivated. Even when the membrane falls below the threshold voltage, closing the voltage-sensitive gate, the sodium channels’ inactivation gates are still closed. This means that the sodium channels cannot re-open, and the neuron cannot fire another action potential until the inactivation gates reopen.
This period of time when firing of another action potential is impossible is called the absolute refractory period (Figure 3.25). Sodium channels’ inactivation gates start to reopen during the falling phase of the action potential, when voltage-gated potassium channels are still open. At this stage, it becomes possible to fire another action potential, but a stronger stimulus is needed to activate the sodium channels. This period is the relative refractory period (Figure 3.25). Stronger stimuli (that depolarise a neuron more) can therefore produce a faster firing rate in a target neuron than weaker stimuli by intruding into the relative refractory period.
Action potential propagation
Action potentials are initiated in the axon’s initial segment near the soma, right next to the axon hillock. If the membrane potential there depolarises sufficiently to trigger voltage-gated sodium channels to open, then an action potential will fire in that section of membrane. In an unmyelinated axon (Figure 3.26), some of the positive charge (Na+ ions) that enters the cell during the rising phase of the action potential spreads to the adjacent bit of membrane, depolarising that membrane and opening voltage-gated sodium channels there, producing an action potential, which spreads onwards to the next bit of membrane, such that a wave of depolarisation and repolarisation spreads down the axon all the way to the axon terminals. Sodium channel inactivation prevents upstream spread of the action potential back towards the soma: because the upstream membrane is in the absolute refractory period, the action potential can only spread downstream to membrane in which sodium channels are not inactivated.
Increasing the axon diameter and myelinating the axon increase conduction speed
Action potentials spread quite slowly along small unmyelinated axons – around 0.5-2 m/s – because each bit of membrane has to fire an action potential and propagate it to the next bit of membrane. This speed of conduction would be too slow to get up-to-date information about what is going on at the far end of our bodies – imagine wiggling your toe and only knowing 4 seconds later that you actually had wiggled it! Luckily, action potential conduction can be increased in two major ways.
Firstly, conduction speed is increased by increasing the diameter of the axon, which reduces the resistance to current flow within the axon, allowing depolarisation to passively spread further down the axon and therefore more rapidly activate action potential firing in downstream membrane.
Secondly, myelination of axons increases conduction speed. The layers of myelin that are tightly wrapped around axons by oligodendrocytes (in the CNS) or Schwann cells (in the PNS) insulate the axon membrane against current loss across the membrane. Axon membrane ensheathed in myelin does not contain ion channels – it has low permeability and high resistance to current flow. This allows current to spread further down the inside of the axon without leaking out of the cell and being dissipated. The myelin sheath also decreases the membrane capacitance – the amount of charge stored at the membrane. Charge gets stored at the membrane when positive and negative charges are attracted to each other across the thin plasma membrane, holding them near each other on opposite sides of the membrane. By wrapping tightly around the membrane, myelin increases the distance between the intracellular and extracellular fluids containing charged particles, so they are less attracted to each other across the ensheathed membrane. The lowered capacitance allows current to spread further (and faster) inside the axon, as fewer ions get ‘stuck’ at the membrane.
The result of myelination is that depolarisation can rapidly spread passively along relatively long distances of axon, but it cannot spread down the whole length of the axon. The signal still needs to be boosted periodically by generating a new action potential. This happens at nodes of Ranvier, which are gaps in the myelin sheath that are packed with ion channels. When the nodes of Ranvier depolarise, their voltage-gated sodium channels open, triggering a new action potential which can then passively spread across the ensheathed internode region of the axon to the next node of Ranvier (Figure 3.26). Because the action potential rapidly jumps between nodes, this form of conduction is called saltatory conduction (from Latin ‘saltare’ – ‘to jump’). Large diameter, myelinated axons can conduct action potentials at speeds up to 100 m/s, meaning information about toe-wiggling can reach your brain in a respectable 0.02 s. Indeed, sensory neurons carrying information about where our bodies are in space have some of the fastest propagating axons of any cell.
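A quick back-of-the-envelope check of these delays, using a roughly 2 m toe-to-brain pathway and illustrative conduction velocities spanning the range quoted above:

```python
distance_m = 2.0   # rough toe-to-brain path length (illustrative)

for label, velocity in [("unmyelinated axon, 0.5 m/s", 0.5),
                        ("small myelinated axon, 10 m/s", 10.0),
                        ("large myelinated axon, 100 m/s", 100.0)]:
    print(f"{label}: delay = {distance_m / velocity:.3f} s")
```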
Demyelinating conditions, such as multiple sclerosis and Guillain-Barré Syndrome, cause a multitude of symptoms, including altered sensation, muscle weakness and cognitive impairments, due to loss of myelin sheaths, disrupted neuronal communication and eventual axonal degeneration.
Energy use by action potentials
The flow of ions through voltage-gated ion channels during the action potential occurs down their electrochemical gradients so it does not itself use energy. During an action potential very few ions actually flow so the concentration gradients do not change significantly over the short term. Over the longer term, however these ions need to be pumped back to maintain concentration gradients and the resting membrane potential so that further action potentials can fire. This is achieved by the Na+/K+ ATPase, using ATP. Myelination of axons helps speed action potential conduction, but also makes action potential firing more energy efficient, because fewer ions need to flow to depolarise the myelinated membrane. Fewer ions therefore need to be pumped back across the membrane, so less ATP is needed by the Na+/K+ ATPase.
Action potential: Key takeaways
• When the membrane reaches a threshold voltage, voltage-gated sodium channels briefly open, depolarising the cell
• Voltage-gated potassium channels open and repolarise the cell
• Depolarisation spreads along the membrane activating nearby sodium channels
• Inactivation of sodium channels means the action potential propagates in one direction and sets a limit on firing frequency
• Action potentials are all-or-nothing, and only occur once the threshold for sodium channel activation is met
• Myelination speeds action potentials and makes them more energy-efficient.
Communication between neurons
Neurons signal electrically, using action potentials to communicate between the soma and the axon terminals. The action potential signals that the soma and the axon’s initial segment have depolarised to the threshold voltage. But what generates that depolarisation in the first place? What is the signal that depolarises a neuron to make it fire an action potential? We saw in an earlier chapter that neurons integrate lots of inputs and compute whether or not to fire an action potential. In sensory neurons, these inputs might be information from the outside (or internal) world, for example stretch of the skin, a painful heat, or a delicious smell. You will learn more about how these types of stimuli generate inputs in neurons in later chapters. But for most neurons, the inputs come from other neurons, via connections called synapses. While neurons communicate electrically within a cell, communication between neurons is usually chemical – a chemical, or neurotransmitter, is released from one neuron and acts to generate a signal in the next neuron.
Synaptic transmission
During synaptic transmission, an action potential in a neuron – the presynaptic neuron – causes a neurotransmitter to be released into a tiny gap called the synaptic cleft between two neurons. The neurotransmitter diffuses across the synaptic cleft and binds to receptors on the neuron receiving the signal – the postsynaptic neuron, which produces a change in the postsynaptic cell. Looking into this process in more detail, we can split the processes of synaptic transmission into a number of separate steps (Figure 3.27):
• An action potential arrives at the axon terminal (or presynaptic terminal), depolarising it.
• Depolarisation of the presynaptic terminal opens a new type of voltage-gated ion channel – the voltage-gated calcium channel, which has a threshold for activation of around -10 mV. When these channels open, calcium (Ca2+) enters the cell down its electrochemical gradient, as there is a much higher concentration of Ca2+ in the extracellular fluid compared to the intracellular fluid (1.5-2 mM outside the cell, vs. around 0.05-0.1 µM of free Ca2+ inside the cell), and its positive charge attracts it into the negatively charged cell. Unlike Na+ and K+, Ca2+ is not present at high enough concentrations to affect the membrane potential of the cell. Instead, an increase in intracellular Ca2+ concentration can trigger different signalling cascades in the cell, by binding to different proteins.
• Ca2+ entering through voltage-gated calcium channels binds to a protein called synaptotagmin.
• The presynaptic terminal contains lots of little membrane ‘bags’ called synaptic vesicles, which are packed with neurotransmitter. Some of these vesicles are close to an area on the plasma membrane of the cell called the ‘active zone’, whereas the vesicles that are still being packed with neurotransmitter are further away from the membrane and nearer the centre of the presynaptic terminal. The vesicles at the active zone are ‘docked’, being held close to the plasma membrane by a complex of proteins called SNARE proteins. When calcium binds to synaptotagmin, the membranes of the vesicle and the plasma membrane of the cell are brought even closer together and fuse, releasing the contents of the vesicle (neurotransmitter molecules) into the extracellular space of the synaptic cleft. The vesicles that are already docked at the active zone are the most readily released, so they are the first to fuse with the membrane and release their neurotransmitter.
• The synaptic cleft is very narrow, so neurotransmitter molecules can quickly diffuse across from the presynaptic terminal to the post-synaptic cell.
• The postsynaptic cell’s membrane (usually part of a dendrite) contains receptors for the neurotransmitter molecules that are released from the presynaptic cell. A receptor is a protein that can bind a specific molecule – termed a ligand. Many of these receptors are part of ligand-gated ion channels. These are ion channels that open when a specific molecule binds to them. Ions flow through the open ion channels, down their electrochemical gradients, producing a change in the membrane voltage in the post-synaptic cell.
• To terminate synaptic signalling, neurotransmitter must be removed from the synaptic cleft. This is achieved by transporters on neurons or astrocytes, proteins which take up neurotransmitter into the cell where it can be broken down, recycled or repackaged. Some neurotransmitters may also be broken down by proteins that are present in the synaptic cleft.
Excitatory synapses
Excitatory synapses make the post-synaptic neuron more likely to fire an action potential by producing a depolarisation in the post-synaptic cell, moving it towards the threshold potential for opening voltage-gated sodium channels. This happens when Na+ ions are allowed to flow into the cell.
The main excitatory neurotransmitter in the brain is glutamate (acetylcholine is an important excitatory neurotransmitter in the peripheral nervous system). Glutamate’s main receptors are AMPA and NMDA receptors. AMPA receptors are ligand-gated ion channels that let both Na+ and K+ pass through them. Though K+ ions leave the cell when AMPA receptors open, the main effect is an influx of Na+, so when glutamate binds AMPA receptors, the membrane depolarises towards the threshold for firing an action potential. This depolarising change in membrane potential is termed an excitatory post-synaptic potential (EPSP; Figure 3.28) and lasts several (> 10) milliseconds.
NMDA receptors are also ligand-gated ion channels and are permeable to Ca2+ as well as Na+ and K+. However they are also voltage-dependent, as they are blocked by Mg2+ ions unless the membrane potential is depolarised. They are also slower than AMPA receptors to open and close. Because of this they do not contribute much to the EPSP. However they play a really important role in altering synaptic strength – or how much of an effect a presynaptic action potential can have on the postsynaptic cell.
Metabotropic glutamate receptors are often also present. Metabotropic receptors are also known as G-protein coupled receptors. These proteins bind glutamate but do not directly open an ion channel. Instead they trigger other intracellular signalling pathways that can make other changes to the cell, for example altering the properties of other ion channels. Because their action is via intracellular signalling pathways, they have slower effects than ionotropic receptors (receptors such as AMPA and NMDA receptors that are part of, and directly activate, ion channels).
Usually an EPSP from a single synapse won’t depolarise the post-synaptic neuron enough to reach the threshold for firing an action potential. Instead, multiple synaptic inputs need to be summed together to produce a big enough EPSP (Figure 3.30). If the presynaptic neuron fires lots of action potentials in a short space of time, then the inputs at a single synapse can add together to form a larger EPSP. This is temporal summation. Additionally, if different excitatory synapses are active at the same time, their EPSPs can summate spatially to generate a larger EPSP. Both temporal and spatial summation act to integrate the inputs onto a postsynaptic cell and determine whether it fires an action potential.
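The toy simulation below (with purely illustrative values for the membrane time constant and EPSP size) shows temporal summation: EPSPs arriving far apart in time decay back towards rest before the next one arrives, but EPSPs arriving close together sum and can cross the threshold for firing.

```python
V_rest = -70.0      # resting membrane potential, mV
threshold = -55.0   # threshold for opening voltage-gated Na+ channels, mV
tau = 10.0          # membrane time constant, ms (illustrative)
epsp_size = 5.0     # depolarisation produced by a single EPSP, mV (illustrative)

def reaches_threshold(epsp_times_ms, duration=60.0, dt=0.1):
    """Return True if the summed EPSPs ever depolarise the cell to threshold."""
    V = V_rest
    epsp_steps = {round(t / dt) for t in epsp_times_ms}
    for step in range(int(duration / dt)):
        V += (V_rest - V) / tau * dt      # passive decay back towards rest
        if step in epsp_steps:
            V += epsp_size                # an EPSP arrives and depolarises the membrane
        if V >= threshold:
            return True
    return False

print(reaches_threshold([5.0, 25.0, 45.0]))      # False: EPSPs too far apart to summate
print(reaches_threshold([5.0, 7.0, 9.0, 11.0]))  # True: EPSPs close in time sum to threshold
```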
Inhibitory synapses
Inhibitory synapses make the post-synaptic neuron less likely to fire an action potential, either by hyperpolarising the membrane or by preventing it from depolarising, holding the membrane potential below the threshold needed to activate voltage-gated sodium channels.
The main inhibitory neurotransmitter in the brain is GABA (gamma-aminobutyric acid), whose main receptors are GABAA and GABAB receptors. GABAA receptors are ligand-gated ion channels that are permeable to Cl- ions when GABA is bound. Because the equilibrium potential for Cl- (ECl) is -65 mV, opening GABAA channels will tend to keep the membrane potential near -65 mV. As this is below the threshold for activation of sodium channels, this will inhibit the post-synaptic neuron from firing an action potential. Depending on the membrane voltage of the cell when these channels open, opening them might slightly hyperpolarise or slightly depolarise the cell. In each case, however, this membrane potential change is inhibitory (an inhibitory post-synaptic potential or IPSP) because it holds the membrane potential away from that needed to fire an action potential. For example, if the neuron’s membrane potential is -75 mV when GABAA receptors open, the membrane potential will move towards ECl, so it will depolarise slightly to -65 mV. However, the open GABAA receptors prevent the membrane from depolarising beyond -65 mV to the threshold for firing an action potential. If the membrane potential is more positive than ECl, e.g. -60 mV, then opening GABAA channels will make the membrane potential more negative, or hyperpolarised, until it reaches -65 mV. In both cases, opening the GABAA channels has made the neuron less likely to reach the threshold for action potential firing.
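A small numerical illustration of this point (the conductance value is arbitrary): whatever the membrane potential when GABAA channels open, the Cl- current pulls it towards ECl at -65 mV and therefore away from the threshold for firing.

```python
E_Cl = -65.0    # chloride equilibrium potential, mV
g_gaba = 1.0    # conductance of the open GABA-A channels (arbitrary units)

for V_m in (-75.0, -65.0, -60.0):
    # Positive values here mean the current pushes the membrane potential more positive.
    I = g_gaba * (E_Cl - V_m)
    if I > 0:
        effect = "depolarises slightly, towards ECl"
    elif I < 0:
        effect = "hyperpolarises, towards ECl"
    else:
        effect = "no net Cl- flux (already at ECl)"
    print(f"Vm = {V_m:.0f} mV: {effect}")
```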
GABAB receptors are metabotropic receptors that are linked to activation of potassium channels, increasing K+ permeability. Their activation therefore shifts the membrane potential towards EK, or -80 mV, hyperpolarising the cell. GABAB-mediated membrane potential changes are therefore also IPSPs as they hyperpolarise the membrane away from the threshold for action potential firing, but because they require intracellular signalling these IPSPs are slower than GABAA-mediated membrane potential changes.
Synaptic integration
Postsynaptic cells use temporal and spatial summation to integrate all the different synaptic inputs to the cell. If the net effect of all the inputs is to depolarise the axon initial segment above the threshold for activating sodium channels, the cell will fire an action potential. The way in which all these inputs are integrated to generate an output (action potential) is therefore the basis of how neurons perform the computations on which our thoughts and feelings depend.
Neurons can perform different computations based on their morphology and the spatial organisation of their excitatory and inhibitory inputs, as this alters how the inputs are summated (Fig. 3.32). Most synaptic inputs are onto the dendrites of a neuron, but some may be onto the soma or even the axon. Synaptic inputs to the distal end of dendrites (far from the soma) will potentially have a smaller effect on the membrane potential at the axon initial segment than an input onto the soma, because the signal degrades over the distance it needs to travel, while inputs onto the axon initial segment itself can have an even stronger effect than those onto the soma. Excitatory inputs onto distal dendrites can also be gated by inhibitory synapses that are more proximal to the soma on the same dendrite, so the EPSP cannot reach the soma. The ability of EPSPs and IPSPs to spread along dendrites is also determined by factors such as the number and type of ion channels in the dendritic membranes, as well as the size of the cell. If there are few open ion channels, then charge cannot easily leak out across the membrane and dissipate the potential change. Similarly, a given input will spread further in a small cell than in a larger, highly branched cell, as less charge gets lost at the membrane (the smaller cell has a lower capacitance). However, dendrites also express voltage-gated ion channels that can boost signals from distal dendrites.
Neurons’ computations can therefore be affected by many factors, from the location and strength of individual synapses, to the shape of the cell and the number and location of the ion channels it expresses. Many of these properties can be modified based on the cell’s activity, altering the contribution that different synaptic connections make to the decision to fire an action potential. This plasticity in synaptic connectivity is critical for allowing associations to be formed and broken between neurons, forming the basis of learning and memory as well as shaping how we perceive the world.
Gap Junctions
While most connections between neurons are via chemical synapses, direct electrical connections also occur. These are called gap junctions and are formed by pairs of hemichannels, one on each cell, each made up of a complex of proteins called connexins. Compared to other ion channels, gap junctions are relatively non-selective, allowing cations (positively charged ions) and anions (negatively charged ions) through, as well as small molecules such as ATP. Though regulation of their opening is possible, they are usually open, meaning that electrical signals can spread through connected cells. Gap junctions are more common during development and are rare between excitatory cells in mature nervous systems. They are most common between certain inhibitory interneurons in the brain and the retina, as well as between glia, such as astrocytes.
Other neurotransmitters
While glutamate is the main excitatory neurotransmitter in the brain, and GABA is the main inhibitory neurotransmitter in the brain, there are many other neurotransmitters that can also be released at synapses. These can be broadly divided into different categories, based on the chemical structure of the neurotransmitter molecules. All activate their own specific receptors.
Amino acid neurotransmitters include glutamate, GABA and also glycine, which is the major inhibitory neurotransmitter in the brainstem and spinal cord.
Monoamine neurotransmitters include noradrenaline, dopamine and serotonin. There are specific populations of monoaminergic neurons in the brain that originate in specific midbrain and brainstem nuclei and send projections to widespread brain regions, modulating processes such as reward, attention and alertness. Noradrenaline is also an excitatory transmitter in the peripheral nervous system.
Peptide neurotransmitters include naturally occurring opioid peptides – endorphins, enkephalins and dynorphins – that activate the same receptors as opiate drugs such as morphine and heroin. There are numerous other peptide neurotransmitters, including oxytocin and somatostatin. Peptide neurotransmitters are often co-released at synapses with other transmitters such as GABA or serotonin.
Purine neurotransmitters include ATP, the cell’s main energy currency, and its breakdown product adenosine.
Acetylcholine is unlike other neurotransmitters structurally. It is a common excitatory neurotransmitter in the peripheral nervous system, including at the neuromuscular junction, and is also released by many neurons in the brain, where it is involved in regulating alertness, memory and attention.
Synaptic transmission : Key takeaways
• When an action potential arrives at an axon terminal, voltage-gated calcium channels open, allowing Ca2+ influx into the terminal
• Ca2+ binds synaptotagmin, pulling synaptic vesicles very close to the plasma membrane. This triggers fusion of synaptic vesicles with the plasma membrane, releasing neurotransmitter into the synaptic cleft.
• Neurotransmitter diffuses across the synaptic cleft and binds to ionotropic or metabotropic receptors on the postsynaptic cell.
• Receptors for excitatory neurotransmitters such as glutamate trigger Na+ entry into the postsynaptic cell, depolarising the membrane (producing an EPSP), making it more likely the postsynaptic cell will depolarise to the threshold for firing an action potential.
• Inhibitory neurotransmitters such as GABA activate receptors that keep the membrane potential negative with respect to the threshold for firing an action potential (generating an IPSP).
• Postsynaptic neurons integrate different excitatory and inhibitory inputs to decide whether to fire an action potential.
• The location and strength of different synapses, as well as the shape of the post-synaptic cell and expression of different ion channels modify the integration of different inputs – changing these can alter the computation done by the cell.
Learning Objectives
• To gain knowledge and understanding of how drugs enter the body and the time course of their effects
• To gain a basic understanding of how general classes of drugs interact with neurons to alter their function.
Having learnt about how neurons in the brain communicate, let’s now consider how drugs can affect their function.
In a fictional example, Sam has both high cholesterol and attention deficit hyperactivity disorder (ADHD). To help alleviate their symptoms, the GP prescribes them atorvastatin to lower their cholesterol, while a psychiatrist prescribes lisdexamfetamine to help improve their attention.
Both atorvastatin and lisdexamfetamine are considered drugs. Researchers who design drugs and investigate how they act on the body are often called pharmacologists (they study pharmacology). While a general pharmacologist might explore the use of atorvastatin or lisdexamfetamine, someone who researches psychopharmacology might be more interested in understanding how lisdexamfetamine can reduce symptoms of ADHD.
These scientists don’t just develop drugs or observe changes in symptoms after administration; they also ask various other questions! For example, a psychopharmacologist may consider the following:
• What parts of the brain does a drug act on?
• Does a drug have its effect because it interacts with a specific receptor type?
• How does the long-term administration of a drug impact brain biology?
• After a drug is taken, how long do its effects last?
• Can a drug's chemical structure be changed so that its effects can be prolonged?
• Would taking medicine in a certain way (e.g., oral vs nasal) improve the drug's ability to act on the brain?
• Could brain biology explain why there is individual variation in the capacity of drugs to ameliorate certain conditions?
Exercise
Can you think of other important questions that a psychopharmacologist might investigate?
Building on your knowledge of neurobiology, this chapter will explore the concepts needed to understand how a psychopharmacologist might approach addressing these questions.
Classifying drugs
Before exploring how drugs act on the body and brain, we need to clarify how we refer to different drugs; it can be confusing because certain compounds can go by different names. For example, a psychopharmacologist may describe methylphenidate as a psychostimulant (or a phenethylamine), a norepinephrine–dopamine reuptake inhibitor. In contrast, a chemist might refer to the drug by its chemical structure: C14H19NO2. Furthermore, methylphenidate might be prescribed by a psychiatrist as ‘Ritalin’ and referred to by the UK government (2022) as a ‘Class B controlled substance’ (which has severe penalties for illegal possession and intent to supply). While the following is likely not an exhaustive list of methods to categorise drugs, they can broadly be referred to in the following ways:
• Source
• Chemical structure
• Relative mechanism of action in the brain
• Therapeutic use or effect
• Marketed names
• Legal or social status
We will now focus on three of these categories that you are likely to encounter in your studies of biopsychology.
Classification by source
Drugs come from various places – some are naturally occurring, while others are created in the laboratory. Cocaine (C17H21NO4) is an example of a naturally occurring drug because it is directly extracted from the leaves of the coca plant. Opium is also naturally occurring, taken from the unripe seed pods of the opium poppy. In other words, all the molecules that give cocaine and opium their psychoactive properties are already present in the plant itself.
Semisynthetic drugs are chemically derived from naturally occurring substances. An example of a semisynthetic drug is heroin (a modified molecule of morphine, the main active ingredient of opium). The drug lysergic acid diethylamide (LSD) is also semisynthetic, originally derived from compounds produced by the ergot fungus, which grows on grains.
Finally, some drugs are entirely synthetic, made from start to finish in the laboratory. Methadone, amphetamine, and 3,4-methylenedioxymethamphetamine (MDMA or ecstasy) are all examples of synthetic psychoactive substances.
Relative mechanism of action in the brain
We will discuss the details of how drugs might act in the brain in the pharmacodynamics section (Section 4). For now, it’s essential to understand that certain drugs can have very similar molecular targets in the brain.
For example, opium (natural), heroin (semisynthetic), and methadone (synthetic) all act on opioid receptors in the brain. Therefore, these opioid-targeting drugs might be considered ‘variations on a theme’. Despite their overall affinity for binding to opioid receptors, it’s important to remember that the biological and psychological effects may still differ. There are different types of opioid receptors (and these might be differentially located across the brain), and certain opioid drugs might bind to some of these receptors more readily than others. These drugs may also differ in terms of how quickly they reach the brain after being administered, as well as how fast they are eliminated from the body (see Pharmacokinetics, below).
It’s also crucial to remember that the brain does not express opioid receptors with the sole purpose of mediating the effects of drugs like methadone or heroin. The body already has endogenous opioids circulating in regions of the nervous system; these molecules play essential functions, like enabling us to feel pain and pleasure and helping to regulate our respiration (Le Merrer et al., 2009; Corder et al., 2018). In contrast, drugs are exogenous compounds that originate outside the human body.
Therapeutic use or effect
Drugs can also be classified according to their biological, behavioural, or psychological effects. Drugs that target opioid receptors treat pain and are therefore called analgesics. Drugs that excite the central nervous system (CNS) and make us more alert are called stimulants (e.g., cocaine, amphetamine, nicotine). In contrast, substances with the opposite effect are depressants (e.g., alcohol, benzodiazepines). Some types of hallucinogens (e.g., mescaline, LSD, psilocybin) and psychotherapeutics (e.g., antidepressants like sertraline and mirtazapine) are drugs that alter psychological states.
It is also possible that some drugs can fall under different categories or are otherwise unclear what type they belong to. Ecstasy (MDMA) has a chemical structure like the stimulant amphetamine, yet it also can have hallucinogenic effects. Ecstasy is also sometimes referred to as an empathogen–entactogen because of the emotional state of relatedness, openness, or sympathy that it can create (Nichols, 2022). For all of these drugs, the effects and potential therapeutic use depend on how much and by what method they are administered.
Pharmacokinetics
Pharmacokinetics is a subfield of pharmacology that studies how drugs:
1. are absorbed by the body,
2. distributed,
3. metabolised, and
4. excreted from the body.
Thinking about the ‘journey’ a drug goes on may be helpful to understand these concepts.
A drug might first enter the body from a variety of routes. Nicotine, for example, could be smoked in a cigarette or taken via a patch applied to the skin. For this module, the effects of nicotine that we are most interested in studying are those happening in the brain. We will review how drugs like nicotine get to the brain.
Finally, you will learn how drugs, as well as their metabolites, can be removed from the body in urine.
Based on an example timeline for nicotine, we can plot the concentration of a drug in the body (Figure 3.37). When a drug is initially administered, the concentration in the body increases (absorption phase). Then, at a particular timepoint (Tmax), the concentration reaches its highest level (Cmax).
Next, during the elimination phase, the concentration of the drug in the body decreases – this happens because the drug is both metabolised and excreted. At some point, the level of the drug decreases so that it is half the value of Cmax; the amount of time it takes to reach this point is the drug’s half-life (T1/2).
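To make these terms concrete, here is a minimal sketch (in Python) of a drug that is eliminated by simple first-order kinetics, meaning a constant fraction is removed per unit of time. The Cmax and half-life values are invented purely for illustration and do not describe nicotine or any real drug.

import math

c_max = 100.0      # arbitrary peak concentration (e.g. ng/ml) reached at Tmax
half_life = 2.0    # illustrative half-life in hours

def concentration(t_after_cmax):
    """Concentration t hours after Cmax, assuming first-order elimination."""
    return c_max * 0.5 ** (t_after_cmax / half_life)

for t in range(0, 9, 2):
    print(f"{t} h after Cmax: {concentration(t):.1f}")
# After one half-life (2 h) the concentration has halved to 50.0,
# after two half-lives (4 h) it is 25.0, and so on.

Running this shows why half-life is such a convenient summary of the elimination phase: whatever the starting concentration, it halves over each successive half-life.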
Absorption and distribution
Several factors influence how quickly a drug is taken up (absorbed) by the body. Perhaps the most obvious is how the drug is administered. Several non-invasive and invasive methods for drug administration are shown in Table 3.1.
Non-invasive methods of drug administration
• Oral: into the mouth
• Sublingual: under the tongue
• Nasal: absorption through the blood capillaries lining the nasal cavities
• Rectal: like oral, but can be used in unconscious individuals because it does not require swallowing
• Transdermal: on the skin via a patch
• Inhalation: into the lungs, which have a large surface area and are highly vascularised

Invasive methods of drug administration
• Subcutaneous: under the skin, but not into the muscle
• Intramuscular: into the muscle
• Intravenous: directly into a vein, and therefore directly into the body’s bloodstream
• Epidural: into the space between the dura mater and the vertebrae, used in epidural anaesthesia

Injection methods primarily used in animals (e.g., in rodent models of mental health)
• Intraperitoneal: injection into the peritoneal cavity surrounding the intestines
• Intracranial: injection into either the tissue of a specific brain region or a CSF-filled ventricle; because these drugs are injected directly into the brain, they do not need to access the body’s circulatory system

Table 3.1. Routes of administration
With so many routes of administration, how is the best method for delivering a drug determined? Many factors influence that decision. Intravenous (IV) infusions might be the quickest way for a drug to enter the body, but a drug administered via this route might also have the shortest duration of action (a short half-life: quickly into and out of the body). So an IV dose of an analgesic might produce rapid pain relief, but the effect might not last long. In addition, some people are afraid of needles, and training is required to administer IV injections, so this route rarely allows patients to care for themselves independently.
Perhaps at the opposite end of the administration spectrum from IV injections are non-invasive oral administrations. Most individuals can swallow medications, so this method allows a level of independent care. Unlike IV administration, however, drugs do not immediately enter the bloodstream when taken orally. The desired effects of swallowed medications therefore appear more slowly than those of drugs given by IV injection.
Further complicating this is that drugs taken by the oral route are absorbed through the gastrointestinal system, not through the mouth. This has several consequences. First, drugs can be partially destroyed by stomach acids, limiting the maximum effect they can have (e.g., for a specific medication, Cmax might be higher after IV than after oral administration). Second, the stomach environment is constantly changing (especially after meals!), affecting how much of the drug eventually reaches other parts of the body. Third, after being absorbed from the gut, drugs pass through the liver via the hepatic portal circulation, where they undergo first-pass metabolism; this can further break down orally administered medications, reducing the concentration of the drug that reaches the rest of the body. That said, some medications (e.g., lisdexamfetamine) turn this to an advantage: a person initially swallows an inactive prodrug (Mattingly, 2010), which is converted during first-pass metabolism into the active drug (dexamfetamine) that can later affect brain function.
Other features of the administration route also affect how quickly, and for how long, a drug acts. If a drug is administered to an area of the body with a large surface area and a rich blood supply (e.g., the lungs), it can enter the bloodstream quickly and take effect sooner. In contrast, a drug will be much slower to act, and may work for a longer duration, if it first needs to cross several cell layers before reaching a blood vessel (e.g., transdermal delivery). Furthermore, depot binding can occur when drugs become sequestered in inactive sites of the body where there are no receptors for them to bind (e.g., fat stores); these stores may slowly release the drug or its metabolites, further prolonging their actions on the body.
Unless injected via the IV route, drugs can be slow to enter the circulatory system because they first need to pass through various membranes before reaching the bloodstream (e.g., the stomach wall and capillary walls). While some endogenous compounds have dedicated transporter proteins that carry them across membranes, exogenous drugs usually do not. Instead, drugs most often move from areas of high concentration to areas of lower concentration (down their concentration gradient), crossing the membranes they encounter by passive diffusion. Since the body’s membranes are made of lipids (a lipid bilayer), the ability of drugs to pass through membranes is determined by their lipid solubility and degree of ionisation.
Briefly, most drugs are either weak acids or weak bases. When a drug dissolves in a solution, a proportion of its molecules becomes ionised (charged). The more a drug is ionised, the less lipid-soluble it becomes, decreasing its ability to cross cell membranes. In general, drugs that are acids are less ionised in more acidic solutions, while drugs that are bases are less ionised in more basic solutions. So, for example, the drug aspirin is a weak acid. If aspirin is taken orally in a tablet, it first goes to the stomach. The stomach is strongly acidic (pH 2.0), so aspirin remains primarily in its non-ionised form. Because it is non-ionised and thus lipid-soluble, aspirin can pass through the stomach lining and enter a blood vessel. Blood, however, is slightly basic (pH 7.4); this results in aspirin becoming ionised, making it less likely to leave the vessel because it is now less lipid-soluble and so finds it harder to cross membranes. The situation where a drug is ‘stuck’ in a compartment because it is highly ionised and low in lipid solubility is called ion trapping (Ellis and Blake, 1993). Concentration gradients can offset ion trapping to some extent: a high drug concentration in one bodily compartment relative to a neighbouring compartment encourages the drug to move across membranes into the lower-concentration region. Various formulas are used to calculate drug diffusion but are beyond the scope of this module.
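For readers who are curious, the standard relation behind these statements is the Henderson–Hasselbalch equation. The short Python sketch below applies it to a weak acid to show why aspirin is mostly non-ionised in the stomach but mostly ionised in blood. This is purely an optional illustration; the pKa of roughly 3.5 assumed for aspirin is an approximate value included for the example.

def ionised_fraction_weak_acid(pH, pKa):
    """Fraction of a weak acid in its ionised form at a given pH.
    Henderson-Hasselbalch: ionised / non-ionised = 10 ** (pH - pKa)."""
    ratio = 10 ** (pH - pKa)
    return ratio / (1 + ratio)

pKa_aspirin = 3.5                      # approximate value, assumed for illustration
for compartment, pH in [("stomach", 2.0), ("blood", 7.4)]:
    frac = ionised_fraction_weak_acid(pH, pKa_aspirin)
    print(f"{compartment}: {frac:.1%} ionised")
# stomach: ~3% ionised (mostly lipid-soluble, so it crosses the stomach lining)
# blood:   ~99.99% ionised (ion trapping: hard to leave the compartment)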
Finally, for drugs to enter the brain, as you’ve read about elsewhere, they must first cross the blood-brain barrier (BBB). Drugs that are lipid-soluble can most easily pass through the BBB. So, for example, because heroin is more lipid-soluble than morphine, it can more quickly pass through the BBB and arrive in the brain (Pimentel et al., 2020). Therefore, heroin tends to be faster acting than morphine; this may contribute to its addictive qualities.
Patients use prescribed medications to improve mental health because the drugs impact brain function. However, as described above, delivering drugs directly to the brain is challenging. Because drugs spread across our body via the bloodstream, they act in the periphery before reaching the brain; this can lead to unwanted side effects. Thus, part of the job of a psychopharmacologist is to develop medications that can improve mental well-being via their actions on the brain while minimising undesirable and unwanted effects.
Metabolism and excretion
We have already discussed one way of inactivating drugs: first-pass metabolism in the liver, where microsomal enzymes break down medications into simpler compounds. Through biotransformation, the liver can metabolise drugs so that they become more ionised; this reduces their lipid solubility, further preventing them from crossing the BBB to enter the brain. Finally, metabolised drugs are excreted primarily via the kidneys (in urine), but they can also leave the body in bile, faeces, breath, sweat and saliva.
Special enzymes in both the blood and the brain can also break down drugs. For example, once in the brain, heroin can be metabolised into morphine. This raises an important point – drugs can be metabolised into molecules that are themselves biologically active. While morphine and heroin have similar effects, the metabolites of other drugs can have opposing effects. For example, alcohol is metabolised into acetaldehyde by the enzyme alcohol dehydrogenase (Figure 3.39). If acetaldehyde accumulates in the body, it can make a person feel sick. Acetaldehyde itself is metabolised by aldehyde dehydrogenase into acetic acid. Disulfiram, a drug used to treat alcohol use disorder, blocks aldehyde dehydrogenase, thereby increasing levels of acetaldehyde in the body (Veverka et al., 1997). Because drinking alcohol while taking disulfiram produces these unpleasant effects, it is thought that the drug may discourage people from drinking in the first place. While this might sound useful, compliance with this treatment is often an issue (Mutschler et al., 2016).
Finally, there is also significant individual variation in metabolism. For example, there might be sex differences in levels of certain enzymes. Women may have lower levels of gastric alcohol dehydrogenase than men – so for a given dose of alcohol, more alcohol enters the bloodstream (Frezza et al., 1990). There is also individual adaptation – chronic drinkers have higher levels of alcohol dehydrogenase. In this example, someone with an alcohol use disorder might need more alcohol than someone else to achieve the desired effects of alcohol – this is an example of tolerance resulting from a state of enzyme induction (increased rate of metabolism due to enhanced expression of genes for drug-metabolising enzymes; tolerance is discussed again later in this chapter). Age also impacts metabolism; older individuals have reduced liver function, and this might lead to exaggerated alcohol effects (Meier & Seitz, 2008). Finally, genetics can impact metabolism as well; some individuals have a polymorphism in the gene encoding aldehyde dehydrogenase – lack of this enzyme means there’s a greater accumulation of acetaldehyde and thus more of its unpleasant effects (Goedde & Agarwal, 1987).
Pharmacodynamics
While pharmacokinetics focuses on how a drug spreads across the body and is eliminated, pharmacodynamics studies the effect a drug has once it reaches its target in the body. So, while pharmacokinetics explains how a drug eventually passes through the BBB to get to the brain, pharmacodynamics describes what type of receptor a drug binds to in the brain and what consequence this has on neuronal signalling. Notably, while this example discusses a drug targeting a receptor, medicines can interact with many different types of molecules in the brain, impacting brain function in numerous ways. The figure below gives a few examples of how a drug can affect synaptic transmission. Once again, it’s crucial to recognise that drugs are acting on cellular mechanisms that already exist to help us survive. For example, nicotine binds to an ionotropic receptor that usually binds the neurotransmitter acetylcholine. This particular receptor also binds nicotine, so we call it the nicotinic acetylcholine receptor.
Agonist drugs and dose-response curves
Throughout this and future modules, you will learn about many different types of drugs and their impact on brain biology. For now, we will focus on two general kinds of drugs: agonists and antagonists that bind to receptors. Receptors are molecules that a drug (or an endogenous ligand) binds to in order to initiate a biological effect. ‘Receptor’ is a very general term – we often think of receptors as proteins inserted into the membrane of cells, but this does not have to be the case (receptors can also be in the cytoplasm, for example). Some examples of where receptors can be located are shown in Figure 3.41. For example, receptors can be found on post-synaptic neurons (e.g., nicotinic acetylcholine receptors) and on pre-synaptic terminals (known as autoreceptors, which help self-regulate neurotransmitter release; e.g., the dopamine D3-type receptor). Drugs are often not limited to binding one particular receptor – they are often considered ‘dirty’, binding to multiple types of receptors to varying degrees across the body. This is one reason why drugs often have unwanted side effects. Second-generation antipsychotic medications are notorious for affecting multiple types of receptors, and this may be why there is significant individual variation in their tolerability (Kishimoto et al., 2019).
Drugs that are considered agonists can bind to a receptor and initiate some type of biological effect, such as turning on an intracellular cascade of signalling events. Because of this, we often think of agonists working via a ‘lock-and-key’ mechanism – inserting a drug into a receptor enables events to occur. It is critical to recognise that drugs tend to bind to receptors weakly and can rapidly dissociate from the receptor. Therefore, the acute impact drugs have is reversible. This is important because when a drug is no longer bound to a receptor, the endogenous ligand for that receptor can once again bind.
How much biological impact a drug has on the brain depends, in part, on the number of receptors available to bind it. Increasing the number of drug molecules in the brain therefore increases the probability of receptor binding. While larger doses of a drug can have a greater biological impact, there is always a limit: a drug reaches its maximal effect when all of the available receptors are occupied. This relationship between drug concentration and receptor occupancy follows the law of mass action and can be described by a dose-response curve.
As you can see from Figure 3.43 above, dose-response curves have a typical S-shape. They are usually plotted with a logarithmic function of ‘dose’ of drug administered on the x-axis and a measured response on the y-axis. Looking at the figure, you can see that at some point, increasing the dose of the drug no longer produces a bigger response; at this point (known as effective dose 100, ED100), the drug is occupying all of the available receptors and therefore is having its maximum effect. Another important point on the graph is the ED50 for the drug. ED50 is a drug-potency measure representing the dose that produces half of the maximal effect. Alternatively, ED50 can characterise the amount of a drug that produces an effect in half of the population to which it was administered. Finally, it is also crucial to remember that most medications have various effects on the body and can even interact with multiple receptors. Binding to Receptor A might impact pain perception, while binding to Receptor B might impact blood pressure. If Receptor B is more prevalent than Receptor A, then the dose required to affect blood pressure maximally would be higher than that to alter pain perception. Accordingly, there would also be a different ED50 value for each response.
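One way to see where the S-shape comes from is to model the response with a simple one-site occupancy equation, response = Emax × dose / (ED50 + dose). The Python sketch below is illustrative only; the Emax and ED50 values are arbitrary assumptions, not data for any real drug.

e_max = 100.0   # maximal possible response (the ED100 level), arbitrary units
ed50 = 10.0     # dose producing half the maximal response (mg), assumed

def response(dose):
    """Response predicted by a simple one-site occupancy model."""
    return e_max * dose / (ed50 + dose)

for dose in [1, 5, 10, 50, 100, 1000]:
    print(f"dose {dose:>5} mg -> response {response(dose):5.1f}")
# At the ED50 (10 mg) the response is exactly half of Emax (50).
# Very large doses approach, but never exceed, the 100-unit maximum.

Plotting the same numbers against the logarithm of dose gives the familiar S-shaped curve described above.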
Drugs often have side effects that are either undesirable or dangerous; dose-response curves can also be used to characterise these effects. For example, one unwanted effect of a drug is sedation. The dose of the medicine that produces this effect in 50% of subjects is referred to as the toxic dose 50 (TD50). Using this information, doctors can calculate a margin of safety, known as the therapeutic index (or therapeutic window) (TI = TD50 / ED50), which indicates how much the dose of a drug may be raised safely. Just as drugs might have multiple desirable effects, there may also be numerous toxic effects, each with a different TD50. Finally, in the therapeutic index formula, TD50 can be substituted with the lethal dose 50 (LD50), which is the dose of a drug that can kill 50% of subjects.
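As a worked example of the margin-of-safety calculation (with invented numbers purely for illustration): if a drug’s ED50 for pain relief were 10 mg and its TD50 for sedation were 200 mg, the therapeutic index would be 200 / 10 = 20, meaning the dose that sedates half of subjects is roughly twenty times the dose that relieves pain in half of subjects. In code form:

ed50_analgesia = 10.0   # mg, assumed value for illustration
td50_sedation = 200.0   # mg, assumed value for illustration
therapeutic_index = td50_sedation / ed50_analgesia
print(therapeutic_index)  # 20.0 - larger values indicate a wider safety margin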
Comparing dose-response curves of agonist drugs
Up until now, we have primarily discussed the efficacy of drugs: the maximum effect they can produce. Drugs also differ in potency: how much of a drug is needed to produce an effect. Figure 3.43 shows dose-response curves for three opioid drugs with similar efficacies: hydromorphone, morphine, and codeine can all effectively reduce pain. However, different doses of these drugs are required to relieve pain – a higher concentration of codeine is needed for pain relief than of morphine. Because of this, the ED50 values of these drugs also differ. Drugs with a lower ED50 are considered more potent than drugs with a higher ED50.
The figure also displays the dose-response curve for aspirin, a non-opioid drug that can be used to reduce pain. Not only is a higher dose of aspirin required to reach levels of pain relief similar to opioids like morphine, but no dose of aspirin can relieve pain completely. So, aspirin is both less potent and less efficacious at relieving pain than morphine.
There are likely several reasons for these differences between drugs. For example, the pharmacokinetics likely differ between medications; if one drug crosses the BBB more readily, then more molecules of that drug will reach and bind receptors, making it appear more potent. In addition, some drugs have a greater affinity for receptors than others; a drug with higher affinity will stay bound to the receptor for longer and thus continue to exert an effect. Differences in the efficacy of drugs, by contrast, usually indicate that those medications work through different mechanisms. While both morphine and aspirin relieve pain, morphine works by binding opioid receptors, whereas aspirin inactivates the cyclooxygenase enzyme.
Antagonist drugs
Agonist drugs binding to receptors can cause a biological response – as such, they are said to have intrinsic activity. In contrast, antagonists bind to receptors and counteract either an agonist or endogenous ligand’s effect on a receptor. Therefore, one can measure the effectiveness of an antagonist by observing how its administration impacts the dose-response curves of agonist drugs. Unlike agonist drugs that follow a ‘lock and key’ mechanism to initiate biological effects, it may be helpful to imagine an antagonist as a key that fits into a lock but does not turn (Figure 3.45).
There are a couple of categories of antagonists that you should be familiar with (Figure 3.46). Competitive antagonists bind to the same site on a receptor as an agonist or endogenous ligand, so they compete with the agonist or endogenous ligand for the available binding sites. A higher dose of the agonist drug therefore needs to be administered to outcompete the antagonist; this shifts the ED50 of the agonist dose-response curve to the right. Theoretically, if there is so much agonist present that essentially no antagonist molecules can bind, and the agonist occupies all available receptors, then the agonist can reach the same ED100 as in the absence of the antagonist.
Unlike competitive antagonists, non-competitive antagonists bind to a different part of a receptor than an agonist or endogenous ligand; therefore, they do not compete for binding. In effect, non-competitive antagonists make receptors unavailable for agonist drug action. While non-competitive antagonists still shift the dose-response curves of agonists to the right (Figure 3.46 B), they also decrease the maximum possible effect an agonist or endogenous ligand can have (i.e., they reduce the ED100). Because non-competitive antagonists bind to different receptor sites than agonist drugs, simply increasing the dose of an agonist cannot overcome this blockade.
When discussing agonists, we mentioned that most drugs form weak bonds with receptors. This means that the effects of the drug are reversible because the drug can easily dissociate from the receptor. Most antagonist drugs work similarly, and their interaction with receptors is temporary. However, the effects of some antagonist drugs are irreversible – they form a long-lasting bond with receptors. One example is alpha-bungarotoxin (from the venom of the many-banded krait), which blocks acetylcholine receptors at neuromuscular junctions; this can result in paralysis, respiratory failure, and death. The main way the body can overcome an irreversible antagonist is by synthesising new receptors. If enough new receptors are formed and become functional, and the irreversible antagonist is no longer in the system (e.g., it has been eliminated via urine), these new receptors can begin to restore biological function when they bind agonists or endogenous ligands.
Other types of agonists
There are three other types of ‘agonist’ drugs you may encounter in your studies. First, indirect agonists (or allosteric modulators) bind to a different part of the receptor than a (regular) full agonist or endogenous ligand. These indirect agonists help full agonists, or endogenous ligands, to exert their full effects. Benzodiazepines are an example of indirect agonists: they bind to an allosteric site on GABAA receptors and enhance the channel’s conductance when GABA (the endogenous ligand) is also bound.
Second, partial agonists bind to the same receptor site as agonist drugs, but they have low efficacy (Figure 3.47). Therefore, defining a drug as a partial agonist is relative – the response to a partial agonist must be less than the maximum response produced by a full agonist. Importantly, when both full and partial agonists are administered simultaneously, they compete for the same receptor binding site. In this scenario, because the partial agonist is less effective at producing a biological response, it antagonises the effect of the full agonist. In other words, it is impossible for the body to produce a full response to the agonist because partial agonists (which are less efficacious) occupy the agonist-binding sites. Such an effect can be overcome by increasing the dose of a full agonist, allowing it to outcompete the partial agonist for binding to receptor sites. Because of these effects, partial agonists are sometimes called mixed agonist-antagonist drugs.
The final type of drug we will discuss is the inverse agonist. Some receptors in the body have substantial endogenous activity, even when ligands are not bound to them. This observation breaks the general rule that receptors have no activity when they are not bound to a ligand. Inverse agonists reduce this spontaneous activity, resulting in a descending dose-response curve (Figure 3.47). Although their mechanism is complex, some beta-carboline alkaloids are considered inverse agonists. Beta-carboline alkaloids bind to GABAA receptors at the same site as benzodiazepines. While benzodiazepines facilitate chloride conductance through the receptor channel and decrease anxiety, beta-carboline alkaloids have the opposite effects when administered (Evans & Lowry, 2007). The anxiety-inducing effects of these inverse agonists lead some people to call them ‘anti-benzodiazepines’. By contrast, drugs that are competitive antagonists at the GABAA receptor do not influence the receptor’s function on their own, but instead block the ability of full, partial, or inverse agonists to alter the receptor’s activity.
Effects of repeated drug use
If an individual is repeatedly administered a specific dose of an agonist drug, then the ability of the drug to exert effects on the body might change. If the drug effects get smaller, this is known as tolerance. So, if an individual has developed tolerance to a particular drug, then the dose of the drug might need to be increased so that the drug is still efficacious. Sometimes, drug effects get bigger and bigger with repeated administrations – this finding is known as sensitisation or reverse-tolerance. Because drugs can have multiple effects on the nervous system and behaviour, some drug responses may undergo tolerance, while others are sensitised. For example, repeated administration of amphetamine can result in tolerance to the euphoria-inducing effects of the drug, but sensitisation to specific psychomotor or psychosis-associated impacts.
Exercise
To help you think about these concepts, try drawing dose-response curves for the development of tolerance and sensitisation. Remember that, with tolerance, more drug is required to get the same effect. In contrast, with sensitisation less drug is needed to get the same effect.
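If you would like to check your drawings, the short matplotlib sketch below plots an illustrative baseline dose-response curve alongside a right-shifted curve (tolerance: more drug is needed for the same effect) and a left-shifted curve (sensitisation: less drug is needed). The ED50 values are arbitrary assumptions chosen only to make the shifts visible.

import numpy as np
import matplotlib.pyplot as plt

doses = np.logspace(-1, 3, 200)            # doses from 0.1 to 1000 (arbitrary units)

def curve(dose, ed50, e_max=100.0):
    """Simple one-site occupancy model, as used earlier in the chapter."""
    return e_max * dose / (ed50 + dose)

for label, ed50 in [("baseline", 10),
                    ("tolerance (shifted right)", 50),
                    ("sensitisation (shifted left)", 2)]:
    plt.plot(doses, curve(doses, ed50), label=label)

plt.xscale("log")                          # a log dose axis gives the S-shape
plt.xlabel("dose (log scale)")
plt.ylabel("response")
plt.legend()
plt.show()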
Because certain drugs target similar receptors in the nervous system, cross-tolerance sometimes occurs, where tolerance to one drug also reduces the effects of another drug. For example, regular alcohol drinkers might be less affected by benzodiazepines, since the impact of both types of drug depends on GABA transmission and the expression of GABA receptors (Lê et al., 1986). The mechanisms underlying drug sensitisation are somewhat less well studied than those underlying tolerance. One example of sensitisation, however, is the ability of certain drugs (like amphetamine) to produce progressively larger increases in the neurotransmitter dopamine across repeated administrations (Singer et al., 2009). You will learn more about drug tolerance and sensitisation when studying addiction.
Summary
Key Takeaways
• Multiple classification systems for drugs exist
• Pharmacokinetics involves the absorption, distribution, and elimination of drugs from the body
• Pharmacodynamics involves how drugs interact with receptors and alter the functional state of the receptor.
In this chapter, you have learned about different categories of drugs and how they impact the body through pharmacokinetic and pharmacodynamic processes (Figure 3.48). Entire modules are often devoted to pharmacology, and many of the concepts described here can be quantified further with mathematical formulas, allowing precise comparisons between drugs. As you study different psychiatric conditions and their biomedical treatments, refer back to this chapter to help you understand how medications can be used to improve mental wellbeing for many individuals.
References
Corder, G., Castro, D. C., Bruchas, M. R., Scherrer, G. (2018). Endogenous and Exogenous Opioids in Pain. Annual Reviews of Neuroscience, 41, 453–473. https://dx.doi.org/10.1146/annurev-neuro-080317-061522.
Ellis, G. A., Blake, D. R. (1993). Why are non-steroidal anti-inflammatory drugs so variable in their efficacy? A description of ion trapping. Annals of the Rheumatic Diseases, 52, 241–243. https://dx.doi.org/10.1136/ard.52.3.241.
Evans, A. K., & Lowry, C. A. (2007). Pharmacology of the beta-carboline FG-7,142, a partial inverse agonist at the benzodiazepine allosteric site of the GABA A receptor: neurochemical, neurophysiological, and behavioral effects. CNS Drug Reviews, 13(4), 475–501. https://doi.org/10.1111/j.1527-3458.2007.00025.x
Frezza, M., di Padova, C., Pozzato, G., Terpin, M., Baraona, E., & Lieber, C. S. (1990). High blood alcohol levels in women. The role of decreased gastric alcohol dehydrogenase activity and first-pass metabolism. The New England Journal of Medicine, 322(2), 95–99. https://doi.org/10.1056/NEJM199001113220205
Goedde, H. W., & Agarwal, D. P. (1987). Polymorphism of aldehyde dehydrogenase and alcohol sensitivity. Enzyme, 37(1–2), 29–44. https://doi.org/10.1159/000469239
Kishimoto, T., Hagi, K., Nitta, M., Kane, J. M., Correll, C. U. (2019). Long-term effectiveness of oral second-generation antipsychotics in patients with schizophrenia and related disorders: a systematic review and meta-analysis of direct head-to-head comparisons. World Psychiatry, 18(2), 208-224. https://doi.org/10.1002/wps.20632
Lê, A. D., Khanna, J. M., Kalant, H., & Grossi, F. (1986). Tolerance to and cross-tolerance among ethanol, pentobarbital and chlordiazepoxide. Pharmacology, Biochemistry, and Behavior, 24(1), 93–98. https://doi.org/10.1016/0091-3057(86)90050-x
Le Merrer, J., Becker, J. A. J., Befort, K., Kieffer, B. L. (2009). Reward processing by the opioid system in the brain. Physiological Reviews, 89, 1379–1412. https://dx.doi.org/10.1152/physrev.00005.2009.
Mattingly, G. (2010). Lisdexamfetamine dimesylate: a prodrug stimulant for the treatment of ADHD in children and adults. CNS Spectrums, 15, 315–325. https://dx.doi.org/10.1017/s1092852900027541.
Mutschler, J., Grosshans, M., Soyka, M., & Rösner, S. (2016). Current Findings and Mechanisms of Action of Disulfiram in the Treatment of Alcohol Dependence. Pharmacopsychiatry, 49(4), 137–141. https://doi.org/10.1055/s-0042-103592
Nichols, D. E. (2022). Entactogens: How the Name for a Novel Class of Psychoactive Agents Originated. Frontiers in Psychiatry, 13, 863088. https://dx.doi.org/10.3389/fpsyt.2022.863088.
Pimentel, E., Sivalingam, K., Doke, M., Samikkannu, T. (2020). Effects of Drugs of Abuse on the Blood-Brain Barrier: A Brief Overview. Frontiers in Neuroscience 14:513. https://dx.doi.org/10.3389/fnins.2020.00513.
Singer, B. F., Tanabe, L. M., Gorny, G., Jake-Matthews, C., Li, Y., Kolb, B., & Vezina, P. (2009). Amphetamine-induced changes in dendritic morphology in rat forebrain correspond to associative drug conditioning rather than nonassociative drug sensitization. Biological Psychiatry, 65(10), 835–840. https://doi.org/10.1016/j.biopsych.2008.12.020
UK Government (2022). List of most commonly encountered drugs currently controlled under the misuse of drugs legislation. (Accessed 2022-12-13). https://www.gov.uk/government/publications/controlled-drugs-list–2/list-of-most-commonly-encountered-drugs-currently-controlled-under-the-misuse-of-drugs-legislation
Veverka, K. A., Johnson, K. L., Mays, D. C., Lipsky, J. J., Naylor, S. (1997). Inhibition of aldehyde dehydrogenase by disulfiram and its metabolite methyl diethylthiocarbamoyl-sulfoxide. Biochemical Pharmacology, 53, 511–518. https://doi.org/10.1016/S0006-2952(96)00767-8
About the Author
Dr Bryan Singer is a lecturer in the School of Psychology at the University of Sussex (Brighton, UK). Bryan’s lab is part of the highly collaborative Behavioural and Clinical Neuroscience group. He is the Director of the Sussex Addiction Research and Intervention Centre (SARIC) and a member of Sussex Neuroscience. Bryan also has an Associate role at The Open University (UK).
When you think about your senses, you will likely note that we have five senses: touch, hearing, sight, smell and taste.
As you will discover shortly, whilst this is true, it is also an oversimplification of the exquisite sensory systems our bodies possess. Even the simple names given to the senses do not do justice to the experiences they provide us with and the complexity that underpins our sensory processing.
In this section, you will learn about how the body senses the world around us. We will take each sensory system in turn and consider the sensory stimulus, how it is detected by the body, the pathways through the nervous system that the sensory information takes, and how it is processed within the brain to create a perception of the world.
Learning Objectives
By the end of this section you will be able to:
• Identify and describe the sensory stimuli for the different sensory systems
• Explain and compare how each sensory system detects sensory stimuli, converting the information into electrical signals for use within the nervous system
• Describe the pathways sensory signals take from the sense organ to the brain, noting any key processing that occurs at different points in the pathway and relating this to our perception of the stimulus
• Discuss the wider importance of our sensory systems as indicated by their functions beyond sensory perception and the impact of sensory impairment on an individual and their families.
04: Sensing the environment and perceiving the world
Touch comes before sight, before speech. It is the first language and the last, and it always tells the truth.
Margaret Atwood, Poet and Novelist
As the opening quote suggests, touch is fundamental to our experiences of the world, including our interactions with others. It is therefore the place we begin our journey through the senses.
Spend a moment with your eyes closed and focus on the sensations you can feel on your skin and in your body. What kinds of things can you detect?
You could have come up with a range of answers here. For example, you might have noted the feel of your clothes on your skin, how rough or smooth the fabric is or how tightly they fit. You might have felt a cool breeze across part of your body from an open door or window. You could even have realised that your body position feels a little uncomfortable, even painful, where you have sat in the same position for too long.
What you are experiencing in the exercise above is somatosensation, which means ‘bodily senses’. This includes the sense of touch, but also the sensing of temperature, pain and proprioception, the last of which can be defined as the sense of our own body position. In this first section we will focus on touch, before examining pain in the next section.
Sensing touch: getting to grips with the skin
To understand how we detect touch information, we need to understand a little about the structure of our skin. The skin is the largest organ in the human body and it incorporates the sensory receptor cells that allow us to detect touch as well as blood vessels, sweat glands and various other specialised structures. Critically for our sense of touch, the skin can be described as a viscous liquid, much like golden syrup or honey. It is said to have viscoelasticity, which means that when forces are applied to it, stresses and strains are created within the skin that can be detected by sensory receptor cells.
Note that these sensory receptor cells are distinct from the receptors you would have read about when studying neurotransmitters. Sensory receptor cells are whole cells designed to detect sensory signals or stimuli rather than a subcellular structure which binds to neurotransmitters or other small molecules.
There are four main types of sensory receptor cells which are critical for our sense of touch, each named after the biologist who discovered them:
• Meissner’s corpuscles
• Merkel’s discs
• Pacinian corpuscles
• Ruffini’s endings
These receptors are mechanoreceptors because they detect a mechanical stimulus. They can all be classed as a type of modified neuron. This means that they have a cell body and axon and are capable of producing an action potential.
The four types of receptors are shown in Figure 4.1. You can see from this figure that they are positioned at different depths within the skin. Merkel’s discs and Meissner’s corpuscles are positioned superficially, whilst the Pacinian corpuscles and Ruffini’s endings are positioned deep within the skin. The positioning gives us a clue about what kind of stimuli these different touch receptors detect.
If a receptor is positioned very deeply within the skin, would you expect it to be able to detect very light or gentle touch?
No, you would not. The stresses and strains set up in the skin are proportional to the stimulus, so a very light touch to the skin will only create forces within the superficial layers of the skin.
Scientists have now characterised these different receptors and have a good understanding of the types of stimuli they each respond to. A common feature of sensory systems is adaptation, which is a change in response of the receptor – normally a decrease – to a constant stimulus. Adaptation is very important in sensory systems because key information for our survival often comes from changing stimuli rather than constant ones. By signalling change, our sensory systems avoid wasting energy on signals that merely communicate that everything is staying the same, and therefore provide no new information. On this basis, touch receptors can be defined as fast or slow adapting receptors. The fast adapting receptors will stop responding very quickly to a constant stimulus whilst the slow adapting ones will likely continue to respond, albeit at a lesser level, to constant stimuli. This is summarised in Table 1.
• Meissner’s corpuscles – located superficially; activated by light touch and vibration; fast adapting
• Merkel’s discs – located superficially; activated by light touch and pressure; slow adapting
• Pacinian corpuscles – located deep within the skin; activated by heavy pressure and vibration; fast adapting
• Ruffini’s endings – located deep within the skin; activated by skin stretch; slow adapting

Table 1. Characteristics of sensory receptor cells for touch (location, activating stimulus and rate of adaptation)
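One way to picture the difference between fast and slow adaptation is with a toy comparison of firing rates during a constant press on the skin. The numbers below are invented for illustration only and are not physiological recordings.

# Toy firing rates (spikes per second) over a 5-second constant press.
time_points = [0, 1, 2, 3, 4, 5]            # seconds since stimulus onset
fast_adapting = [80, 5, 0, 0, 0, 0]         # e.g. Pacinian-like: signals change only
slow_adapting = [80, 60, 50, 45, 42, 40]    # e.g. Merkel-like: keeps signalling pressure

for t, fast, slow in zip(time_points, fast_adapting, slow_adapting):
    print(f"t={t}s  fast-adapting: {fast:>3} Hz   slow-adapting: {slow:>3} Hz")
# The fast-adapting receptor falls silent once the stimulus stops changing,
# whereas the slow-adapting receptor continues firing at a reduced rate.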
Before we look at how the body converts touch signals into neural signals, it is important to take note of the overall distribution of these receptors across the body, because this explains why some parts of the body are much more sensitive than others. If you have time, and someone willing to help you, have a go at the brief experiment outlined in Box 1 before you continue reading. If you do not have time, you can return to this at any point.
Box 1: Two-point discrimination experiment
This experiment demonstrates that different areas of the body differ in their ability to distinguish between a single stimulus and two stimuli placed on the skin. When two points (e.g., the ends of a cocktail stick or compass) are gently touched on the skin at the same time, they are usually felt as two different points. However, if they are very close together, they may only be detected as a single point. Different areas of the body will have different thresholds for which separation is felt and this is the ‘two-point discrimination threshold’.
You will need a helper for this experiment. Read all the steps carefully before you start and gather the following pieces of equipment:
• A large paper clip or similar object with two small points that can be bent into a ‘U’ shape such that both points or ends are level
• Material to serve as a blindfold
• A ruler or tape measure
Now follow the steps below (this assumes you are the experimenter and your helper is the participant):
Agree which three body parts you will investigate from the following list: index finger, palm of the hand, upper arm, forehead, thigh.
Explain to your participant that, whilst they are blindfolded, you will touch their skin with the paperclip and ask them to tell you whether they felt one or two points after each touch, noting in your explanation that you will randomly select one or two points to touch them with. Let them know that you will not tell them if they are right or wrong after each guess.
Once your participant is blindfolded you can begin testing. For the first body area apply either two points or a single point in a random order. For the two-point touches you should begin with the points very close together and gradually widen them. You are looking for the point at which they correctly state that they detect two points. At this point measure the distance between the two points. Remember not to tell your participant if they are correct when they guess. You should then repeat the process in the same body area, this time beginning with the points further apart, and bringing them together. As before, note the final distance at which they detect two points. Work out the average of the two distances and note this down as your discrimination threshold.
Now repeat all this on the other two body areas.
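If you record your measurements, the threshold calculation described in the steps above is simply the average of the two distances (one obtained starting wide and closing in, one starting close and widening) for each body site. A minimal sketch of that calculation is below; the example values are invented.

# Distances (mm) at which two points were first reported for each body site:
# (starting wide and closing in, starting close and widening)
measurements = {
    "index finger": (3, 2),      # invented example values
    "palm":         (14, 12),
    "upper arm":    (50, 44),
}

for site, (descending, ascending) in measurements.items():
    threshold = (descending + ascending) / 2
    print(f"{site}: two-point discrimination threshold = {threshold} mm")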
For reference here are some typical values in millimetres (mm):
• Index finger – 2 mm
• Upper arm – 47 mm
• Palm of the hand – 13 mm
• Forehead – 18 mm
• Thigh – 46 mm
If you map the thresholds for all areas of the body, you will find that some areas are more sensitive than others. For example, the upper lip and fingertips are much more sensitive than the back. This sensitivity arises from the different receptor types and their receptive fields, that is, the area of skin within which a touch will be detected by a single receptor cell. Areas with a high density of receptors with small, distinct receptive fields are very sensitive, because two points close together are still likely to fall in different receptive fields and therefore be perceived separately. In contrast, areas where receptive fields are large or overlapping are less sensitive, because two points will likely activate the same receptor and so be perceived as a single stimulus.
So far we have focused on the skin and the receptors within it that can detect touch information, but we have not yet looked at how that detection occurs. For a sensory signal to be used by the nervous system, it must be converted into a neural signal, or a change in membrane potential. The process whereby a sensory stimulus is converted into an electrical signal in the form of a membrane potential change is referred to as transduction.
From touch to nerve impulse
Transduction is a common process across all of our sensory systems, but exactly how it works varies with the sensory stimulus and the receptor involved. Much of what we know about transduction in touch comes from investigations of Pacinian corpuscles, because these have been the easiest to access for laboratory tests. Many of these studies have actually been carried out on cells from cats, which closely resemble those in humans.
Figure 4.2 shows the structure of a Pacinian corpuscle. In this figure you can see the corpuscle is made up of multiple layers, like an onion skin. In the middle of these layers there is a sensory nerve ending with an unmyelinated tip. Remember that myelin is a fatty substance that typically covers axons to provide electrical insulation.
When a force is applied to the skin, the layered corpuscle acts as a mechanical filter and the strain created by the force is transmitted to the unmyelinated tip through the corpuscle. The membrane of this tip contains mechano-sensitive ion channels. The term mechano-sensitive indicates that the channels will open and close depending on the mechanical force applied to them.
During transduction, the force applied causes the ion channels to open. This causes an influx of sodium ions (Na+) into the unmyelinated tip of the Pacinian corpuscle. You should remember from your studies of the resting and action potentials that ions move down their electrical and chemical gradients. In this case, positively charged sodium ions are more abundant outside the corpuscle, and the inside of the cell is negatively charged relative to the outside. When the ion channels open, sodium ions therefore flow into the cell, moving down both their concentration gradient and the electrical gradient.
Using your understanding of action potentials, what impact do you think this influx of sodium ions will have on the Pacinian corpuscle?
Hopefully you have noted that it would depolarise the cell as the inside becomes less negative than it is at rest. ‘Rest’ in this case means ‘in the absence of any touch stimulus’.
This depolarisation is referred to as a receptor potential because it is a change in membrane potential within a sensory receptor cell caused by the presence of a sensory stimulus. The receptor potential is similar to a post-synaptic potential in that it degrades quite rapidly, but if there is sufficient depolarisation at the point where the unmyelinated tip meets the first myelinated region (Figure 4.2), an action potential will be triggered.
You should recall from your studies of action potentials that they involve a coordinated movement of sodium and potassium ions across the membrane and this type of signal can be transmitted over long distances. This is particularly important in the senses because some of our sensory receptor cells are a very long way from our spinal cord and brain. In the case of touch, sensory receptor cells in the toe must be able to send signals over a metre to the spinal cord.
Before we look closely at the pathway touch information takes to the brain, it is useful to note the relationship between the size of the stimulus and the size of the receptor potential. Figure 4.3 shows that as the stimulus increases in intensity, the receptor potential gets larger.
Action potentials are all-or-nothing signals, meaning that they cannot change in size. What do you think a larger receptor potential means for the action potentials created?
Because the action potentials cannot get bigger, they must encode the larger stimulus in another way. The way they do this is through a greater frequency of action potentials.
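A toy illustration of this ‘rate coding’ idea: each action potential is the same size, so a stronger stimulus is represented by more spikes per second rather than bigger spikes. The mapping below is invented purely for illustration and is not a measured relationship.

def firing_rate(stimulus_intensity, max_rate=200.0, k=20.0):
    """Toy saturating mapping from stimulus intensity to spikes per second."""
    return max_rate * stimulus_intensity / (k + stimulus_intensity)

for intensity in [1, 5, 20, 100]:
    print(f"intensity {intensity:>3} -> ~{firing_rate(intensity):.0f} spikes/s")
# Each spike is the same size (all-or-nothing); only their frequency changes,
# and the rate eventually saturates at the neuron's maximum firing rate.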
Now that an action potential has been created in the neurons that detect touch, this information must travel to the brain.
Touch pathways to the brain
Whilst the sensory endings of these sensory receptor cells are found all over the body, their cell bodies are found in the dorsal root ganglion, shown in Figure 4.4.
Recall that the structure of the spinal cord is relatively simple and repeats from the base (sacral regions) to the top (cervical regions). This repeating structure means that there is a dorsal root ganglion at every segment, or height, of the spinal cord. Exactly which one the information enters depends on where in the body the information has come from. Figure 4.5 shows the cervical, thoracic, lumbar and sacral nerves entering the different segments of the spinal cord and the regions of the body they receive information from.
Using Figure 4.5, can you identify which spinal nerve comes from the thumb?
C6 carries information from the thumb into the spinal cord.
Note that the figure makes no reference to information coming from our face. There is a separate system for carrying somatosensory information from the face, called the trigeminal system, which operates in a very similar way to that described here except that the sensory neurons enter the central nervous system at the brainstem instead of the spinal cord.
Once information from the sensory nerve ending reaches the cell body in the appropriate dorsal root ganglion, it carries on into the spinal cord via the dorsal root, which is formed of the axons of these sensory cells. These neurons have a slightly different structure to typical neurons found in the brain because they have a bifurcating axon, meaning their axon splits in two (Figure 4.6) and this allows the same neuron to transmit information from the sensory nerve ending where the mechano-sensitive channels are, beyond the cell body and into the spinal cord.
There are multiple pathways by which touch information can reach the brain, but here we will focus on the most critical pathway, called the dorsal column/medial lemniscal pathway. This pathway is shown in Figure 4.7.
The axons enter the spinal cord and pass directly up it, on the same side of the midline, until they enter the dorsal column nuclei (DCN) in the medulla where they synapse with the next neuron in the pathway. This next neuron is referred to as the ‘second order neuron’ because the sensory receptor cell is a modified neuron, and was therefore the first order neuron. The axons of the second order neurons travel in a pathway called the medial lemniscus to the thalamus. Specifically, they reach an area called the ventral posterior lateral thalamic nucleus (VPL), where they synapse again with the third order neuron. The third order neuron carries the signal to the primary somatosensory cortex (S1) within the parietal lobe.
Representation of touch in the somatosensory cortex has long been understood to be topographically organised, meaning that areas of the body are represented in a way that is proportional to the input they receive, creating a mini map of the body in S1. This is known as the somatosensory homunculus or ‘little man’, and it was first proposed in 1937 (Figure 4.8) (Penfield & Boldrey, 1937). Much of the research that led to this proposal was conducted by Penfield, a neurosurgeon, who applied electrical stimulation to the cortical surface during surgery in patients with epilepsy (Box 2).
Box 2: Mapping the brain
Wilder Penfield (1891-1976, Figure 4.9) conducted surgery in patients with epilepsy or brain tumours. During this surgery he would apply a small electrical stimulation to the outer surface of the exposed cortex. Patients were conscious during this surgery and able to communicate with Penfield, meaning they could tell him what they felt when he applied stimulation to different regions. Keeping the patient conscious is not uncommon in brain surgery and allows the surgeons to carefully target specific areas.
Over the years, Penfield conducted cortical stimulation on over 100 patients and he kept meticulous notes and drawings indicating responses to specific areas of stimulation. It is from this research that the homunculus, which is Latin for ‘very small human’, was born. Penfield’s work is not without its limitations – it is noteworthy that exact stimulation patterns and intensity were not recorded, meaning that the final representation may not be entirely accurate (Matias, 2020). However, despite the potential inaccuracies, the idea has persisted and is still used to inspire or explain research almost 100 years later (Pan, Peck, Young, & Holodny, 2012).
The sensory homunculus is matched by a motor homunculus mapped onto the motor cortex (Figure 4.10). The two representations are connected and the connection between the two is likely to be critical for some aspects of movement including fine motor control. For example, researchers have found that impaired connectivity between these areas can underpin poor fine motor control in autism spectrum disorder (Thompson et al., 2017).
It is important to recognise that touch processing does not stop at the level of the primary somatosensory cortex. Research suggests an extensive cortical network is involved in processing touch information, with signals from S1 continuing to the secondary somatosensory cortex (S2), also located in the parietal cortex, and to the insular cortex, a cortical area nestled deep within the cortical folds between the parietal and temporal lobes (Rullmann, Preusser, & Pleger, 2019).
The perception of touch
You have now considered how touch stimuli are detected by sensory receptor cells, the process of transduction and how the information travels to, and is represented in, the brain. In this final section we will look at how touch is perceived, that is, what meaning can be gained from our sense of touch.
Touch could be perceived as simply the physical encountering of objects in our environment but in fact it is much more than this. We do not simply encounter objects and identify that they are present. Rather we can glean detail of the object’s size, weight, texture, stiffness and various other characteristics through our sense of touch. All of this allows us to identify specific objects and make appropriate behavioural responses to them.
Exactly what we perceive from touch may depend on the type of touch that we engage in. Touch can be categorised as either active or passive. Active touch requires the movement of the fingers over the object, that is an intentional interaction with the stimulus, whilst passive touch simply involves the object being pressed against the fingers. An example of active touch is when you reach out to feel the fabric of clothes you are considering buying and run it between your fingers to establish qualities such as smoothness, thickness and weight. By contrast, passive touch would be having the fabric pushed against your fingers.
Early research suggested that active touch may be more informative in determining object shape (Gibson, 1962), but later work controlling for details such as the pressure with which the object was applied to the skin, showed little difference between the two types of touch, or even an advantage of passive touch (Chapman, 1994). Activation in the primary somatosensory cortex has been shown to differ between the two modes, with greater activation under active touch conditions. It has been suggested that the greater activation from active touch could arise because of activation from the motor cortex into the primary somatosensory cortex as the fingers move (Simões-Franklin, Whitaker, & Newell, 2011).
Researchers have also argued that the differences in these two types of touch can be underpinned by the role of other systems, specifically proprioception. Active touch will activate the same sensory receptor cells in the skin as passive touch, but it will also activate proprioceptive receptors, that is the ones that detect the position of the body, in this example, the fingers. It has been proposed that both the touch and proprioceptive inputs converge in the brain, causing greater excitation (Cybulska-Klosowicz et al., 2020). However, other researchers have countered this, suggesting that the two inputs compete rather than combine within the brain (Dione & Facchini, 2021). There is still much work to be done to fully understand the effects of active and passive touch on the brain.
Touch and social bonding
It should be clear from what you have read so far that touch is extremely important for perceiving the world around us, including identifying the objects that our bodies come into contact with. This kind of touch can be considered discriminatory touch. However, we do not just come into contact with inanimate objects! Much of the physical contact we have in the world is with other people, and in this context touch perception is often about affective experience rather than discrimination. This affective touch plays a critical role in social bonding (Portnova, Proskurnina, Sokolova, Skorokhodov, & Varlamov, 2020).
Affective touch begins from a very early age, with parent-infant touch a key part of the nurturing process. A large body of research now shows links between early nurturing tactile interactions and later-life social and emotional functioning. This relationship is thought to be mediated in part by the hypothalamic-pituitary-adrenal (HPA) axis, which underpins the body’s stress responses and can be suppressed by various hormones, including oxytocin (Walker & McGlone, 2013). Research in rodents has shown that greater nurturing behaviour in the form of licking, grooming, huddling and playing results in a greater density of connections in the somatosensory cortex of the offspring (Seelke, Perkeybile, Grunewald, Bales, & Krubitzer, 2016).
Such comparisons are difficult to conduct in people because it would be unethical to divide human infants and parents into high and low nurturing conditions. However, one approach is to consider a group who would likely have experienced reduced nurturing without any experimental intervention. This approach was taken by a group of researchers who compared care-leavers with non care-leavers in the UK (Devine et al., 2020). They noted that the main reason for entering care in the UK is neglect and abuse and inferred from this that those in care may have received reduced tactile nurturing in infancy. The researchers used a range of measures to look at sensitivity and found care-leavers to be less sensitive to the affective components of touch.
It is not just early nurturing that can alter touch sensitivity and affective touch. Research has found that levels of empathy (Schaefer, Kühnel, Rumpel, & Gärtner, 2021) and loneliness (Saporta et al., 2022) can also have an effect on how people perceive affective touch. Additionally, the presence of certain diagnoses may also impact on touch perception. For example, individuals with Autism Spectrum Disorder have impaired responses to interpersonal touch (Baranek, David, Poe, Stone, & Watson, 2006).
You have now completed the first section on the senses with this section on touch. This section has introduced you to some key concepts including transduction, sensory pathways and the wider social implications of our senses.
Key takeaways
In this section you have learnt:
• The sense organ for touch is the skin, the largest organ in the body, which contains the sensory receptor cells critical for touch – Meissner’s corpuscles, Merkel’s discs, Pacinian corpuscles and Ruffini’s endings
• Each type of sensory receptor cell for touch can be found in a specific location within the skin and can give rise to a specific sensation. Receptors also differ in how quickly they adapt
• The type of receptors found and how densely they are packed determines the sensitivity of different parts of the body
• Sensory information enters the spinal cord and rises to the level of dorsal column nuclei in the medulla before crossing to the contralateral side in the medial lemniscus. After this it enters the ventral posterior lateral thalamic nuclei before continuing to the primary somatosensory area and other cortical regions for further processing
• Touch can serve both discriminatory and affective functions and can be considered in terms of active and passive touch
• Affective touch is critical for social and emotional functioning through its role in social bonding. Early adversity in the form of lack of nurturing tactile stimulation can have a long-lasting impact. Altered perception of affective touch can also be related to empathy, loneliness and the presence of diagnoses such as autism spectrum disorder.
References
Baranek, G. T., David, F. J., Poe, M. D., Stone, W. L., & Watson, L. R. (2006). Sensory Experiences Questionnaire: discriminating sensory features in young children with autism, developmental delays, and typical development. J Child Psychol Psychiatry, 47(6), 591-601. https://doi.org/10.1111/j.1469-7610.2005.01546.x
Chapman, C. E. (1994). Active versus passive touch: factors influencing the transmission of somatosensory signals to primary somatosensory cortex. Canadian Journal of Physiology and Pharmacology, 72(5), 558-570. https://doi.org/10.1139/y94-080
Cybulska-Klosowicz, A., Tremblay, F., Jiang, W., Bourgeon, S., Meftah, E.-M., & Chapman, C. E. (2020). Differential effects of the mode of touch, active and passive, on experience-driven plasticity in the S1 cutaneous digit representation of adult macaque monkeys. Journal of Neurophysiology, 123(3), 1072-1089. https://doi.org/10.1152/jn.00014.2019
Devine, S. L., Walker, S. C., Makdani, A., Stockton, E. R., McFarquhar, M. J., McGlone, F. P., & Trotter, P. D. (2020). Childhood Adversity and Affective Touch Perception: A Comparison of United Kingdom Care Leavers and Non-care Leavers. Frontiers in Psychology, 11. https://doi.org/10.3389/fpsyg.2020.557171
Dione, M., & Facchini, J. (2021). Experience-driven remodeling of S1 digit representation in awake monkeys: the challenge of comparing active and passive touch. J Neurophysiol, 125(3), 805-808. https://doi.org/10.1152/jn.00380.2020
Gibson, J. J. (1962). Observations on active touch. Psychological Review, 69(6), 477-491. https://doi.org/10.1037/h0046962
Matias, C. M. (2020). Edwin Boldrey and Wilder Penfield’s Homunculus: From Past to Present. World Neurosurg, 135, 14-15. https://doi.org/10.1016/j.wneu.2019.11.144
Pan, C., Peck, K. K., Young, R. J., & Holodny, A. I. (2012). Somatotopic organization of motor pathways in the internal capsule: a probabilistic diffusion tractography study. AJNR Am J Neuroradiol, 33(7), 1274-1280. https://doi.org/10.3174/ajnr.A2952
Penfield, W., & Boldrey, E. (1937). Somatic motor and sensory representation in the cerebral cortex of man as studied by electrical stimulation. Brain, 60(4), 389-443. https://doi.org/10.1093/brain/60.4.389
Portnova, G. V., Proskurnina, E. V., Sokolova, S. V., Skorokhodov, I. V., & Varlamov, A. A. (2020). Perceived pleasantness of gentle touch in healthy individuals is related to salivary oxytocin response and EEG markers of arousal. Exp Brain Res, 238(10), 2257-2268. https://doi.org/10.1007/s00221-020-05891-y
Rullmann, M., Preusser, S., & Pleger, B. (2019). Prefrontal and posterior parietal contributions to the perceptual awareness of touch. Sci Rep, 9(1), 16981. https://doi.org/10.1038/s41598-019-53637-w
Saporta, N., Peled-Avron, L., Scheele, D., Lieberz, J., Hurlemann, R., & Shamay-Tsoory, S. G. (2022). Touched by loneliness-how loneliness impacts the response to observed human touch: a tDCS study. Soc Cogn Affect Neurosci, 17(1), 142-150. https://doi.org/10.1093/scan/nsab122
Schaefer, M., Kühnel, A., Rumpel, F., & Gärtner, M. (2021). Dispositional empathy predicts primary somatosensory cortex activity while receiving touch by a hand. Sci Rep, 11(1), 11294. https://doi.org/10.1038/s41598-021-90344-x
Seelke, A. M. H., Perkeybile, A. M., Grunewald, R., Bales, K. L., & Krubitzer, L. A. (2016). Individual differences in cortical connections of somatosensory cortex are associated with parental rearing style in prairie voles (Microtus ochrogaster). Journal of Comparative Neurology, 524(3), 564-577. https://doi.org/10.1002/cne.23837
Simões-Franklin, C., Whitaker, T. A., & Newell, F. N. (2011). Active and passive touch differentially activate somatosensory cortex in texture perception. Hum Brain Mapp, 32(7), 1067-1080. https://doi.org/10.1002/hbm.21091
Thompson, A., Murphy, D., Dell’Acqua, F., Ecker, C., McAlonan, G., Howells, H., . . . Lombardo, M. V. (2017). Impaired Communication Between the Motor and Somatosensory Homunculus Is Associated With Poor Manual Dexterity in Autism Spectrum Disorder. Biol Psychiatry, 81(3), 211-219. https://doi.org/10.1016/j.biopsych.2016.06.020
Walker, S., & McGlone, F. (2013). The social brain: neurobiological basis of affiliative behaviours and psychological well-being. Neuropeptides, 47(6), 379-393. https://doi.org/10.1016/j.npep.2013.10.008
About the Author
Dr Ellie Dommett studied psychology at Sheffield University. She went on to complete an MSc Neuroscience at the Institute of Psychiatry before returning to Sheffield for her doctorate, investigating the superior colliculus, a midbrain multisensory structure. After a post-doctoral research post at Oxford University she became a lecturer at the Open University before joining King’s College London, where she is now a Reader in Neuroscience. She conducts research into Attention Deficit Hyperactivity Disorder, focusing on identifying novel management approaches.
There are wounds that never show on the body that are deeper and more hurtful than anything that bleeds.
Laurell K. Hamilton, Novelist
The opening quote illustrates the complexity of pain. When we think of pain in simple terms, we might think of it as the experience that arises from a cut or bruise to the body. It is, after all, one of the bodily senses that comes under the banner of somatosensation. This simple idea is correct, and it is our starting point in this section, but it does not fully encompass the experience of pain, as you will soon learn. Therefore in this section we will discuss three different types of pain, beginning with nociceptive pain, which refers to the kind of pain that arises from a bodily injury.
Nociception: detecting bodily injury
In the previous section you learnt about four different types of modified neurons which act as sensory receptor cells responsible for our sense of touch. There is also a fifth type of modified neuron in the skin which is responsible for detecting tissue damage, and this is called the nociceptor. Unlike the touch receptors, nociceptors do not have any associated structures such as capsules: instead, they are referred to as free nerve endings. On the surface of the free nerve endings there are different types of channels, which means they can detect different types of stimuli.
Keeping in mind the idea of damage to the body, as opposed to heartbreak or other types of pain, what type of stimuli might be detected by nociceptors?
You could have come up with a range of ideas here. In the opening paragraph we mentioned cuts and bruises, so you may have identified stimuli that cause damage to tissue or apply great pressure. You might also have noted that extreme temperatures can cause pain, or some chemical substances. All of these would have been correct.
For each type of stimulus that causes bodily damage, the nociceptor must detect the stimulus and produce a receptor potential through the process of transduction.
This process varies according to the stimulus type. The first type of noxious stimulus that can be detected is intense pressure, for example, pinching or crushing. This type of stimulus is detected by mechano-nociceptors. The process of transduction here is very similar to that which was outlined for touch receptors.
Which kind of ion channels open in the nerve endings of touch receptors to produce a receptor potential?
Mechano-sensitive ion channels open, allowing sodium ions into the nerve ending and causing a depolarising receptor potential.
Another type of transduction occurs when tissue is damaged. The damage results in cell membranes being ruptured, so that the chemical constituents of a cell which are typically found within the intracellular space spill into the extracellular space.
Can you identify an ion which is normally found in high concentration inside neurons but at a lower concentration outside?
Potassium
Substances that can be released from the inside of the cell include potassium ions, which are critical to the function of neurons, but other substances are released too. For example, hydrogen ions also increase in the extracellular space when tissue damage occurs, causing a decrease in pH, that is, an increase in acidity. Substances such as bradykinin and prostaglandins can also be released from damaged cells.
These chemicals increase as a direct consequence of tissue damage, but there are also indirect changes which alter the chemical constituents of the extracellular space. When tissue is damaged the immune system mounts a response to protect the body. This response includes the release of several chemicals in the area: histamine, serotonin, and adenosine triphosphate (ATP). All these changes give the nociceptors plenty to detect! Some of the substances directly activate the nociceptor (e.g., potassium and bradykinin) whilst others sensitise them (e.g., prostaglandins).
The final type of stimulus that can activate nociceptors is temperature: stimuli that are very hot (>45°C) or very cold (<5°C). Transduction of these stimuli depends on temperature-sensitive channels. When these channels detect hot or cold stimuli, they open and allow both sodium and calcium ions into the nociceptor. The consequence of this is depolarisation in the form of the receptor potential.
What would you expect to happen after the receptor potential is induced?
If the receptor potential is large enough, an action potential will be triggered.
As you might expect, the receptor potential can trigger an action potential in the nociceptor and this information can then be transmitted to the central nervous system. Before we look at the pathway to the spinal cord and the brain, it is helpful to note a few features of nociceptive pain.
Firstly, although our starting point here was nociceptors being located in the skin, they are in fact found throughout the body. The only two areas where they are not found are inside bones and in the brain. The latter explains why brain surgery can be conducted with awake patients, as described in the previous section on touch. Other organs, such as the heart, lungs or bladder, do have nociceptors, and activation of these is referred to as visceral pain. Visceral pain is a special type of nociceptive pain. It is much rarer than the typical nociceptive pain that arises from our muscles, skin or joints, which is technically referred to as somatic pain. The rarity of visceral pain contributes to an interesting phenomenon called referred pain (Box 3).
Secondly, nociceptors can alter their sensitivity following tissue damage, resulting in hyperalgesia or allodynia. Hyperalgesia refers to an increased sensitivity following injury (Gold and Gebhart, 2010).
From an evolutionary perspective, why might hyperalgesia be beneficial?
It is likely to support a greater period of rest to allow recovery.
Hyperalgesia is considered to be part of ‘sickness behaviour’, that is, the behaviour that we have evolved to allow any infection or illness to run its course (Hart, 1988). Allodynia refers to nociceptors becoming sensitive to non-noxious stimuli, for example, a gentle touch, after injury.
The distinction between the two is illustrated in Figure 4.11.
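If it helps to see the distinction more concretely, the sketch below expresses the two changes as shifts in a toy stimulus-response rule; the thresholds, gains and ratings are purely illustrative, not measured values.

```python
# Toy stimulus-response rule: no pain below threshold, then pain grows with
# stimulus intensity, scaled by a gain. Hyperalgesia is modelled as an increased
# gain (exaggerated response to a noxious stimulus); allodynia as a lowered
# threshold (a previously innocuous stimulus now produces pain).

def pain_rating(stimulus, threshold, gain):
    return max(0.0, stimulus - threshold) * gain

conditions = {
    "normal":       {"threshold": 5.0, "gain": 1.0},
    "hyperalgesia": {"threshold": 5.0, "gain": 2.0},
    "allodynia":    {"threshold": 2.0, "gain": 1.0},
}

for label, params in conditions.items():
    gentle = pain_rating(3.0, **params)   # a gentle, normally innocuous stimulus
    noxious = pain_rating(8.0, **params)  # a clearly noxious stimulus
    print(f"{label:13s} gentle touch -> {gentle:.1f}, noxious stimulus -> {noxious:.1f}")
```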
Box 3: Referred pain – what hurts anyway?
Referred pain is an interesting phenomenon where injury to one area of the body creates a perception of pain arising from a different area. There are some well-known examples of this shown in the table below:
Site of injury | Site of pain perception
Heart | Left arm, shoulder and jaw
Throat | Head
Gall bladder | Right shoulder blade
Lower back | Legs

Table 2. Examples of referred pain
Referred pain can be problematic because it can make it harder for health professionals to diagnose and treat the problem if they are unable to locate the source of the pain. Some examples are common – for example, a heart attack presenting as pain down the left arm and shoulder – meaning that these are easily identified, but for others, it can cause delays to treatment.
There is no consensus on exactly how referred pain arises, but it is thought that information from the visceral site and the somatic site, for example the heart and the shoulder respectively, converges onto a single neuron, resulting in an ambiguous signal in the brain. Perception is driven both by the sensory stimulus (bottom-up processing) and by information from memory and past experiences (top-down processing), and when faced with an ambiguous signal, the brain interprets the situation according to what it most expects. We are more likely to have hurt our shoulder than our heart and so the injury to our heart is perceived as a pain in our shoulder.
Pain pathways: getting pain information to the brain
Once an action potential has been produced in the nociceptor the information can be transmitted to the brain. As with the somatosensory neurons responsible for detecting touch, nociceptors also have their cell bodies in the dorsal root ganglion. From here they enter the spinal cord. There are then two key routes the information can take. The simplest route is shown in Figure 4.12: the nociceptor connects to an interneuron, which in turn synapses with a motor neuron, forming a reflex arc. This pathway is responsible for the pain withdrawal reflex, for example, moving your hand away from a shard of glass or hot pan handle. The pathway does not travel via the brain and therefore works below conscious awareness to allow a fast response, withdrawing the body from any source of pain.
The remaining pathway travels to the brain and is shown in Figure 4.13. It is this pathway that underpins our conscious perception of pain, which can only occur when the signal reaches the brain. This pathway is referred to as the spinothalamic tract. Nociceptors feeding into this pathway terminate in the superficial areas of the spinal cord and synapse with lamina I neurons, also referred to as transmission cells, which form the second order neuron that crosses the midline and travels up to the thalamus, hence the name spinothalamic.
Using Figure 4.13, can you identify which thalamic nuclei are within the spinothalamic tract?
The ventroposterior lateral nuclei (VPL) and the central laminar nuclei.
From the VPL, third order neurons continue to the secondary somatosensory cortex. Additionally, neurons synapsing in the central laminar nucleus in the thalamus connect with neurons which carry the signal onwards to the insula and cingulate cortex. As with touch information, pain arising from the face region travels separately in the trigeminal pathway.
You may have spotted that the pathways responsible for touch travel up the spinal cord ipsilaterally (i.e. on the same side as the sensory input) and cross the midline in the brainstem, whilst the pathways for nociception travel up contralaterally (i.e. on the opposite side), having crossed the midline in the spinal cord. This gives rise to an unusual condition called Brown-Séquard Syndrome (Box 4).
Box 4: Brown-Séquard Syndrome
Brown-Séquard Syndrome is named after the Victorian scientist (Figure 4.14) who first described and explained a rare spinal injury, presenting his case study at the British Medical Association’s annual meeting in 1862.
In the case study he presented, the syndrome, characterised by loss of touch on one side of the body and loss of pain sensation on the other, was caused by a traumatic injury (Shams & Arain, 2020). However, the syndrome can, albeit less commonly, arise through non-traumatic injury, such as multiple sclerosis or decompression sickness. The syndrome occurs due to incomplete damage to the spinal cord, such that only one side is severed. The unusual pattern of lost sensation arises because touch information and pain information ascend on different sides of the midline within the spinal cord. Touch ascends ipsilaterally to the stimulus whilst pain ascends contralaterally. When one side of the spinal cord is cut, touch sensation is lost on the same side as the injury and pain on the opposite side.
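The pattern of deficits follows directly from where each pathway crosses the midline, and can be written out as a simple rule (a sketch of the logic only, not a clinical tool):

```python
# Minimal sketch of why a one-sided lesion of the spinal cord produces the
# Brown-Séquard pattern below the level of the injury: touch ascends
# ipsilaterally (crossing later, in the brainstem), whereas pain ascends
# contralaterally (having already crossed in the spinal cord).

def brown_sequard_deficits(lesion_side):
    other_side = "right" if lesion_side == "left" else "left"
    return {
        "touch lost on": lesion_side,  # touch fibres are still on the lesioned side
        "pain lost on": other_side,    # pain fibres on that side have already crossed over
    }

print(brown_sequard_deficits("left"))
# {'touch lost on': 'left', 'pain lost on': 'right'}
```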
Prognosis for individuals with Brown-Séquard Syndrome varies depending on the cause and the extent of the damage, but because the syndrome involves only partial damage to the spinal cord, significant recovery is possible, provided complications such as infections can be avoided.
Clearly, it is very important that information about bodily injury can reach the brain, but pain is also an unpleasant sensation, which is not always beneficial.
Can you think of a situation when it is helpful not to be aware of, or focus on, a bodily injury you are experiencing?
You could have come up with lots of ideas here. Perhaps the most obvious one is when your survival depends on being able to mobilise. Battlefield injuries are associated with this situation, where an individual reports not being fully aware of their injuries until they are away from the frontline. Similar reports can be found for people in sports matches.
The fact that there are situations when pain can be modulated suggests that it is not a simple case of the nociceptor sending an uninterruptible signal to the brain. In fact, you have already learnt about one type of input to the spinothalamic tract that can interrupt the signal about a noxious stimulus before it reaches the brain.
Think about when you knock your elbow, head or knee on something. What do you typically do immediately, without thinking?
The most common reaction here is to rub the site that you have knocked. That is, provide touch stimulation. You often see this when young children fall or hurt themselves, with the caregiver ‘rubbing it [the injured site] better’.
A key explanation of why touching the site of injury may provide pain relief comes from the Gate Control Theory. This theory was first proposed by Ronald Melzack and Patrick Wall and describes how pain perception can be modulated by touch (Melzack & Wall, 1965). In their theory, Melzack and Wall suggest that there is a gating mechanism within the spinal cord which, when activated, results in a closing of the gate, and can prevent pain signals reaching the brain.
Recall that in the spinal cord the nociceptor synapses with lamina I cells, also called transmission cells. However, other sensory inputs also enter the spinal cord, in the form of touch sensory receptor neurons. According to Melzack and Wall, activation of the touch receptors can result in the signal from the nociceptor being blocked.
Figure 4.15 shows the proposed circuitry for this. In the figure you can see the touch receptor and the nociceptor. Both synapse with the lamina I transmission cell and with another neuron in lamina II of the spinal cord. This small lamina II neuron, called an interneuron, is critical because it is thought to act as the gate due to its ability to inhibit the transmission cell. When the nociceptor alone is activated, Melzack and Wall proposed that it will inhibit the lamina II cell and simultaneously excite the transmission cell. The effect of inhibiting the lamina II cell is to remove the inhibition that cell normally exerts on the transmission cell. In effect this results in direct excitation of the transmission cell by the nociceptor, and indirect disinhibition, that is, removal of inhibition, of the same cell through the interneuron. This means the transmission cell is excited and a signal reaches the brain, causing the perception of pain.
If the touch receptor neuron is also excited, this has the effect of exciting the inhibitory interneuron in lamina II. The interneuron is thought both to suppress the release of glutamate from the nociceptor at its synapse with the transmission cell and to inhibit the transmission cell directly, in effect a two-pronged attack on transmission cell excitation, reducing the likelihood of an action potential being produced and the signal about pain reaching the brain.
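One way to make the proposed circuit concrete is to treat each cell as a unit that simply sums its excitatory and inhibitory inputs. The short Python sketch below follows the logic described above; the weights are arbitrary values chosen so that the gate behaves as Melzack and Wall proposed, and they have no physiological meaning.

```python
# Sketch of the Gate Control circuit: the lamina II interneuron ("the gate")
# is excited by touch fibres and inhibited by nociceptors; the lamina I
# transmission cell is excited by both fibre types and inhibited by the
# interneuron. All weights are arbitrary, chosen only to illustrate the logic.

def transmission_cell_output(touch, nociceptor):
    # Lamina II interneuron: driven by touch input, suppressed by nociceptor input
    interneuron = max(0.0, 1.0 * touch - 0.5 * nociceptor)
    # Transmission cell: excited by both inputs, inhibited by the interneuron
    drive = 0.5 * touch + 1.0 * nociceptor - 2.0 * interneuron
    return max(0.0, drive)

print("touch alone (no injury):      ", transmission_cell_output(touch=1.0, nociceptor=0.0))  # 0.0
print("noxious stimulus alone:       ", transmission_cell_output(touch=0.0, nociceptor=1.0))  # 1.0
print("noxious stimulus plus rubbing:", transmission_cell_output(touch=1.0, nociceptor=1.0))  # 0.5
```

In this toy version, rubbing the injured site (touch plus nociceptor input) produces a smaller transmission-cell output than the noxious stimulus alone, which is the behaviour the theory is trying to explain.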
The Gate Control Theory is just that, a theory, and like all theories there is evidence both for and against it. For example, lamina II interneurons have been found to contain GABA, an inhibitory neurotransmitter, supporting the theory. However, nociceptors have so far only been found to form excitatory synapses, which argues against the inhibitory connection from the nociceptor to the interneuron that the theory proposes. Although elements of this theory may be inaccurate, it has proved highly influential.
We indicated above that you have learnt about the first way that pain signals can be modulated when we described the role of touch. The other way pain signals can be altered is by actions of the brain itself.
Pain pathways: descending control of pain from the brain
Within the brainstem there are several structures which can modulate our experience of pain. We will begin with the periaqueductal grey (PAG), which is also referred to as the central grey. This structure is activated by activity in the spinothalamic tract and research has shown that electrical stimulation of the PAG results in a powerful pain-relieving or analgesic effect (Fardin et al., 1984). It is here that some endogenous opioids are thought to act.
This analgesia is thought to occur because of the connections between the PAG and two structures called the locus coeruleus and the raphe nuclei. The locus coeruleus contains noradrenergic neurons and the raphe nuclei contain serotonergic neurons. Both groups of neurons are thought to send signals down to the spinal cord to the lamina II interneurons, which in turn suppress the lamina I transmission cells that form the spinothalamic tract.
What would be the impact of suppressing the spinothalamic tract?
Reduced activity in this pathway would reduce the amount of activity reaching the thalamus and other areas of the brain, minimising our perception of pain.
This descending pathway is an example of negative feedback. A noxious stimulus causes excitation in the spinothalamic tract which in turn activates the PAG. The PAG then sends a signal down to the spinal cord to interrupt, or silence, the incoming signal from nociceptors.
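A minimal sketch of this loop, using purely illustrative numbers, shows how the descending signal settles at a level that partially cancels the ascending one:

```python
# Illustrative negative feedback loop for descending pain control.
# Ascending spinothalamic activity drives the PAG, and PAG output (relayed via
# the locus coeruleus and raphe nuclei to lamina II interneurons) suppresses
# the transmission cells that generate the ascending signal.

def settle(nociceptor_input, feedback_gain, steps=20):
    ascending = nociceptor_input
    for _ in range(steps):
        pag_output = feedback_gain * ascending                # PAG tracks the ascending signal
        ascending = max(0.0, nociceptor_input - pag_output)   # descending inhibition at the spinal cord
    return ascending

print("without descending control:", settle(1.0, feedback_gain=0.0))  # 1.0
print("with descending control:   ", settle(1.0, feedback_gain=0.5))  # settles at about 0.67
```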
We have already discussed some situations where it is beneficial not to experience the full unpleasantness of pain, for example, in the case of a battlefield injury or where survival depends on being able to focus on getting to safety. It is likely this pathway from the PAG is critical in these situations. However, there are other, more everyday situations where it is helpful not to focus on pain and therefore to have a way of modulating the experience.
Think back to the last time you did any of these things: a) went to the dentist for a filling b) had a vaccination by an injection or c) had a piercing or tattoo. How did you manage the pain associated with this experience?
You could have come up with all kinds of things here, but one thing they would likely all have had in common is distraction. Your dentist may have placed a poster on the ceiling of a busy image for you to look at, or had music playing. The doctor or nurse giving the vaccination or your piercer or tattooist will likely have chatted away to distract you.
In these situations you are experiencing attentional analgesia, that is, analgesia in which attention is directed away from the threatening event, resulting in decreased pain perception. Researchers have shown that this kind of analgesia does involve the PAG but also involves several other structures. Oliva et al. (2022) have demonstrated that attentional analgesia involves parallel descending pathways, one from the anterior cingulate cortex (ACC) to the locus coeruleus and another from the ACC to the PAG and on to a region of the medulla, both extending down to the spinal cord.
Before we leave pathways behind, we need to introduce the second type of pain. Recall that at the start of this section we said we would examine three types of pain. The first is nociceptive pain, that is, pain that arises through actual bodily injury. The second type of pain is neuropathic pain. This type of pain arises through damage to the nociceptors and pathways that carry nociceptive information. One example of this is so-called thalamic syndrome. This syndrome arises when the thalamus is damaged, for example, by a stroke. Individuals with this syndrome experience intense burning or crushing pain from any sort of contact with the skin at a specific location. The pain is neuropathic because there is no actual damage to the location on the body where the sensation is felt, but there is damage to the neurons forming the pathways that would typically carry pain information to the brain about this bodily region.
We now turn our attention to the final type of pain, the kind that arises without any damage to the body, including the neurons that process pain. This is psychogenic pain.
Pain without physical damage: psychogenic pain
The opening quote to this section indicated that pain can extend beyond physical damage. This idea is in keeping with our own day-to-day experiences and our use of language. For example, we talk of heartache and life events breaking or damaging us. In these cases, there can be no physical injury to the body or the nerves that normally carry nociceptive information, meaning there is no nociceptive or neuropathic pain, and yet, our experiences are best described as painful. This is psychogenic pain and it refers to a type of pain that can be attributed to psychological factors.
There is a huge range of psychological factors that could result in feelings described as pain. In the previous paragraph we mentioned heartache, which could arise from a relationship breakdown or bereavement, but there are other, perhaps less obvious, factors as well. For example, the Social Pain Theory (MacDonald and Leary, 2005) suggests that being excluded from a social group or desirable interpersonal relationship can cause social pain, due to rejection, which is similar to physical pain. They also suggest that this social pain serves the same purpose as physical pain: responding to threats to survival, including reproduction. Furthermore, there is some evidence that social exclusion and rejection involve similar areas of the brain.
Researchers used functional magnetic resonance imaging (fMRI) to demonstrate overlap between the areas involved in physical pain and the experience of social exclusion. Eisenberger and colleagues asked people to play a virtual ball game whilst having their brain scanned. They found that the anterior cingulate cortex was more active when participants were excluded from the game and that this activity was positively correlated with the self-reported distress felt by participants (Eisenberger et al., 2003). Recall that this area of the cortex is also activated by physical pain. Subsequent studies went on to demonstrate that paracetamol, which can provide relief from physical pain, can also decrease activity in this region and the perceived social pain felt (DeWall et al., 2010).
This provides an important link to our last section on pain – its treatment.
Treating pain: medication and beyond
It would not be appropriate to talk about pain without discussing pain treatment. Whilst some pain will resolve without treatment, other pain will require treatment or management. The importance of pain treatment is illustrated by examining the consequences of not treating pain. Failure to treat chronic pain, that is, pain persisting for more than three months, can result in altered mood, mental health disorders, cognitive impairments, sleep disruption and, overall, a reduction in quality of life (Delgado-Gallén et al., 2021).
You have now read about three different types of pain: nociceptive (encompassing somatic and visceral), neuropathic and psychogenic. It is probably not surprising to learn that with such a range of pain experiences, there is no single treatment that will be effective for all types of pain in all individuals. Additionally, psychogenic pain is rarely treated by healthcare professionals, although the underlying psychological factors may be addressed through talking therapies.
One important consideration when treating pain is whether the pain is acute, for example, from a cut or even broken bone, or chronic, for example, from nerve damage that cannot repair. Some treatments may be effective in the short term, and therefore suitable for acute pain, but not suitable for chronic pain, for example, because of side effects of long term pain medication.
Treatment of acute pain is typically the most straightforward and is often achieved with drug treatments. These can be categorised according to where in the pain pathway they act:
• Acting at the sensory nerve ending: Medicines such as non-steroidal anti-inflammatory drugs (NSAIDS e.g., ibuprofen) act at the sensory nerve ending of nociceptors to block the sensitization of nociceptors by prostaglandins.
• Acting on the nociceptor axon: Medicines such as local anaesthetics (e.g., lidocaine) block sodium channels in the cell membrane, preventing depolarisation and, therefore, action potentials.
• Acting in the spinal cord: Medicines such as opioids, gabapentin and ketamine act in the spinal cord, likely through a range of mechanisms.
• Acting in the brain: Opioids may also act on the brain in the thalamus and sensory cortex, along with antidepressant drugs. These drugs can also alter mood, meaning the pain may continue but its impact is reduced.
Some of these drugs may also be used to treat chronic pain but consideration needs to be given to side effects. For example, long term use of NSAIDS is associated with stomach problems, and long term use of opioids comes with the risk of addiction. Decisions about long term drug use must therefore be made carefully and on an individual basis. For example, long term opioid use may be deemed appropriate where the pain is due to a terminal condition.
Other treatments that can be used for acute or chronic pain include stimulation techniques such as Transcutaneous Electrical Nerve Stimulation or TENS. TENS machines provide a low-voltage electrical stimulation to the site of pain. It is thought that this low level of stimulation activates the touch receptors, which, as you should recall from the discussion of the Gate Control Theory, could in turn reduce perceived pain. Although TENS is not typically used for acute injuries, it is sometimes used to treat the acute pain of labour and period pains. A systematic review of the literature on labour pains, which included data from 1671 women, found little difference in the pain perceived by women receiving TENS compared to those in control groups not receiving TENS (Dowswell et al., 2009). However, results for period pains are more positive, with results from 260 individuals indicating that when compared to a sham TENS condition (i.e., the machine is attached but not switched on), TENS provided significant pain relief (Arik et al., 2022).
Studies into the effectiveness of TENS in chronic pain have looked at a range of conditions. For example, a review of the literature investigating osteoarthritis in the knee, a condition which affects 16% of individuals over 15 years of age worldwide (Cui et al., 2020), found TENS to be effective at reducing pain and improving walking ability (Wu et al., 2022).
Surgical approaches may also be taken to treat chronic pain. Clearly any surgery carries risks and therefore this type of treatment is only used in extreme cases. One situation in which surgical approaches may be used is in the treatment of the intractable pain found in up to 90% of individuals with terminal cancer. In this situation, surgery may be deemed an appropriate treatment. The most common types of surgery conducted are cordotomy and myelotomy (Bentley et al., 2014). In a cordotomy, surgeons cut the spinothalamic tract on one side of the spinal cord.
If only one side of the spinal cord has the spinothalamic tract cut, would pain from both sides of the body be reduced?
No, only pain from the contralateral side of the body, as the spinothalamic tract crosses the midline immediately on entering the spinal cord.
A cordotomy is a suitable treatment for unilateral pain, that is, pain on one side of the body. In a myelotomy, the surgeons cut at the middle of the spinal cord, again targeting the spinothalamic neurons, this time at the point they cross.
As you will likely have gathered, chronic pain typically requires a multifaceted approach which may include psychological interventions such as cognitive behavioural therapy. This kind of multifaceted treatment is typically delivered at pain management clinics where individuals are supported by a team of professionals including pain consultants, physiotherapists, psychologists and occupational therapists. Such clinics keep the individual at the centre of treatment, and the individual is active in their pain management, with a view to educating them about their pain and finding suitable, but often minimal, analgesic requirements.
The exact cause of the chronic pain will determine, in part, how successfully it can be treated. One type of chronic pain that is still considered very hard to treat, even with a multifaceted approach, is phantom limb pain. This type of pain consists of ongoing painful sensations that appear to be coming from a part of the limb that is no longer there. This can occur in up to 80% of amputees (Richardson and Kulkarni, 2017).
However, the name ‘phantom limb’ is actually quite misleading because this kind of painful sensation is not limited to missing limbs. Up to 80% of patients who have had a mastectomy (a breast removed), typically for the treatment of breast cancer, may experience both non-painful and painful sensations arising from the missing breast (Ramesh and Bhatnagar, 2009). Exactly why phantom pain happens is still not fully understood, but it is likely to be due to changes in how the nervous system is wired or connected following the amputation or mastectomy. A review of studies investigating treatments for phantom limb pain by Richardson and Kulkarni (2017) found that over 38 different therapies had been investigated, including a range of drug treatments, transcutaneous magnetic stimulation (TMS), a technique similar to TENS but applying a magnetic pulse instead of an electrical one, and mirror therapy (Box 5). They concluded that despite the range of therapies tested, results were insufficient to support the use of any of these treatments.
Box 5: Novel approaches to pain management: Mirror Therapy
This novel treatment was first described by neuroscientist Vilayanur Ramachandran. In this treatment the patient positions a mirror box between their intact limb and the missing limb. They then look into the mirror to see a reflection of their intact limb, creating a visual representation of the missing limb (Figure 4.16). They can then make movements with their intact limb whilst looking at the reflection. This movement can create the perception of the individual regaining control over the missing limb. Where the pain arises from a clenched or cramped feeling in the phantom limb, movement of the intact limb to a different position could relieve the pain.
All the treatments available for pain could, in part, have their effects attributed to the placebo effect. This is where an individual gains some benefit without receiving any real treatment. The placebo effect is not specific to pain treatment; it can occur in treatment for any condition. In the context of pain, the placebo effect could be responsible if an individual gains pain relief from swallowing a tablet, even if that tablet had no impact on pain processing, or from being connected to a TENS machine, even if it is not switched on. The placebo effect is a complicated phenomenon (see also the chapter Placebos: a psychological and biological perspective); there are several reasons it might occur, including (Perfitt et al., 2020):
• Conditioned behaviour: people learn to associate pain relief with taking a tablet or receiving an injection, so even when the tablet or injection contains an inert or inactive substance, they experience a conditioned response of pain relief.
• Expectation: people expect to get better after seeing a doctor or receiving treatment and so experience pain relief because of this expectation.
Given we know that pain perception can be modulated by descending pathways from the brain, either of these top-down mechanisms is plausible.
It is also important to recognise that other effects could occur which are mistaken for the placebo effect. For example, people may just get better over time because that is the natural trajectory of their condition, meaning there is no placebo effect; they simply recovered. There is also the Hawthorne effect, which refers to the fact that simply observing people in an experiment or trial will change their behaviour. For example, in a trial of a drug treatment for osteoarthritis, those who are part of the trial may be more likely to complete recommended exercises, and to report regularly on their behaviour, than people outside the trial, and so may experience pain relief from a placebo drug treatment, not because of the placebo effect but because they are mobilising the joint more.
What do you think the placebo effect, or even the Hawthorne effect, means for clinical trials trying to test the effectiveness of new treatments?
These trials need to be very carefully designed to ensure that the group of people receiving the new treatment are compared to an appropriate control group. For example, it might be appropriate to have a TENS group, a sham TENS group and a third group on a waiting list who are assessed but do not receive a real or placebo treatment.
We have now reached the end of our exploration of the somatosensory system, covering touch and pain.
Key takeaways
In this section you have learnt:
• Nociceptive pain arises when there is actual physical injury to the body and it is detected by nociceptors capable of responding to mechanical, chemical and thermal signals
• Nociceptive pain can be divided into pain arising from the muscles, skin or joints, called somatic pain, and pain arising from the internal organs, which is called visceral pain. We can sometimes struggle to identify the location of visceral pain and misattribute it to somatic pain, a phenomenon known as referred pain
• Nociceptors can alter their sensitivity, giving rise to hyperalgesia and allodynia. Both of these may serve to protect the body while any injury or damage heals
• When nociceptive information enters the spinal cord it can form a reflex arc with a motor neuron or be transmitted up to the brain via the spinothalamic tract. From the thalamus, information about noxious stimuli is sent onto various cortical areas
• The Gate Control Theory proposes that signals in the spinothalamic tract can be blocked by activation of lamina II interneurons in the spinal cord, which are activated by touch
• Descending control of pain, by areas such as the PAG and the anterior cingulate cortex, can also provide a powerful method of pain control
• Neuropathic pain arises when the pathways which process pain information are damaged, creating a perception of pain in the absence of damage to that body part
• The final type of pain is psychogenic pain, that is, pain arising from psychological factors such as relationship breakdown or social exclusion. Brain imaging suggests the experience of psychogenic pain activates similar areas of the brain to physical pain
• Pain treatment focuses on nociceptive and neuropathic pain and can draw on a range of approaches including drug treatment, stimulation approaches, surgery and psychological therapy. Where the pain is chronic, several treatments may be combined and delivered by specialised clinics.
References
Arik, M. I., Kiloatar, H., Aslan, B., & Icelli, M. (2022). The effect of TENS for pain relief in women with primary dysmenorrhea: A systematic review and meta-analysis. Explore (New York, N.Y.), 18(1), 108–113. https://doi.org/10.1016/j.explore.2020.08.005
Bentley, J. N., Viswanathan, A., Rosenberg, W. S., & Patil, P. G. (2014). Treatment of medically refractory cancer pain with a combination of intrathecal neuromodulation and neurosurgical ablation: case series and literature review. Pain Medicine (Malden, Mass.), 15(9), 1488–1495. https://doi.org/10.1111/pme.12481
Cui, A., Li, H., Wang, D., Zhong, J., Chen, Y., & Lu, H. (2020). Global, regional prevalence, incidence and risk factors of knee osteoarthritis in population-based studies. EClinicalMedicine, 29, 100587. https://doi.org/10.1016/j.eclinm.2020.100587
Delgado-Gallén, S., Soler, M. D., Albu, S., Pachón-García, C., Alviárez-Schulze, V., Solana-Sánchez, J., Bartrés-Faz, D., Tormos, J. M., Pascual-Leone, A., & Cattaneo, G. (2021). Cognitive Reserve as a Protective Factor of Mental Health in Middle-Aged Adults Affected by Chronic Pain. Front. Psych., 12, 752623. https://doi.org/10.3389/fpsyg.2021.752623
DeWall, C. N., MacDonald, G., Webster, G. D., Masten, C. L., Baumeister, R. F., Powell, C., … Eisenberger, N. I. (2010). Acetaminophen reduces social pain: Behavioral and neural evidence. Psychol. Sci., 21(7), 931–937. https://doi.org/10.1177/0956797610374741
Dowswell, T., Bedwell, C., Lavender, T., & Neilson, J. P. (2009). Transcutaneous electrical nerve stimulation (TENS) for pain relief in labour. The Cochrane database of systematic reviews, (2), CD007214. https://doi.org/10.1002/14651858.CD007214.pub2
Eisenberger, N. I., Lieberman, M. D., & Williams, K. D. (2003). Does rejection hurt? An fMRI study of social exclusion. Science, 302(5643), 290–292. https://doi.org/10.1126/science.1089134
Fardin, V., Oliveras, J. L., & Besson, J. M. (1984). A reinvestigation of the analgesic effects induced by stimulation of the periaqueductal gray matter in the rat. II. Differential characteristics of the analgesia induced by ventral and dorsal PAG stimulation. Brain Res. 306(1-2), 125–139. https://doi.org/10.1016/0006-8993(84)90361-5
Gold, M. S., & Gebhart, G. F. (2010). Nociceptor sensitization in pain pathogenesis. Nat. Med., 16(11), 1248–1257. https://doi.org/10.1038/nm.2235
Hart, B. L. (1988). Biological basis of the behavior of sick animals. Neuroscience and Biobehavioral Review, 12(2), 123–137. https://doi.org/10.1016/S0149-7634(88)80004-6
Macdonald, G., & Leary, M. R. (2005). Why does social exclusion hurt? The relationship between social and physical pain. Psychol. Bull., 131(2), 202–223. https://doi.org/10.1037/0033-2909.131.2.202
Melzack, R., & Wall, P. D. (1965). Pain mechanisms: a new theory. Science, 150(3699), 971–979. https://doi.org/10.1126/science.150.3699.971
Oliva, V., Hartley-Davies, R., Moran, R., Pickering, A. E., & Brooks, J. C. (2022). Simultaneous brain, brainstem, and spinal cord pharmacological-fMRI reveals involvement of an endogenous opioid network in attentional analgesia. eLife, 11, e71877. https://doi.org/10.7554/eLife.71877
Perfitt, J. S., Plunkett, N., & Jones, S. (2020). Placebo effect in the management of chronic pain. BJA Education, 20(11), 382–387. https://doi.org/10.1016/j.bjae.2020.07.002
Ramesh, Shukla, N. K., & Bhatnagar, S. (2009). Phantom breast syndrome. Indian Journal of Palliative Care, 15(2), 103–107. https://doi.org/10.4103/0973-1075.58453
Richardson, C., & Kulkarni, J. (2017). A review of the management of phantom limb pain: challenges and solutions. Journal of Pain Research, 10, 1861–1870. https://doi.org/10.2147/JPR.S124664
Shams, S., & Arain, A. (2020). Brown Sequard Syndrome. StatPearls [Internet]. https://www.ncbi.nlm.nih.gov/books/NBK538135/
Wu, Y., Zhu, F., Chen, W., & Zhang, M. (2022). Effects of transcutaneous electrical nerve stimulation (TENS) in people with knee osteoarthritis: A systematic review and meta-analysis. Clinical rehabilitation, 36(4), 472–485. https://doi.org/10.1177/02692155211065636
I always say deafness is a silent disability: you can’t see it, and it’s not life-threatening, so it has to touch your life in some way in order for it to be on your radar.
Rachel Shenton, Actress and Activist
Rachel Shenton, quoted above, is an actress who starred in, created and co-produced The Silent Child (2017), an award-winning film based on her own experiences as the child of a parent who became deaf after chemotherapy. The quote illustrates the challenge of deafness, which in turn demonstrates our reliance on hearing. As you will see in this section, hearing is critical for safely navigating the world and communicating with others. Consequently, hearing loss can have a devastating impact on individuals. To understand the importance of hearing and how the brain processes sound, we begin with the sound stimulus itself.
Making waves: the sound signal
The stimulus that is detected by our auditory system is a sound wave: a longitudinal wave of fluctuations in air pressure produced by the vibration of objects. The vibration creates regions where the air particles are closer together (compressions) and regions where they are further apart (rarefactions) as the wave moves away from the source (Figure 4.17).
The nature of the sound signal is such that the source of the sound is not in direct physical contact with our bodies. This is different from the bodily senses described in the first two sections because in the senses of touch and pain the stimulus contacts the body directly. Because of this difference touch and pain are referred to as proximal senses. By contrast, in hearing, the signal originates from a source not in direct contact with the body and is transmitted through the air. This makes hearing a distal, rather than proximal, sense.
The characteristics of the sound wave are important for our perception of sound. Three key characteristics are shown in Figure 4.18: frequency, amplitude and phase.
The frequency of a wave is the number of complete cycles it goes through each second, and is measured in Hertz (Hz); one Hertz is simply one cycle per second (the time taken for a single cycle is known as the period). Humans can hear sounds with a frequency of 20 to 20,000 Hz (20 kHz). Examples of low frequency sounds, which are generally considered to be under 500 Hz, include the sounds of waves and elephants! In contrast, higher frequency sounds include the sound of whistling and nails on a chalkboard. Amplitude is the amount of fluctuation in air pressure that is produced by the wave. The amplitude of a wave is measured in pascals (Pa), the unit of pressure. However, in most cases when considering the auditory system, this is converted into intensity, and intensities are discussed in relative terms using the unit of the decibel (dB). Using this unit, the range of intensities humans can typically hear is 0 to 140 dB; sounds above this level can be very harmful to our auditory system. Although you may see sound intensity expressed in dB, another expression is also commonly used. Where the intensity of sound is expressed with reference to a standard intensity (the lowest intensity at which a young person can hear a sound of 1000 Hz), it is written as dB SPL. The SPL stands for sound pressure level. Normal conversation is typically at a level of around 60 dB SPL.
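The decibel SPL scale relates a measured sound pressure to a reference pressure of 20 micropascals, which corresponds roughly to the threshold of hearing described above. A quick calculation, assuming that standard reference value, shows how the familiar figures arise:

```python
import math

P_REF = 20e-6  # reference pressure in pascals (approximate threshold of hearing at 1000 Hz)

def db_spl(pressure_pa):
    """Express a sound pressure (in pascals) in decibels SPL."""
    return 20 * math.log10(pressure_pa / P_REF)

print(db_spl(20e-6))  # threshold of hearing           ->   0 dB SPL
print(db_spl(0.02))   # roughly conversational speech  ->  60 dB SPL
print(db_spl(200.0))  # at the top of the quoted range -> 140 dB SPL
```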
Unlike frequency and amplitude, phase is a relative characteristic because it describes the relationship between different waves. Waves can be said to be in phase, meaning they have peaks at the same time, or out of phase, meaning that they are at different stages in their cycle at any one point in time.
The three characteristics above and the diagrams shown indicate a certain simplicity about sound signals. However, the waves shown here are pure waves, the sort you might expect from a tuning fork that emits a sound at a single frequency. These are quite different to the sound waves produced by more natural sources, which will often contain multiple different frequencies all combined together giving a less smooth appearance (Figure 4.19).
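The 'less smooth' appearance arises because several pure waves of different frequencies, amplitudes and phases are added together. The sketch below sums three arbitrary pure tones to produce a more natural-looking waveform; the frequencies and amplitudes are chosen purely for illustration:

```python
import math

def pure_tone(t, frequency, amplitude=1.0, phase=0.0):
    """Air-pressure fluctuation of a single pure tone at time t (in seconds)."""
    return amplitude * math.sin(2 * math.pi * frequency * t + phase)

def complex_sound(t):
    """A more natural-looking waveform: the sum of three pure tones."""
    return (pure_tone(t, 220, amplitude=1.0) +
            pure_tone(t, 440, amplitude=0.5) +
            pure_tone(t, 880, amplitude=0.25, phase=math.pi / 4))

# Sample the first few milliseconds of the combined waveform at 8 kHz
samples = [complex_sound(n / 8000) for n in range(40)]
print([round(s, 2) for s in samples[:10]])
```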
In addition, it is rare that only a single sound is present in our environment, and sound sources also move around! This can make sound detection and perception a very complex process and to understand how this happens we have to start with the ear.
Sound detection: the structure of the ear
The human ear is often the focus of ridicule but it is a highly specialised structure. The ear can be divided into three different parts which perform distinct functions:
• The outer ear which is responsible for gathering sound and funnelling it inwards, but also has some protective features
• The middle ear which helps prepare the signal for receipt in the inner ear and serves a protective function
• The inner ear which contains the sensory receptor cells for hearing, called hair cells. It is in the inner ear that transduction takes place.
Figure 4.20 shows the structure of the ear divided into these three sections.
Although transduction happens in the inner ear, the outer and middle ear have key functions and so it is important that we briefly consider these.
The outer ear consists of the pinna (or auricle), which is the visible part that sticks out of the side of our heads. In most species the pinnae can move, but in humans they are static. The key function of the outer ear is in funnelling sound inwards, but the ridges of the pinna (the lumps and bumps you can feel in the ear) also play a role in helping us localise sound sources. In addition to this, and often overlooked, is the protective function of the outer ear. Ear wax found in the outer ear provides a water-resistant coating which is antibacterial and antifungal, creating an acidic environment hostile to pathogens. There are also tiny hairs in the outer ear, preventing entry of small particles or insects.
The middle ear sits behind the tympanic membrane (or ear drum), which divides the outer and middle ear. The middle ear is an air-filled chamber containing three tiny bones, called the ossicles. These bones are connected in such a way that they create a lever between the tympanic membrane and the cochlea of the inner ear, which is necessary because the cochlea is fluid-filled.
Spend a moment thinking about the last time you went swimming or even put your head under the water in a bath. What happens to the sounds you could hear beforehand?
The sounds get much quieter, and will likely be muffled, if at all audible, when your ear is filled with water.
Hopefully you will have noted that when your ear contains water from a pool or the bath, sound becomes very hard to hear. This is because the particles in the water are harder to displace than particles in air, which results in most of the sound being reflected back off the surface of the water. In fact only around 0.01% of sound is transmitted into water from the air, which explains why it is hard to hear underwater.
Because the inner ear is fluid-filled, this gives rise to a similar issue as hearing under water, because the sound wave must move from the air-filled middle ear to the fluid-filled inner ear. To achieve this without loss of signal, the signal is amplified in the middle ear by the lever action of the ossicles, together with the difference in area between the large tympanic membrane and the much smaller region of bone contacting the cochlea; together these produce roughly a 20-fold increase in pressure as the sound wave enters the cochlea.
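Because the decibel scale is logarithmic, this roughly 20-fold pressure gain can also be expressed in decibels. The quick calculation below uses commonly quoted approximate values for the area and lever ratios; treat them as illustrative, since individual ears vary and estimates differ between sources:

```python
import math

# Commonly quoted approximate values; illustrative only, not exact measurements.
area_ratio = 17    # tympanic membrane area relative to the oval window
lever_ratio = 1.3  # mechanical advantage of the ossicular lever

pressure_gain = area_ratio * lever_ratio  # roughly a 20-fold increase in pressure
gain_db = 20 * math.log10(pressure_gain)  # the same gain expressed in decibels

print(f"~{pressure_gain:.0f}-fold pressure gain, i.e. about {gain_db:.0f} dB")
```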
As with the outer ear, the middle ear also has a protective function in the form of the middle ear reflex. This reflex is triggered by sounds over 70 dB SPL and involves muscles in the middle ear locking the position of the ossicles.
What would happen if the ossicles could not move?
The signal could not be transmitted from the outer ear to the inner ear.
We now turn our attention to the inner ear and, specifically, the cochlea, which is the structure important for hearing (other parts of the inner ear form part of the vestibular system which is important for balance). The cochlea consists of a tiny tube, curled up like a snail. A small window into the cochlea, called the oval window (Figure 4.21a), is the point at which the sound wave enters the inner ear, via the actions of the ossicles.
The tube of the cochlea is separated into three different chambers by membranes. The key chamber to consider here is the scala media, which sits between the basilar membrane and Reissner’s membrane and contains the organ of Corti (Figures 4.21b, c).
The cells critical for transduction of sound are the inner hair cells which can be seen in Figure 4.21c.
These cells are referred to as hair cells because they have hair-like stereocilia protruding from one end. The end from which the stereocilia protrude is referred to as the apical end. The stereocilia project into a fluid called endolymph, whilst the other end of the cell, the basal end, sits in perilymph. The endolymph contains a very high concentration of potassium ions.
How does this differ from typical extracellular space?
Normally potassium is at a low concentration outside the cell and a higher concentration inside, so this is the opposite to what is normally found.
When a sound wave is transmitted to the cochlea, it causes the movement of fluid in the chambers which in turn moves the basilar membrane upon which the inner hair cells sit. This movement causes their stereocilia to bend. When they bend, mechano-sensitive ion channels in the tips open and potassium floods into the hair cell causing depolarisation (Figure 4.22). This is the auditory receptor potential.
Spend a moment looking at Figure 4.22. What typical neuronal features can you see? How are these cells different from neurons?
There are voltage-gated calcium channels and synaptic vesicles, but there is no axon.
You should have noted that the inner hair cells only have some of the typical structural components of neurons. This is because, unlike the sensory receptor cells for the somatosensory system, these are not modified neurons and they cannot produce action potentials. Instead, when sound is detected, the receptor potential results in the release of glutamate from the basal end of the hair cell where it synapses with neurons that form the cochlear nerve to the brain. If sufficient glutamate binds to the AMPA receptors on these neurons, an action potential will be produced and the sound signal will travel to the brain.
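The chain of events from stereocilia deflection to an action potential in the cochlear nerve can be summarised as a sequence of steps, each depending on the one before. The sketch below is purely qualitative and none of the quantities correspond to real measurements.

```python
# Qualitative sketch of mechanotransduction in an inner hair cell:
# bending of the stereocilia opens mechano-sensitive channels, K+ entry from the
# endolymph depolarises the cell, depolarisation opens calcium channels, calcium
# triggers glutamate release at the basal end, and sufficient glutamate at AMPA
# receptors fires the cochlear nerve fibre.

def cochlear_nerve_spike(deflection, glutamate_threshold=0.5):
    channel_open_fraction = min(1.0, max(0.0, deflection))  # stereocilia bending
    depolarisation = channel_open_fraction                  # K+ influx (receptor potential)
    calcium_entry = depolarisation                          # calcium channels open
    glutamate_released = calcium_entry                      # vesicle release at the synapse
    return glutamate_released >= glutamate_threshold        # AMPA receptors on the nerve fibre

for deflection in (0.0, 0.3, 0.8):
    print(f"deflection {deflection:.1f} -> cochlear nerve spike: {cochlear_nerve_spike(deflection)}")
```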
Auditory pathways: what goes up must come down
The cochlear nerve leaves the cochlea and enters the brain at the level of the brainstem, synapsing with neurons in the cochlear nuclear complex before travelling via the trapezoid body to the superior olive, also located in the brainstem. This is the first structure in the pathway to receive information from both ears; prior to this, in the cochlear nuclear complex, information is only received from the ipsilateral ear. After leaving the superior olive, the auditory pathway continues in the lateral lemniscus to the inferior colliculus in the midbrain before travelling to the medial geniculate nucleus of the thalamus. From the thalamus, as with the other senses you have learnt about, the signal is sent on to the cortex, in this case the primary auditory cortex in the temporal lobe. This complex ascending pathway is illustrated in Figure 4.23.
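Because the pathway contains so many relay stations, it can help to list the stages in order, noting where input from the two ears first converges. The listing below simply restates the route described above.

```python
# Ascending auditory pathway in order, from cochlea to cortex. The trapezoid
# body and lateral lemniscus are the fibre tracts linking the early stages.
ASCENDING_AUDITORY_PATHWAY = [
    ("cochlear nerve", "ipsilateral ear only"),
    ("cochlear nuclear complex", "ipsilateral ear only"),
    ("superior olive", "first stage receiving input from both ears"),
    ("inferior colliculus", "binaural"),
    ("medial geniculate nucleus (thalamus)", "binaural"),
    ("primary auditory cortex (temporal lobe)", "binaural"),
]

for stage, note in ASCENDING_AUDITORY_PATHWAY:
    print(f"{stage:42s} {note}")
```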
You will learn about the types of processing that occur at different stages of this pathway shortly, but it is also important to recognise that the primary auditory cortex is not the end of the road for sound processing.
Where did touch and pain information go after the primary somatosensory cortex?
In both cases, information was sent onto other cortical regions, including secondary sensory areas and areas of the frontal cortex.
As with touch and pain information, auditory information from the primary sensory cortex, in this case the primary auditory cortex, is carried to other cortical areas for further processing. Information from the primary auditory cortex divides into two separate pathways or streams: the ventral ‘what’ pathway and the dorsal ‘where’ pathway.
The ventral pathway travels down and forward and includes the superior temporal region and the ventrolateral prefrontal cortex. It is considered critical for auditory object recognition, hence the ‘what’ name (Bizley & Cohen, 2013). There is not yet a clear consensus on the exact role in recognition that the different structures in the pathway play, but activity in this pathway can be modulated by emotion (Kryklywy, Macpherson, Greening, & Mitchell, 2013).
In contrast to the ventral pathway, the dorsal pathway travels up and forward, going into the posterodorsal cortex in the parietal lobe and forwards into the dorsolateral prefrontal cortex (Figure 4.24). This pathway is critical for identifying the location of sound, as suggested by the ‘where’ name. As with the ventral pathway, the exact role of individual structures is not clear, but it too can be modulated by other functions. Researchers have found that whilst it is not impacted by emotion (Kryklywy et al., 2013), it is, perhaps unsurprisingly, modulated by spatial attention (Tata & Ward, 2005).
Recall that when discussing pain pathways you learnt about a pathway which extends from higher regions of the brain to lower regions – a descending pathway. This type of pathway also exists in hearing. The auditory cortex sends projections down to the medial geniculate nucleus, inferior colliculus, superior olive and cochlear nuclear complex, meaning every structure in the ascending pathway receives descending input. Additionally, there are connections from the superior olive directly onto the inner and outer hair cells. These descending connections have been linked to several different functions including protection from loud noises, learning about relevant auditory stimuli, altering responses in accordance with the sleep/wake cycle and the effects of attention (Terreros & Delano, 2015).
Perceiving sound: from the wave to meaning
In order to create an accurate perception of sound information we need to extract key information from the sound signal. In the section on the sound signal we identified three key features of sound: frequency, intensity and phase. In this section we will consider these as you learn about how key features of sound are perceived, beginning with frequency.
The frequency of a sound is thought to be coded by the auditory system in two different ways, both of which begin in the cochlea. The first method of coding is termed a place code because this coding method relies on stimuli of different frequencies being detected in different places within the cochlea. Therefore, if the brain can tell where in the cochlea the sound was detected, the frequency can be deduced. Figure 4.25 shows how different frequencies can be mapped within the cochlea according to this method. At the basal end of the cochlea sounds with a higher frequency are represented whilst at the apical end, low frequency sounds are detected. The difference in location arises because the different sound frequencies cause different displacement of the basilar membrane. Consequently, the peak of the displacement along the length of the membrane differs according to frequency, and only hair cells at this location will produce a receptor potential. Each hair cell is said to have a characteristic frequency to which it will respond.
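To make the idea of a place code more concrete, the relationship between position along the basilar membrane and characteristic frequency in humans is often approximated by a published formula known as the Greenwood function. The short sketch below is illustrative only and is not taken from this chapter; the constants are the commonly quoted values for the human cochlea, so treat them as assumptions.

```python
# Illustrative sketch (not from this chapter): approximate characteristic frequency
# at different positions along the human basilar membrane using the Greenwood function.
# Constants (A = 165.4, a = 2.1, k = 0.88) are commonly quoted values for humans and
# are assumptions here, included for illustration only.

def characteristic_frequency_hz(position_from_apex: float) -> float:
    """position_from_apex: 0.0 = apical end, 1.0 = basal end of the cochlea."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * position_from_apex) - k)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"{x:.2f} of the way from apex to base: ~{characteristic_frequency_hz(x):,.0f} Hz")

# Low frequencies map near the apex and high frequencies near the base,
# spanning roughly 20 Hz to 20,000 Hz across the length of the cochlea.
```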
Although there is some support for a place code of frequency information, there is also evidence from studies in humans that we can detect smaller changes in sound frequency than would be possible from place coding alone.
This led researchers to consider other possible explanations and to the proposal of a temporal code. This proposal is based on research which shows a relationship between the frequency of the incoming sound wave and the firing of action potentials in the cochlear nerve (Wever & Bray, 1930), which is illustrated in Figure 4.26. Thus when an action potential occurs, it provides information about the frequency of the sound.
Recall that we can hear sounds of up to 20,000 Hz or 20 KHz. How does this compare to the firing rate of neurons?
This is much higher than the firing rate of neurons. Typical neurons are thought to be able to fire at up to 1000 Hz.
Given the constraints of firing rate, a temporal code in single neurons cannot account for the full range of frequencies that we can perceive. Wever and Bray (1930) proposed that groups of neurons could work together to account for higher frequencies, with each neuron firing on only a subset of the wave’s cycles, as illustrated in Figure 4.27.
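A minimal sketch of this idea follows: if a single neuron can fire at most around 1000 times per second (the limit mentioned above), several neurons locked to different cycles of the wave are needed to mark every cycle between them. The mapping is purely illustrative, not a measurement from the studies cited.

```python
import math

MAX_FIRING_RATE_HZ = 1000  # assumed upper limit for a single neuron, as noted in the text

def neurons_needed(sound_frequency_hz: float) -> int:
    """Minimum number of neurons needed so that, between them,
    one action potential can mark every cycle of the sound wave."""
    return math.ceil(sound_frequency_hz / MAX_FIRING_RATE_HZ)

for f in (200, 1000, 4000, 20000):
    print(f"{f} Hz tone: at least {neurons_needed(f)} neuron(s) firing in alternating volleys")
```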
The two coding mechanisms are not mutually exclusive, and researchers now believe that temporal code may operate at very low frequencies (< 50 Hz) and place code at higher frequencies (> 3000 Hz), with all intermediate frequencies being coded by both mechanisms. Irrespective of which coding method is used for frequency in the cochlea, once encoded, this information is preserved throughout the auditory pathway.
Sound frequency can be considered an objective characteristic of the wave but the perceptual quality it most closely relates to is pitch. This means that typically sounds of high frequency are perceived as having a high pitch.
The second key characteristic of sound to consider is intensity. As with frequency, intensity information is believed to be coded initially in the cochlea and then transmitted up the ascending pathway. Also in line with the coding of frequency, there are two suggested mechanisms for coding intensity. The first method suggests that intensity can be encoded according to firing rate in the auditory nerve. To understand this it is important to remember the relationship between stimulus and receptor potential which was first described in the section on touch. You should recall that the larger the stimulus, the bigger the receptor potential. In the case of sound, the more intense the stimulus, the larger the receptor potential will be, because the ion channels will be held open longer with a larger amplitude sound wave. This means that more potassium can flood into the hair cell causing greater depolarisation and subsequently greater release of glutamate. The more glutamate that is released, the greater the amount that is likely to bind to the post-synaptic neuron forming the auditory nerve. Given that action potentials are all-or-none, they stay the same size but their frequency increases.
The second method of encoding intensity is thought to be the number of neurons firing. Recall from Figure 4.25b that sound waves will result in a specific position of maximal displacement of the basilar membrane, and so typically only activate hair cells with the corresponding frequency which in turn signal to specific neurons in the cochlear nerve. However, it is suggested that as a sound signal becomes more intense there will be sufficient displacement to activate hair cells either side of the characteristic frequency, albeit to a lesser extent, and therefore more neurons within the cochlear nerve may produce action potentials.
You may have noticed that the methods for coding frequency and intensity here overlap.
Considering the mechanisms described, how would you know whether an increased firing rate in the cochlear nerve is caused by a higher frequency or a greater intensity of a sound?
The short answer is that the signal will be ambiguous and you may not know straight away.
The overlapping coding mechanisms can make it difficult to achieve accurate perception; indeed, we know that perception of loudness, the perceptual experience that most closely correlates with sound intensity, is significantly affected by the frequency of a sound. This is probably why our perception is supported by a combination of multiple coding mechanisms. Furthermore, we can make small head movements, which alter the intensity of the sound reaching each ear and therefore help inform our perception of both frequency and intensity when the signal is ambiguous.
This leads us nicely onto the coding of sound location, which requires information from both ears to be considered together. For that reason sound localisation coding cannot take place in the cochlea and so happens in the ascending auditory pathway.
Which is the first structure in the pathway to receive auditory signals from both ears?
It is the superior olive in the brainstem.
The superior olive can be divided into the medial and lateral superior olive and each is thought to use a distinct mechanism for coding the location of sound. Neurons within the medial superior olive receive excitatory inputs from both cochlear nuclear complexes (i.e., the one on the right and the one on the left), which allows them to act as coincidence detectors. To explain this a little more it is helpful to think about possible positions of sound sources relative to your head. Figure 4.28 shows the two horizontal planes of sound: left to right and back to front.
We will ignore stimuli falling exactly behind or exactly in front for a moment and focus on those to the left or right. Sound waves travel at a speed of 348 m/s (which you may also see written as m s⁻¹) and a sound coming from one side of the body will reach the ear on that side ahead of the other ear. The average distance between the ears is 20 cm, which means that sound waves coming directly from, for example, the right side will hit the right ear 0.6 ms before they reach the left ear, and vice versa if the sound is coming from the left. Shorter delays between the sound arriving at the left and right ears are experienced for sounds coming from less extreme right or left positions. This time delay means that neurons in the cochlear nerve closest to the sound source will fire first. This head start is maintained in the cochlear nuclear complex. Neurons in the medial superior olive are thought to be arranged such that they can detect specific time delays and thus code the origin of the sound. Figure 4.29 illustrates how this is possible. If a sound is coming from the left side, the signal from the left cochlear nuclear complex will reach the superior olive first and likely get all the way along to neuron C before the signal from the right cochlear nuclear complex combines with it, maximally exciting that neuron.
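As a rough check on the numbers above, the short sketch below works out the interaural time delay for a sound coming directly from one side by simply dividing the ear-to-ear distance by the speed of sound. It ignores the extra path around the head, so it is only an approximation.

```python
SPEED_OF_SOUND_M_PER_S = 348   # value used in the text
EAR_SEPARATION_M = 0.20        # average distance between the ears, as above

# Maximum interaural time delay: sound arriving directly from one side
delay_s = EAR_SEPARATION_M / SPEED_OF_SOUND_M_PER_S
print(f"Maximum interaural delay ~ {delay_s * 1000:.2f} ms")  # ~0.57 ms, i.e. roughly 0.6 ms

# A sound directly in front of (or behind) the head reaches both ears at the same time,
# so the delay is 0 ms and left/right position cannot be distinguished this way.
```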
Using Figure 4.29, what would happen if the sound was from exactly in front or behind?
The input from the two cochlear nuclear complexes would likely combine on neuron B. Neuron B is therefore, in effect, a coincidence detector for no time delay between signals coming from the two ears. The brain can therefore deduce that the sound location is not to the left or the right – but it can’t tell from these signals if the sound is in front of or behind the person.
This method, termed interaural (between the ears) time delay, is thought to be effective for lower frequencies, but for higher frequencies another method can be used by the lateral superior olive. Neurons in this area are thought to receive excitatory inputs from the ipsilateral cochlear nuclear complex and inhibitory inputs from the contralateral complex. These neurons detect the interaural intensity difference, that is the reduction in intensity caused by the sound travelling across the head. Importantly the drop of intensity as sound moves around the head is greater for higher frequency sounds. The detection of interaural time and intensity differences are therefore complementary, favouring low and high frequency sounds, respectively.
The two mechanisms outlined for perceiving location here are bottom-up methods. They rely completely on the data we receive, but there are additional cues to localisation. For example, high frequency components of a sound diminish more than low frequency components when something is further away, so the relative amount of low and high frequencies can tell us something about the sound’s location.
What would we need to know to make use of this cue?
We would need to know what properties (the intensity of different frequencies) to expect in the sound to work out if they are altered due to distance. Use of this cue therefore requires us to have some prior experience of the sound.
By combining all the information about frequency, intensity and localisation we are able to create a percept of the auditory world. However, before we move on it is important to note that whilst much of the auditory coding appears to take place in lower areas of the auditory system, this information is preserved and processed throughout the cortex. More importantly, it is also combined with top-down input and several structures will co-operate to create a perception of complex stimuli such as music, including areas of the brain involved in memory and emotion (Warren, 2008).
Hearing loss: causes, impact and treatment
As indicated in the opening quote to this section, hearing loss can be a difficult and debilitating experience. There are several different types of hearing loss and each comes with a different prognosis. To begin with it is helpful to categorise types of hearing loss according to the location of the impairment:
• Conductive hearing loss occurs when the impairment is within the outer or middle ear, that is, the conduction of sound to the cochlea is interrupted.
• Cochlear hearing loss occurs when there is damage to the cochlea itself.
• Retrocochlear hearing loss occurs when the damage is to the cochlear nerve or areas of the brain which process sound. The latter two categories are often considered collectively under the classification of sensorineural hearing loss.
The effects of hearing loss are typically considered in terms of hearing threshold and hearing discrimination. Threshold refers to the quietest sound that someone is able to hear in a controlled environment, whilst discrimination refers to their ability to concentrate on a sound in a noisy environment. This means that we can also categorise hearing loss by the extent of the impairment as indicated in Table 3.
Hearing loss classification | Hearing level (dB HL) | Impairment
Mild | 20–39 | Following speech is difficult, especially in a noisy environment
Moderate | 40–69 | Difficulty following speech without a hearing aid
Severe | 70–89 | Usually need to lip read or use sign language, even with a hearing aid
Profound | 90–120 | Usually need to lip read or use sign language; hearing aid ineffective
Table 3. Different classes of hearing loss
You should have spotted that the unit given in Table 3 is not the typical dB or dB SPL. This is a specific type of unit, dB HL or hearing level, used for hearing loss (Box 6).
Box 6: Measuring hearing loss
If someone is suspected of having hearing loss they will typically undergo tests at a hearing clinic to establish the presence and extent of hearing loss. This can be done with an instrument called an audiometer, which produces sounds at different frequencies that are played to the person through headphones (Figure 4.30).
The threshold set for the tests is that of a healthy young listener and this is considered to be 0 dB. If someone has a hearing impairment they are unlikely to be able to hear the sound at this threshold and the intensity will have to be increased for them to hear it, which they can indicate by pressing a button. The amount by which it is increased is the dB HL level. For example, if someone must have the sound raised by 45 dB in order to detect it, they will have moderate hearing loss because the value of 45 dB HL falls into that category (Table 3).
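The worked example above can be expressed as a small sketch that maps a measured dB HL value onto the categories in Table 3. The boundaries are simply those given in the table; treating anything below 20 dB HL as within the normal range is an assumption made here for illustration.

```python
def classify_hearing_loss(db_hl: float) -> str:
    """Map a hearing level (dB HL) onto the categories in Table 3."""
    if db_hl < 20:
        return "within normal range"   # assumption: below the mild category
    elif db_hl <= 39:
        return "mild"
    elif db_hl <= 69:
        return "moderate"
    elif db_hl <= 89:
        return "severe"
    else:
        return "profound"

print(classify_hearing_loss(45))  # 'moderate', as in the worked example above
```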
Conductive hearing loss typically impacts only on hearing threshold, such that the threshold becomes higher, i.e., the quietest sound that someone can hear is louder than the quietest sound someone without hearing loss can hear. Although conductive hearing loss can be caused by changes within any structure of the outer and middle ear, the most common cause is a build-up of fluid in the middle ear, giving rise to a condition called otitis media with effusion, or glue ear. This condition is one of the most common illnesses found in children and the most common cause of hearing loss within this age group (Hall, Maw, Midgley, Golding, & Steer, 2014).
Why would fluid in the middle ear be problematic?
The middle ear is normally an air-filled structure, and the presence of fluid would result in much of the sound being reflected back from the middle ear, so the signal would not reach the inner ear for transduction.
Glue ear typically arises in just one ear, but can occur in both. It generally causes only mild hearing loss. It is thought to be more common in children than adults because the fluid build-up arises from the Eustachian tube not draining properly. This tube connects the ear to the throat and normally drains moisture from the air in the middle ear. In young children its function can be adversely affected by the growth of adenoid tissue, which blocks the throat end of the tube, meaning it cannot drain and fluid gradually builds up. Beyond this, several other risk factors for glue ear have been identified.
These include iron deficiency (Akcan et al., 2019), allergies, specifically to dust mites (Norhafizah, Salina, & Goh, 2020), and exposure to second hand smoke as well as shorter duration of breast feeding (Kırıs et al., 2012; Owen et al., 1993). Social risk factors have also been identified including living in a larger family (Norhafizah et al., 2020), being part of a lower socioeconomic group (Kırıs et al., 2012) and longer hours spent in group childcare (Owen et al., 1993).
The risk factors for glue ear are possibly less important than the potential consequences of the condition. It can result in pain and disturbed sleep, which can in turn create behavioural problems, but the largest area of concern is its impact on educational outcomes, due to delays in language development and social isolation as children struggle to interact with their peers. Studies have demonstrated poorer educational outcomes for children who experience chronic glue ear (Hall et al., 2014; Hill, Hall, Williams, & Emond, 2019), but it is likely that these children can catch up over time, meaning any long-lasting impact is minimal.
Despite the potential for disruption to educational outcomes, the first line of treatment for glue ear is simply to watch and wait and treat any concurrent infections. If the condition does not improve in a few months, grommets may be used. These are tiny plastic inserts put into the tympanic membrane to allow the fluid to drain. This minor surgery is not without risk because it can cause scarring of the membrane which may impact on its elasticity.
Whilst glue ear is the most common form of conductive hearing loss, the most common form of sensorineural hearing loss is Noise-Induced Hearing Loss (NIHL). This type of hearing loss is caused by exposure to high-intensity noise from a range of contexts (e.g., industrial, military and recreational) and normally develops over a period of time, becoming greater with age as hair cells are damaged or die. It is thought to affect around 5% of the population and typically results in bilateral hearing loss that affects both hearing threshold and discrimination. Severity can vary and its impact is frequency dependent, with the biggest loss of sensitivity at higher frequencies (~4000 Hz) that coincide with many of the everyday sounds we hear, including speech.
At present there is no treatment for NIHL and instead it is recommended that preventative measures should be taken, for example through the use of personal protective equipment (PPE).
What challenges can you see to this approach [using PPE]?
This assumes that PPE is readily available, which it may not be. For example, in the case of military noises, civilians in war zones are unlikely to be able to access PPE. It also assumes that PPE can be worn without impact. A musician is likely to need to hear the sounds being produced and so although use of some form of PPE may be possible, doing so may not be practical.
The impact of NIHL on an individual is substantial. For example, research has demonstrated that the extent of hearing loss in adults is correlated with measures of social isolation, distress and even suicidal ideation (Akram, Nawaz, Rafi, & Akram, 2018). Other studies indicate NIHL can result in frustration, anxiety, stress, resentment, depression and fatigue (Canton & Williams, 2012). There are also reported effects on employment, with negative effects on employment opportunities and productivity (Canton & Williams, 2012; Neitzel, Swinburn, Hammer, & Eisenberg, 2017). Additionally, given that NIHL typically occurs in older people, it may be harder to diagnose: individuals may mistake it for the natural decline in hearing that occurs with age, and so may not recognise the need for preventive action where this is possible, or the need to seek help.
Looking across the senses
We have now reached the end of the section on hearing, but before we continue to look at the visual system it is helpful to spend a moment reflecting on the systems you have learnt about so far.
Exercises
1. Compare and contrast the mechanisms by which touch, pain and sound signals are transduced.
There are several similarities you could have mentioned here. For example, all these systems can include mechano-sensitive ion channels, that is, those that are opened by mechanical force. Additionally, they all involve the influx of a positively charged ion, which causes a depolarising receptor potential. There are key differences as well. For example, whilst touch and hearing only use mechano-sensitive channels, pain can also use thermo-sensitive and chemo-sensitive channels. Furthermore, the ions that create the receptor potential differ. In somatosensation, the incoming ion is sodium, as is typical for depolarisation across the nervous system, whilst in hearing it is potassium, due to the potassium-rich endolymph.
2. Considering the pathway to the brain, what do you notice is common to all the sensory systems discussed so far?
In all systems, the thalamus receives the signal on the way to the primary sensory cortex for that system. Additionally, there are typically projections to a range of cortical areas after the primary sensory cortex.
3. Extracting key features of the sensory signal is important. What common features are detected across all three systems?
In all cases the intensity and location of the stimulus are encoded. Additionally, in touch and hearing, frequency information is encoded.
Summarising hearing
Key Takeaways
• Our sense of hearing relies on the detection of a longitudinal wave created by vibration of objects in air. These waves typically vary in frequency, amplitude and phase
• The three-part structure of the ear allows us to funnel sounds inwards and amplify the signal before it reaches the fluid-filled cochlea of the inner ear where transduction takes place
• Transduction occurs in specialised hair cells which contain mechano-sensitive channels that open in response to vibration caused by sound waves. This results in an influx of potassium producing a receptor potential
• Unlike the somatosensory system, the hair cell is not a modified neuron and therefore cannot itself produce an action potential. Instead, an action potential is produced in neurons of the cochlear nerve, when the hair cell releases glutamate which binds to AMPA receptors on these neurons. From here the signal can travel to the brain
• The ascending auditory pathway is complex, travelling through two brainstem nuclei (cochlear nuclear complex and superior olive) before ascending to the midbrain inferior colliculus, the medial geniculate nucleus of the thalamus and then the primary auditory cortex. From here it travels in dorsal and ventral pathways to the prefrontal cortex, to determine where and what the sound is, respectively
• There are also descending pathways from the primary auditory cortex which can influence all structures in the ascending pathway
• Key features are extracted from the sound wave beginning in the cochlea. There are two proposed coding mechanisms for frequency extraction: place coding and temporal coding. Place coding uses position-specific transduction in the cochlea whilst temporal coding locks transduction and subsequent cochlear nerve firing to the frequency of the incoming sound wave. Once coded in the cochlea this information is retained throughout the auditory pathway
• Intensity coding is thought to occur either through the firing rate of the cochlear nerve or the number of neurons firing.
• Location coding requires input from both ears and therefore first occurs outside the cochlea at the level of the superior olive. Two mechanisms are proposed: interaural time delays and interaural intensity differences
• Hearing loss can be categorised according to where in the auditory system the impairment occurs. Conductive hearing loss arises when damage occurs to the outer or middle ear and sensorineural hearing loss arises when damage is in the cochlea or beyond
• Different types of hearing loss impact hearing threshold and hearing discrimination differently. The extent of hearing loss can vary as can the availability of treatments
• Hearing loss is associated with a range of risk factors and can have a significant impact on the individual including their social contact with others, occupational status and, in children, academic development.
References
Akcan, F. A., Dündar, Y., Bayram Akcan, H., Cebeci, D., Sungur, M. A., & Ünlü, İ. (2019). The association between iron deficiency and otitis media with effusion. J Int Adv Otol, 15(1), 18-21. https://dx.doi.org/10.5152/iao.2018.5394
Akram, B., Nawaz, J., Rafi, Z., & Akram, A. (2018). Social exclusion, mental health and suicidal ideation among adults with hearing loss: Protective and risk factors. Journal of the Medical Association Pakistan, 68(3), 388-393. https://jpma.org.pk/article-details/8601?article_id=8601
Bizley, J. K., & Cohen, Y. E. (2013). The what, where and how of auditory-object perception. Nat Rev Neurosci, 14(10), 693-707. https://dx.doi.org/10.1038/nrn3565
Canton, K., & Williams, W. (2012). The consequences of noise-induced hearing loss on dairy farm communities in New Zealand. J Agromedicine, 17(4), 354-363. https://dx.doi.org/10.1080/1059924x.2012.713840
Hall, A. J., Maw, R., Midgley, E., Golding, J., & Steer, C. (2014). Glue ear, hearing loss and IQ: an association moderated by the child’s home environment. PloS One, 9(2), e87021. https://doi.org/10.1371/journal.pone.0087021
Hill, M., Hall, A., Williams, C., & Emond, A. M. (2019). Impact of co-occurring hearing and visual difficulties in childhood on educational outcomes: a longitudinal cohort study. BMJ Paediatrics Open, 3(1), e000389. http://dx.doi.org/10.1136/bmjpo-2018-000389
Kırıs, M., Muderris, T., Kara, T., Bercin, S., Cankaya, H., & Sevil, E. (2012). Prevalence and risk factors of otitis media with effusion in school children in Eastern Anatolia. Int J Pediatr Otorhinolaryngol, 76(7), 1030-1035. https://dx.doi.org/10.1016/j.ijporl.2012.03.027
Kryklywy, J. H., Macpherson, E. A., Greening, S. G., & Mitchell, D. G. (2013). Emotion modulates activity in the ‘what’ but not ‘where’ auditory processing pathway. Neuroimage, 82
Neitzel, R. L., Swinburn, T. K., Hammer, M. S., & Eisenberg, D. (2017). Economic Impact of Hearing Loss and Reduction of Noise-Induced Hearing Loss in the United States. J Speech Lang Hear Res, 60(1), 182-189. https://dx.doi.org/10.1044/2016_jslhr-h-15-0365
Norhafizah, S., Salina, H., & Goh, B. S. (2020). Prevalence of allergic rhinitis in children with otitis media with effusion. Eur Ann Allergy Clin Immunol, 52(3), 121-130. https://dx.doi.org/10.23822/EurAnnACI.1764-1489.119
Owen, M. J., Baldwin, C. D., Swank, P. R., Pannu, A. K., Johnson, D. L., & Howie, V. M. (1993). Relation of infant feeding practices, cigarette smoke exposure, and group child care to the onset and duration of otitis media with effusion in the first two years of life. J Pediatr, 123(5), 702-711. https://dx.doi.org/10.1016/s0022-3476(05)80843-1
Tata, M. S., & Ward, L. M. (2005). Spatial attention modulates activity in a posterior “where” auditory pathway. Neuropsychologia, 43
Terreros, G., & Delano, P. H. (2015). Corticofugal modulation of peripheral auditory responses. Front Syst Neurosci, 9
Warren, J. (2008). How does the brain process music? Clinical Medicine, 8(1), 32-36. https://doi.org/10.7861/clinmedicine.8-1-32
Wever, E. G., & Bray, C. W. (1930). The nature of acoustic response: The relation between sound frequency and frequency of impulses in the auditory nerve. Journal of Experimental Psychology, 13(5), 373. https://doi.org/10.1037/h0075820
About the Author
Dr Ellie Dommett studied psychology at Sheffield University. She went on to complete an MSc Neuroscience at the Institute of Psychiatry before returning to Sheffield for her doctorate, investigating the superior colliculus, a midbrain multisensory structure. After a post-doctoral research post at Oxford University she became a lecturer at the Open University before joining King’s College London, where she is now a Reader in Neuroscience. She conducts research into Attention Deficit Hyperactivity Disorder, focusing on identifying novel management approaches.
Just by seeing is believing, I don’t need to question why.
Sung by Elvis Presley. Lyrics by Red West & Glen Spreen
The song lyric above from ‘Seeing is Believing’, made famous by Elvis Presley, encapsulates the power we give our sense of vision. This lyric is one of many examples in our language which indicates how important we consider vision to be. For example, phrases such as ‘I see’, intended to mean that we understand, or ‘A picture paints a thousand words,’ rely on the metaphor of vision. Similarly, in business and industry, organisations typically have vision statements, which outline what they want to achieve. All these phrases point to the importance of vision in our everyday lives. In keeping with our approach to the other senses, we will now begin our journey to understanding vision, with the signal that reaches our senses – the visual stimulus.
Light: the wave and the particle
The signal detected by the visual system is light that is either reflected from a surface or emitted from a source, such as a light bulb or natural sources of light like the sun. We can detect light ranging in intensity or luminance, measured in candela per metre squared (cd m⁻²), from 10⁻⁶ to 10⁸ cd m⁻². To give some context to this, this range incorporates everything from a dimly lit night sky to the sun. A typical computer screen, like the one you may be reading from now, has a luminance of 50–300 cd m⁻². The light we can detect is just a small part of the electromagnetic spectrum (Figure 4.31).
This electromagnetic spectrum includes other signals you may be familiar with, such as radio waves, X-rays and microwaves, but the visible light spectrum spans the wavelengths of 380–780 nm, which corresponds to a frequency range of 7.9 × 10¹⁴ – 3.8 × 10¹⁴ Hz (790,000,000,000,000 – 380,000,000,000,000 Hz), which we see as the colours from violet to red.
Looking at Figure 4.31, which wavelength and frequency is associated with violet?
Violet is at the end of the visible light spectrum with a wavelength of 380 nm and a frequency of 7.9 × 10¹⁴ Hz.
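The wavelength and frequency values quoted above are related by the speed of light (frequency = speed of light ÷ wavelength). The short sketch below reproduces the figures in the text; taking the speed of light as 3 × 10⁸ m/s is the standard approximation.

```python
SPEED_OF_LIGHT_M_PER_S = 3e8  # approximate speed of light in a vacuum

def frequency_hz(wavelength_nm: float) -> float:
    """Convert a wavelength in nanometres to a frequency in hertz (f = c / wavelength)."""
    return SPEED_OF_LIGHT_M_PER_S / (wavelength_nm * 1e-9)

print(f"Violet (380 nm): {frequency_hz(380):.2e} Hz")  # ~7.9 x 10^14 Hz
print(f"Red (780 nm):    {frequency_hz(780):.2e} Hz")  # ~3.8 x 10^14 Hz
```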
Waves in this spectrum are transverse waves and consist of simultaneous variations in electrical and magnetic fields at right angles to each other (Figure 4.32). Unlike the sound waves you learnt about for hearing, electromagnetic waves do not require a medium to be transmitted, so light can travel through a vacuum.
The perceptive amongst you will have spotted that the title of this section indicates that light is not just a wave, but also a particle. This subheading hints at a fierce scientific debate between some of the most famous scientists in history – a Who’s Who of the Royal Society. Isaac Newton believed that light was made up of particles, which he called corpuscles, whilst his rival Robert Hooke believed light was a wave. Over time, experiments and calculations by James Clerk Maxwell appeared to prove Hooke right, and the once-fierce debate was calmed.
However, the discovery of the photoelectric effect (Box 7) reignited this debate and drew the attention of Albert Einstein. He proposed a new theory: that light was not made of waves or particles, but both – light was made of wave packets or photons. This idea was elaborated on to show that some experimental findings are best explained when we conceive light as a wave, whilst others work best when it is described as a particle, and others still can work with either explanation. It was this work that led to Einstein’s Nobel Prize in physics.
Box 7: The photoelectric effect
The photoelectric effect refers to the emission of electrons from a material when electromagnetic radiation hits that material. It is best demonstrated when the material is a metal. When electromagnetic radiation hits a metal, the energy within it can transfer to the electrons within the metal and, if that energy exceeds the binding energy (the energy keeping the electron in the metal), the electrons can be ejected from the substance. Critically, though, this will not happen with just any radiation; it only happens with very high energy sources. The energy within the electromagnetic spectrum is related to its frequency and wavelength, such that the waves with the highest frequency, and therefore shortest wavelength, have the most energy. This means light at the violet end of the visible spectrum has more energy than light at the red end. The effect can be demonstrated using an electroscope (Figure 4.33). In this gold-leaf electroscope, the two gold leaves hanging down are separated when negatively charged but, if high energy photons are delivered to the plate above, causing the loss of electrons, the leaves fall back together. The fact that this effect only worked for some wavelengths of light was critical in understanding light as both a wave and a particle.
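The claim that violet light carries more energy per photon than red light follows from the standard relation E = h × f, where h is Planck’s constant. The sketch below is a simple illustrative check of that relationship, not part of the original box.

```python
PLANCK_CONSTANT_J_S = 6.626e-34   # Planck's constant
SPEED_OF_LIGHT_M_PER_S = 3e8      # approximate speed of light

def photon_energy_joules(wavelength_nm: float) -> float:
    """Energy of a single photon: E = h * f = h * c / wavelength."""
    return PLANCK_CONSTANT_J_S * SPEED_OF_LIGHT_M_PER_S / (wavelength_nm * 1e-9)

violet = photon_energy_joules(380)
red = photon_energy_joules(780)
print(f"Violet photon: {violet:.2e} J")  # ~5.2 x 10^-19 J
print(f"Red photon:    {red:.2e} J")     # ~2.6 x 10^-19 J

# A violet photon carries roughly twice the energy of a red photon,
# which is why only the higher-frequency end of the spectrum ejects electrons.
```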
Now that we have examined the nature of the signal in vision, it is helpful to look at how that signal can be detected. From this it might be expected that the next section will focus on transduction, but in the visual system there is quite a lot to do before we reach the sensory receptor cells for transduction, so we start by looking at the structure of the eye.
From light source to retina: bringing the world into focus
The sense organ of the visual system is the eye and, just like the ear, it is made up of several different parts, all of which play a critical role in ensuring we have accurate vision (Figure 4.34).
Figure 4.34 shows that there are several structures which the light must pass through before it gets to the retina, where the sensory receptor cells, referred to as photoreceptors, are located. These structures have a dioptric effect and are referred to as the dioptric apparatus, which simply means that they are involved in refracting or bending the light to a focal point. Despite there being several structures involved, the refractive power in the eye comes almost entirely from the cornea and the lens. The cornea has a fixed refractive power, but the lens can alter its power by becoming fatter or flatter. When the ciliary muscles contract the lens becomes rounder, increasing its refractive power and the ability to bend the light waves. This allows sources at different distances from the eye to be brought into focus, which means that the light waves are brought to a focal point on the surface of the retina (Figure 4.35).
Despite this process appearing quite simple in comparison to much of what you have learnt in this chapter about the senses, refractive errors are extremely common. There are different types of refractive errors, but you are most likely to have heard of:
• Myopia or short-sightedness which makes distant objects look blurry
• Hyperopia or long-sightedness which makes nearby objects look blurry
• Presbyopia which makes it hard for middle-aged and older adults to see things up close
Collectively these conditions are thought to impact 2.2 billion people worldwide (World Health Organisation, 2018), of whom 800 million have an impairment that could be addressed with glasses or contact lenses (World Health Organisation, 2021). Correcting these refractive errors requires lenses that increase (hyperopia) or decrease (myopia) the overall refraction of light (see Box 8: Refractive errors and corrective lenses, below).
Box 8: Refractive errors and corrective lenses
Myopia and hyperopia are two of the most common refractive errors. Myopia arises when the refractive power of the eye is too great and the focal point occurs before the retina (Figure 4.36a), whilst hyperopia occurs when the refractive power of the eye is too low, and the image has therefore not been focused by the time it reaches the retina – it would effectively have a focal point behind the retina (Figure 4.36c). To address this, lenses need to be placed in front of the eye in the form of glasses or contact lenses. For myopia, the lens counters the normal refractive power of the eye (Figure 4.36b) whilst for hyperopia it bends the light in the same direction as the eye (Figure 4.36d).
Presbyopia typically arises with age and is caused by the gradual hardening of the lens in the eye. As it hardens, flexibility is lost which means that it is difficult to focus. The solution is bifocal or varifocal lenses which have different refractive powers at different positions of the lens.
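Opticians describe the strength of a corrective lens in dioptres, the reciprocal of the focal length in metres. As a very simplified sketch, a short-sighted eye that can only focus objects up to its ‘far point’ needs a diverging lens whose focal length matches that far point. The far-point values below are hypothetical, and the calculation ignores details such as the distance between the lens and the eye, so treat it as an illustration of the principle only.

```python
def myopia_correction_dioptres(far_point_m: float) -> float:
    """Approximate power of the diverging (negative) lens needed so that distant
    objects appear to come from the eye's far point.
    Simplified: ignores vertex distance and other optical details."""
    return -1.0 / far_point_m

for far_point in (2.0, 1.0, 0.5):  # hypothetical far points in metres
    power = myopia_correction_dioptres(far_point)
    print(f"Far point {far_point} m -> lens of about {power:.1f} dioptres")
```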
Assuming that the light waves can be brought to a focal point on the retina, the visual system can produce an unblurred image. As stated above, the retina contains the photoreceptors that form the sensory receptor cell of the visual system. However, it also contains many other types of cells in quite a complex layered structure (Figure 4.37).
If you examine Figure 4.37 you will see that the photoreceptors form the deepest layer of the retina, that is, the one furthest from the light source. You should also spot that there are two different types of photoreceptors: rods and cones. These two different types of photoreceptors allow the visual system to operate over a wide range of luminance and wavelength conditions.
Rods outnumber cones by around 20:1 and they are found predominantly in the peripheral area of the retina rather than the fovea, the central point of the retina. They are much more sensitive to light than cones, making them suitable for scotopic vision – that is, night vision or vision in dimly lit environments. They also provide lower-acuity visual information because they are connected in groups, rather than singly, to the next type of cell in the retina. This means that the brain cannot be sure exactly which of a small number of rods a signal originated from. There is only one type of rod in the human eye, and it is most sensitive to light with a wavelength of 498 nm.
Look back at Figure 4.31. What colour does this wavelength correspond to?
This corresponds to a green-blue colour.
In contrast to rods, cones are found in a much greater number within the fovea and provide us with high acuity vision due to one-to-one connections with other cells creating very small receptive fields. They are less sensitive than rods and so best suited to photopic or day vision. There are, however, three different types of cones, each with different spectral sensitivities (Figure 4.38).
Although the cones are often referred to as short (S), medium (M) and long wave cones (L), indicative of their wavelength sensitivity, they are sometimes called blue, green and red cones, corresponding to the colours we perceive of the wavelengths that optimally activate them.
Looking at Figure 4.38, what cone would you expect to react if a light with a wavelength of 540 nm was to be detected by the retina?
You would expect the red and the green cones to react, because this wavelength is within their spectral sensitivity.
In the question above, we asked you about photoreceptors reacting, which leads on to the next stage of our journey through the visual system to look at this process in detail as we turn our attention to transduction.
Photoreceptors and visual transduction
To understand transduction it is helpful to look at the structure of the photoreceptors in a little more detail (Figure 4.39).
Both rods and cones contain an outer segment which includes a photosensitive pigment which can be broken down by light. The pigment in rods, which is referred to as rhodopsin, contains a protein – opsin – attached to a molecule called 11-cis-retinal. The pigment in cones is generally referred to as iodopsin and still consists of an opsin and 11-cis-retinal, but the three cone types contain three slightly different opsin molecules that have different spectral sensitivities. The process of phototransduction is similar for all rods and cones. It is described below in detail for rods, referring to rhodopsin rather than the different cone opsins, though the process is analogous in cones.
The process of visual transduction (or phototransduction) is more complex than the process for touch, pain or hearing so we will need to break this down into a series of steps, but it is also helpful to have an oversight (no pun intended) from the start (Figure 4.40).
The first stage of transduction happens when the energy from a photon of light reaching the retina is absorbed by rhodopsin. The absorption of energy forces the 11-cis-retinal to undergo a transformational change and become all-trans-retinal – this ‘activates’ rhodopsin.
In the second stage, the newly-activated rhodopsin interacts with a G-protein called transducin. We briefly met G-proteins in the “Neurotransmission” chapter. Like rhodopsin, metabotropic neurotransmitter receptors are G-protein coupled receptors (GPCRs). G-proteins are a group of proteins which are involved in transmitting signals from outside of a cell to inside it and are so-called as they bind guanine nucleotides. In phototransduction, activation of the G protein transducin by rhodopsin transmits information about the light from outside the photoreceptor to inside it, while in neurotransmission, ligand binding to the receptor activates the G protein to transmit information about neurotransmitter presence at the synapse. In each case the activated G protein releases guanosine diphosphate (GDP) bound to it and instead binds a guanosine triphosphate (GTP) molecule.
In the third stage of transduction, GTP binding to the G protein, transducin, results in the β and γ subunits of transducin dissociating from the α subunit and bound GTP molecule. In the fourth stage, the α subunit and bound GTP interact with a second protein called phosphodiesterase (PDE) which in turn becomes activated.
What would you expect to happen at some point during transduction if a receptor potential is to be produced?
You would need to see ions channels open or close to allow a change in ions moving across the membrane, carrying the charge that makes up the receptor potential.
The final step sees activated PDE break down a molecule called cyclic guanosine monophosphate (cGMP). cGMP is produced by guanylyl cyclase and opens cGMP-gated ion channels in the cell’s membrane that allow sodium and calcium ions to enter the cell. Thus, when cGMP is broken down by PDE, these ion channels close and no more calcium or sodium can enter the cell. This might not be quite what you expected to happen, because in the previous senses we have looked at, transduction involves channels opening and positively charged ions coming into the cell, depolarising it. In light detection, the reverse occurs: detection of light causes the cessation of a depolarising current and a hyperpolarisation of the membrane. The current that flows when no photons of light are being absorbed is called the ‘dark current’. One suggested reason that the visual system operates in this way is to minimise background noise. To explain this further, when there is a dark current, there is a steady flow of sodium into the cell in the absence of light. This means that any minor fluctuations in sodium channel openings will not impact the cell very much – the noise will effectively be ignored. It is only when a large number of channels close in the light that the cell membrane potential will be affected, giving rise to a clear signal.
In any event, the decrease in intracellular calcium that occurs because of channels closing when light hits the retina results in a reduction in the release of glutamate from the photoreceptor. This in turn impacts on the production of action potentials in the bipolar cells that synapse with the rods and cones. There are broadly two types of bipolar cells – ON cells and OFF cells. OFF bipolar cells respond to the decrease in glutamate release during light stimulation with a decrease in action potential firing, i.e. a decrease in glutamate causes a decrease in excitation and reduced firing. However, ON bipolar cells respond to the decrease of glutamate during light stimulation with an increase in action potential firing. Glutamate is usually excitatory, so how can a decrease in glutamate during light cause an increase in bipolar cell firing? This happens because instead of expressing ionotropic AMPA glutamate receptors, ON cells express a specific metabotropic glutamate receptor, mGluR6. When mGluR6 is activated, its G-protein subunits close a non-specific cation (positive ion) channel, hyperpolarising the cell. When glutamate release is reduced, mGluR6 is inactive, allowing the cation channel to open, and sodium ions to enter the bipolar cell, depolarising it and causing action potentials to fire. Bipolar cells in turn connect to the retinal ganglion cells, whose axons form the optic nerve and transmit action potentials from the eye to the brain.
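The sign-inverting logic of ON and OFF bipolar cells can be summarised in a few lines. This is a cartoon of the chain just described (light → less glutamate → ON cells depolarise, OFF cells hyperpolarise), not a model of real membrane dynamics, and the wording of the outputs is illustrative.

```python
def bipolar_cell_response(light_present: bool, cell_type: str) -> str:
    """Cartoon of retinal sign logic.
    Light hyperpolarises the photoreceptor, reducing glutamate release."""
    glutamate = "low" if light_present else "high"

    if cell_type == "OFF":
        # OFF cells use ionotropic (AMPA) receptors: less glutamate -> less excitation
        return "hyperpolarised (fires less)" if glutamate == "low" else "depolarised (fires more)"
    elif cell_type == "ON":
        # ON cells use mGluR6: glutamate normally closes their cation channel,
        # so less glutamate lets the channel open and the cell depolarises
        return "depolarised (fires more)" if glutamate == "low" else "hyperpolarised (fires less)"
    raise ValueError("cell_type must be 'ON' or 'OFF'")

for light in (True, False):
    for cell in ("ON", "OFF"):
        state = "on " if light else "off"
        print(f"Light {state}, {cell} bipolar cell: {bipolar_cell_response(light, cell)}")
```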
Visual pathways: to the visual cortex and beyond
The axons of the retinal ganglion cells form the optic nerve and leave the eye at the optic disc, which corresponds to the blind spot. From the optic nerve, there are two routes that can be taken: a cortical and a subcortical route. The cortical route is the pathway that is responsible for much of our higher processing of visual information and has been the focus of a large amount of research, so it is a logical starting point. Figure 4.41 shows the route that visual information typically takes from the eye to the primary visual cortex, located in the occipital lobe. In contrast to the pathway from the ear to the primary auditory cortex this pathway looks quite simple, but information is very carefully sorted throughout the pathway.
Starting from the eye, information leaves via the optic nerve. The optic nerves from both eyes meet at the optic chiasm, which can be seen on the underside of the brain (Figure 4.41). At this point information is arranged such that signals from the left visual field of both eyes continue their pathway in the right side of the brain, whilst information from the right visual field of both eyes travels onwards in the left side of the brain. The first stop in the brain is the lateral geniculate nucleus (LGN), which is part of the thalamus. Each LGN is divided into six layers. Three of these layers receive information from one eye and three receive it from the other. These layers are said to be retinotopically mapped, which means that adjacent neurons will receive information about adjacent regions in the visual field.
From the LGN, information travels, via the optic radiation, to the primary visual cortex (V1), sometimes also referred to as the striate cortex because of its striped appearance. As we learnt in an earlier chapter (Exploring the brain), the cortex consists of a series of layers from the outside of the brain to the inside. The outermost or most superficial layer is labelled layer I and the deepest or innermost layer is layer VI. Information from the LGN enters the primary visual cortex in layer IV, with different layers of the LGN projecting to different subdivisions of layer IV (Figure 4.42).
There will be many thousands of cortical neurons receiving information from each small region of the retina and these cells are organised into columns which respond to specific stimulus features such as orientation. This means that cells in one orientation column preferentially respond to a specific orientation (e.g. lines at 45° clockwise from the vertical) whilst those in the next column will respond to a slightly different orientation. Across all columns, all orientations can be represented. Primary visual cortex can also be divided into columns that respond preferentially to one eye or the other – these are termed ‘ocular dominance columns’. Theoretically, the cortex can be split into ‘hypercolumns’ each of which contains representations from both ocular dominance columns and all orientations for each part of the visual field, though these do not map as neatly onto the cortical surface as was once theorised (Bartfeld and Grinvald, 1992).
However, despite the exquisite organisation of the primary visual cortex, information does not stop at this point. In fact visual information travels to many different cortical regions – with 30 identified so far.
Can you recollect how auditory information was divided after the primary auditory cortex?
It was divided into a dorsal and ventral pathway.
Visual information can also be divided into a dorsal and a ventral pathway. The ventral pathway, which includes V1, V2, V4 and further regions in inferior temporal areas, is thought to be responsible for object identity, i.e., a ‘what’ pathway. The dorsal stream, which includes V1, V2, V3 and V5, supports detection of location and visually-controlled movements (e.g., reaching for an object), i.e., a ‘where’ pathway (Figure 4.43).
We mentioned that there is also a subcortical pathway that visual information can take through the brain. In fact there are several different subcortical structures that receive visual information, but one of the main ones is a structure called the superior colliculus. This name may sound familiar because you have already learnt about the inferior colliculus in your exploration of hearing. The superior colliculus sits just above the inferior colliculus, on the surface of the midbrain. Although often overlooked when describing visual processing, the superior colliculus is thought to be involved in localisation and motion coding. It has also been implicated in an interesting phenomenon termed blindsight (see Box below, Blindsight: I am blind and yet I see).
Blindsight: I am blind and yet I see
Blindsight was first described in the 1970s by researchers who had identified residual visual functioning in individuals who were deemed to be clinically blind due to damage to the visual cortex (Pöppel, Held, & Frost, 1973; Weiskrantz & Warrington, 1974). These individuals reported being unable to see, but could detect, localise or discriminate stimuli that they were unaware of at higher than chance levels (i.e., greater than the levels that would be expected if they were just guessing). Later work allowed a further distinction to be made into blindsight Type 1, where the individual can guess certain features of the stimulus (e.g., the type of motion) at above-chance levels without any conscious awareness of it, and Type 2, where individuals can detect that a change has occurred in the visual field but do not develop any perception of that change (Weiskrantz, 1997).
Several explanations have been proposed for this interesting phenomenon:
• Areas other than primary visual cortex underlie the responses, including the superior colliculus, which has been shown to provide quick crude responses to visual stimuli.
• Whilst much of the primary visual cortex is destroyed in people with blindsight, small pockets of functionality remain, and this explains the residual abilities.
• The LGN is capable of detecting key visual information and passing this directly to other cortical areas, which could explain the phenomenon.
Research continues into blindsight and the role of several brain structures in visual processing, but the existence of this phenomenon has demonstrated that subcortical pathways and structures outside the primary visual cortex can still play a significant role in visual processing.
We have now discussed transduction and pathways for vision but have not said much about how specific features of the visual scene are detected. As you will probably have guessed, this is an extremely complicated process, so in the next section we will focus on just three components of the visual scene: colour, motion and depth.
Perceiving the world: colour, motion and depth
Colour processing is critical to our perception of the world, and you learnt earlier in this section that we have three types of cones with distinct but overlapping spectral sensitivities. These three types of cones in the retina are the start of our colour perception journey. The presence of three types of cones is referred to as trichromacy. The development of trichromacy is thought to offer an evolutionary advantage because it can help identify suitable foods and better discriminate their ripeness. For example, the ability to differentiate red from green is thought to be important as reddish colours in fruits are indicative of higher energy or greater protein content. Work with humans suggests colour remains important in food preferences (Foroni, Pergola, & Rumiati, 2016).
The output from the three types of cones is thought to be translated into an opponent colour system in the retinal ganglion cells, which can then give rise to specific channels of information in the visual system, referred to as the opponent processing theory of colour processing (Figure 4.43). The three channels, sketched in the short example after the list below, are:
• A red-green channel which receives opposing inputs from red and green cones.
• A luminance channel which receives matching inputs from red and green cones.
• A blue-yellow channel which receives excitatory input from blue cones and inhibitory input from the luminance channel (which in turn is created from excitation from red and green cones).
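A minimal sketch of how the three channels listed above could be built from cone signals is given below. The arithmetic (e.g. red minus green) is a deliberate simplification of the opponent scheme described in the text, and the example cone activation values are invented for illustration.

```python
def opponent_channels(L: float, M: float, S: float) -> dict:
    """Toy opponent coding from long (L/'red'), medium (M/'green') and
    short (S/'blue') cone activations. A simplification for illustration only."""
    luminance = L + M              # matching inputs from red and green cones
    red_green = L - M              # opposing inputs from red and green cones
    blue_yellow = S - luminance    # blue excitation opposed by the luminance signal
    return {"luminance": luminance, "red-green": red_green, "blue-yellow": blue_yellow}

# Hypothetical cone activations for a reddish patch of the scene
print(opponent_channels(L=0.8, M=0.3, S=0.1))
```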
Cells that respond in line with this theory have been found in the LGN and the primary visual cortex. Further along the ventral pathway, V4 has been found to contain neurons which respond to a range of colours, i.e., not just red, green, blue and yellow (Zeki, 1980). This area receives input from V2 and sends information onwards to V8, which appears to combine colour information with memory information (Zeki & Marini, 1998).
Imagine looking out of your window on a bright morning to the leaves on the trees outside in the sunshine. Now consider looking out a few hours later when the weather has become dull and overcast. Do you perceive that the leaves have changed colour?
Hopefully you answered ‘No’ to this question, because you know that the leaves have not changed colour. But how do you know this?
The fact that we can perceive colour as unchanging despite overall changes in luminance is because of a phenomenon called colour constancy. The brain compensates for differences in luminance by taking into account the average colour across the visual scene.
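One very simple way of capturing the idea of ‘taking into account the average colour across the visual scene’ is the grey-world correction used in image processing: each colour channel is rescaled so that the scene average comes out neutral. This is only a loose analogy for what the brain does, and the pixel values below are made up for illustration.

```python
def grey_world_correction(pixels):
    """Rescale R, G and B so the scene average is neutral grey.
    A rough analogy for colour constancy, not a model of cortical processing."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    grand_mean = sum(means) / 3
    gains = [grand_mean / m for m in means]
    return [tuple(p[c] * gains[c] for c in range(3)) for p in pixels]

# A 'leafy' scene photographed under warm (reddish) light: made-up values
scene = [(0.9, 0.6, 0.3), (0.8, 0.7, 0.4), (0.7, 0.5, 0.2)]
print(grey_world_correction(scene))  # the reddish cast is reduced in every pixel
```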
We mentioned previously that some cells in V1 respond to specific orientations of stimuli. In addition to these cells, other cells in V1 have been found to respond to specific movements of stimuli, indicating that motion detection begins early in cortical processing. However, it is V5, also known as MT (middle temporal area), and the adjacent region V5a, or medial superior temporal area, that are thought to be critical in motion detection. V5 receives input from V1 but also from the superior colliculus, which is involved in visual reflexes that are important for motion. Information from V5 is then sent onwards to V5a, which has been found to have neurons that respond to specific motion patterns, including spiral motion (Vaina, 1998).
Before we move on to the final subsection on visual impairment, we will look briefly at depth perception. The image created on the retina is two dimensional and yet we can perceive a three-dimensional world. This is possible because we use specific depth cues. Spend a moment looking at the visual scene in Figure 4.45.
The image shown in Figure 4.45 is complex, with multiple components including the house, fountain cascade, trees and the landscape beyond the house leading to the horizon. But how do we know how all the components fit together? For example, how do we know which trees are in front of the house and which are behind it or whether the trees in the distance are far away or just small? We can interpret the scene using depth cues including:
• Interposition: Objects which obscure other objects are closer to the viewer than the ones they obscure.
• Linear perspective: Parallel lines will converge as they move further away. This is illustrated with the sides of the fountain cascade in the image.
• Size constancy: Objects which appear smaller are likely to be further away, so trees in the distance produce smaller images than those nearby because they are further away rather than because they differ in actual size.
• Height in the field: The horizon tends to appear towards the middle of the image with objects below the horizon nearer to the observer than the horizon and those close to the bottom of the image the nearest to the viewer.
We also obtain depth cues by comparing the images from the left and right eyes – these are binocular cues. For example, the left and right eyes receive slightly different images, and the difference between them, referred to as binocular disparity, provides information about depth. It is through compiling cues like this, along with discrete information about colour, orientation and motion, that we can create a perception of the world around us.
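As a rough illustration of how disparity relates to distance, the idealised geometry used in stereo vision gives depth as eye separation × focal length ÷ disparity: the smaller the disparity between the two images, the further away the point. The numbers below are arbitrary and the ‘pinhole’ formula is a textbook approximation, not a description of how the brain actually computes depth.

```python
# Idealised stereo geometry: depth is inversely proportional to binocular disparity.
def depth_from_disparity(disparity, eye_separation=0.065, focal_length=0.017):
    """Approximate distance (metres) of a point whose two retinal images are offset
    by `disparity` metres. All parameter values here are purely illustrative."""
    return eye_separation * focal_length / disparity

for d in (0.001, 0.0005, 0.0001):                      # smaller disparity ...
    print(f"disparity {d} m -> ~{depth_from_disparity(d):.2f} m away")  # ... greater distance
```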
Blindness: causes, impact and treatment
Globally, the main causes of visual impairment are uncorrected refractive errors, as discussed in Box 8. However, these do not typically result in blindness. The leading cause of blindness is cataracts, which account for 51% of blindness worldwide (Pascolini & Mariotti, 2012). Cataracts occur when the lens of the eye develops cloudy patches, losing the transparency which is critical for transmitting light. Individuals may experience cataracts in one or both eyes. As the lens becomes cloudy, light cannot reach the retina. The National Institute for Health and Care Excellence (NICE, 2022) has identified several risk factors for cataracts including:
• Ageing – most cataracts occur in people over 60 years of age
• Eye disease – in this case the cataracts can occur because of other conditions
• Trauma – the cataracts arise due to injury to the eye
• Systemic disease – the cataracts arise because of other conditions, for example, diabetes
Aside from a decline in visual abilities to the point of blindness, cataracts have been associated with a wider impact on health. Age-related cataracts have been linked to cognitive decline and increased depression (Pellegrini, Bernabei, Schiavi, & Giannaccare, 2020).
At the time of writing the only proven effective treatment for cataracts is surgery to replace the lens of the eye with a synthetic lens. These lenses cannot adjust like the natural lens so glasses will typically need to be worn after the surgery. The surgery is short (around 30 mins) and carried out under a local anaesthetic with a 2-6 week recovery period. These operations are considered routine in countries like the UK, but in low- and middle-income countries, eye care is often inaccessible, and cataracts can cause blindness.
After cataracts, the next leading cause of blindness is glaucoma, accounting for 8% of cases, followed by age-related macular degeneration (AMD), accounting for 5% of cases (Pascolini & Mariotti, 2012). Glaucoma refers to a build-up of pressure within the eye that can damage the optic nerve. This build-up happens because the fluid in the eye cannot drain properly, and it typically develops over time. As with cataracts, the condition is more common in older people. Several treatment options exist for glaucoma, including eye drops, laser treatment and surgery, all aiming to reduce the intraocular pressure, but damage already done may be irreversible. Perhaps unsurprisingly, this condition is also associated with poorer quality of life (Quaranta et al., 2016).
For both cataracts and glaucoma, the site of damage is not specifically the retina and the photoreceptors. However, AMD does result from damage to the retina. In this case, the macular region of the retina deteriorates causing blurred central vision, although peripheral vision is intact, meaning it only causes complete blindness in a small percentage of people. As indicated by the name, this is an age-related condition such that older people are more likely to develop it, but other risk factors include smoking and exposure to sunlight. Whilst the remaining vision might suggest less impact on individuals than other types of visual impairment, it is still associated with reduced quality of life, anxiety and depression (Fernández-Vigo et al., 2021).
There are two types of age-related macular degeneration: dry and wet. Dry AMD occurs because of a failure to remove cellular waste products from the retina. These products build up, causing deterioration of blood vessels and death of the rods and cones. Dry AMD accounts for around 90% of cases and currently has no treatment. Wet AMD arises in around 10% of people with AMD as a progression from dry AMD. Here new blood vessels form in the eye, but they are weak and prone to leaking. This type of AMD can be treated with regular injections into the eye to reduce the growth of new blood vessels. An alternative to injections, or an adjunct to them, is photodynamic therapy (PDT), where a laser is directed to the back of the eye to destroy the abnormal blood vessels there.
Key Takeaways: Summarising Vision
• Our sense of vision uses light as a sensory stimulus. Visible light is part of the electromagnetic spectrum and can be conceptualised as both a wave and a particle
• Light emitted from objects or reflected off them enters the eye through the dioptric apparatus where it is bent to a focal point on the retina at the back of the eye. Most of the refractive power comes from the cornea, but the lens provides an adjustable amount of power
• Refractive errors such as myopia can arise when the dioptric apparatus is too weak or powerful, causing the focal point to be in front of, or behind, the retina. Although refractive errors are a leading cause of visual impairment worldwide, they do not typically result in blindness
• Visual transduction occurs in the photoreceptors at the back of the retina of which there are two classes: rods and cones. Rods outnumber cones overall and are more sensitive, providing vision in scotopic conditions, but provide lower acuity and are found predominantly in the peripheral retinal areas. In contrast, cones are largely found in the fovea, and are specialised for high acuity, photopic vision. There are three types of cones, each with differing spectral sensitivity, giving rise to our colour perception
• The process of visual transduction begins with activation of photosensitive pigment in the photoreceptors. After this, a series of steps involving G-proteins results in the closure of ion channels and therefore a reduction in sodium (and calcium) entering the cell. This results in reduced glutamate release. Unlike the other senses, the presence of a stimulus results in hyperpolarisation of the receptor
• Retinal ganglion cells carry information away from the retina in the optic nerve to the lateral geniculate nucleus and onto the primary visual cortex. Information is arranged according to the eye and visual field and retinotopically mapped. After leaving the primary visual cortex over 30 cortical regions will receive visual input, including those forming the dorsal and ventral stream. Subcortical pathways also exist, most notably the pathway from the retina to the superior colliculus
• Specific features of the visual scene are identified by specific neural processes. For example, colour is believed to arise through opponent processing creating red-green, blue-yellow and luminance channels in the ventral pathway. Motion sensitive cells have been found in the dorsal pathway
• Different components of a visual scene can be combined, and specific cues, e.g. linear perspective, can be used to create a 3D perception from the 2D image on the retina
• Leading causes of blindness are age related and include cataracts, glaucoma and age-related macular degeneration. In all cases the condition can have a significant impact on quality of life and result in distress. Treatments exist for most of these conditions but access to those treatments varies widely across the world.
References
Bartfeld, E., & Grinvald, A. (1992). Relationships between orientation-preference pinwheels, cytochrome oxidase blobs, and ocular-dominance columns in primate striate cortex. Proceedings of the National Academy of Sciences USA, 89, 11905-11909. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC50666/pdf/pnas01098-0266.pdf
Fernández-Vigo, J. I., Burgos-Blasco, B., Calvo-González, C., Escobar-Moreno, M. J., Shi, H., Jiménez-Santos, M., . . . Donate-López, J. (2021). Assessment of vision-related quality of life and depression and anxiety rates in patients with neovascular age-related macular degeneration. Arch Soc Esp Oftalmol (Engl Ed), 96(9), 470-475. https://dx.doi.org/10.1016/j.oftale.2020.11.008
Foroni, F., Pergola, G., & Rumiati, R. I. (2016). Food color is in the eye of the beholder: the role of human trichromatic vision in food evaluation. Scientific Reports, 6(1), 37034. https://doi.org/10.1038/srep37034
NICE. (2022). Cataracts. Retrieved from https://cks.nice.org.uk/topics/cataracts/background-information/causes-risk-factors/
Pascolini, D., & Mariotti, S. P. (2012). Global estimates of visual impairment: 2010. Br J Ophthalmol, 96(5), 614-618. https://doi.org/10.1136/bjophthalmol-2011-300539
Pellegrini, M., Bernabei, F., Schiavi, C., & Giannaccare, G. (2020). Impact of cataract surgery on depression and cognitive function: Systematic review and meta-analysis. Clin Exp Ophthalmol, 48(5), 593-601. https://doi.org/10.1111/ceo.13754
Pöppel, E., Held, R., & Frost, D. (1973). Residual visual function after brain wounds involving the central visual pathways in man. Nature, 243(5405), 295-296.
Quaranta, L., Riva, I., Gerardi, C., Oddone, F., Floriani, I., & Konstas, A. G. (2016). Quality of Life in Glaucoma: A Review of the Literature. Adv Ther, 33(6), 959-981. https://doi.org/10.1007/s12325-016-0333-6
Vaina, L. M. (1998). Complex motion perception and its deficits. Current opinion in neurobiology, 8(4), 494-502. https://doi.org/10.1016/S0959-4388(98)80037-8
Weiskrantz, L. (1999). Consciousness lost and found: A neuropsychological exploration. OUP Oxford. https://doi.org/10.1093/acprof:oso/9780198524588.001.0001
Weiskrantz, L., Warrington, E. K., Sanders, M. D., & Marshall, J. (1974). Visual capacity in the hemianopic field following a restricted occipital ablation. Brain, 97(1), 709-728. https://doi.org/10.1093/brain/97.1.709
World Health Organisation. (2018). Blindness and Visual Impairment. Retrieved from https://www.who.int/news-room/fact-sheets/detail/blindness-and-visual-impairment
World Health Organisation. (2021). Global eye care targets endorsed by Member States at the 74th World Health Assembly. Retrieved from https://www.who.int/news/item/27-05-2021-global-eye-care-targets-endorsed-by-member-states-at-the-74th-world-health-assembly
Zeki, S. (1980). The representation of colours in the cerebral cortex. Nature, 284, 412-418. https://doi.org/10.1038/284412a0
Zeki, S., & Marini, L. (1998). Three cortical stages of colour processing in the human brain. Brain, 121(9), 1669-1685. https://doi.org/10.1093/brain/121.9.1669
About the Author
Dr Ellie Dommett studied psychology at Sheffield University. She went on to complete an MSc Neuroscience at the Institute of Psychiatry before returning to Sheffield for her doctorate, investigating the superior colliculus, a midbrain multisensory structure. After a post-doctoral research post at Oxford University she became a lecturer at the Open University before joining King’s College London, where she is now a Reader in Neuroscience. She conducts research into Attention Deficit Hyperactivity Disorder, focusing on identifying novel management approaches. | textbooks/socialsci/Psychology/Biological_Psychology/Introduction_to_Biological_Psychology_(Hall_Ed.)/04%3A_Sensing_the_environment_and_perceiving_the_world/4.04%3A_Lighting_the_world-_our_sense_of_vision.txt |
Learning objectives
By the end of this chapter, you will be able to:
• identify the stimuli and sensory structures involved in taste and smell sensations
• understand the transduction mechanisms in place to transform chemical information into action potentials within each sense
• describe the neural pathways supporting gustatory and olfactory perception.
Sensing chemical compounds in the environment is the most ancient sensory mechanism in living organisms. Very early on in the history of life on Earth, unicellular organisms developed chemical detection to distinguish food from toxins, to find mates and to avoid danger. All the way to the present day, motivated and emotional behaviours in human and non-human animals are greatly influenced by the detection of environmental chemical signals. In this chapter, we will review the current knowledge of the chemical senses in humans, paying special attention to the signals they can detect, how the transduction from chemical to neural code takes place, and what brain regions are involved in each case.
Humans can detect chemical compounds in the environment via the olfactory (smells) and gustatory (tastes) systems. Flavours are also a product of chemical sensation, but they result from the combined perception of smells and tastes. Through these specialised senses, chemical information in the surrounding environment is captured by chemosensory receptors, located mainly in the mouth and nasal cavity. Information regarding quality and quantity of chemicals is converted into the language of the nervous system, action potentials, through sensory transduction, and then transmitted to the central nervous system. In the brain, this information is integrated to produce olfactory and gustatory perception that will ultimately influence decision-making and behavioural selection and action.
Even though somewhat less studied than senses such as vision and audition, the chemical senses can be organised in two systems: gustation and olfaction. The gustatory sense, or sense of taste, picks up on soluble chemical compounds, present in the mouth. The olfactory sense, or sense of smell, reacts to airborne molecules that reach the nasal cavity.
These sensory systems provide animals with key environmental information for producing adaptive behaviours. Smells serve as long- and short-range signals, whereas tastes only act in the short range, after we ingest food or drinks. This information may be crucial for finding, selecting and consuming food, finding a potential mate or regulating social interactions with others. Even though the chemical senses are composed of standalone systems, their coordinated action can produce even more complex sensory capacities such as detecting flavours, which involves the activation of common sensory neurons within the piriform cortex – the part of the brain that first processes olfactory information (Fu, Sugai, Yoshimura, & Onoda, 2004).
Sense of taste
Anatomical overview
Our sense of taste starts with tastant molecules reaching our mouth through ingestion.
Did you know?
The lumps that you can see on your tongue are often mistakenly called taste buds, but they are actually epithelial structures called papillae.
Tastants are water-soluble or lipid-soluble chemical substances, present in food or drinks, that create the sensation of taste when detected by taste receptor cells (TRCs) within the mouth. We are quite familiar with the many little raised bumps on our tongue epithelium that can be seen by looking at the tongue with the naked eye. These lumps are called papillae, and it is within the walls and fissures of the papillae that we find the taste buds that contain the taste receptor cells. There are thousands of taste buds distributed across the papillae, which are divided into three categories depending on their location: the foliate papillae located on the sides of the posterior section of the tongue; the circumvallate papillae located at the back of the tongue; and the fungiform papillae located in the anterior part of the tongue (Figure 4.46).
Each taste bud contains groups of between 50 and 150 taste receptor cells, and presents an upper aperture called the taste pore. TRCs project fine hair-like extensions, or microvilli, out of taste pores into the buccal cavity, where they encounter the tastants.
In humans, there are three main types of taste receptor cells, according to their function. Type I cells have primarily housekeeping functions. Type II cells are sensitive to sweet, bitter, and umami tastes. Type III cells appear to mediate sour taste perception. Detection of tastant compounds by receptor cells leads to neurotransmitter release (usually ATP) and generation of action potentials in neurons at the base of these receptor cells. The axons of these neurons form the afferent nerves that transmit the information to the brain via three different cranial nerves: VII, IX and X. Different papillae areas are innervated by different branches of the cranial nerves VII, IX, and X. The anterior two-thirds of the tongue, with the fungiform papillae, is supplied by branches of cranial nerve VII. The posterior third of the tongue is innervated by branches of nerve IX, the glossopharyngeal nerve. The posterior regions of the oesophagus and the soft palate are innervated by branches of cranial nerve X.
Nerves VII, IX and X project into the brainstem where they synapse with the rostral part of the nucleus of the solitary tract (NTS) which relays information to the ventral posterior medial nucleus of the thalamus. The thalamus projects to the anterior insular cortex and to a region called the primary gustatory cortex or insular taste cortex. Neural signals from the insular taste cortex travel to the secondary gustatory cortex, within the medial and lateral orbitofrontal cortex, and project also to structures like the amygdala, hippocampus, striatum and hypothalamus, where this sensory information can affect different stages of decision-making and behavioural output (Figure 4.48).
Did you know?
Taste buds have a life span of about two weeks, allowing them to grow back even when they are destroyed, for example when we burn our tongues. In this respect they are akin to skin cells, but they also share characteristics with neurons. For example, they have excitable membranes and release neurotransmitters.
Sensory transduction
How is the chemical information contained in the quality and quantity of specific tastants transformed into neural signals that the brain can interpret?
Tastants enter the papillae through the taste pore and induce different mechanisms in taste receptor cells. Each receptor cell has distinct mechanisms for transducing the chemical information into neural activity. Tastants are divided into salty, sour, sweet, bitter and umami – the last derived from the Japanese word meaning ‘deliciousness’. The umami taste is produced by monosodium glutamate, and probably other related amino acids.
As we have heard, components of salty chemicals are key for survival in several animals. The Na+ ions contained in the saltiest of all salts, sodium chloride (NaCl), are key for maintaining muscle and neuronal functioning. A sub-group of Type II taste receptor cells are specialised for salt detection. These cells express receptors that detect and react to the presence of salty substances containing Na+. Receptor cells mediating sour taste express channels that allow other free cations, such as the H+ released by acid compounds, into the cell (Figure 4.49a). Receptor cells expressing ion channels for Na+ or H+ allow these cations into the intracellular space, depolarising the membrane and leading to release of neurotransmitter, typically ATP, and action potential firing in the neurons that make up the cranial nerves. Recent research has identified the ion channel responsible for NaCl detection in mice. Deletion of the gene that produces the epithelial sodium channel (ENaC) in mice specifically affected a sub-group of Type II taste receptor cells. Mice lacking ENaC showed complete loss of salt attraction and sodium taste responses compared to control animals (Chandrashekar et al., 2010). This was the first evidence that salt is detected by a specific protein expressed in a distinctive type of TRC. Other categories of tastant molecules, specifically those perceived as sweet, bitter, and umami, activate G-protein-coupled receptors (GPCRs, Figure 4.49b).
As we have heard in earlier chapters, GPCRs are transmembrane receptors associated on their cytoplasmic side with G-proteins. They use a ‘key to lock’ mechanism for the transduction of the chemical information into neural activity. When a particular tastant molecule is recognised by a GPCR, the associated G-protein is activated, dissociating into α and βγ subunits. These can activate further intracellular signalling cascades, leading to depolarisation and/or an increase in intracellular calcium concentration that ultimately results in the release of neurotransmitters, usually ATP.
In mammals, the sweet and umami receptors are heteromeric GPCRs named T1R2+3 and T1R1+3, respectively. These receptors are combinations of proteins from the T1R1, T1R2 and T1R3 families, and can detect sweet and umami taste compounds. Like the ENaC knockout mice, animals without T1R1 fail to detect umami compounds, whereas animals lacking T1R2 fail to detect sweet tastes (Zhao et al., 2003).
Exercise
Domestic cats, lions or tigers do not have the genes that codify for T1R2 receptors. This means they cannot taste sweet tastes and are unable to experience sweetness. How do you think this fact influences their strictly carnivore diet?
Topography/distribution of taste receptors
Historically, scientists had a rigid view of the topography or distribution of taste receptors, but this concept is slowly being abandoned. Nowadays, we recognise that taste zones across the surface of the tongue are not absolute, and that all zones can detect all tastes, albeit with different detection capacities. Taste sensitivity thresholds, rather than receptor distribution, vary across the surface of the tongue, with all areas showing higher or lower sensitivity to all tastants. For instance, receptors with higher sensitivity for bitter tastants tend to be distributed posteriorly in the tongue. Salty and sweet tastes are more easily detected in the tip of the tongue and are conveyed primarily by cranial nerve VII. Bitter sensations are mainly relayed by cranial nerve IX, which provides innervation to the posterior third of the tongue.
Coding of information in the gustatory system
There is generally a proportional relationship between the concentration of the tastant and the firing rate of first order axons that enter the brain stem, so coding of taste intensity is based, at least in part, on frequency of action potentials.
Coding of gustatory information is also based on the topographical distribution of taste receptor cell sensitivity. This distribution provides the foundation for labelled-line coding (Squire et al., 2012), meaning that information about the nature of the taste is provided by which cell has been activated. In other words, an axon that receives information from a sweet receptor is labelled as coding sweetness. Hence, whenever this axon fires an action potential and conveys that signal into the brainstem, the received input is interpreted as sweetness. This is similar to the principles of encoding we encounter in the somatic sensory system, where the identity of the activated neuron, rather than its firing rate, indicates the quality of the signal it carries (for example, activation of a neuron innervating the finger is perceived as coming from that area, and the type of neuron activated influences what sensation is perceived).
In the case of gustation, we might recognise activity of axons as signals of the presence of sour, bitter, salty, sweet, and umami tastants. Other evidence, however, suggests that the pattern of activity across neurons that preferentially respond to different taste characteristics is used to code for specific tastes (pattern or ensemble coding).
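The difference between the two coding schemes can be illustrated with a toy decoder: in the labelled-line version, the identity of the single most active ‘line’ names the taste, whereas in the pattern (ensemble) version the whole vector of activity is compared with stored patterns. The firing rates, stored patterns and taste labels below are invented purely for illustration.

```python
# Toy contrast between labelled-line and pattern (ensemble) decoding of taste.
LINES = ["sweet", "salty", "sour", "bitter", "umami"]

def labelled_line_decode(firing_rates):
    """Labelled-line: whichever labelled axon fires most defines the percept."""
    return LINES[firing_rates.index(max(firing_rates))]

STORED_PATTERNS = {                 # invented activity patterns across five neurons
    "sweet": (9, 1, 0, 0, 2),
    "umami": (3, 1, 0, 0, 8),
}

def pattern_decode(firing_rates):
    """Pattern coding: the percept is the stored pattern most similar to the input."""
    def distance(pattern):
        return sum((a - b) ** 2 for a, b in zip(firing_rates, pattern))
    return min(STORED_PATTERNS, key=lambda taste: distance(STORED_PATTERNS[taste]))

rates = [8, 2, 0, 0, 3]             # a sweet stimulus that also weakly drives 'umami' neurons
print(labelled_line_decode(rates), pattern_decode(rates))   # both report 'sweet'
```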
Within the central nervous system, the identity of tastants is preserved in the relays from the nucleus of the solitary tract, through the ventral posterior medial complex of the thalamus, to the gustatory cortex in the insula (Doty, 2015). For some time, it was assumed that the insula would represent taste categories in a ‘gustotopic map’, but the empirical evidence has been elusive. Recent studies, using genetic tracing of taste receptor cells into the gustatory cortex, suggest that there are distinctive spatial patterns within the cortex, but no region is assigned to a single tastant (Accolla et al., 2007). Finally, the information from tastants reaches the orbitofrontal cortex, where it is integrated with sensory information from different modalities, suggesting that this area integrates tastes with other information to create more complex perceptual experiences.
Did you know?
There is some evidence of labelled-line coding in all of the sensory systems.
The concept refers to the idea that the line, or the pathway, from peripheral receptor into the brain, is labelled based on the presence of particular receptors that accomplish the sensory transduction process.
Sense of smell
Anatomical overview
In humans, olfaction, or the sense of smell, detects airborne molecules or odorants that enter the nasal cavity. Odorants interact with olfactory sensory neurons (OSNs) located in the olfactory epithelium that covers the dorsal and medial aspect of the nasal passageway (Figure 4.50).
OSNs are in charge of transducing the chemical information of odorants, encoding information about the quality and quantity of smells into action potentials that can be interpreted by the brain. Olfactory sensory neurons extend their axons through the cribriform plate of the ethmoid bone. These axons make synaptic contact with the mitral cells within structures known as glomeruli in the olfactory bulb. Axons from the mitral cells then bundle together to form the olfactory tract, conveying olfactory information to various brain regions.
Information about the presence and quantity of smells leaves the olfactory bulb via the lateral olfactory tract. This tract projects to the inferior and posterior parts of the frontal lobe, near the junction of the frontal and temporal lobes, which constitutes the beginning of the olfactory cortex (Figure 4.51).
Unlike other primary sensory cortices, primary olfactory cortex comprises a number of different structures. These include subcortical structures such as the olfactory tubercle, in the ventral part of the striatum, and part of the amygdala as well as cortical regions in the medial part of the temporal lobe (entorhinal cortex) and its junction with the frontal lobe (piriform cortex). The divisions of the olfactory cortex are interconnected, and even though there is most emphasis on the piriform cortex, the entire extended network of these regions constitute the olfactory cortex. Furthermore, these divisions of the olfactory cortex also project to other brain areas, including the thalamus, hypothalamus, hippocampus and, especially importantly, the orbital and frontal parts of the prefrontal cortex.
Unlike other sensory systems, in olfaction, there is not a thalamic relay between the peripheral sensory structures, i.e. the olfactory bulb, and the cortex (Breslin, 2019). In the olfactory system the connection with the thalamus is downstream from the cerebral cortex.
Sensory transduction and odour representation
Several types of cells are present at the olfactory epithelium. Supporting cells provide metabolic and physical support for the epithelium, but smell detection and transduction relies on mature cells called olfactory sensory neurons (OSNs). The nasal cavity is a challenging environment for living cells due to significant changes in environmental conditions such as humidity and temperature, which result in a short lifespan of OSNs. Constant mitotic divisions and maturation of basal cells replenishes the pool of OSNs, maintaining their number. In addition to the sensory and supporting cells, the epithelium is composed of glandular cells that produce and secrete the thick mucus that covers and protects its more exposed cellular structures (Figure 4.50).
Odorant molecules that access the nasal cavity and diffuse through the mucus interact with olfactory cilia, hair-like extensions projecting from the end of the OSN dendrite. Embedded in the membrane of the olfactory cilia are the receptor proteins that bind the odorants. Humans have around one thousand different odour receptor (OR) genes but can perceive more than a trillion different odours (Bushdid et al., 2014; https://doi.org/10.1126/science.1249168). In a characteristic ‘one-to-one-to-one’ arrangement, each OSN expresses only one type of OR gene, and all OSNs expressing the same OR protein project their axons to the same glomeruli within the olfactory bulb (Figure 4.50). Hence, glomerular activation recapitulates OR activation, producing a combinatorial code of glomerular activity unique to each odour.
The current understanding of how odours are recognised at the neural level is explained by the shape-pattern theory, which proposes that each scent activates a unique array of olfactory receptors in the epithelium. The molecular attributes of an odour determine how many ORs can bind to it. Hence, one odour will activate a series of ORs with more or less intensity, and this pattern of OR activation is what the brain recognises as a label for that particular odour molecule. Different odours will trigger different OR activation patterns, but similar odours (i.e. those sharing some molecular properties, like compounds belonging to the alcohol family) will trigger more similar patterns, since they may be recognised by overlapping but slightly differing OR combinations. Note that scents are usually a combination of more than one odour molecule, and scent perception is associated with a yet more complex pattern of OR activation and glomerular representation. A graphical representation of this mechanism is presented in Figure 4.52.
In the figure, odours are represented as geometrical shapes and ORs as shape-fit structures. Odours will fit more or less well within specific shape-fit structures, with a better fit being associated with higher OSN activation. Specific combinations of odours (scents) will produce distinctive OR activation patterns, which can be unambiguously identified by the olfactory sensory brain areas.
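A toy version of this combinatorial code can be sketched in a few lines: each odorant is represented by the set of receptors it activates, and chemically similar odorants produce overlapping patterns while unrelated ones do not. The receptor names, odorants and activation values below are invented purely for illustration and do not correspond to real OR genes.

```python
# Toy combinatorial code: each odorant activates an invented set of receptors.
RESPONSES = {
    "ethanol":  {"OR1": 0.9, "OR2": 0.6, "OR7": 0.2},
    "butanol":  {"OR1": 0.7, "OR2": 0.8, "OR5": 0.3},   # another alcohol: overlapping pattern
    "limonene": {"OR3": 0.9, "OR8": 0.5},               # unrelated odorant: distinct pattern
}

def overlap(odour_a, odour_b):
    """Shared receptors give similar activation patterns, hence similar percepts."""
    a, b = RESPONSES[odour_a], RESPONSES[odour_b]
    return sum(min(a[r], b[r]) for r in a.keys() & b.keys())

print(overlap("ethanol", "butanol"))    # large overlap -> perceived as similar smells
print(overlap("ethanol", "limonene"))   # no shared receptors -> overlap of 0
```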
When odorants bind the ORs on a given OSN, a series of intracellular events takes place, transducing the chemical information into action potentials. ORs, like rhodopsin, metabotropic glutamate receptors and some taste receptors, are GPCRs. When an odorant binds to its specific OR, the associated G-protein is activated, the α and βγ subunits dissociate, and a second messenger pathway is triggered: adenylyl cyclase is activated and produces adenosine 3′,5′-cyclic monophosphate (cAMP) from ATP. This increase in intracellular cAMP levels opens cation-selective channels, allowing calcium and sodium to enter the OSN, depolarising it and making the OSN fire action potentials (if the signal is strong enough). These action potentials are transmitted along the OSN axons out of the nasal epithelium through the olfactory nerve (cranial nerve I). At the glomeruli, OSNs make synaptic contact with and activate mitral cells, which convey the chemosensory information to the brain (Schild & Restrepo, 1998). In contrast with other senses, the olfactory system lacks a topographic map of the sensory environment in the olfactory cortex. Instead, odours are associated with unique activation patterns of primary regions within the olfactory cortex, which correspond with the associated activity patterns at the OSN and glomerular levels.
Expression of ORs varies from individual to individual. In humans, only about a third of all OR genes present in the genome are expressed as receptor proteins, but this number is highly variable between individuals. Olfactory experience depends on which OR genes are expressed, and how many copies of a specific receptor each individual has. Two people expressing 358 and 388 different ORs, respectively, will both be ‘normal’, but the sensory experience associated with a given odour molecule may be different for each of them. For instance, in a recent study, Kurz examined the perception of coriander smell and taste by different volunteers. They found people are ‘lovers’ and ‘haters’ of coriander in roughly equal parts. While ‘lovers’ are attracted by coriander’s ‘fantastically savoury’ smell, ‘haters’ smell soap. This difference is apparently linked to the ability to detect some of the compounds present in coriander, the unsaturated aldehydes, that make ‘haters’ smell something like soap. ‘Lovers’, on the contrary, are insensitive to the unsaturated aldehydes, so do not detect a soap smell, leaving only the more pleasant characteristics of coriander to be detected.
Key Takeaways
• Taste and smell are two senses specialised in detecting chemical compounds that reach the mouth or nose, respectively
• Taste sensory experience is the result of detection in a small number of dimensions, mainly salty, sour, sweet, bitter, and umami. Each sensory dimension is indexed by a specific type of taste receptor distributed along the tongue surface
• Smell detection is supported by a large number of odour receptor neurons, which are activated in a combinatorial fashion to give rise to molecule-specific activation patterns within the olfactory cortex
• ‘Normal’ smell sensation is highly variable between individuals and depends on the quality and quantity of odour receptors expressed between subjects.
Accolla, R., et al. (2007). https://doi.org/10.1523/JNEUROSCI.5188-06.2007
Andreou, A. P., & Edvinsson, L. (2020). Trigeminal Mechanisms of Nociception. Neuromodulation in Headache and Facial Pain Management (pp. 3-31). Springer. https://doi.org/10.1007/978-3-030-14121-9_1
Bereiter, D. A., Hargreaves, K. M., & Hu, J. W. (2008). Trigeminal mechanisms of nociception: peripheral and brainstem organization. Pain, 5, 435-460. https://doi.org/10.1016/B978-012370880-9.00174-2
Breslin, P. A. (2019). Chemical senses in feeding, belonging, and surviving: Or, are you going to eat that? Cambridge University Press.
Chandrashekar, J., et al. (2010). https://doi.org/10.1038/nature08783
Doty, R. L. (2015). Handbook of olfaction and gustation. John Wiley & Sons. https://doi.org/10.1002/9781118971758
Fu, W., Sugai, T., Yoshimura, H., & Onoda, N. (2004). Convergence of olfactory and gustatory connections onto the endopiriform nucleus in the rat. Neuroscience, 126(4), 1033-1041. https://doi.org/10.1016/j.neuroscience.2004.03.041
Hawkes, C. H., & Doty, R. L. (2009). The neurology of olfaction. Cambridge University Press. https://doi.org/10.1017/CBO9780511575754
Hummel, T., Iannilli, E., Frasnelli, J., Boyle, J., & Gerber, J. (2009). Central processing of trigeminal activation in humans. Annals of the New York Academy of Sciences, 1170(1), 190-195. https://doi.org/10.1111/j.1749-6632.2009.03910.x
Papotto, N., Reithofer, S., Baumert, K., Carr, R., Möhrlen, F., & Frings, S. (2021). Olfactory stimulation inhibits nociceptive signal processing at the input stage of the central trigeminal system. Neuroscience, 479.
Price, S., & Daly, D. T. (2021). Neuroanatomy, trigeminal nucleus. In StatPearls [Internet]. StatPearls Publishing.
Schild, D., & Restrepo, D. (1998). https://doi.org/10.1152/physrev.1998.78.2.429
Sell, C. S. (2014). Chemistry and the Sense of Smell. John Wiley & Sons.
Squire, L., Berg, D., Bloom, F. E., Du Lac, S., Ghosh, A., & Spitzer, N. C. (Eds.). (2012). Fundamental neuroscience. Academic press.
Viana, F. (2011). Chemosensory properties of the trigeminal system. ACS chemical neuroscience, 2(1), 38-50. https://doi.org/10.1021/cn100102c
Zhao et al. (2003). https://doi.org/10.1016/S0092-8674(03)00844-4
About the Authors
Dr Paloma Manguele is a Research Fellow in the School of Psychology at the University of Sussex.
Dr Emiliano Merlo obtained a PhD in biology at the University of Buenos Aires, investigating the neurobiology of memory in crabs. He then moved to the University of Cambridge as a Newton International Fellow of The Royal Society and specialised in behavioural neuroscience, focusing on the effect of retrieval on memory persistence. Emiliano recently became a lecturer in the School of Psychology at the University of Sussex, where he convenes a module on the Science of Memory, and lectures on sensory and motor systems, and motivated behaviour in several undergraduate and graduate modules. | textbooks/socialsci/Psychology/Biological_Psychology/Introduction_to_Biological_Psychology_(Hall_Ed.)/04%3A_Sensing_the_environment_and_perceiving_the_world/4.05%3A_Chemical_senses-_taste_and_smell.txt |
Among living organisms, only animals – human and non-human – possess brains. This unique structure allows us not only to perceive the world around us but also to interact with it to survive and thrive. From navigating our way to school or work, to selecting the right food to eat, or the right partner to interact with, our brain integrates sensory and internal information to produce the most appropriate behavioural responses. In this section we will analyse how the motor system is organised to execute actions, from simple reflexes to complex movements. We will then review the current understanding of how the brain integrates sensory and internal state information to produce the most adaptive behaviour given the circumstances. Later editions will also focus on how the brain integrates multi-modal internal and external sensory inputs to produce motivated behaviours such as feeding and drinking.
Learning Objectives
After reading this section you will be able to:
• recognise the components of the human motor system and the different structures involved in sensorimotor integration
• discuss how the motor system is modified by development and learning, and what is the effect of specific damages along the motor system components
• describe the participation of different brain systems in preparing and executing complex motor outputs.
05: Interacting with the world
Learning Objectives
After reading this chapter, you will understand:
• the organisation of the central regions and pathways involved in motor control
• the role of different regions for organising and controlling movement
• that motor systems are modified during development and by learning
• how motor systems break down when components are damaged.
Movement is key to every aspect of our lives. From breathing to walking, writing, or frowning, each behaviour is controlled by the motor system. So, understanding how movement is generated is an important step to understanding behaviour.
Despite being so ‘natural’, the generation of movement is a very complex task. Depending on the goal, the brain computes current and previously stored information to generate instructions and commands that are transformed into movement. This transformation is achieved at the neuromuscular junction, where a motor neuron synapses on a muscle governing its state of contraction. Therefore, to understand how purposeful movements are generated we need to understand how the nervous system is organised and how different regions communicate to control the correct sequence of contraction of hundreds of muscles that will produce the appropriate movement.
In this chapter we will discuss how studies have revealed the relationship between cortical organisation and function for the control of voluntary movement. We will look at how the spinal cord, which contains the motor neurons, is more than just a passive relay of brain information into muscle contraction. Finally, we will evaluate the function of the cerebellum and the basal ganglia in the organisation of movement. As we go along, we will discuss how the motor systems are modified during development and learning. We will also look at what happens when certain components are damaged and how treatments, or the ability of certain regions to change (plasticity), can help recovery.
Organisation of the motor system
The motor systems are used for multiple roles. They are involved in moving through and manipulating the world as well as in verbal and non-verbal (gesture) communication. They allow us to maintain posture and balance and control the contraction of the muscles involved in vital and autonomic functions like breathing and gut movements. Finally, they play a role in sensation, for example controlling the saccadic movements of the eyes as we visually track a stimulus. Despite the diversity of movements we perform, motor control is often considered simple, probably reflecting that movements are seemingly effortless and largely unconscious. However, even simple movements require significant computations to coordinate the action of multiple muscles.
For example, imagine that you want to pick up a raspberry (Figure 5.1). This movement requires the concerted action of several regions of the nervous system, each one with a specific role. Once you have decided to pick up the raspberry, visual information processed in the visual cortex is used to locate the fruit; this information is transmitted to the motor regions of the frontal lobe, where the movement is planned and command signals are sent. The commands are carried to the spinal cord, which is responsible for generating the movement through activation of motor neurons. The coordinated activity of motor neurons induces the contraction and relaxation of muscles in the arm and hand that allow the raspberry to be grabbed. Now, the raspberry is a very delicate fruit, and the correct amount of pressure needs to be applied to detach the fruit without bursting it. Sensory receptors in your fingers relay tactile and proprioceptive information back to the spinal cord and the somatosensory cortex. From there, the information reaches the motor cortex to confirm that you are grabbing the fruit. Other areas are involved during this movement; the grasp force is judged by the basal ganglia, and the cerebellum helps in regulating the timing and accuracy of the movement.
The motor hierarchy
In the diagram of interactions between different regions of the nervous system just described (Figure 5.1), each component controls a particular function. These regions are organised hierarchically.
The forebrain regions involved in making the decision command lower functional areas, like the spinal cord, to execute the movement. Parallel processing allows us to simultaneously produce other movements, like maintaining posture while singing or walking. Finally, there is a level of independence in the function of these brain areas, which can coordinate complex activity in multiple muscle groups having received relatively general commands. This allows movement to happen rapidly, precisely and without conscious control.
Strategies to control movement
How movement is controlled efficiently has been debated for quite some time. When we execute an action, sensory information is used to inform us about the movement, the position of the body and the surrounding environment.
This information can be processed as the movement is progressing allowing us to adjust it. This is called feedback control, where the output is monitored by various sensory systems and signals are relayed into the CNS to inform regions that generate motor outputs.
However, this model is limited to slow movements and sequential actions, since the processing of sensory feedback is relatively slow. For example, when catching a ball it may take 700 milliseconds to respond to visual cues, but the movement itself only takes between 150 and 200 ms. This means that another motor control mechanism must be used for fast, ballistic movements.
In feedforward control, the optimal movement is predicted from current sensory conditions and from memory of past strategies. For example, if you open your front door and see snow and ice you will walk differently to how you would on a sunny day: you will take small steps, walk slowly and hold your arms out for balance, because you know that there is a risk of slipping and falling. If we return to our ball example, knowing the initial conditions of the arm and hand and being able to predict the ball's trajectory are used to choose a stored motor programme to catch the ball. A general feature of feedforward control is that it improves with learning.
Feedback and feedforward controls are not mutually exclusive and are combined to optimally generate coordinated movements.
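A stripped-down way to see the difference is a one-dimensional reaching model: feedback control corrects the remaining error on each time step, whereas feedforward control issues a pre-planned command based on a prediction and cannot correct itself mid-movement. The gains and numbers below are arbitrary; this is a cartoon of the two concepts, not a model of real limb control.

```python
# Cartoon contrast between feedback and feedforward control of a 1-D movement.
def feedback_reach(start, target, gain=0.4, steps=10):
    """Each step, sense the remaining error and correct a fraction of it."""
    position = start
    for _ in range(steps):
        error = target - position        # relies on (slow) sensory feedback, but self-correcting
        position += gain * error
    return position

def feedforward_reach(start, predicted_target, calibration=1.0):
    """Issue one pre-planned command from a prediction; no mid-movement correction."""
    return start + calibration * (predicted_target - start)

print(feedback_reach(0.0, 10.0))    # converges close to 10 even with an imperfect gain
print(feedforward_reach(0.0, 9.0))  # lands at 9: an error in the prediction goes uncorrected
```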
To understand how the regions of the central nervous system work together to plan and command movements, we will now analyse the role of the main regions, starting with the forebrain.
Key Takeaways
• Motor systems are used for multiple roles
• Motor systems consist of several regions that are hierarchically organised
• Motor and sensory systems work together to generate effective movement.
The forebrain and initiation of movement
In the frontal lobes of the brain, specific regions such as the prefrontal cortex, premotor cortex, and primary motor cortex contribute to movements in unique ways.
The prefrontal cortex is critical for making the decision to execute a particular action. For example, if you decide to grab your mobile phone to call a friend, it is the prefrontal cortex that reacts to that goal and instructs the motor system to initiate movement. The premotor cortex receives information from the prefrontal cortex and prepares the required motor sequences, selecting the movements that are most appropriate for the action in the current circumstances. In this case, you will need to retrieve the phone and unlock it with a passcode, moving your fingers from one number to another in an organised sequence and following a specific memory. The information about the motor sequence to be executed is conveyed to the primary motor cortex, which produces the required movements by muscle contraction and relaxation. Sensory input from the posterior parietal cortex, for example about where the phone and your fingers are, also shapes this process.
The motor cortex
Evidence on the organisation of the motor cortex has been very influential in thinking about its function. Wilder Penfield was a neurologist who pioneered neurosurgery for the treatment of epilepsy that could not be controlled with medication. Through surgical interventions, he removed regions of the brain from which the seizure originated. To avoid catastrophic consequences, during surgery he electrically stimulated local regions of the nervous system in awake patients and recorded the results. He found that different parts of the primary motor cortex controlled different muscles (Figure 5.3) (Penfield & Boldrey, 1937).
This led to the drawing of a homunculus, which is a topographical representation of how the primary motor cortex contains a motor map of the body. As with sensory maps, body parts are not equally represented. Areas that need greater motor control – hands, fingertips, lips, and tongue – are controlled by disproportionately larger regions of the motor cortex compared to other body parts.
The homunculus is a simplification, and Penfield himself noted that the facial, arm/trunk and leg regions overlap. He attributed this to variability in brain size and the lack of precise stimulation, but more recent analyses have shown a fractured somatotopic organisation in which neurons controlling facial, arm/trunk and leg movements are intermingled. This has generated controversy: does the primary motor cortex control muscles, or movements?
Modelling movement
A more detailed analysis of the relationship between the primary motor cortical areas and the movement they generate has helped in making sense of how motor cortex works.
From an anatomical point of view, there is evidence that single cortical neurons make direct connections with motor neurons that innervate multiple muscles that work together (they are synergistic) to produce a particular movement.
Furthermore, finger representation is found in several regions of the cortex. This suggests that the fingers, which are involved in so many actions, can be linked to particular tasks and activated independently in different contexts.
These observations point to an organisation of the primary motor cortex that controls movements, rather than the contraction of individual muscles. Neurons are grouped together, providing ‘libraries’ of muscle synergies that can be used for different movements or parts of movements. For example, a region of this cortex will be involved in activating the muscles required for grabbing a marble between thumb and index finger.
Recent discoveries have shown that there is substantial complexity in the movements that can be controlled by the motor cortex. Michael Graziano and colleagues have shown that long (half a second) electrical stimulations of the motor cortex in macaque monkeys can evoke complex actions (Figure 5.4) (Graziano et al., 2016). These actions represent movements usually used by the monkey (they are ethologically relevant). For example, stimulating one area of the motor cortex repeatedly and reliably induced a hand-to-mouth action (E). They also found sites evoking apparent defensive movements (F) or reach-to-grasp (G). Each ethologically relevant type of action is organised in zones, and ablations to these zones affect the ability to generate the corresponding movements. This zonal organisation of complex movements has been termed an action map.
Plasticity in the motor cortex
The cortical areas involved in the control of movement (prefrontal cortex, premotor cortex, and primary motor cortex) show an amazing plasticity. This means that the connections between neurons and their strength can change, new ones being made and old ones broken.
This is particularly obvious during development, when the nervous system is highly malleable, allowing for the maturation of new behaviours like walking for a toddler. In humans, changes in the motor map also occur with the acquisition of skilled movement, like writing or playing the violin. The effects have been studied in detail in animals. At the beginning of training, a refined representation of the skill is absent, but as the skill is learned the map is refined and becomes more precise. The changes are centred on regions that control the muscles involved in the learnt skill: each finger is controlled by a very well-defined region in the violinist's primary motor cortex (Elbert et al., 1995).
This plasticity has also profound implications when the motor areas in the cortex are damaged. If a monkey damages a cortical motor area controlling its paw and does not undergo rehabilitation, this paw becomes paralysed. After a few months, an analysis of the motor cortex in that animal shows that the area controlling the monkey’s paw (wrist and digits) has become smaller, while the lateral areas controlling the elbow and shoulder have enlarged. If animals are not allowed to use their good hand, by use of a cast for example, they are forced to use their bad hand. This is a form of rehabilitation as the areas that control the hand and digits then retain their size and the monkey retains some ability to move its hand (Nudo et al., 1996).
These experiments, performed in animals, have permitted the development of new rehabilitation treatments for humans. Amongst them, constraint-induced movement therapy helps improve the deficit that results from different types of substantial damage to the central nervous system (CNS), such as stroke, traumatic brain injury, multiple sclerosis, cerebral palsy, and certain paediatric motor disorders (Taub, 2012). For example, in stroke patients, transcranial magnetic stimulation has been used to stimulate the damaged motor cortex or to inhibit the intact motor cortex in the opposite hemisphere and improve function (Ziemann, 2005).
The corticospinal tract
The main efferent (descending) route from the primary motor cortex to the brainstem and spinal cord is the corticospinal tract. Most of its axons originate from pyramidal neurons in layer V of the motor cortex, but the tract also includes axons from the premotor cortex and the sensory cortex. The axon bundle descends into the brainstem, where it sends several collaterals to brainstem nuclei and divides into two main branches. The opposite-side (lateral) tract controls movement of the limbs and digits on the opposite side of the body. The same-side tract controls movements closer to the midline on the same side of the body, in particular movements of the trunk and shoulders that influence body orientation (Figure 5.5).
Key Takeaways
• The forebrain organises the initiation of movement: the prefrontal cortex plans, the premotor cortex organises and the motor cortex sends commands to produce movement
• The primary motor cortex contains a motor map of the body: the homunculus
• Motor cortical organisation represents simple and ethologically relevant movements
• Plasticity is fundamental for learning new motor skills and for rehabilitation
• The descending corticospinal tract conveys inputs to the executive circuits in the brainstem and spinal cord.
The spinal cord
The spinal cord plays a fundamental role for the execution of movement. It contains the motor neurons responsible for muscle contraction. It receives descending input from higher brain regions and the sensory feedback from muscles and from touch receptors. It generates the simplest movement: the reflex contraction. It also contains the circuits that control the generation of rhythmic movements, like walking or chewing. When it is lesioned, voluntary movement is impossible below the level of the damage.
A cross-section of the spinal cord (Figure 5.6a) reveals the outer white matter, which contains the axon tracts, and the central grey matter, where the cell bodies of spinal cord neurons are located. The grey matter is divided into the dorsal horn, which relays sensory inputs to the spinal cord and the brain, and the ventral horn, which contains the motor neurons. In the intermediate grey matter, the interneurons that relay inputs to motor neurons are found.
The spinal cord is divided into four sections: cervical, thoracic, lumbar and sacral, each comprising several segments (Figure 5.6b, left image). Limb muscles are supplied by nerves from several segments, reflecting the complexity of the movement generated.
The arm moves thanks to the coordinated stimulation of motor neurons that drive the contraction of extensor and flexor muscles.
For example, elbow flexion is mediated by cervical segments C5 and C6, while its extension is mediated by C7 and C8 (Figure 5.7). Sensory inputs from single strip of skin are supplied by individual spinal nerves, reflecting the importance of localised sensation.
The motor neurons
The motor neurons are the final output elements of the motor system. Each motor neuron innervates as many as 150 fibres of a single muscle (Figure 5.8a). This collection of fibres innervated by a single motor neuron constitutes the smallest unit of contraction, and was named the ‘motor unit’ by Sir Charles Sherrington. Most muscles comprise hundreds of motor units.
By controlling the activity of each motor neuron and the number and type of motor units recruited, the type of movement and muscle force can be adjusted (Figure 5.8b).
Three types of motor units exist:
• Slow motor neurons generate a low and sustained tension, and are recruited first. They provide enough strength for standing or slow movements.
• Fast units generate more strength and are recruited for more intense activity. The fast fatigue-resistant units provide force for intermediate activity like walking or running.
• Finally, when intense movements such as jumping are performed, the fast fatigable units are recruited.
The strength of contraction of each motor unit can also be modulated by changing the firing frequency of the motor neurons at the neuromuscular junction (NMJ).
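The orderly recruitment just described can be illustrated with a small sketch: units are recruited in sequence, from the slow units to the fast fatigable ones, until their combined force meets the demand. The unit forces below are invented numbers used only to show the principle.

```python
# Illustrative recruitment of motor units in order, from slow to fast fatigable.
MOTOR_UNITS = [                                  # (unit type, force in arbitrary units)
    ("slow", 1), ("slow", 1), ("slow", 1),
    ("fast fatigue-resistant", 4), ("fast fatigue-resistant", 4),
    ("fast fatigable", 10),
]

def recruit(required_force):
    """Recruit units in order until their summed force meets the demand."""
    recruited, total = [], 0
    for unit_type, force in MOTOR_UNITS:
        if total >= required_force:
            break
        recruited.append(unit_type)
        total += force
    return recruited

print(recruit(2))    # standing: only slow units
print(recruit(8))    # walking or running: slow plus fast fatigue-resistant units
print(recruit(20))   # jumping: all units, including fast fatigable
```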
The neuromuscular junction
The neuromuscular junction (NMJ) is the chemical synaptic connection between the terminal end of a motor neuron and a muscle (Figure 5.9). It allows the motor neuron to transmit a signal to the muscle fibre, resulting in muscle contraction. Transmission begins when an action potential reaches the axon terminal of the motor neuron. In vertebrates, the neurotransmitter acetylcholine (ACh) is released from the axon terminal and diffuses across the synaptic cleft, where it binds to nicotinic acetylcholine receptors (nAChRs) on the post-synaptic site on the muscle fibre. nAChRs are ligand-gated ion channels: ACh binding opens the ion channel, allowing Na+ ions into the muscle cell and depolarising the membrane. At the muscle, this depolarisation is termed the ‘endplate potential’ (in contrast to the EPSP at a neuron-to-neuron synapse). The endplate potential causes an action potential in the muscle fibre that eventually results in muscle contraction. To prevent sustained contraction of the muscle, ACh is degraded in the NMJ by acetylcholinesterase.
The NMJ is the site of many diseases that affect the way messages are transmitted from the nerves to the muscles. For example, in congenital myasthenic syndrome, proteins required for synaptic transmission at the NMJ are mutated so an action potential in a motor neuron is less able to cause muscle contraction. This condition produces muscle weakness and impacts on mobility to different degrees, depending on the type of genetic mutation. Symptoms range from drooping eyelids and fatigue, to affecting breathing and other essential functions in the life-threatening forms of the disease. How to modulate the efficacy of NMJ transmission is a very active area of research to help patients with this syndrome.
Generation of rhythmic movements
As we already mentioned, the spinal cord is not only a relay site from the brain to the muscles, but also plays a fundamental role in the generation of rhythmic patterns of movement, like walking or running. This means that circuits located in the spinal cord are capable of coordinating the concerted actions of several muscles. More than one hundred years ago, Charles Sherrington (1910) and Graham Brown (1911) performed the first experiments that showed that the spinal cord, disconnected from the brain, could produce the rhythmic movement of stepping in cats. After years of controversy and experimentation in many species, it is now accepted that the spinal cord contains circuits that generate rhythmic movements like chewing or walking independently of the inputs it receives. These circuits of interneurons are called Central Pattern Generators (CPG) and they ensure the coordinated action of muscles, so that extensors and flexors work in concert to produce fluid movements (Figure 5.7).
While the CPGs can generate rhythmic movements, they require pre-motor inputs that select and coordinate the types of motor neurons needed. For example, walking and running require the contraction of muscles in the legs at different phases and with different intensities. During walking the duration of the stance is longer and the legs are less bent in comparison to running (Figure 5.10). The activity of the CPG that controls the timing and coordination of muscle contraction is influenced by descending inputs from higher centres (mostly primary motor cortex) that send the signal to select between gaits. Additionally, sensory feedback from the muscles (proprioception) and the environment shape the correct execution of movements (Figure 5.10b, bottom).
Spinal cord injury
Understanding the importance of the circuits located in the spinal cord helps in the treatment of patients with spinal cord injury. When the spinal cord is severed, the circuits below the lesion site can no longer be activated by the brain. When the lesion occurs at cervical level (C4-C6), both the arms and legs are paralysed, resulting in quadriplegia. If the lesion is at thoracic level, the legs are paralysed, resulting in paraplegia (refer to Figure 5.6b).
However, it is possible to improve the recovery of locomotion by training. During step training (Figure 5.11), a patient’s body weight is supported by a harness over a treadmill. Therapists and technicians move the legs and joints of the patient to simulate normal walking. As the patient walks, sensory inputs from the legs, the sole of the foot and the trunk are repetitively sent to the spinal cord. This trains the spinal cord circuit, and walking and standing are slowly relearned. After several weeks of training, most patients can generate spontaneous walking when placed on the treadmill with support. This enhances health and well-being. When patients have incomplete spinal cord injury, it can be the beginning of recovery since it also stimulates the rewiring of descending inputs from the brain.
See also Locomotor Training video on YouTube: https://www.youtube.com/watch?v=diZLK32DUts
Key Takeaways
• The spinal cord has an organised structure
• The connection between a motor neuron and several muscle fibres comprises a motor unit – the smallest unit of motor output. Each muscle contains many motor units
• The activity and recruitment of motor units influences the motor output
• The neuromuscular junction is the cholinergic synapse between the motor neuron and the muscle
• The spinal cord has neural circuits capable of generating rhythmic movements like walking and chewing
• Training the spinal circuits has a beneficial effect for the treatment of spinal cord injury patients.
The cerebellum and the control of skilled movement
The cerebellum comprises between 10 and 20% of the brain volume, but it contains 50% of its neurons. This disparity is possible because the cerebellum is a highly organised structure that allows the dense packing of neurons. It is located on the back of the brain and just above the brain stem. The cerebellum is divided into several regions, each with specific functions and connections to different parts of the brain (Figure 5.12).
The cerebellum contains sensory and motor components, but it is not necessary for the direct execution of movement. Rather, it plays a role in the coordination and planning of movement, functions which are affected in patients with cerebellar lesions.
The first insight into the role of the cerebellum was obtained by the neurologist Gordon Holmes (Holmes, 1922). After World War 1, he analysed the behaviour of soldiers who had been wounded by bullets and presented with localised damage to the cerebellum. He observed that, despite not presenting sensory loss, the movements of the patients were affected: they presented cerebellar ataxia (lack of coordination).
The patients presented muscle weakness and reduced muscle tone (hypotonia), showed inappropriate displacements like overreaching (dysmetria) and struggled to make rapid alternating movements (dysdiadochokinesis). Their movements seemed to be decomposed, with a lack of coordination between different joints.
All these defects pointed to a role of the cerebellum in the construction of movement, contributing to its coordination, scaling, timing and precision.
Interestingly, one of Holmes’ patients described that ‘the cerebellar lesion meant that it was as if each movement was being performed for the first time’. This and other observations led to the current view that the cerebellum enables predictive motor commands to be made. This means that over repeated iterations of a movement – for example, hitting a tennis ball with a racquet – an internal model of the movement is learnt: a motor programme. The next time you want to hit the ball, this cerebellar representation is used to generate and construct the appropriate movements in response to the sensory inputs received, making your stroke each time more accurate and ‘automated’ (remember feedforward control of movement).
The coordination of movement by the cerebellum is possible thanks to its high interconnectivity. It receives inputs about planned movements from the motor cortex, and sensory feedback on the actual movement. This allows the comparison between planned and actual performance of the movement. It produces a precise computation that uses sensory information to adjust the ongoing movement as a part of a feedforward predictive control system.
Key Takeaways
• The cerebellum plays a role in construction of movement. Cerebellar lesions dramatically affect movement, because the timing, scaling and pattern of muscle contractions is inappropriate
• The cerebellum is important in translating ‘sensory’ signals into ‘motor’ coordinates, as part of a feedforward predictive control system
• It also influences motor learning, contributing to the automatisation of movements.
Basal ganglia
The basal ganglia are structures that modulate motor function at the highest levels. They receive extensive connections from the neocortex and send feedback to the motor cortex. The basal ganglia participate in a wide range of functions, including action selection, association and habit learning, motivation, emotions and motor control. In this chapter we will look at their functional organisation and focus on the mechanisms by which they allow the selection of movement and modulate movement force.
The basal ganglia are five interconnected nuclei within the forebrain, located below the cerebral cortex. The main nuclei are the striatum (which means ‘with stripes’), formed by the caudate nucleus and the putamen; the globus pallidus; and two further nuclei, the subthalamic nucleus and the midbrain substantia nigra (Figure 5.13).
The basal ganglia receive inputs from all areas of the neocortex, including the motor cortex, as well as inputs from the limbic areas involved in emotions, such as fear. The nuclei project back to the motor cortex via relays in the thalamus, influencing the descending commands from the primary motor cortex. There are no direct connections from the basal ganglia to the spinal cord.
Functional network organisation: the volume hypothesis
In the volume control theory, the internal globus pallidus acts like a volume dial. It projects indirectly to the motor cortex via the thalamus. The internal globus pallidus is inhibitory: this means that it inhibits the thalamus when activated. If this happens, the thalamus, which is excitatory, does not activate the motor cortex, and this results in less movement. On the other hand, if the internal globus pallidus is inhibited, the inhibition on the thalamus is released and movement can occur. This model suggests that it is through this ‘volume control’ that we make choices and select appropriate goals while rejecting less optimal options.
In this schema of the functional organisation of the basal ganglia, the pathways towards the internal globus pallidus are critical in setting its output. There are direct and indirect pathways.
The direct pathway
In the direct pathway (Figure 5.14a), the striatum (caudate/putamen) is directly connected to the internal globus pallidus and the substantia nigra. If the direct pathway is activated, it inhibits the internal globus pallidus, thus removing the inhibition of the thalamus. This facilitates movement by increasing thalamic excitation of the motor cortex.
In Figure 5.14, blue connections are inhibitory, red connections are excitatory, and the thickness of each line indicates the strength of the connection.
The indirect pathway
In the indirect pathway (Figure 5.14b), the striatum projects to the external globus pallidus and subthalamic nucleus.
The striatum inhibits the external globus pallidus. This disinhibits the subthalamic nucleus which excites the internal globus pallidus. This results in less motor cortex excitation.
Dopamine also plays a role in the modulation of movements. Dopaminergic inputs to the basal ganglia from the substantia nigra pars compacta facilitate movement via both the direct and indirect pathways. In the direct pathway, activation of D1 dopamine receptors on neurons in the striatum enhances striatal inhibition of the internal globus pallidus, disinhibiting the thalamus and facilitating motor outputs. Conversely, in the indirect pathway, dopamine activates D2 dopamine receptors on striatal neurons, reducing their inhibition of the external globus pallidus. The external globus pallidus can therefore more strongly inhibit the subthalamic nucleus, reducing excitation of the internal globus pallidus and decreasing inhibition of the thalamus, further facilitating motor outputs.
Overall, the balance between the direct and indirect pathways controls the ‘volume dial’ that determines the strength of the basal ganglia output to the thalamus, thus acting to modulate the excitatory input received by the motor cortex to select and regulate movement.
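One way to see why chains of inhibitory connections end up facilitating or suppressing movement is to multiply the signs of the connections along each pathway. The short Python sketch below is a toy illustration of that logic only; it is not a model of basal ganglia physiology, and the structure names and signs simply follow the description above.

# Toy sign-chain illustration of the direct and indirect pathways described
# above. Each connection is +1 (excitatory) or -1 (inhibitory); the product
# of the signs along a chain indicates whether activating the first structure
# ultimately promotes (+) or suppresses (-) motor cortex excitation.
def net_effect(connection_signs):
    result = 1
    for sign in connection_signs:
        result *= sign
    return "facilitates movement" if result > 0 else "suppresses movement"

# Direct pathway: striatum -| internal globus pallidus -| thalamus -> motor cortex
direct = [-1, -1, +1]
# Indirect pathway: striatum -| external globus pallidus -| subthalamic nucleus
#                   -> internal globus pallidus -| thalamus -> motor cortex
indirect = [-1, -1, +1, -1, +1]

print("Direct pathway activation:  ", net_effect(direct))    # two inhibitions cancel out
print("Indirect pathway activation:", net_effect(indirect))  # net suppression of movement

The point of the sketch is simply that two inhibitory steps in a row amount to disinhibition, which is why activating the direct pathway facilitates movement while activating the indirect pathway suppresses it.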
Diseases of the basal ganglia
Damage to the basal ganglia can produce two main types of motor symptoms:
• Hyperkinetic symptoms, where there is excessive involuntary movement, as seen in Huntington’s chorea.
• Hypokinetic symptoms, where there is a paucity of movement, as seen in Parkinson’s disease.
Huntington’s disease is a genetic disorder characterised by uncontrolled movements (chorea). The symptoms are excessive spontaneous movements, irregularly timed, randomly distributed and abrupt in character. The disease progresses to dementia and is ultimately fatal.
There is evidence that the motor symptoms arise from neuronal death, which can reach up to 90% of neurons in the striatum (caudate/putamen). This primarily disrupts the indirect pathway: inhibition of the external globus pallidus is lost, producing tonic inhibition of the subthalamic nucleus. This in turn reduces the inhibitory output of the basal ganglia to the thalamus, thus producing excessive movement.
The symptoms can be treated with antipsychotics that block dopamine transmission (e.g. clozapine) and decrease motor activity; as well as anxiolytics or anticonvulsants that increase inhibition via GABA (e.g. clonazepam).
Parkinson’s disease is a slow progressive disorder that affects movement, muscle control and balance. It has three main symptoms: resting tremor, stiffened muscles, and slowness of movement that results in small shuffling steps.
It is produced by a loss of dopaminergic neurons in the substantia nigra and the levels of dopamine in its output regions are dramatically reduced (Figure 5.15).
Dopamine normally facilitates movement. When the levels are decreased, both the direct and indirect pathway are affected, increasing the inhibitory output of the basal ganglia and reducing motor activity.
Pharmacological treatment of Parkinson’s disease largely focuses on restoring dopamine levels. Dopamine cannot be administered directly since it does not cross the blood-brain barrier, and so does not reach the brain when systemically administered. Instead the dopamine precursor L-DOPA is used, which is taken up by the brain and becomes active upon conversion to dopamine by DOPA decarboxylase. Dopamine receptor agonists, or inhibitors of dopamine breakdown, have also been used. These treatments are beneficial, but require gradual increases in dose over time, which can generate many side effects.
Alternatively, stimulation of the subthalamic nucleus or internal globus pallidus through implanted electrodes (‘deep brain stimulation’) has been introduced as a treatment for Parkinson’s disease. This treatment can help relieve symptoms of Parkinson’s disease, but it is not clear whether this is by inhibiting, exciting or more broadly disrupting abnormal information flow through the direct and indirect pathways (Chiken and Nambu, 2016).
See YouTube video ‘Medtronics Deep Brain Stimulation Patient’: https://www.youtube.com/watch?v=_tkmSn2m0Ck
Key Takeaways
• The basal ganglia contribute to high level motor control
• Inputs to the basal ganglia arise from many regions of the cerebral cortex, outputs are directed to the frontal lobe
• Disorders of the basal ganglia involve limited or excessive movement as exemplified by Parkinsonism and chorea, respectively
• The basal ganglia also have important non-motor functions.
Glossary:
Brainstem: the lower part of the brain that is connected to the spinal cord. The brainstem is responsible for regulating most of the body’s automatic functions that are essential for life, such as the respiratory rhythm and swallowing.
Ethological: related to the behaviour in natural conditions.
Muscle synergy: the activation of a group of muscles to contribute to a particular movement, thus reducing the dimensionality of muscle control.
Neocortex: the evolutionarily newest set of layers of the mammalian cortex, involved in higher-order brain functions: generation of motor commands, sensory perception, cognition, spatial reasoning and language.
Plasticity (neuronal): the inherently dynamic biological capacity of the nervous system to undergo maturation, to change structurally and functionally in response to experience, and to adapt following injury.
References
Brown, T. G. (1911) The intrinsic factors in the act of progression in the mammal. The Proceedings of the Royal Society B. Lond., 84(572), 308-319. https://doi.org/10.1098/rspb.1911.0077
Chiken, S., & Nambu, A. (2016). Mechanism of deep brain stimulation: Inhibition, excitation, or disruption? The Neuroscientist. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4871171/
Elbert, T., Pantev, C., Wienbruch, C., Rockstroh, B., & Taub, E. (1995). Increased cortical representation of the fingers of the left hand in string players. Science, 270(5234), 305-307. https://doi.org/10.1126/science.270.5234.305
Filh P. & Thomas B. (2010). Recognizing human gait types. In Ude A. (Ed), Robot Vision (pp183-208). InTechOpen. https://doi.org/10.5772/9293
Graziano, M. S. A. (2016). Ethological action maps: a paradigm shift for the motor cortex. Trends Cogn Sci, 20(2), 121-132. https://doi.org/10.1016/j.tics.2015.10.008
Holmes, G. (1922). Clinical symptoms of cerebellar disease and their interpretation. Croonian Lecture I. The Lancet, 202(1), 1178-1182; Lecture II. The Lancet, 202(1), 1232-1237.
Kandel E., Schwartz J. & Jessell T. (2012) Principles of neural science (5th Ed) . McGraw-Hill Education.
Kleim, J. A., Hogg, T. M., VandenBerg, P. M., Cooper, N. R., Bruneau, R., & Remple, M. (2004). Cortical synaptogenesis and motor map reorganization occur during late, but not early, phase of motor skill learning. J Neurosci, 24(3), 628-633. https://doi.org/10.1523/JNEUROSCI.3440-03.2004
Kolb, B., Whishaw, I. Q., & Teskey, G. C. (2019). An introduction to brain and behavior (6th ed.) Worth Publishers.
Nudo, R.J., Wise, B.M., SiFuentes, F., Milliken G.W. (1996). Neural substrates for the effects of rehabilitative training on motor recovery after ischemic infarct. Science, 272(5269):1791-4. DOI: 10.1126/science.272.5269.1791
Penfield, W., & Boldrey, E. (1937). Somatic motor and sensory representation in the cerebral cortex of man as studied by electrical stimulation. Brain, 60(4), 389-443. https://doi.org/10.1093/brain/60.4.389
Sherrington, C. S. (1910). Flexion-reflex of the limb, crossed extension-reflex, and reflex stepping and standing. J Physiol, 40(1-2), 28-121. https://doi.org/10.1113/jphysiol.1910.sp001362
Taub, E. (2012). The behavior-analytic origins of constraint-induced movement therapy: An example of behavioral neurorehabilitation. Behav Analyst, 35, 155–178. doi: 10.1007/BF03392276
Ziemann, (2005).
About the Author
Dr Jimena Berni is a Senior Researcher at the Brighton and Sussex Medical School, University of Sussex. Her laboratory investigates the relation between neuronal circuits and behaviour with an emphasis on the diversification of circuits and the role of genes in specifying different neuronal networks and their assembly during development.
Learning Objectives
After reading this chapter you will be able to understand:
• the different levels of sensorimotor integration
• the involvement of different brain systems in preparing, executing and evaluating a behavioural action based on external and internal sensory information.
Animals are the only branch of living organisms that have brains and almost all of them do. (Note: There are a few exceptions. Sponges are simple animals that survive on the sea floor by taking nutrients into their porous bodies, and they have no brain or nervous tissue of any kind.) Current theories suggest that one of the main advantages of having a brain is to allow its carrier to move around and interact with the environment. Let’s analyse an illustrative example: the sea squirt, a marine invertebrate animal, which has a very peculiar cycle of life (Figure 5.14).
In its juvenile form, the sea squirt swims around, looking for a suitable rock on which to attach itself. To do so, it uses a rudimentary central nervous system of around 200 neurons. Once attached, the animal becomes sessile (immobile), and eats its brain, a rich source of energy. For the rest of its life the sea squirt will remain immobile, so there is no longer any need for a brain. This fascinating example offers strong support for the necessity of brains to generate adaptive behaviour by coordinating sensory information into motor action.
In this chapter, we will explore how brains produce adaptive behaviour by acquiring information from the environment through the senses. We will start by analysing the simplest sensorimotor integration mechanism, the spinal monosynaptic reflex, and escalate in complexity all the way up to the generation of a complex behaviour such as hitting a tennis ball with a racquet during a match.
Intuitively, becoming a World Chess Champion is a much more complicated task than moving a pawn one square forward in a chess game. Reality suggests otherwise. We humans have been able to build, after a lot of effort and investment, a computer that is capable of consistently winning chess games against human World Champions. Nevertheless, we are still at the infancy of designing and building machines that have the manual dexterity of a small child when picking up a pawn and gently moving it one square forward. How is this possible?
In the game of chess there are a finite number of rules and movement possibilities, all of which are known to us. So, programming a computer that had sufficient computing power to calculate all movement possibilities and outcomes during a chess game was simply a technical challenge: considerable, but feasible. In 1997, an IBM computer called Deep Blue was the first machine capable of defeating the best human chess player of the time, the World Chess Champion Garry Kasparov. This was certainly an outstanding achievement, but Deep Blue was not physically moving the chess pieces; it was instead deciding on the next move given the current state of play. A human helper was required to physically move the pieces following the computer’s instruction.
Robots that can do physical actions mimicking humans are much more complicated to build and program than Deep Blue. Building and programming a robotic arm that can uncap a water bottle and pour a glass of water takes considerable amounts of money and the brains of lots of intelligent engineers. But the same robotic arm is unable to do other simple tasks for humans, such as tying shoelaces or breaking an egg to prepare a meal. Why is this the case?
Probably because robot designers are still not able to incorporate most of the fundamental rules that the nervous system uses to coordinate such tasks, based on real-time integration and analysis of noisy and multi-dimensional sensory information. In the following sections we will revise what is known about the basic principles ruling the sensorimotor integration, focusing on how and where sensory information is computed by the brain to produce adaptive and flexible behavioural outputs.
Sensorimotor integration: the minimal unit
One of the simplest structures to produce sensorimotor integration in humans is the monosynaptic spinal reflex (Figure 5.15).
These reflex arcs comprise one sensory neuron, originating in a target muscle, and one motor neuron, originating in the spinal cord and making a synaptic contact with the same target muscle. The sensory neuron collects information about the stretch status of the muscle fibres via stretch-sensitive sensory terminals. When the muscle stretches beyond a certain threshold the sensory neuron is activated, firing action potentials that travel to its axon terminal. The sensory neuron releases neurotransmitters that activate the motor neuron, which in turn fires action potentials that travel to its axon terminal located in the target muscle. Activation of the motor neuron axon releases the neurotransmitter acetylcholine (ACh), which, via ACh receptor channels, results in muscular fibre contraction. This very simple circuit is an example of sensorimotor integration. In this case, the sensory information is rather simple: the sensory neuron is activated or not, depending on whether the muscle stretch surpasses the predetermined threshold. Also, the outcome is all or none: either the motor neuron fires action potentials and contracts the target muscle, or not. This very simple sensorimotor integration system serves a very specific function of preventing overstretching of the target muscle, which can permanently damage the muscle fibres. The behavioural outcome is also simple, producing a muscle contraction to prevent injury, but well-suited for its biological function.
Since we claimed above that the reason for having a brain is to produce behaviours such as swimming or looking for a suitable place to attach, you may be puzzled by the fact that spinal reflexes produce behaviours without involving the brain. One explanation for this apparent discrepancy is the biological function served by these reflexes. This becomes clear in the following home-based experiment on the knee patellar reflex in humans from Backyard Brains.
Applying a gentle hit to the patellar tendon (connecting the quadriceps muscle with the tibia) of a human volunteer, the quadriceps of the same leg contracts within 20 to 30 milliseconds. In contrast, if we instruct the volunteer (who is blindfolded so they cannot anticipate the upcoming hit to the knee) to contract the quadriceps of the other leg every time they detect the gentle hit on the target knee, the delay between the hit and the contraction increases to around 200 milliseconds. The contraction of the same leg that receives the hit is governed by a spinal reflex, whereas the contraction of the contralateral leg is controlled by the participant’s voluntary decision to move it, a decision that involves the participant’s brain activity. Given the biological function of the spinal reflexes, bypassing the brain allows for a faster response with higher chances of preserving tissue integrity.
However, most human behaviours taking place in our daily activities are a consequence of complex interactions of sensory information, internal state and response possibilities, which requires the computational power of the brain to maximise the benefits of action selection in real time in an ever-changing environment.
Behaviour in an ever-changing world
Let’s explore the following scenario. One morning, you are ironing your clothes before attending a job interview. In a split-second distraction one of your fingers touches the hot iron, and you immediately and rapidly withdraw your affected hand and arm from contact with the iron’s surface.
This everyday life example illustrates the function of another type of spinal reflex called polysynaptic reflex. In this case, the sensory and motor neurons characteristic of the monosynaptic reflexes presented above are complemented by an interneuron making a synaptic bridge between them (Figure 5.16a).
The function of this reflex is to prevent damage to the target body part and does not recruit or require brain activity.
But let’s now imagine that after ironing your clothes, you prepare some coffee for breakfast. When you remove the recently-heated cup of coffee from the microwave you realise that you overheated the coffee, and the cup is too hot for you to handle it all the way to the table. As in the previous example, the corresponding spinal reflex will be activated by the heat to prevent damage to your hand, but you do not drop the hot cup. Instead, you look for a nearby surface on which to place the cup down without spilling its precious contents.
If your brain were not involved in this scenario you would end up with a broken cup of coffee scattered all over the floor, but instead you managed to find a better solution and saved your hand and the coffee. This is an example of sensorimotor integration where action selection by the brain is key to produce an adaptive behavioural response. The polysynaptic spinal reflex that saved your hand from the hot iron needed to be inhibited in this case, and this was possible by the activation of additional interneurons descending from the brain (Figure 5.16b).
Your intention of drinking that coffee, and a plausible prediction of contacting a hot object when you were reaching for the cup inside the microwave, influenced your action selection and overrode the polysynaptic spinal reflex that was sensory-activated. This example illustrates why the brain is essential for this type of behavioural selection, since it can integrate sensory information from different sources, along with the internal state of the subject, to produce a more accurate and advantageous action at the right moment and time.
Neural pathways and structures involved in voluntary actions
Up until now we have revised how reflexes can control actions, and how the integration of multiple sources of information can alter predetermined reflex actions to produce more adaptive behaviours. Nevertheless, most of our daily actions and behaviours are produced voluntarily, without an apparent involvement of mono or polysynaptic reflexes. Seemingly trivial movements like hitting a tennis ball with a racquet require a complex integration of sensory perception and analysis of internal state, including posture and muscle status, action selection and execution. Such a complex task is entirely up to the brain and involves detecting the looming ball through visual and auditory information, estimating the ball speed and area of bounce, approaching the target area and preparing the strike, and finally striking the ball with the centre of the racquet.
In this section we will revise the sensory and neural pathways, and body structures, necessary to produce such voluntary action. You have heard about many of these in previous chapters into sensory and motor pathways, but here we will consider in more detail how they work together to generate behaviour. We will also discuss some of the basic principles that the brain uses to produce the best possible solution for the problem, hitting the tennis ball back to the other side of the tennis court, as well as how the consequences of our actions can sculpt more refined sensorimotor integration processes, producing better actions.
Tracking the ball: audition and vision in action
If you ever played a tennis match, you may recognise that there are two main sources of information when we are trying to track a looming tennis ball. Clearly, this task is mainly solved by the visual system, but the auditory system also plays a part.
For more experienced players, the sound of the ball being hit by the opponent is an early indication of the rough course the ball might follow. If the sound volume is very low or very high, the probability that the ball will miss the permitted section of our side of the court is high. The quality or frequency of the sound may also indicate whether the ball is worth tracking and preparing to return. In those cases, we might even decide the ball is not worth tracking at all and prepare for the next point instead. As discussed in the chapter Perceiving sound, both the volume and pitch of an auditory stimulus are perceived in our inner ear by the structure called the cochlea.
Inner hair cells distributed along the basilar membrane can detect specific sound frequencies and codify the intensity of such frequencies by their action potential firing rate. The auditory information is translated into the language of the nervous system, action potentials, and it reaches the brain via the auditory nerve. Based on previous experience, the brain may be able to determine when the volume and pitch of a sound from the ball being hit by the opponent is more likely associated with a ball missing the target area of the court. In those cases, the motor command activated by the brain will be to prepare for the next point rather than tracking and striking the ball. But, if the sound is about right, then the visual system and a more complex sensorimotor integration mechanism takes place.
The eye is responsible for translating visual information into action potentials. The moving tennis ball travelling at speed towards our side of the court constitutes a looming object that occupies gradually more space on the retinal surface. Both eyes will detect the ball, and the visual information will be integrated in the brain to estimate not only the direction of the ball but also its speed. As the ball moves towards the near side of the court, the eyes will move, aiming to keep the object in focus within the fovea. Keeping the image of the ball within the fovea will give the player the best visual resolution in daylight, maximising the capacity to detect the ball as it travels in a luminous environment. Photons bouncing off the tennis ball that arrive at the fovea will excite a collection of photoreceptors. These specialised cells will translate the visual information into electrical information via the activation of the photopigment and specific ion channels. Changes in the photoreceptor membrane potential lead to activation of the bipolar and ganglion cells, which convey the visual information, now converted into action potentials, to the brain.
The visual information arriving at the brain will play different roles in different motor outputs during action selection and execution. Early visual processing will be required for tracking the moving ball. Empirical research has determined which are the neural pathways involved in object tracking, and how this information is used to produce motor commands for eye movement. Rapid eye movements, called saccades, are used to track the ball and acquire information about the environment. This will be particularly important as we start approaching the area where the ball may bounce, since we not only require keeping an eye on the ball, but also moving safely and effectively within the court.
In the laboratory, visual attention of healthy volunteers can be traced by tracking the position of the eyes in real time. Using electroencephalography or brain imaging techniques in combination with eye tracking, we have learnt that saccades are mainly controlled by the oculomotor loop involving the cerebral cortex, the basal ganglia, and the thalamus (Figure 5.17).
Neurons in the posterior parietal cortex, an associative region that receives visual and motor information, increase their firing rate just before a saccade is observed. Lesions affecting this region destroy the capacity to perform saccades and produce a condition called ‘spatial neglect’ in humans, characterised by attentional deficit in the visual field contralateral to the injured brain hemisphere.
The posterior parietal cortex sends connections to several subcortical nuclei controlling eye movements. One of these, the superior colliculus in the midbrain, contains visual fixation neurons. These cells are activated immediately after a saccade takes place, and keep firing during eye fixation, inhibiting eye movement away from the target location.
Hence, the concerted action of several key brain regions is responsible for tracking down the tennis ball approaching to our side of the court. This information is valuable, but simply tracking a moving tennis ball does not mean we will be able to hit it back with a racquet. How can we use this information to prepare our action of returning the ball to the other side?
In the following section, we will analyse how the different sensory information streams are integrated to coordinate this action.
Integration of visual information: navigating the court towards the ball
The visual information carried by the optic nerve follows parallel pathways for the analysis of different attributes of the visual sensory experience. Two main pathways are distinguished by the involvement of the primary visual cortex V1.
• In the geniculostriate pathway, the visual information from the optic nerve arrives at the lateral geniculate nucleus of the thalamus and then passes to the primary visual cortex V1.
The geniculostriate pathway divides the visual information into the dorsal and ventral streams. In our example, the dorsal stream will be responsible for perceiving the motion of the ball and the spatial relationship between the ball and myself (the so-called how information). The ventral stream will be responsible for determining the contrast, contour, and colour of the tennis ball (the so-called what information).
• In the tectopulvinar pathway, the visual information carried by the optic nerve is relayed into the superior colliculus, a region of the midbrain, and then follows into the pulvinar nucleus of the thalamus.
The tectopulvinar pathway determines the spatial location of objects in the environment, allowing us to navigate without hitting stationary objects. This visual pathway is independent of V1, and allows for an effective navigation of the tennis court avoiding stepping on stationary balls or other potentially dangerous objects.
As usual in neuroscience research, analysis of brain lesions and their consequences are key for understanding brain functioning. Some individuals who suffer a stroke affecting the primary visual cortex V1 are technically blind. They fail all tests for detecting objects or recognising others and places. This is due to the disruption of the geniculostriate pathway, that analyses the how and what of the visual experience, and supports the conscious experience of seeing.
Nevertheless, these patients can solve a visual navigation test (watch a video example of a visual navigation test). If they are left alone to walk down a corridor with different objects scattered along the path, the patients manage to navigate on their own without tripping, even if they do not experience conscious visual perception. This condition is known as blindsight (see also Box 9, Lighting the world: our sense of vision) and the remarkable behavioural observation is explained by the functioning of the tectopulvinar pathway, which does not use V1 for determining the position of objects in the environment. This fascinating observation is a good example of how neuroscientific research reveals brain functioning by analysing the effect of focal lesions in different brain regions.
But let’s go back to the moving tennis ball and how the brain uses this information to produce an action.
With all the visual information flowing through the different pathways, together with the interoceptive information regarding our internal state and the position of our legs and arms, the brain is making a continuous integration and selecting the right action for the right time. The rules the brain follows to make such decisions are currently a matter of intense focus in basic neuroscience research. One hypothesis is that the brain is constantly producing a Bayesian analysis of the world based on sensory information (Körding et al., 2007), using prior experience modulated by ongoing information to calculate the most probable outcome. The brain possesses prior information on where the ball is likely to bounce, which is generated from previous experience playing the game. For instance, very good tennis players aim for the ball to bounce near the court lines, which makes it more difficult for the adversary to return it. This prior information (the likelihood that the ball will be bouncing close to the court line) is combined with the live sensory information of where we estimate the ball is going to bounce. Hence, our estimation of the actual likelihood of the ball bouncing at a particular location on our side of the court is a product of overlapping the prior information with the present information. Our brain will then produce a prediction of where the ball is likely to bounce, and we will approach that position to prepare for the action. As the ball gets closer to the floor, the prediction based on sensory information becomes more accurate and so does the selection of the right behavioural action for those set of conditions.
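To make the Bayesian idea concrete, the short Python sketch below combines a Gaussian prior (where experience says the ball tends to bounce) with a Gaussian sensory estimate (where the current visual information says it will bounce), weighting each by its reliability. The numbers are invented for illustration and are not taken from the study cited above; the sketch shows only the precision-weighted averaging at the heart of this kind of model.

# Minimal sketch of Bayesian combination of prior knowledge and sensory
# evidence about the bounce position, assuming both are Gaussian.
# All numbers are invented for illustration.
prior_mean, prior_var = 0.9, 0.20       # experience: opponents aim near the line
sensory_mean, sensory_var = 0.6, 0.05   # current visual estimate (more precise)

# The posterior mean is a precision-weighted average: the more reliable
# (lower-variance) source of information gets the larger weight.
w_prior = (1 / prior_var) / (1 / prior_var + 1 / sensory_var)
w_sensory = (1 / sensory_var) / (1 / prior_var + 1 / sensory_var)
posterior_mean = w_prior * prior_mean + w_sensory * sensory_mean
posterior_var = 1 / (1 / prior_var + 1 / sensory_var)

print(f"predicted bounce position: {posterior_mean:.2f} (uncertainty {posterior_var:.3f})")
# As the ball approaches, sensory_var shrinks, the sensory weight grows, and
# the prediction becomes dominated by the live visual information.

This is the sense in which the prediction of the bounce point becomes more accurate as the ball gets closer: the sensory estimate gains precision and progressively outweighs the prior.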
Action!
Now that the brain has integrated the available sensory information and predicted where the ball is going to touch the floor, it is time to execute the motor command of hitting the ball after it bounces. Execution and control of voluntary motor sequences are performed by motor loops involving different regions of the cerebral cortex and the basal ganglia.
In our example, the two main motor loops involved are the oculomotor and body movement loops (see Figure 5.17, above).
As we mentioned in the section Tracking the ball, the oculomotor loop receives sensory and interoceptive information to control the eye movements necessary for following the moving ball. The body movement loop controls the hundreds of muscles necessary for performing actions, and involves a serial connection of motor, premotor and somatosensory cortices with areas within the basal ganglia and the thalamus. The thalamus sends feedback connections into the early regions of the cortex. This neuronal circuit is key for constant monitoring of the current action and allows for modification of actions while they are being executed. The striatum and globus pallidus within the basal ganglia are important for action selection, initiation and termination of motor actions (as seen in the example of eye saccades), and for relating actions with their consequences.

According to tennis instructors, hitting a tennis ball properly requires a refined coordination between the position of the ball and the movement of the racquet. To achieve this, it is important to keep the eyes tracking the ball at all times, even while hitting it, and this is where the cerebellum gets involved (Miall et al., 2001). This brain region is activated during tasks that require high coordination between eye tracking and hand movements.
In addition, the execution of any of the movements mentioned so far, as well as the action of hitting the ball, will require the activation of the motor homunculus maps of the motor cortex. All the regions controlling the movement of the participating limbs and muscles will be recruited by the motor command during the whole exercise.
Lastly, activation of motor neurons by these motor commands will produce the firing of action potentials that will travel to the axon terminals. As we heard in the previous chapter, the synaptic contact between the motor neuron axons and the muscle fibres is a specific type of synapse called the neuromuscular junction. When the axon is activated by the arrival of one or more action potentials the internal concentration of the Ca2+ ion increases, increasing the probability of release of synaptic vesicles containing the neurotransmitter acetylcholine. Release of ACh into the synaptic cleft will activate ACh receptor channels expressed in the muscle fibre membrane, driving the depolarisation of the cell membrane and contraction of the muscle fibres.
The coordinated contraction of specific muscle and muscle groups are the outcome of a complex sensorimotor integration and coordination system. Even after the action is executed, the senses and the brain will continue to monitor the environment analysing its consequences.
Behavioural outcome and prediction error
Whether or not we were able to hit the ball, and depending on the outcome of that action, the brain will integrate this information through the reward system dependent on the neurotransmitter dopamine (Figure 5.18).
When the outcome matches our expectations, there is no error signal. The brain has mechanisms to maintain the neural relationships responsible for that behaviour as an adaptive response for similar scenarios. If the outcome differs from the expected result (because we miss the ball completely, it hits the net, or it flies past the bottom court line), a prediction error signal is generated in several regions of the brain by the release of dopamine (Schultz, 2000). This dopamine signal will affect the way different regions of the brain connect to each other, allowing for the modification of the action of approaching or hitting the tennis ball in future encounters. The reward system and the prediction of specific outcomes allow the sensorimotor integration mechanism to learn from its own performance, allowing improvement of actions.
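The logic of such a prediction-error signal can be sketched with a simple Rescorla-Wagner-style update rule, shown below in Python. This is an illustrative stand-in for the dopaminergic teaching signal described above, not a model of the actual circuitry; the learning rate and outcome values are arbitrary choices.

# Schematic prediction-error learning in the Rescorla-Wagner spirit, used here
# only to illustrate the logic of the dopamine signal described above.
def update_expectation(expected_value, actual_outcome, learning_rate=0.3):
    prediction_error = actual_outcome - expected_value   # the "teaching" signal
    new_expectation = expected_value + learning_rate * prediction_error
    return new_expectation, prediction_error

expectation = 0.0  # before any practice we expect nothing from this stroke
for shot, outcome in enumerate([1.0, 1.0, 0.0, 1.0], start=1):  # 1 = good return, 0 = miss
    expectation, error = update_expectation(expectation, outcome)
    print(f"shot {shot}: outcome {outcome:.0f}, prediction error {error:+.2f}, "
          f"new expectation {expectation:.2f}")
# A positive error (better than expected) strengthens the action, a negative
# error (worse than expected) weakens it, and zero error leaves it unchanged.

The sketch makes explicit why an outcome that exactly matches expectation produces no learning, while surprising successes and failures both reshape future behaviour.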
Key Takeaways
• Sensorimotor systems have evolved in animals to generate adaptive behavioural responses to environmental and internal stimuli.
• A seemingly simple action, such as returning a tennis ball during a match, requires coordinated activity of a myriad of brain systems.
• Sensory information, relayed in real time to the brain, is key for selection of most appropriate motor actions for task performance.
• Memory, as prior knowledge, is also key for motor action selection.
• Producing an action involves several cortex-basal ganglia loops (oculomotor and skeletomotor), as well as the cerebellum.
• The dopaminergic system is often key for coordinating the maintenance or modification of motor actions.
References
Körding, K. (2007). Decision theory: What “should” the nervous system do? Science, 318(5850), 606-610. https://doi.org/10.1126/science.1142998
Miall, R. C., & Imamizu, H. (2001). The cerebellum coordinates eye and hand tracking movements. Nature Neuroscience, 4(6), 638-644. https://doi.org/10.1038/88465
Schultz, W. (2000). Multiple reward signals in the brain. Nature Reviews Neuroscience, 1(3), 199-207. https://doi.org/10.1038/35044563
About the Author
Dr Emiliano Merlo obtained a PhD in biology at the University of Buenos Aires, investigating the neurobiology of memory in crabs. He then moved to the University of Cambridge as a Newton International Fellow of The Royal Society and specialised in behavioural neuroscience, focusing on the effect of retrieval on memory persistence. Emiliano recently became a lecturer in the School of Psychology at the University of Sussex, where he convenes a module on the Science of Memory, and lectures on sensory and motor systems, and motivated behaviour in several undergraduate and graduate modules.
‘Why do we get up in the morning?’ is an oft-heard question about the causation of our actions. But why do we get up in the morning?
The answer lies in a complex causal chain of events: basic physiological regulatory mechanisms involving brain stem nuclei and hypothalamic photosensors that regulate our circadian wake-sleep rhythm; gut hormonal mechanisms that interact, again at the level of hypothalamic nuclei, to signal hunger and our need or desire to eat; maternal mechanisms, involving for example oxytocin, that drive our instinct to nurse our infant child; dopaminergic forebrain mechanisms that regulate our desire for earning rewards at work (i.e. a salary); as well as more diffuse and long-term cognitive expectations about what the day, week, year and career may bring us; and so forth.
The study of motivation cuts across psychological domains to understand the principal mechanisms that cause our behaviours, whether they involve basic, essential regulatory behaviours such as drinking, eating/feeding, fighting and the desire for sex, or more complex, psychological ones. We focus here principally on feeding, though we will touch briefly on temperature regulation as a model system for the regulatory physiological mechanisms of motivated action, and on drinking as a motivated behaviour aimed at maintaining hydration levels to ensure optimal physiological, neuronal and psychological function.
Motivation can be defined as an internal state that explains why we behave or why we learn to behave, and the study of motivation focuses on understanding what causes, drives and energises behaviour. We can therefore use terms like motivational states, motivational drives, and motivational desires to describe motivation. There are broadly two main classes of motivated behaviours: those that are ‘regulatory’ in nature and those that are ‘non-regulatory’ in nature.
Learning Objectives
By the end of the chapter you will understand the processes involved in:
• Basic mechanics of homeostatic systems, including temperature regulation
• Mechanisms in the brain and body, including the gut, involved in drinking and feeding
• How homeostatic mechanisms that regulate physiological variables around a set point can become deregulated so that behaviour varies away from the set point, including through learning mechanisms and more complex processes involving desire and hedonics.
We find so-called homeostatic regulatory mechanisms at the foundation of those motivated behaviours essential for basic survival needs: mechanisms that regulate our thirst (and thereby our level of (de)hydration) and our sense of hunger and satiety (and, in doing so, our intake of nutrients, including carbohydrates, fats and vitamins). These mechanisms encompass complex physiological processes by which nutrients, water and salts are absorbed, distributed, released and excreted, but also behavioural (consummatory) mechanisms, namely drinking and eating, and (appetitive) behaviours that direct us to approach locations in the environment and then make us work for water and nutrients to consume.
But regulatory homeostatic mechanisms, as essential as they are (AND THEY ARE!), are only part of the story of motivation; we do not just eat because we lack nutrition, nor do we only drink when we are dehydrated. We do not just have sex to procreate, or run when we are scared. Human motivation, especially, is often non-regulatory in nature, requiring explanations that extend beyond regulatory mechanisms. We will explore these ‘higher motivations’ that rely on a complex set of brain structures forming a ring around the core thalamus in the human and non-human forebrain, described early in the history of neuroscience by Paul Broca (known also for identifying Broca’s speech production area in the frontal lobe), and built on throughout the years by Papez, MacLean, and more recently the likes of Frederick Toates and Kent Berridge.
As mentioned, we start by considering basic regulatory mechanisms of feeding and thirst, preceded by a very brief consideration of how our body (and brain), like the thermostats in our houses and offices, regulates temperature. This necessarily requires us to consider some older research from the early and mid twentieth century, mostly involving American, British and western European scientists, but the field has since burgeoned into a diverse and inclusive scientific community. We begin in Box 1 with a typical but also atypical candidate, B. F. Skinner. Typical because he was an American (caucasian) male professor at Harvard; atypical, or better, unexpected, because he was a vocal opponent of the study of motivational mechanisms and states as genuine targets of scientific inquiry by psychologists.
Motivation as a homeostatic negative-feedback mechanism
Box 1: Behaviourism and the study of motivation
One common tool used by experimental psychologists for studying motivational processes (especially in non-human animals) is the operant or instrumental conditioning chamber, which was (somewhat ironically) designed and developed by the American scientist B.F. Skinner (1904-1990), who used the chamber for his seminal studies on the experimental analysis of behaviour, to show how ‘behaviour is shaped and maintained by its consequence’. If, for example, the consequence of a behaviour is generally positive because it leads to a rewarding outcome (e.g. the delivery of food for a hungry animal), or because it leads to avoiding an unpleasant outcome (e.g. the avoidance of a loud sound), then the animal will learn to repeat that behaviour (i.e. the behaviour is reinforced and the outcome of the behaviour is considered a reinforcer of the behaviour). Interestingly, however, Skinner also believed that trying to understand any internal states that may make an animal seek the reinforcing outcome more in some cases than others (e.g. seeking food when hungry vs. when satiated) distracts from understanding the effect of the reinforcing outcome or the reinforcer on behaviour. He famously wrote that ‘Mentalistic terms associated with reinforcers and with the states in which reinforcers are effective make it difficult to spot functional relations’ (B.F. Skinner, About Behaviorism, 1974).
Nevertheless, the methods that Skinner developed to study how the outcome of a behaviour affects whether the behaviour will be learned and repeated are also used today to delineate the motivation behind the execution of a behaviour.
Operant conditioning chambers (or Skinner boxes), typically contain a lever and a food/sugar-pellet dispenser cup on one side of the chamber. They also typically contain signal lights of different colours and speakers through which sound tones can be played. They may also contain an electric grid through which mild electric shocks (negative stimuli) can be delivered. Experiments involve animals learning that simple presses of the lever result in the delivery of food rewards. In other experiments, animals may learn that a tone or a particular light may signal that food becomes available in the dispenser cup.
Motivation researchers may explore the parts of the brain that are involved in the behaviour of pressing the lever for food when the animal is hungry and must alleviate this internal motivational state of hunger (i.e. regulatory motivated behaviour). In other experiments, motivation researchers may want to explore how other factors, such as learned associations between a light or a sound tone and food, may instigate pressing the lever for food even in an animal that is full, and thus explore brain regions involved in motivated behaviour that is not regulatory in nature.
Clark Hull (1884-1952) proposed that motivated behaviour is principally determined, or driven, by the need to alleviate an internal state of deprivation. Said simply, food reinforces a feeding behaviour if and because it alleviates a hunger state. Thus, the state of hunger is the internal state of deprivation and therefore the motivation to eat. Hull’s (1943) Drive Reduction theory emphasised the importance of maintaining homeostasis as the drive or motivation behind behaviour, and suggested that if this homeostasis, or balance in the internal environment of the organism, was taken away, this would lead to increases in arousal that would initiate action to bring back the balance. Thus the goal for an organism is to remain in homeostasis and to ‘reduce’ any ‘drives’ or motivations that arise from an imbalance in the system, so as to reduce the arousal.
The organism can restore balance in its internal environment by acting to minimise the difference between an actual point that has caused the internal environment of the organism to be out of balance, and a set point, which is the point that the organism wants to be at in order to be at equilibrium, and be balanced. In order for our body to work properly, certain variables in our body must be maintained within narrow limits. As humans, we have optimum set points for body temperature (36.5-37.5 degrees Celsius; °C). We also have optimum set points for levels of hydration and levels of nutrients that are necessary for our body to work properly. The systems in our body that control body temperature, hydration and levels of nutrients, are homeostatic systems and thus tend towards equilibrium at particular set points. If there are deviations from these set points, the homeostatic processes that control them will become active so that action and behaviours (e.g. putting a jacket on when it’s cold and body temperature drops or moving to shaded or cooler spots when it’s hot and body temperature increases) can restore equilibrium, or so that physiological mechanisms outside our control (e.g. immune responses), can be activated to restore equilibrium.
These early drive reduction views about what motivates behaviour rested on the idea of negative feedback proposed by Walter Cannon (1871-1945). Cannon was a doctor and a medical researcher in the First World War, and proposed that homeostatic systems maintain balance via negative feedback. Negative feedback is a process by which the effect produced by an action serves to diminish or terminate that action. Negative feedback mechanisms are the primary way by which homeostatic systems can reduce the difference between an actual point and the ideal set point that the system wants to be at. Let’s have a look at how negative feedback processes work:
Figure 5.21 demonstrates how negative feedback loops work to maintain homeostasis.
At the bottom of the image you can see the physiological variable that must remain within narrow limits of the set point so that the balance does not tip to either side. If we use the example of body temperature, the set point lies between 36.5°C and 37.5°C, because that is normal body temperature. The system also consists of sensors or receptors. These measure what the actual body temperature is. Information about actual body temperature is typically sent to a control system that can monitor deviations from the set point. If there are deviations and the balance tips one way or the other, the control system will send this information to the effector part of the system so that corrective behaviours or physiological responses can be initiated in order to restore body temperature to within the narrow margins of normal body temperature.
This is also how the thermostat in our homes works. If the ideal/target temperature on the thermostat has been set to 21°C, and it happens to be a cold windy day, the thermostat will display the target temperature (i.e. 21°C) and an actual temperature (e.g. 18°C). Since the actual temperature on the sensor/thermostat deviates from the target, the boiler (effector) will start working, turn the radiators on, and restore the home to 21°C, at which point the boiler will switch off. This is conceptually how we think of physiological homeostatic systems in our body that use negative feedback mechanisms to restore and maintain balance to the system.
Thus, a homeostatic system, or a physiological system that depends on homeostasis, requires a system variable that is controlled by the system (e.g., temperature, hydration, nutrients), which must remain within narrow bounds of a set-point for the system to work well (e.g. 36.5-37.5°C for body temperature). Sensors (receptors) measure the actual value of the system variable, and transmit this information to a control centre, which can detect deviations from the set point. If deviations are detected, the control centre transmits this information to the effector system which initiates the necessary behavioural/physiological processes to change the system variable and restore homeostasis.
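As a minimal illustration of this control loop, the short Python sketch below mimics the thermostat example: a sensor reading is compared with the set point, and the effector (the boiler) is switched on only while the actual value falls below target. The temperatures, gains and number of cycles are arbitrary illustrative numbers, not physiological or engineering constants.

```python
def thermostat_step(actual_temp, set_point=21.0, heat_gain=0.5, heat_loss=0.2):
    """One negative-feedback cycle: compare the sensed value with the set point,
    then act via the effector to reduce the deviation."""
    deviation = set_point - actual_temp      # detected by the control centre
    boiler_on = deviation > 0                # effector engaged only when too cold
    if boiler_on:
        actual_temp += heat_gain             # corrective action shrinks the deviation...
    actual_temp -= heat_loss                 # ...while the cold day keeps cooling the room
    return actual_temp, boiler_on

temp = 18.0                                  # starting temperature on a cold, windy day
for cycle in range(12):
    temp, on = thermostat_step(temp)
    print(f"cycle {cycle:2d}: {temp:4.1f} degrees, boiler {'on' if on else 'off'}")
```

The same logic, with body temperature, hydration or nutrient levels in place of room temperature, is the conceptual core of the homeostatic systems described above.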
Motivation to eat/stop-eating to maintain homeostasis
Box 2: How does the body use energy, and how does it extract energy from food?
The body uses energy for three primary reasons.
The largest amount of energy that we take in through our food is used to maintain basal metabolic rate (BMR): around 55% of energy usage goes towards maintaining body heat and other basic bodily functions (e.g. breathing, blood circulation). The proportion of energy used to maintain BMR varies as a function of body size. Elephants, for example, consume more energy to maintain basic functions than mice. Of this 55%, the liver uses 27% and the brain uses 19%.
The digestion of food and the processes involved in extracting nutrients from food uses 33% of the energy that comes in through food.
Finally, 12-13% of the energy that we take in is used as energy for active behaviour, and this percentage varies depending on the level of exercise/activity that we do. If we go to the gym, for example, we will use more than 13%. Since only a fraction of the energy that we consume is used for activity other than BMR maintenance and digestion, exercise is a good way to lose weight but reductions in intake are also necessary for weight loss. Energy which is not used for BMR maintenance, digestion, or activity will be stored as energy reserves either in the liver (short-term storage) or in fatty tissue (long-term storage).
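As a back-of-the-envelope illustration of this budget, the snippet below partitions a daily intake using the rough percentages quoted above (55% BMR, 33% digestion, about 12% activity). The 2000 kcal figure is an arbitrary example chosen purely to make the arithmetic concrete, not a recommendation.

```python
def energy_budget(intake_kcal=2000.0, bmr_frac=0.55, digestion_frac=0.33, activity_frac=0.12):
    """Split a day's energy intake according to the rough proportions quoted in the text."""
    bmr = round(intake_kcal * bmr_frac, 1)
    digestion = round(intake_kcal * digestion_frac, 1)
    activity = round(intake_kcal * activity_frac, 1)
    stored = round(intake_kcal - (bmr + digestion + activity), 1)  # anything left goes to reserves
    return {"BMR": bmr, "digestion": digestion, "activity": activity, "stored": stored}

print(energy_budget())
# e.g. {'BMR': 1100.0, 'digestion': 660.0, 'activity': 240.0, 'stored': 0.0}
```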
Glucose is the primary fuel or form of energy that the body uses. Glucose is derived from three main sources in our diet:
• carbohydrates (sugars),
• amino acids (building blocks of proteins), and
• lipids (fat).
Carbohydrates are broken down and converted into glucose as soon as they are taken in by the body. Glucose in turn is used as the main energy source to fuel the brain, muscles and the rest of the body. Excess glucose is stored in the form of glycogen in the liver. This is a short-term store of energy that we can draw on when needed, through a process involving the pancreas. The pancreas releases insulin, which converts excess glucose into glycogen that is then stored in the liver. If we need this energy, the pancreas secretes glucagon to convert the glycogen back to glucose so that it can be used by the body and the brain. Carbohydrates are not essential from a body building-block perspective. They are also not the only source of energy, because proteins and fats are also a source of glucose. This knowledge is the basis of many low-carb diets: by reducing the intake of carbohydrates, the individual will need to get their glucose from amino acids and, especially, fats.
Amino acids derive from proteins, and provide the basic building blocks that cells need (e.g. cells embed proteins in their membranes, and the structure of the cell’s membrane requires proteins). Amino acids are a source of glucose as well: they can be converted into glucose through gluconeogenesis. There are nine amino acids that are essential (i.e. we cannot produce them in our bodies and need to take them in through our diet). Tryptophan is one essential amino acid and is found in oats, bananas, dried prunes, milk, tuna fish, cheese, bread, chicken, turkey, peanuts, and chocolate. It is the sole precursor of the neurotransmitter serotonin. The ability to change the rate of serotonin synthesis by manipulating levels of tryptophan in the body is the foundation of a large body of research examining the relationship between serotonin dysregulation and mood, behaviour, and cognition (see Richard et al., 2009 for a review).
Finally, lipids (fats) can also be converted to glucose, and constitute essential building blocks for our cells (e.g. the lipid bilayer that forms the membrane of our cells). Energy can also be stored long term in fatty, or adipose, tissue: fats are stored there directly, or are broken down into fatty acids and glycerol, and glycerol can in turn be converted into glucose for energy.
Carbohydrates are non-essential but amino acids and lipids are essential from a building block perspective, as are minerals and vitamins. Minerals and vitamins must also be taken in through our diets or via supplements; they are essential for normal body functioning, but they are not a source of energy.
If the motivation to eat or to stop eating results from a need to alleviate a negative state of hunger or of feeling full respectively, this raises the questions:
• What is the system variable that needs to remain in homeostasis?
• Which sensors or receptors measure the variable?
• Is there an effector mechanism that either changes metabolic processes or that initiates or terminates the feeding behaviour so that equilibrium is restored in the system?
• If so, is the effector mechanism located in a particular part of the body, or the brain?
Since glucose is the main source of energy in the body, it would make sense that we should have a homeostatic system that regulates the amount of glucose in the body.
The notion that glucose metabolism plays a key role in the control of hunger, satiety and the regulation of body energy balance was first proposed by Anton Julius Carlson (1916), but was later formalised into the glucostatic theory of food intake control by Jean Mayer (1953; 1955). According to this theory, the system variable that should be maintained within narrow limits is the level of glucose concentration in the blood. Campfield and Smith (2003) recorded blood glucose concentration changes in rats over time, and found that a fall in blood glucose was correlated with meal initiation. Thus, when blood glucose concentrations decreased, the animal would begin feeding, which would result in the rise of blood glucose concentrations.
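The glucostatic idea can be expressed as a simple threshold rule layered on the negative-feedback scheme from earlier in the chapter. The sketch below is purely illustrative: the glucose values, thresholds and rates are arbitrary units invented for the example, not physiological measurements.

```python
def simulate_meals(n_steps=200, set_point=100.0, start_threshold=90.0,
                   decay=1.0, meal_gain=3.0):
    """Toy glucostatic loop: a meal starts when 'blood glucose' drops below a
    threshold and ends once the set point is restored."""
    glucose, eating, trace = set_point, False, []
    for _ in range(n_steps):
        glucose -= decay                         # glucose is used up over time
        if not eating and glucose < start_threshold:
            eating = True                        # deviation from set point -> meal initiation
        if eating:
            glucose += meal_gain                 # eating raises blood glucose...
            if glucose >= set_point:
                eating = False                   # ...until the set point is restored
        trace.append(eating)
    return trace

trace = simulate_meals()
meals = sum(1 for prev, now in zip(trace, trace[1:]) if now and not prev)
print(f"Meals initiated over {len(trace)} time steps: {meals}")
```

The resulting pattern of intermittent meals, each triggered by a dip below the threshold, is the kind of relationship Campfield and Smith observed between falling blood glucose and meal initiation, although their data of course came from real recordings rather than a toy rule like this one.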
While the glucostatic theory of food control proposed that short-term appetite control or starting/stopping eating is mediated by deviations from a hypothetical blood glucose level set point, other proposals included glucose concentrations in the brain. In terms of long term regulation of weight, which is different from a glucose-mediated short-term control of appetite, the lipostatic theory suggested that in the long term the body is trying to maintain an optimum body fat level. These ideas are not mutually exclusive and might be complementary. It may be that our body is regulating multiple variables in a homeostatic way.
If deviations from optimum blood or brain concentrations of glucose elicit regulatory motivational drives to eat or to stop eating, what part of the brain or body constitutes the effector mechanism?
According to the dual centre model (Stellar, 1954), two areas in the hypothalamus (‘hypo’ = below the thalamus), the lateral hypothalamus and the ventromedial hypothalamus, were thought to be the dedicated start-eating and stop-eating centres. The lateral hypothalamus is a group of cells in the hypothalamus located away from the midline of the brain, while the ventromedial hypothalamus is a group of cells near the midline (medial) and towards the bottom (ventral) part of the hypothalamus.
The model was based on findings from lesion studies. Bilateral lesions of the ventromedial hypothalamus resulted in the animal starting to eat and putting on weight (Hetherington and Ranson, 1942; Brobeck, Tepperman and Long, 1943). Thus, if removal of this area results in initiating feeding behaviour, then this area must be responsible for stopping feeding. Conversely, bilateral lesions of the lateral hypothalamus resulted in the animal eating less and losing weight compared to control animals without the lesion (Hetherington and Ranson, 1940; Anand and Brobeck, 1951). Thus, if removal of the lateral hypothalamus results in less feeding, then this area must be responsible for starting to eat. More recent experiments using optogenetics have shown that animals initiate eating upon stimulation of the lateral hypothalamus (Urstadt and Berridge, 2020). The following video shows this effect: https://www.youtube.com/watch?v=lBhYmBkqj4o
Thus the motivation to eat/stop eating from a homeostatic perspective could be governed by these effector mechanisms located in the lateral and ventromedial hypothalamus, with the lateral hypothalamus being responsible for initiating the processes that make the animal start to eat when the animal is hungry, and the ventromedial hypothalamus being responsible for the processes that make the animal stop eating when the animal is satiated.
Furthermore, research has identified receptors in the lateral hypothalamus and the liver (i.e. sensors) that ‘measure’ levels of glucose.
However, further research suggested that the dual centre model may not reflect the full picture.
Research conducted by James Olds and Elliott Valenstein contradicted the idea that the lateral and ventromedial nuclei of the hypothalamus are dedicated start-eating and stop-eating centres. They placed animals in operant conditioning chambers and implanted an electrode in their lateral hypothalamus. A lever was placed on one side of the chamber. When the animal accidentally pressed the lever, it received electrical stimulation to its lateral hypothalamus; thus, pressing the lever would result in the animal self-stimulating its lateral hypothalamus (a method known as ‘self-stimulation reward’). They observed that the animals would readily self-stimulate the lateral hypothalamus, often to the point of exhaustion. In some of their experimental set-ups, the animals would run across a chamber where mild electric shocks were given in order to reach the lever that would allow them to self-stimulate the lateral hypothalamus. If the lateral hypothalamus is the hunger or start-eating centre, why would the animals repeatedly press for stimulation that produces a hunger-like state?
In follow-up experiments, Elliott Valenstein changed the design of the studies so that only the researcher was able to administer the stimulation. They observed that, similar to the optogenetics experiment mentioned above, the animal would eat upon stimulation when food was available. However, when water was available the animal would drink. If there was an intruder in the chamber (e.g. another male rat), the animal would fight, and if there was a receptive female in the chamber the animal would mount the female. This suggested that the effects of lateral hypothalamic stimulation depend on the situation, and that the lateral hypothalamus is therefore not a dedicated hunger centre but is more generally involved in motivated behaviours (including feeding).
The idea that the lateral and ventromedial hypothalamic nuclei are involved in hunger and satiety has not been rejected, but research has since moved towards the possibility that, rather than dedicated brain centres, there may be dedicated receptors, or dedicated hormones acting on those receptors. Could there be dedicated hunger or satiety hormones that play the role of the effector mechanism and lead to starting or stopping feeding?
Thus, the idea of dedicated locations in the brain for hunger and satiety was revisited following the discovery of various peptide hormones, released predominantly in the periphery by the gut, intestines or adipose tissue, that seemed to signal hunger or satiety.
Two peptide hormones, ghrelin and orexin, not only stimulate food intake but are also involved in wider motivational and body-clock regulatory processes. Ghrelin is secreted from the gut, while orexin is produced within the hypothalamus as well as in adipose tissue.
A second set of hormones to be discovered were cholecystokinin (CCK) and peptide YY (PYY). CCK is released from the intestines in response to the intake of fat; if hungry rats are administered CCK, feeding is inhibited. PYY is released in the gut (stomach and intestines), and similarly, injections in hungry animals inhibit eating. Furthermore, there is some evidence that PYY may be abnormally low in individuals who are obese. There is stronger evidence from genetic conditions in which the levels of these hormones are changed and the effects are dramatic.
Leptin was discovered by researchers in Jeffrey Friedman’s lab in 1994 (Zhang et al., 1994). The discovery was preceded by the accidental discovery of a genetic strain of mice (ob/ob mice) which grew to become obese, and which had decreased basal metabolic rates and low physical activity. It was later concluded that their genetic mutation resulted in reduced circulating leptin (see Figure 5 below). We now know that leptin is produced and released from adipose (fatty) tissue, and that it acts on several different receptors, some of which are located within the ventromedial hypothalamus, to signal stopping eating. Cases of congenital human leptin deficiency, however, are extremely rare, and while some clinical work in humans has shown that delivering leptin to obese individuals allows them to lose weight, the clinical picture is more complicated: there is also evidence of leptin resistance (leptin does not work well enough), as most obese individuals have plenty of leptin.
Non-regulatory motivated behaviours: motivation not for homeostasis
We know that motivation to eat or not does not only result from the need to maintain homeostasis of nutrients in the body.
We can stimulate eating through tastes and smells even in animals that are full, and we can stimulate eating and stopping eating through learned associations. Motivation in these cases does not depend on homeostatic mechanisms. Thus, conditioned motivational drives can cause changes in appetite.
Prior to three months, babies feed to maintain homeostasis: they take large breastfeeds first thing in the morning to relieve hunger when they wake up. However, at around 3-6 months, they switch to a large feed last thing at night. This large meal anticipates the relative difficulty of obtaining night-feeds. So this is not to relieve hunger, but in anticipation of possible hunger.
This anticipatory eating behaviour has also been observed in rats (Strubbe and Woods, 2004). Rats are nocturnal animals: they eat and drink when it’s dark, and sleep during day time.
Figure 5.24: Homeostatic and anticipatory eating and drinking in rats (adapted from Strubbe and Woods, 2004).
In Figure 5.24 you can see the distribution of eating and drinking behaviours in rats when the lights in their chambers were turned on (white bars at the bottom of each section of the graph) and when the lights in their chambers were off (black bars at the bottom of each section of the graph). In the top two graph panels, the blue section shows the eating and drinking as soon as the lights go off, and the red section shows the eating and drinking before the lights go on, that is, before sleep. Rats, like babies, will increase their intake of food and drink as soon as the lights go off (when they wake up and start feeding/drinking: regulatory/homeostatic feeding and drinking), and will increase their food and drink intake before the lights go on (when they are going to sleep: anticipatory feeding and drinking). This behaviour, however, is not related to whether the lights are on or off; it is related to their own internal body clock. So in the lower two graph panels you can see that eating and drinking increase in the same way even when the animals are always in the dark.
Motivation to eat/stop eating as a result of conditioned responses
Conditioning or learning can drive feeding even when the animal is full. This is known as cue-potentiated feeding, and it has been shown in rats, but also in humans.
Peter Holland and researchers in his lab taught hungry rats to associate a tone with the delivery of food. This association was achieved through simple pairing of the tone with the delivery of food, similar to the experiments carried out by Ivan Pavlov (Pavlovian conditioning). A second, control cue was not paired with food. They then allowed the rats to eat until they stopped, that is, until they were full. This ensured that the homeostatic drive to eat did not apply: the rats were full and therefore were not motivated to eat to relieve a hunger state. They then presented the tone that had been associated with the delivery of food and the one that had been associated with no food. They found that when the rats heard the tone associated with the delivery of food they ate, and when they heard the tone associated with no food they consumed less food. They found that this mechanism depended on the amygdala and, more specifically, on the connection between the amygdala and the lateral hypothalamus: severing the connection between the amygdala and the lateral hypothalamus abolished cue-potentiated feeding.
A similar experiment was undertaken with preschool children (Birch et al., 1989). Over several training days, the researchers presented the children with a rotating red light and music followed by the presentation of different snacks that the children preferred, so that they learned to associate certain light conditions and music with favourite snacks (peanut butter, hot dogs etc.). On the test day, the children were allowed to eat as much as they wanted. Then the light and music were changed to the lighting conditions and music used in the training sessions. The researchers found not only that the children began to eat again, even though they were full, but also that when the light and music were the ones previously paired with their favourite food, they began eating sooner than when the light and music presented in the cafeteria had not been paired with their favourite food.
Motivation to eat/stop-eating as a result of ‘liking’ vs. ‘wanting’
Earlier in the chapter we described a series of experiments by James Olds and Elliott Valenstein which used the method of ‘self-stimulation reward’, in which rats readily pressed a lever to self-administer electrical stimulation to their lateral hypothalamus. We also saw that in subsequent experiments, researcher-elicited stimulation resulted in the animals engaging in various motivated behaviours depending on the situation (eating, drinking etc.). These findings cast doubt on the prevalent idea that behaviour is motivated by “drive reduction”: if drive reduction theory were true, stimulation of a region that elicits hunger should have produced an aversive state of deprivation that the animal would be motivated to reduce, not one that it would seek out. Instead, the animals self-stimulated this same region of the brain. The researchers concluded that the rats were motivated to self-stimulate because they found the self-stimulation rewarding (Valenstein et al., 1970).
Subsequent work by the psychobiologists Robert C. Bolles, Dalbir Bindra and Frederick Toates in the 70s and 80s allowed psychologists to abandon “drive reduction” views of motivation and paved the way for the concept of ‘incentive motivation’. Incentive motivation theories propose that behaviour is motivated by the prospect of an external reward or incentive. Thus incentive motivation is mediated by learning (consciously or unconsciously) about the availability of rewards in our environment. If a particular behaviour is expected to lead to a rewarding outcome, then we will be motivated to repeat this behaviour in order to obtain the reward (e.g. if a rat learns that pressing a lever will provide a sugary treat, it will increase its rate of lever pressing, that is, its motivation to execute the behaviour in order to obtain the sugar reward). Similarly, if a stimulus in the environment is expected to lead to a reward (e.g. a Pavlovian conditioned association in which the sound of a bell predicts that food will be available), then motivation to seek out the stimulus that predicts the reward will increase. Interestingly, the Bindra–Toates model of motivation suggested that physiological states can moderate this effect, so that the value of the incentive/reward, and thus also the value of the stimulus that may predict the reward, can change depending on the physiological state of the animal. Thus the motivation to take a hot bath will be higher if we are feeling cold, and the bath will be perceived as more pleasant and rewarding, than on a hot day when we already feel warm.
The Bindra–Toates incentive motivation model additionally suggested that rewards and incentives are both liked and wanted, and that the learned Pavlovian stimuli that predict them also become both ‘liked’ and ‘wanted’ as a consequence of the learned association with the reward. Liking and wanting were proposed to be synonymous in the Bindra–Toates model.
However, Terry Robinson and Kent Berridge in their incentive salience model proposed that the incentive motivational processes of ‘liking’ and ‘wanting’ should be considered separately because these two components of reward are mediated by different brain mechanisms.
In a series of influential experiments, they dissociated the processes of ‘liking’ a reward and ‘wanting’ (or working for) a reward. ‘Liking’ was linked to the hedonic pleasure associated with the reward (e.g. they observed that, as in babies, rats will lick their lips upon receiving a sweet taste). ‘Wanting’, on the other hand, or what they termed ‘incentive salience’, is the motivational value of the reward or of a stimulus that may predict the reward. In some cases the hedonic impact (‘liking’) and the motivational value (‘wanting’) of a reward coincide to motivate behaviour (e.g. eating a cold ice cream on a hot day), but in other cases they do not (e.g. eating a cold ice cream on a very cold day: the ‘liking’ may be much the same, but the ice cream will be ‘wanted’ more on a hot day, when eating it also helps to cool down).
Reward has long been associated with dopamine release in the mesocorticolimbic dopamine system which projects from the ventral tegmental area to the nucleus accumbens and to parts of the prefrontal cortex (see Figure 5.25).
As a result, Kent Berridge and Terry Robinson hypothesised that if dopamine in the nucleus accumbens was depleted (through selective lesioning of dopamine neurons), then rats would not seek out a reward (no ‘wanting’). This is indeed what they found: hungry rats became aphagic and adipsic (they did not eat or drink). However, if they were forced to eat something sweet, they did show the licking responses associated with ‘liking’. In follow-up experiments with genetically modified mice which had high levels of dopamine in their nucleus accumbens, they found that the mice would work harder to obtain sucrose, but ‘liking’ responses did not differ from those of mice without the mutation. These experiments suggested that ‘liking’ and ‘wanting’ were indeed dissociable and that ‘wanting’ was mediated, at least in part, by dopamine release in the nucleus accumbens (see Berridge and Robinson, 2016 for a review).
The influential work by Ann Kelley and her colleagues in the 90s corroborated the idea that ‘liking’ and ‘wanting’ are likely mediated by separate systems, by showing that ‘liking’ may in part be mediated by opioid receptors in the nucleus accumbens: opioid receptor stimulation in the nucleus accumbens enhanced intake not of food in general, but specifically of palatable sweet or high-fat foods (Kelley et al., 1996; Zhang et al., 1998; Zhang and Kelley, 1997).
The current understanding is that motivation due to ‘liking’ is mediated by opioid, GABA and cannabinoid neurotransmitter systems in the nucleus accumbens and that motivation due to ‘wanting’ is mediated by dopamine in the nucleus accumbens.
Limbic structures involved in non-regulatory motivation
Emotions also influence motivated behaviour. We are inclined to avoid fearful situations or environments and approach situations and environments that can make us feel happy.
The amygdala, a region within the limbic system of the brain, has long been associated with both emotion and motivation, ever since it was observed that amygdala lesions in monkeys produced Klüver-Bucy syndrome. The lesions resulted in the animals showing no behavioural responses to ordinarily threatening stimuli, increased exploration of familiar stimuli (as if they were unfamiliar), attempts to eat inedible objects such as rocks, and increased sexual behaviour towards inappropriate partners such as human experimenters.
Research is still ongoing to delineate which precise regions of the amygdala are involved in motivated behaviours but the amygdala is thought to be involved in motivational processes that involve conditioned and learned associations between environmental cues and rewarding or aversive outcomes.
Key Takeaways
• The basic physiology of motivation involves homeostatic (negative feedback) mechanisms that maintain temperature, hydration and nutrient levels around set points.
• Hypothalamic areas are critical in homeostatic regulation.
• Non-homeostatic influences on motivation, through learning, emotion and related processes, depend on limbic systems.
References
Anand, B. K., & Brobeck, J. R. (1951). Hypothalamic Control of Food Intake in Rats and Cats. The Yale Journal of Biology and Medicine, 24(2), 123-140. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2599116/
Berridge, K. C., & Robinson, T. E. (2016). Liking, wanting, and the incentive-sensitization theory of addiction. American Psychologist, 71(8), 670–679. https://doi.org/10.1037/AMP0000059
Birch, L. L., McPhee, L., Sullivan, S., & Johnson, S. (1989). Conditioned meal initiation in young children. Appetite, 13(2), 105–113. https://doi.org/10.1016/0195-6663(89)90108-6
Brobeck, J. R., Tepperman, J., & Long, C. N. H. (1943). Experimental Hypothalamic Hyperphagia in the Albino Rat. The Yale Journal of Biology and Medicine, 15(6), 831. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2601393/
Campfield, L. A., & Smith, F. J. (2003). Blood glucose dynamics and control of meal initiation: A pattern detection and recognition theory. Physiological Reviews, 83(1), 25–58. https://doi.org/10.1152/physrev.00019.2002
Hetherington, A. W., & Ranson, S. W. (1940). Hypothalamic lesions and adiposity in the rat. The Anatomical Record, 78(2), 149–172. https://doi.org/10.1002/AR.1090780203
Hetherington, A. W., & Ranson, S. W. (1942). The spontaneous activity and food intake of rats with hypothalamic lesions. American Journal of Physiology, 136(4), 609–617. https://doi.org/10.1152/AJPLEGACY.1942.136.4.609
Hull, C. L. (1943). Principles of behavior, an introduction to behavior theory. Appleton-Century-Crofts.
Kelley, A. E., Bless, E. P., & Swanson, C. J. (1996). Investigation of the effects of opiate antagonists infused into the nucleus accumbens on feeding and sucrose drinking in rats. Journal of Pharmacology and Experimental Therapeutics, 278(3), 1499–1507.
Mayer, J. (1953). Glucostatic mechanism of regulation of food intake. The New England Journal of Medicine, 249(1), 13–16. https://doi.org/10.1056/NEJM195307022490104
Mayer, J. (1955). Regulation of energy intake and the body weight: the glucostatic theory and the lipostatic hypothesis. Annals of the New York Academy of Sciences, 63(1), 15–43. https://doi.org/10.1111/J.1749-6632.1955.TB36543.X
Stellar, E. (1954). The physiology of motivation. Psychological Review, 61(1), 5–22. https://doi.org/10.1037/H0060347
Strubbe, J. H., & Woods, S. C. (2004). The Timing of Meals. Psychological Review, 111(1), 128–141. https://doi.org/10.1037/0033-295X.111.1.128
Urstadt, K. R., & Berridge, K. C. (2020). Optogenetic mapping of feeding and self-stimulation within the lateral hypothalamus of the rat. PLOS ONE, 15(1), e0224301. https://doi.org/10.1371/JOURNAL.PONE.0224301
Valenstein, E. S., Cox, V. C., & Kakolewski, J. W. (1970). Reexamination of the role of the hypothalamus in motivation. Psychological Review, 77(1), 16–31. https://doi.org/10.1037/H0028581
Zhang, M., Gosnell, B. A., & Kelley, A. E. (1998). Intake of high-fat food is selectively enhanced by Mu opioid receptor stimulation within the nucleus accumbens. Journal of Pharmacology and Experimental Therapeutics, 285(2), 908–914.
Zhang, M., & Kelley, A. E. (1997). Opiate agonists microinjected into the nucleus accumbens enhance sucrose drinking in rats. Psychopharmacology, 132(4), 350–360. https://doi.org/10.1007/S002130050355
Zhang, Y., Proenca, R., Maffei, M., Barone, M., Leopold, L., & Friedman, J. M. (1994). Positional cloning of the mouse obese gene and its human homologue. Nature, 372(6505), 425–432. https://doi.org/10.1038/372425a0
About the Authors
Dr Nikolaou completed her PhD at Goldsmiths, University of London, before completing postdoctoral work in the School of Psychology at the University of Sussex, the Department of Developmental Psychology at the University of Amsterdam, and the Institute of Psychiatry at King's College London. She moved to the University of Sussex, where she is currently a lecturer in Psychology.
Her work has focused on understanding the acute effects of various drugs of abuse on executive and cognitive functioning, as well as on how drug-related cues are processed in the brain and elicit biased behavioural and cognitive responses.
Professor Hans Crombag is an internationally recognised expert in behavioural and neurosciences with a PhD in Biological Psychology. His research has primarily focused on mental health, biological/environmental interactions, and substance abuse.
Since 2007, he has been employed by the University of Sussex, developing and overseeing innovative and interdisciplinary scientific research programmes, spanning and integrating multiple health-related and public/social justice fields. He has influenced thinking around neurolaw, justice and public policy as Co-Director of the Sussex Crime Research Centre.
Previously, at the Department of Psychological & Brain Sciences at Johns Hopkins University, he worked on research in the areas of the neurogenetics of eating/eating disorders (and obesity) and substance abuse/addiction. He is a member of the Society for Neuroscience, the European Behavioural Pharmacology Society and the International Neuroethics Society.
He has a long-standing interest in mental health and wellbeing and public health policy. | textbooks/socialsci/Psychology/Biological_Psychology/Introduction_to_Biological_Psychology_(Hall_Ed.)/05%3A_Interacting_with_the_world/5.03%3A_Motivated_behaviour-_nutrition_and_feeding.txt |
We’ve now learnt a lot about the structures and cells of the nervous system and how these cells and systems coordinate perception and generation of behaviours. It’s fascinating to understand how these complex, interacting processes allow us to successfully navigate life, appropriately responding to the world around us. But it’s also important to consider what happens when our brains and nervous systems don’t work so well. The next section will consider how these processes can go wrong, what the implications of nervous system dysfunction are for our lived experience, and what we can do to prevent or minimise these changes.
06: Dysfunction of the nervous system
Learning Objectives
• Appreciate the features of reward and reinforcement
• Understand how the effects of drugs on reinforced behaviours points to a critical role of dopamine
• Understand how abused drugs affect mesolimbic dopamine transmission
• Appreciate the link between drug self-administration and drug dependence (addiction).
What is addiction?
Drug addiction, or to give it its more scientific term, dependence, is the taking of a chemical substance (the drug) for non-nutritional and non-medical reasons, where the drug-taking behaviour is compulsive. An addict feels they have no control over taking the drug, but instead feels driven to take it. Their lives often become centred around acquiring and consuming the drug, to the detriment of behaviours necessary for survival – for example, eating, drinking water – and they often engage in risky or illegal behaviour in order to feed their drug habit. Addicts often develop a tolerance to the drug, such that they need more of the drug to produce the ‘high’. Drug dependence must be distinguished from drug use and drug abuse. Drug use is where the substance is taken in small quantities, relatively infrequently, and importantly with no damage to relationships or daily function. For example, people often enjoy a glass of wine with a meal, or a drink with friends. If drug use escalates to frequent and/or excessive taking of the substance, causing disruption to daily functioning or relationships, but without the compulsivity, this would be termed drug abuse. Drug dependence, as stated above, is similar to drug abuse, except that the drug-taking is compulsive, with the addict feeling they have no control over whether to take the drug.
There are four major stages of drug addiction: initiation, maintenance, abstinence and relapse, each of which is likely to be driven by different mechanisms. Initiation is the first stage, where a person takes the drug for the first time. The main factors which influence initiation are: the ‘pleasant’ feeling (hedonic impact) from taking the drug; overcoming stress; peer pressure and the desire to conform to a group; or simply the desire to experiment. For many people, drug taking never progresses beyond this stage, and whether or not they take a drug is entirely a conscious decision.
However, where a person becomes dependent, or addicted, this moves on to the second stage: maintenance. Here the person no longer feels in control of the decision as to whether to take a drug, but rather feels a compulsion to take it. The maintenance stage can be long-lasting, and is very often accompanied by an increasing motivational drive to take the drug, which is driven by a process called sensitisation (see ‘Tolerance and sensitisation’ box below). However, there is rarely an accompanying increase in hedonic impact from taking the drug: indeed very often hedonic impact decreases, and the drugs may even become aversive.
Tolerance and sensitisation
These are forms of neuroadaptation which are important in many aspects of neuronal function, and are particularly important in understanding processes of drug addiction.
Tolerance refers to the process where a drug becomes less effective, that is, it produces a weaker response after repeated administration. Sensitisation, on the other hand, is where the drug becomes more effective over repeated administration.
Both processes are mediated through changes in cellular function, including:
• changes in neurotransmitter synthesis, storage and release,
• changes in receptor density,
• changes in reuptake and metabolism of transmitters,
• changes in second messenger signalling,
many of which probably involve upregulation or downregulation of specific gene expression.
However, at present we do not fully understand how these mechanisms are controlled. Interestingly, the processes involved in sensitisation are very long-lasting, and some have even suggested that they may be irreversible, accounting for the enduring changes that underlie maintenance of addiction. A link with mechanisms of learning has also been suggested by the observation that antagonists at NMDA-type glutamate receptors, given into VTA, prevent sensitisation. NMDA-receptors are known to be critically involved in neuroplasticity mechanisms of learning, and this evidence suggests that similar processes involving NMDA receptors may underlie drug-induced sensitisation.
Once a person has become dependent, it is likely that the addiction remains with them for the rest of their life: there is little evidence for true recovery. Therefore when an addict refrains from taking a drug, they are not normally considered to be ‘cured’ or ‘recovered’, but rather they are considered to be abstinent. This reflects the view that the addiction is still present, but is not expressed because the person no longer takes the drug. However, the motivational drive to take the drug – that is, the craving – may still be strong. This is underlined by evidence showing that physiological and neurochemical changes occurring in the brain with the development of dependence are largely irreversible, as we will see later. Therefore, abstinent addicts are very prone to restarting their drug taking, termed relapse. A single intake of the drug can reinstate the maintenance phase in an addict who may have been abstinent for many years, hence the requirement, in treatment programs for dependence, that the addict never take the drug. Relapse is driven by cravings in the individual, which may be brought about by stress or by exposure to people, items or situations associated with previous drug taking, emphasising the link between classical conditioning and drug taking.
When an addicted person stops taking a drug, they often experience withdrawal symptoms. These are behavioural changes, often opposite to the effects elicited by the drug, and can be very aversive. In the early stages of abstinence these withdrawal symptoms are particularly strong and can be extremely unpleasant. Thus, avoiding withdrawal symptoms provides a strong motivation to take the drug and can lead to relapse. However, as the period of abstinence increases the withdrawal symptoms subside, so reducing this as a motivation for reinstatement.
Brain motivation circuits and addiction
Many studies have been undertaken in experimental animals, particularly rats and mice, but also primates, to investigate the neural circuitry underlying addiction. These mostly focus on pathways controlling reinforcement and motivation often termed the reward pathway.
Reward or reinforcement?
Reward is a term widely used in the discussion of dopamine signalling in the mesolimbic pathway. Indeed, many publications refer to the mesolimbic pathway as the ‘reward pathway’. However, the term ‘reward’ has several problems in the scientific context. First, reward is a pleasurable experience, and is subjective: what is pleasurable for one person may not be for another. Second, it raises problems of assessing pleasure in experimental animals: how do we know that an animal is enjoying an experience? Third, the term does not necessarily imply that it will change behaviour.
In scientific terms, for empirical research, we need to avoid subjective measures: far better to have objective measures. The term reinforcement refers to the ability of a stimulus, situation, or outcome, to elicit a behaviour. Reinforcement strengthens an animal’s future behaviour on exposure to the stimulus. It is an objective measure: we can simply measure changes in the behaviour, for example the number of operant lever presses. Importantly, also, reinforcement does not imply pleasure, so in measuring the behaviour, we don’t have to worry about whether the animal is enjoying the experience.
The discovery by Olds and Milner (1954) that rats would work by pressing a lever to receive mild electrical stimulation to a specific area of the brain fuelled major research programs looking at this and related pathways in the control of motivation. They saw that the rats were motivated to electrically stimulate the mesolimbic pathway, which projects from cell bodies in the ventral tegmental area (VTA), along the axons of the pathway, to terminals located primarily in the nucleus accumbens.
Subsequent experiments have characterised the mechanisms promoting the lever pressing response in greater detail, but, importantly, they have emphasised the critical role of dopamine in driving the behavioural effect. Thus, amphetamine or cocaine, which enhance dopamine signalling, increase the lever press rate, whereas giving a dopamine antagonist reduces the lever press rate, indicating that the reinforcement signal driving the lever pressing behaviour is mediated through dopamine. It is important to note that the anatomical location of brain regions which support self-stimulation is very specific: if the electrodes are located outside these localised regions, animals will not self-stimulate. An interesting point here, in relation to the phenomenon of addiction, is that in self-stimulation experiments such as these, animals will repeatedly press the lever to receive the stimulation rather than eating or drinking, indicating that the electrical stimulation is a very strong motivational drive which suppresses the drive to carry out behaviours critical for survival: addicts often neglect normal nutrition and self-care in order to maintain their drug-taking.
In a similar procedure, rats and mice will also press a lever in order to receive injections of certain drugs. In its simplest form the drugs are administered intravenously (i.v.), through an indwelling cannula in a blood vessel: lever pressing therefore delivers an i.v. injection of the drug. In a modification of the design, drugs can be administered via a microinjection into local brain areas. There are a number of drugs which animals will administer intravenously, including amphetamine, cocaine, nicotine, morphine, heroin and ethanol, and they will also administer amphetamine or cocaine into the nucleus accumbens and morphine into the VTA. It should be emphasised that animals will only self-administer certain drugs: the vast majority of drugs do not support self-administration. Similarly, the brain regions where animals will self-administer the drugs, the VTA and nucleus accumbens, are very specific, indicating the importance of the mesolimbic pathway.
The importance of the mesolimbic dopamine system can be confirmed with lesion experiments, using 6-hydroxy-dopamine (6-OHDA), a drug which specifically kills catecholamine (dopamine and noradrenaline) containing cells. Following lesions to the mesolimbic pathway, but not to other pathways, animals will no longer self-administer drugs. Furthermore, enhancing dopamine function by giving local injections of cocaine or amphetamine into the nucleus accumbens also increases the lever pressing, supporting the role of dopamine in the lever-pressing response. Paradoxically, many experiments have shown that dopamine antagonists also increase the lever press rate. However, it was subsequently found that the dose was critical: at low doses the lever-press rate is increased, but at higher doses it is entirely abolished. This dose-dependence can be explained by considering that at low antagonist doses not all receptors are occupied, and therefore increasing the amount of drug self-administered can overcome the effect of the antagonist, whereas at high antagonist doses the receptors are completely blocked, so no matter how much more drug is administered the effect of the antagonist cannot be overcome. It is therefore concluded that the motivational effects of these drugs are mediated via dopamine neurones in the mesolimbic pathway.
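The compensation argument above can be made quantitative with the standard competitive-occupancy (Gaddum/Schild) relationship: the agonist concentration needed to keep receptor occupancy at a given level grows in proportion to (1 + antagonist concentration / Kb). The sketch below is a hedged illustration of that relationship only; the Kd and Kb values and doses are arbitrary, and it is not a model of any specific experiment described here.

```python
def agonist_needed(target_occupancy=0.5, kd_agonist=1.0, antagonist=0.0, kb=1.0):
    """Agonist concentration giving the target fractional receptor occupancy in the
    presence of a competitive antagonist (occupancy = A / (A + Kd * (1 + B/Kb)))."""
    apparent_kd = kd_agonist * (1.0 + antagonist / kb)   # antagonist shifts the apparent Kd
    return target_occupancy / (1.0 - target_occupancy) * apparent_kd

for dose in (0.0, 1.0, 10.0, 100.0):
    needed = agonist_needed(antagonist=dose)
    print(f"antagonist dose {dose:6.1f}: agonist needed for 50% occupancy = {needed:7.1f}")
```

At low antagonist doses the required increase in dopamine signalling is modest and can plausibly be achieved by self-administering a little more drug; at very high doses the required increase becomes unattainable, which is consistent with the biphasic effect on lever pressing described above.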
You will recall that stress is a major contributory factor to the development of dependence and to relapse in abstinent addicts, and the influence of stress can be seen in animals trained to self-administer. If rats that had previously been trained to lever-press for self-administration of cocaine are left drug-free for several weeks, they no longer press the lever when cocaine is once more available, modelling abstinence. If they then receive a foot shock, they do start pressing the lever again, reinstating the compulsive self-administration and paralleling the effect of stress on relapse in people. In the context of the role of stress in reinstatement, it is notable that this reinstatement is prevented by corticotropin-releasing hormone antagonists, emphasising the role of the hypothalamic-pituitary-adrenal axis-mediated stress response in the process.
Measuring rodents’ operant behaviour, such as lever pressing, is a good way of measuring their level of motivation, and as we saw from the experiments above, they show strong motivation to receive stimuli which activate the dopaminergic mesolimbic pathway. However, these are clearly very artificial behaviours: they do not mimic activities which the animals undertake naturally. But we need only look at how much effort animals in the wild will put into getting food, be it a predator chasing down prey, or annual migrations to new feeding grounds. Animals have an innate motivation to pursue behaviours which are beneficial to survival, for example eating, drinking and reproducing. As such, motivational systems in the brain are highly evolved to reinforce behaviours which enhance animals’ ability to perform these actions. Stimuli associated with these actions become strong predictors of outcome, and strong motivational cues to perform behaviours leading to consumption.
In the laboratory, too, motivation to pursue beneficial behaviours can be demonstrated: initial observations by B.F. Skinner in the 1940s, opened the way for many subsequent operant experiments where rats or mice pressed a lever in order to receive food or water or even a sexually receptive mate. Moreover, as with self-stimulation and self-administration, lesion and pharmacology experiments have shown the importance of dopamine in the mesolimbic pathway for controlling this behaviour. Thus, lever pressing to receive a natural reward (food, water) is abolished in animals with 6-OHDA lesions of the mesolimbic pathway, or by the application of dopamine receptor antagonists, and is enhanced by the administration of amphetamine or cocaine. So the mesolimbic pathway is clearly involved in motivation to undertake behaviours vital for survival, and self-stimulation and self-administration tap into this mechanism by promoting activity in the pathway either electrophysiologically or pharmacologically.
Direct neurochemical measurement in localised brain areas, primarily using brain microdialysis or fast-scan cyclic voltammetry (FSCV) (see ‘Measuring neurotransmitter release in the brain’ box, below), has shown that dopamine release in the nucleus accumbens, but not in other dopaminergic terminal regions in the brain, is increased during appetitive behaviours such as eating and drinking, and during electrical stimulation of the VTA similar to that used in self-stimulation, largely confirming the importance of dopamine in motivational processes. Importantly, drugs which support self-administration also increase dopamine release in the nucleus accumbens preferentially over other regions. The mechanisms by which the different drugs increase mesolimbic dopamine function vary across the different drug types. Some, such as nicotine, morphine, heroin and alcohol, activate the pathway by either direct or indirect actions on the dendrites and cell bodies in the VTA, while others, such as amphetamine and cocaine, affect the reuptake of released dopamine in the terminal regions, including the nucleus accumbens.
Considering the site of action of the different drugs accounts for why animals will self-administer morphine into the VTA and amphetamine and cocaine into the nucleus accumbens, as these are the regions where the respective drugs activate mesolimbic function. Therefore, although addictive drugs exhibit very different primary pharmacology, with only amphetamine and cocaine acting directly on the dopamine system, and also have very different primary behavioural effects (Table 1), they all share the ability to increase dopamine function selectively in the mesolimbic pathway and it is this action that is believed to underlie their motivational effects. The drugs ‘hijack’ the neural pathway in the brain which controls the animals’ motivation to pursue behaviours essential for survival, and instead motivate the individual to perform behaviours related to taking the drugs.
Measuring neurotransmitter release in the brain
Measurement of neurotransmitter release in localised brain areas during behaviour and/or in response to drugs is really important in understanding underlying neurotransmitter actions. Over the last few decades, two main methods have been employed, both of which can be used in awake, freely moving experimental animals.
Brain microdialysis involves implanting a small length of dialysis membrane into the brain, and perfusing it continuously with artificial cerebrospinal fluid (aCSF). Dissolved substances in the brain extracellular fluid pass through the membrane, by dialysis, into the aCSF, and can be measured, typically by high performance liquid chromatography (HPLC). The second method, fast-scan cyclic voltammetry (FSCV), measures the oxidation of chemicals when a voltage is applied to a carbon fibre microelectrode. Although it is possible to measure some other neuroactive compounds, the most widespread use of FSCV is to measure dopamine. Microdialysis has the advantage that many different compounds can be measured in a single sample, whereas FSCV is mainly restricted to a single compound, normally dopamine. However, microdialysis probes are comparatively large (typically 1 to 2 mm long, 0.5 mm diameter) and so have a relatively poor spatial resolution. FSCV, on the other hand, uses carbon fibre microelectrodes which are much smaller (typically 100 µm long, 10 µm diameter), giving a much higher spatial resolution and allowing targeting of smaller brain sub-regions. Microdialysis also has relatively poor temporal resolution, as it requires collection of enough sample to be able to analyse: most studies use sample collection times of 1 to 10 minutes, although some have managed less than a minute. In contrast, FSCV typically makes 10 measurements per second. Therefore FSCV is able to pick up fast transient changes in response to specific stimuli, whereas microdialysis can only pick up slower, more sustained changes.
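To make the temporal-resolution contrast concrete, the sketch below generates a synthetic, short-lived dopamine 'transient' and compares what an FSCV-like measurement (10 samples per second) would see with the single average that a 5-minute microdialysis collection would yield. The signal shape and all numbers are invented solely for illustration, not taken from real recordings.

```python
import math

def transient(t_seconds, onset=150.0, peak=1.0, tau=2.0):
    """A brief synthetic dopamine transient (arbitrary units) starting at `onset` seconds."""
    if t_seconds < onset:
        return 0.0
    return peak * math.exp(-(t_seconds - onset) / tau)

# FSCV-like sampling: 10 measurements per second over a 5-minute recording.
fscv_samples = [transient(i / 10.0) for i in range(0, 300 * 10)]

# Microdialysis-like measurement: one value, the mean over the whole 5-minute collection.
dialysis_mean = sum(transient(t) for t in range(0, 300)) / 300.0

print(f"Peak seen by FSCV:            {max(fscv_samples):.3f}")
print(f"Mean over 5-min dialysis bin: {dialysis_mean:.3f}")
```

The transient stands out clearly in the high-rate trace but is almost invisible once averaged into a single 5-minute sample, which is the practical point of the comparison above.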
Recent developments with genetic markers are opening the way to novel approaches to measuring many aspects of neurotransmitter function with high chemical specificity, spatial resolution and temporal resolution.
Conditioned place preference tests an animal’s preference for an environment which is associated with a reinforcer, and can be assessed using a two compartment testing box. Animals are trained over repeated sessions that one compartment contains a reinforcer (typically food, sucrose or water), whereas the other compartment does not. After several training trials, the place preference is tested in the absence of any reinforcer (that is, both compartments are empty). The animal is placed back in the testing box, and the time spent in each compartment is recorded. Animals spend more time in the previously reinforced compartment than in the control compartment, even though at test there is no reinforcer present.
This shows that the animal has learned which compartment of the test box contained the reinforcer, and it is motivated to visit that compartment in preference to the control compartment, even when the reinforcer is no longer present. This effect is:
1. abolished by 6-OHDA lesions of the mesolimbic pathway;
2. enhanced by drugs which increase dopamine, such as amphetamine and cocaine; and
3. attenuated by dopamine receptor antagonists.
Therefore, the mesolimbic dopamine pathway is critical for expression of conditioned place preference.
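In practice, the outcome of such a test is usually reduced to a simple preference measure. The snippet below is a minimal, hypothetical example of how a preference score could be computed from time spent in each compartment; the variable names and example times are invented and do not come from any published dataset.

```python
def cpp_score(time_paired_s, time_unpaired_s):
    """Fraction of compartment time spent in the previously reinforced (paired) side.
    0.5 indicates no preference; values above 0.5 indicate conditioned place preference."""
    total = time_paired_s + time_unpaired_s
    return time_paired_s / total if total > 0 else 0.5

# Hypothetical 15-minute test, excluding time spent in a neutral central zone:
score = cpp_score(time_paired_s=520.0, time_unpaired_s=280.0)
print(f"Place preference score: {score:.2f}")
```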
In a variant of the procedure, instead of a natural reinforcer, drugs can be used. In this case, during training, the animal is given an injection of a drug and placed in one compartment, or a saline injection and placed in the other compartment. At test, with no drug present, they show a preference for the compartment in which they have previously received the drug. The drugs which will induce place preference, including amphetamine, cocaine, morphine, heroin and alcohol, are the same ones which animals will self-administer, and all are potentially addictive drugs in people. One important aspect that the place preference experiments demonstrate is that animals learn about the environment in which they receive reinforcing stimuli, be it natural reinforcers or reinforcing drugs, and, given the choice, they return to that environment, even when the reinforcer is not present. This indicates that associative learning, or conditioning, is occurring between the reinforcer and the environment. Similar learning can also be demonstrated to specific cues: in the lever press experiments described above, if a neutral stimulus (e.g. a light) is presented immediately before the lever is made available, the animals will approach the lever when the light stimulus is presented alone, and they will try to press the lever, even when it is still retracted and unavailable.
Microdialysis and FSCV experiments have shown that, once this learning has taken place, dopamine release in the nucleus accumbens is increased during presentation of the light stimulus, even long after the withdrawal of the reinforcer. Therefore, animals learn to associate specific cues and environments with the reinforcer, such that these cues can evoke both dopamine release in the nucleus accumbens and reinforced behaviour, even in the absence of the reinforcer. These behaviours in experimental animals strongly resemble behaviours seen in drug addicts, where cues associated with drug taking (e.g. an empty vodka bottle to an alcoholic or a needle to a heroin addict), or the environment they associate with drug taking, can be very strong motivational drivers, or cravings, to take the drugs. Interestingly, these conditioned effects can long outlast the period of association in both experimental animals and addicts: cues and/or environments can trigger cravings in abstinent addicts even years after they last took the drug.
In the early stages, drug taking is not addictive: that is, people take the drugs through choice. A number of psychological factors originating in the prefrontal cortex, including impulsivity and poor inhibitory self-control, have been shown to be vulnerability factors for drug taking. Thus, dysregulation of prefrontal control over mesolimbic circuits may underlie the impaired inhibitory self-control exhibited by many addicts, while high impulsivity, mediated through abnormalities of the orbitofrontal area of the prefrontal cortex, may explain people’s choice of the short-term gratification of drug taking over the long-term benefits of abstinence.
However, at some stage there is a change from use or abuse to the compulsive drug use characteristic of dependence. As mentioned earlier, this change includes neuro-adaptive processes involving sensitisation of dopamine systems controlling motivation and seems to be largely irreversible, accounting for enduring cravings even after long periods of abstinence. Moreover, evidence has shown that there is a learned component to this process: experimental animals are more likely to show sensitisation (e.g. sensitisation of locomotor activity, mediated through mesolimbic dopamine activity) if tested in the same environment where the initial drug administration took place. In addition, previous sensitisation enhances the acquisition of self-administration and place preference, effects mediated through mesolimbic dopamine. With repeated drug administration, drugs may acquire greater and greater incentive value and become increasingly able to control behaviour. This may parallel the observation in drug addicts where places, acts or objects associated with drug-taking become especially powerful incentives.
Addictive drugs produce long-lasting changes in brain organisation. The brain systems that are sensitised include the dopaminergic mesolimbic pathway, responsible for the incentive salience (‘wanting’) of the drug or drug-associated cues. Systems mediating the pleasurable or euphoric effects of the drug (‘liking’) are not sensitised. Animal studies have looked at the mechanisms of sensitisation to repeated drug taking. Psychostimulants, including amphetamine and cocaine, cause increased locomotor activity in rodents, an effect which is mediated through the mesolimbic pathway: lesions of the mesolimbic pathway abolish it. On repeated systemic administration (i.e. intravenous, intraperitoneal or subcutaneous: where the drug accesses the whole brain), the hyperlocomotion induced by the drug increases, showing a sensitised response. The precise mechanism of the sensitisation is not certain, but it probably involves long-term and enduring neuro-adaptive changes in the cell body region in the VTA (see box below).
Localisation of neuroadaptation underlying sensitisation
Rats were given repeated local injections of amphetamine into either the cell body region of the mesolimbic pathway in the VTA, or the terminal region in the nucleus accumbens. A third control group received no injections.
Animals injected into the nucleus accumbens showed a hyperlocomotor response, which did not increase over repeated injections: that is, there was no sensitisation. Animals given injections into the VTA showed no behavioural response. This is not surprising, as the pharmacological effect of amphetamine is at the terminals, where it increases release and blocks reuptake: it is therefore likely to be most effective in the terminal region. After these repeated local injections, animals were left for a week drug-free, then given a challenge dose of amphetamine systemically. All animals showed hyperlocomotion. Animals which had received repeated drug injections into the nucleus accumbens showed a similar level of hyperlocomotion to the non-injected controls: that is, there was no sensitisation. However, animals which had received drug into the VTA showed an augmented response compared to the other groups, indicating that sensitisation had taken place.
Therefore, repeated injection into nucleus accumbens evoked a behavioural response, but not sensitisation, whereas repeated injection into VTA produced no behavioural response, but did cause sensitisation, providing evidence for the critical role of VTA in sensitisation.
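The logic of this experiment can be summarised as a tiny data sketch: three pretreatment groups, one systemic challenge dose, and a comparison of each group's challenge response against the non-injected controls. The numbers below are invented purely to mirror the qualitative outcome described above (sensitisation only in the VTA-pretreated group); they are not data from the study.

```python
# Invented numbers mirroring the qualitative outcome of the localisation experiment:
# locomotor response (arbitrary units) to a systemic amphetamine challenge after
# repeated local pretreatment at different sites.

challenge_response = {
    "no pretreatment (control)":     100,
    "repeated intra-accumbens drug": 105,  # behavioural response during pretreatment, but no sensitisation
    "repeated intra-VTA drug":       170,  # no behavioural response during pretreatment, but a sensitised challenge response
}

control = challenge_response["no pretreatment (control)"]
for group, response in challenge_response.items():
    change = (response - control) / control
    print(f"{group:32s} challenge response = {response:3d} ({change:+.0%} vs control)")
```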
In summary, there is strong evidence from studies in experimental animals that:
1. dopamine signalling in the mesolimbic pathway drives motivational systems to promote behaviours critical for survival;
2. addictive drugs impact on this system to promote behaviours associated with drug-seeking and drug taking;
3. neuro-adaptation in this pathway accounts for the long-term, enduring nature of dependence and
4. activity in this pathway driven by conditioned associations can cause cravings to take the drug even after long periods of abstinence.
Therefore activity in this pathway can account for many of the phenomena associated with addiction in people.
Models of addiction
Several models have been proposed to account for the features of addiction, prominent amongst which is the incentive sensitisation model, proposed by Robinson and Berridge in 1993. This develops ideas taken from two other prominent models, the opponent process model and the aberrant learning model, and it is worth considering these two models briefly first.
Aberrant learning model
According to the aberrant learning model, abnormally strong learning is associated with drug taking, through two distinct components of learning. The first is explicit learning, in which the association between action (drug taking) and outcome (drug effect) is abnormally strengthened, leading to drug taking because of an expectation of the hedonic impact, even when the drug no longer produces that effect. The second is implicit learning, in which the action-outcome relationships (as above) change to a more automatic stimulus-response relationship (habit), meaning that the stimulus evokes the response irrespective of any conscious expectations about the outcome.
While this theory accounts for the motivational drive from stimuli associated with drug-taking and the ability of these stimuli to promote cravings, it does not account for the fact that most addicts do not report expectation of a positive hedonic effect. Therefore it seems unlikely that this could be the motivation for their drug seeking and taking. Similarly, it does not explain the compulsive nature of addiction – it implies that drug seeking and taking are purely automatic behaviours, whereas in fact they appear more as a motivational compulsion. Finally, it does not explain the behavioural flexibility shown by addicts. The theory would predict that if the normal route to drug taking were prevented, the addict would not be able to adapt behaviour in order to seek the drug from a different source or via a different process, whereas in fact addicts do show substantial behavioural flexibility in these circumstances.
Opponent process model
The opponent process model is well founded in neuroscience, as a mechanism for homeostatic control of many functions. It posits two processes, the A-process and the B-process, which oppose each other: the A-process is activated by an external stimulus, leading to a change in functioning, and the B-process is the body’s reaction to the change brought about by the A-process, returning it to the set point level. In the context of drug taking, the A-process represents the direct effect of the drug, which triggers the B-process, the opponent process, which aims to restore the homeostatic state. The A-process leads to the hedonic state (‘high’) associated with taking a drug, while the B-process leads to the aversion from not taking the drug, for example the withdrawal symptoms. Over repeated drug taking, tolerance builds up to the A-process, accounting for the reduced hedonic impact of the drugs, while the B-process is strengthened, leading to withdrawal symptoms, which can only be eliminated by taking more of the drug. Thus the driving force for drug taking is to prevent the aversive withdrawal symptoms which occur when the A-process diminishes but the B-process does not. Thus, people who initially take drugs to gain a positive hedonic state are subsequently motivated to continue drug taking to avoid a negative hedonic state.
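The qualitative dynamics of this account can be illustrated with a toy simulation. In the sketch below, each dose evokes an A-process that weakens slightly with repetition (tolerance), while the opposing B-process grows with repeated use, so the net 'high' shrinks and the between-dose state becomes increasingly negative (withdrawal-like). The parameter values are arbitrary assumptions chosen only to reproduce the pattern described above; they are not fitted to any data.

```python
# Toy illustration of the opponent process account of repeated drug taking.
# Parameters are arbitrary; only the qualitative pattern matters: the net effect
# (A minus B) gives a smaller 'high' and a deeper withdrawal-like state over doses.

a_strength = 10.0   # size of the A-process evoked by each dose
b_strength = 2.0    # initial size of the opposing B-process
b_growth = 1.5      # B-process strengthens by this amount with every dose

for dose in range(1, 6):
    high = a_strength - b_strength       # net state while the drug is acting
    between_doses = -b_strength          # net state once the A-process has worn off
    print(f"dose {dose}: net 'high' = {high:+.1f}, between-dose state = {between_doses:+.1f}")
    b_strength += b_growth               # opponent process strengthens with repetition
    a_strength *= 0.9                    # tolerance: the A-process weakens slightly
```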
The model accounts for the drive to take the drug to achieve a homeostatic state, but does not account for evidence showing that avoiding the negative hedonic state of withdrawal is not a major motivator for drug taking. Indeed, many addictive drugs do not evoke strong withdrawal symptoms. Also, withdrawal symptoms are maximal in the days following abstinence, yet cravings for the drug, and reinstatement, even after a small dose, can last for years – in alcoholics who have been abstinent for years, a single alcoholic drink can reinstate the addictive behaviour.
Incentive sensitisation model
The incentive sensitisation model derives certain aspects from the above models, but puts them into a motivational framework. It delineates two distinct components of reinforcement – hedonic impact (‘liking’) and incentive salience (‘wanting’) – which are dissociable behaviourally and physiologically. Robinson & Berridge use the terms ‘liking’ and ‘wanting’ (in quotation marks) to represent these very clearly defined behavioural parameters. Thus, when given in quotation marks, they represent much more specific scientific terms than the everyday usage of the two words. Incentive learning, both explicit and implicit, which forms the core of the aberrant learning model, provides the route through which stimuli associated with the behaviour acquire incentive salience – they become salient, attractive and wanted – and guide behaviour.
The incentive sensitisation model focusses on how drug cues trigger excessive motivation for drugs, which drives drug seeking and drug taking behaviour. The subjective pleasure derived from taking the drug, the hedonic impact or ‘liking’, is due to the direct psychopharmacological action of the drug in producing a ‘high’, reducing social anxiety and/or increasing socialisation. Incentive salience, or ‘wanting’, on the other hand, represents the motivational importance of stimuli, making otherwise unimportant stimuli able to attract attention, making them attractive and ‘wanted’. The critical feature of the model is the dissociation between these two processes, both behaviourally and physiologically, and that the incentive salience, or ‘wanting’, is sensitised over repeated drug taking, so increasing the driving force to take drugs, whereas the hedonic impact, or ‘liking’, is unaffected, or may even reduce, through tolerance. Under normal conditions of natural reward the two processes work together to motivate behaviours which are beneficial to survival; it is only in unnatural situations, such as the taking of addictive drugs, that there is a dissociation between the actions of the two systems, such that drugs can become exceptionally strong motivators of drug-seeking and drug-taking behaviour. This dissociation of the two components accounts for the observation that addicts continue to seek and take drugs, even when they derive little or no pleasure from it, and when they are fully aware of the physical, emotional and social damage it is causing. Importantly, the dissociation between ‘wanting’ and ‘liking’ has also been demonstrated experimentally, indicating that it is not simply a theoretical concept, but does actually occur.
The dissociation between ‘wanting’ and ‘liking’
In a series of experiments designed to test whether a dissociation between ‘wanting’ and ‘liking’ could be demonstrated experimentally, Berridge and co-workers devised a scoring scheme for measuring facial expressions related to palatability across several species, through which they assessed ‘liking’ (Figure 6.3). Amphetamine was shown to have no effect on ‘liking’, and may have increased aversion.
In order to assess ‘wanting’, a lever press experiment was used in the same animals. Animals were first trained to press a lever for sucrose reward, then trained that one auditory stimulus signalled that a lever press would deliver sucrose (CS+) but that a different auditory stimulus signalled that a lever press would not deliver sucrose (CS-). Lever press responses to CS+ and CS- were measured as an index of ‘wanting’.
Amphetamine microinjection selectively enhanced lever pressing for sucrose triggered by the CS+ auditory stimulus, but not by the CS- auditory stimulus, indicating that amphetamine selectively enhanced the motivational element, ‘wanting’. Therefore amphetamine had no effect on, or perhaps decreased, ‘liking’, but enhanced ‘wanting’, providing experimental evidence for the dissociation of the two which forms the basis of the incentive sensitisation model.
Figure 6.3 depicts representative hedonic tongue protrusions (reaction to sweet tastes) and aversive gapes (reaction to bitter tastes) from adult rat, young primate, and infant human (after Berridge).
Summary
(a) Increasing concentrations of amphetamine cause a small decrease in hedonic reaction, and an increase in aversive reaction, showing that amphetamine does not increase ‘liking’: indeed, it decreases it.
(b) Amphetamine causes an increase in responding to the auditory stimulus which has been paired with sucrose (CS+), but not to the auditory stimulus that was not paired with sucrose (CS-), showing that amphetamine increases motivational drive, that is ‘wanting’.
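As a rough illustration of how this dissociation can be expressed numerically, the sketch below computes a 'liking' index from counts of hedonic versus aversive taste reactions and a 'wanting' index from lever presses during CS+ versus CS- presentations, for hypothetical vehicle and amphetamine conditions. The counts and the index definitions are invented for illustration; they are not Berridge and colleagues' actual measures or data.

```python
# Hypothetical illustration of dissociating 'liking' from 'wanting'.
# Counts are invented; the indices are simple assumed summaries.

def liking_index(hedonic_reactions: int, aversive_reactions: int) -> int:
    """Net taste reactivity: hedonic minus aversive reactions per test."""
    return hedonic_reactions - aversive_reactions

def wanting_index(cs_plus_presses: int, cs_minus_presses: int) -> int:
    """Cue-triggered responding: lever presses during CS+ minus during CS-."""
    return cs_plus_presses - cs_minus_presses

conditions = {
    # condition: (hedonic, aversive, CS+ presses, CS- presses), all hypothetical
    "vehicle":     (12, 3, 20, 8),
    "amphetamine": (11, 5, 45, 9),
}

for name, (hed, aver, csp, csm) in conditions.items():
    print(f"{name:11s} 'liking' = {liking_index(hed, aver):+d}   "
          f"'wanting' = {wanting_index(csp, csm):+d}")

# Expected pattern: 'liking' is unchanged or slightly reduced by amphetamine,
# while cue-triggered 'wanting' is selectively increased.
```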
The mesolimbic dopamine pathway is primarily involved in control of incentive salience, while other parts of the basal ganglia circuitry, including those using opioids and acetylcholine, control the hedonic impact. This accounts for the role of dopamine in the incentive salience aspect, but not the hedonic impact aspect: drugs which enhance dopamine function increase the incentive salience (‘wanting’), increasing the animal’s motivation to pursue the goal, without increasing the hedonic impact (‘liking’). While ‘liking’ of the drug may decrease with repeated exposure (probably due to tolerance), the motivation to take the drug increases through sensitisation of the incentive salience (incentive sensitisation). Thus the model explains the dissociation between the pleasure received from taking the drug and the motivational drive to take it. The sensitisation of the incentive salience is similar to habit learning (aberrant learning model), but is distinguished from it by the fact that only one component of the response is sensitised. Related to this, work from Everitt, Robbins and co-workers suggests that the switch to compulsive drug taking which characterises dependence may be mediated by a shift in the specific dopamine pathway controlling the response, from the neurones terminating in the nucleus accumbens, when responses are primarily goal directed, to neurones terminating more dorsally in the dorsal striatum when responses become compulsive (e.g. Everitt et al., 2001).
Addictive behaviours
There are a number of behaviours which share many features with drug addiction. Behaviours like gambling and exercise can become compulsive, with individuals carrying out the behaviours to the detriment of normal daily function or family relationships. A common feature of these behaviours is a dopaminergic component to the motivation, which may include activation of endorphin (an endogenous opioid, related to morphine) systems which in turn activate the mesolimbic dopamine pathway. Therefore some of these so-called ‘addictive behaviours’ share many of the characteristics of drug addiction, and evidence suggests that they may share similar neural mechanisms. A major research focus is to identify whether they are indeed different manifestations of the same process or different processes.
Treatment
Treatment options for drug addiction are fairly limited at present. The best long-term therapy is abstinence, although, as has been discussed earlier, people, specific cues and environments associated with drug taking can produce very strong cravings, often leading to relapse, even after extended periods of abstinence – you will recall that, in rats, stimuli associated with reinforcement (e.g. drug administration) evoke dopamine release in the nucleus accumbens long after the withdrawal of the reinforcer. In all therapeutic strategies for treating addiction, a vital consideration is that the individual must recognise that they have an addiction and they must be motivated to overcome it. Treatments can be physically and emotionally demanding, and without the motivation to stop, treatment is rarely successful.
Psychological therapies have proven fairly successful in sustaining abstinence. Cognitive behaviour therapy (CBT) helps the individual to recognise unhealthy behavioural patterns, identify triggers which may potentially lead to relapse, and develop coping strategies to overcome them. This may also include contingency management, which reinforces the positive aspects of avoiding drugs through specific rewards. Stepped management schemes are a form of group therapy which identifies negative consequences of addiction and, through support networks, develops strategies to overcome them. In the longer term, psychological therapies also look at aspects of the person’s life beyond their addiction, and particularly any other pathological conditions they may experience. A key aim is to improve stress management, since we have seen that stress is a major precipitatory factor in relapse.
For psychological therapies to be effective, the individual must first stop taking the drugs, a process often called detoxification: given the compulsive nature of drug addiction, this in itself can be a major challenge. The four main approaches used are drug elimination, agonist therapy, antagonist therapy and aversion therapy. Pharmacological treatments which reduce the impact of withdrawal symptoms and cravings can also help during the detoxification process.
Drug elimination is where the person simply does not take the drug any more. Sometimes the drug is simply withdrawn, in a single step (e.g. very often when smokers give up smoking), but more often, particularly for more serious addictions, the daily intake of drug is slowly reduced under clinically controlled conditions, until the addict is no longer dependent on the drug. One of the main problems with this approach is that the person normally experiences withdrawal symptoms, which can be extremely unpleasant in some cases, and are a major motivator to relapse into a drug-taking habit.
Antagonist therapy is where an antagonist for the addictive drug is given to block the action of the drug. This form of therapy is rarely used as it induces very severe withdrawal effects, so much so that when antagonist therapy is used, the individual is normally anaesthetised or heavily sedated. Agonist therapy is probably the most widely used treatment for coming off drugs. In this case an agonist for the addictive drug, or in some cases the drug itself, is given, but in a very controlled way, reducing the amount given over a period of time: normally the drug and/or route of delivery is also less harmful. Finally, aversion therapy can be effective in some cases, but is not widely used. This is where drug taking is paired with an aversive stimulus, such that a conditioned association is made between the drug and the aversive stimulus. For example, an emetic drug is given alongside the addictive drug to induce sickness, making taking the drug aversive – you will remember the important role of conditioning in the development of addiction: well, it can also be used to treat it.
Key Takeaways
• Drug addiction is the compulsive use of drugs, to the detriment of daily functioning and relationships.
• Drugs which can become addictive have a wide variety of primary pharmacology, but all share the property that they provoke increased dopamine release in the mesolimbic pathway projecting from VTA in the midbrain to the nucleus accumbens in the forebrain.
• Many behavioural procedures in experimental animals have shown that this pathway is important in motivation and that animals show a strong motivation to work (e.g. press a lever) in order to receive injections of drugs with addictive potential, providing a link between natural motivation networks and addiction.
• The incentive sensitisation model accounts for the phenomena of addiction by proposing a dissociation between motivation and hedonia, which can be demonstrated experimentally, and which accounts for the observation that drug addicts often report a heightened drive to take drugs, yet the enjoyment from taking them is diminished.
References and further reading
Berridge, K. C., & Robinson, T. E. (1998). What is the role of dopamine in reward: Hedonic impact, reward learning, or incentive salience? Brain Research Reviews, 28(3), 309-369. https://doi.org/10.1016/S0165-0173(98)00019-8
Caine, S. B., Negus, S. S., Mello, N. K., Patel, S., Bristow, L., Kulagowski, J., . . . Borrelli, E. (2002). Role of dopamine D2-like receptors in cocaine self-administration: studies with D2 receptor mutant mice and novel D2 receptor antagonists. The Journal of Neuroscience, 22(7), 2977-2988. https://doi.org/10.1523/JNEUROSCI.22-07-02977.2002
Di Chiara, G. (1999). Drug addiction as dopamine-dependent associative learning disorder. European Journal of Pharmacology, 375(1–3), 13-30. https://doi.org/10.1016/S0014-2999(99)00372-6
Di Chiara, G., & Imperato, A. (1988). Drugs abused by humans preferentially increase synaptic dopamine concentrations in the mesolimbic system of freely moving rats. Proceedings of the National Academy of Sciences of the United States of America, 85(14), 5274-5278. https://doi.org/10.1073/pnas.85.14.5274
Everitt, B. J., Dickinson, A., & Robbins, T. W. (2001). The neuropsychological basis of addictive behaviour. Brain Research Reviews, 36(2–3), 129-138. https://doi.org/10.1016/S0165-0173(01)00088-1
Everitt, B. J., & Robbins, T. W. (2016). Drug Addiction: Updating Actions to Habits to Compulsions Ten Years On. Annual Review of Psychology, 67(1), 23-50. https://doi.org/10.1146/annurev-psych-122414-033457
Franken, I. H. A. (2003). Drug craving and addiction: integrating psychological and neuropsychopharmacological approaches. Progress in Neuro-Psychopharmacology and Biological Psychiatry, 27(4), 563-579. https://doi.org/10.1016/S0278-5846(03)00081-2
Goodman, A. (2008). Neurobiology of addiction: An integrative review. Biochemical Pharmacology, 75(1), 266-322. https://doi.org/10.1016/j.bcp.2007.07.030
Heilig, M., MacKillop, J., Martinez, D., Rehm, J., Leggio, L., & Vanderschuren, L. J. M. J. (2021). Addiction as a brain disease revised: why it still matters, and the need for consilience. Neuropsychopharmacology, 46(10), 1715-1723. https://doi.org/10.1038/s41386-020-00950-y
Kalivas, P. W., & Weber, B. (1988). Amphetamine injection into the ventral mesencephalon sensitizes rats to peripheral amphetamine and cocaine. Journal of Pharmacology and Experimental Therapeutics, 245(3), 1095-1102.
Koob, G.F. (2005). The neurocircuitry of addiction: Implications for treatment. Clinical Neuroscience Research, 5(2-4), 89-101. https://doi.org/10.1016/j.cnr.2005.08.005
Koob, G. F., & Volkow, N. D. (2009). Neurocircuitry of Addiction. Neuropsychopharmacology, 35, 217-238. https://doi.org/10.1038/npp.2009.110
Koob, G. F., & Volkow, N. D. (2016). Neurobiology of addiction: a neurocircuitry analysis. The Lancet. Psychiatry, 3(8), 760-773. https://doi.org/10.1016/S2215-0366(16)00104-8
McKendrick, G., & Graziane, N. M. (2020). Drug-induced conditioned place preference and its practical use in substance use disorder research. Frontiers in Behavioral Neuroscience, 14, 582147. https://doi.org/10.3389/fnbeh.2020.582147
Negus, S. S., & Miller, L. L. (2014). Intracranial self-stimulation to evaluate abuse potential of drugs. Pharmacological Reviews, 66(3), 869-917. https://doi.org/10.1124/pr.112.007419
Olds, J., & Milner, P. (1954). Positive reinforcement produced by electrical stimulation of septal area and other regions of rat brain. Journal of Comparative and Physiological Psychology, 47(6), 419. https://doi.org/10.1037/h0058775
Robinson, T. E., & Berridge, K. C. (1993). The neural basis of drug craving: An incentive-sensitization theory of addiction. Brain Research Reviews, 18(3), 247-291. https://doi.org/10.1016/0165-0173(93)90013-P
Robinson, T. E., & Berridge, K. C. (2001). Incentive-sensitization and addiction. Addiction, 96(1), 103-114. https://doi.org/10.1046/j.1360-0443.2001.9611038.x
Schultz, W., & Dickinson, A. (2000). Neuronal coding of prediction errors. Annual Review of Neuroscience, 23, 473-500. https://doi.org/10.1146/annurev.neuro.23.1.473
Vezina, P. (1993). Amphetamine injected into the ventral tegmental area sensitizes the nucleus accumbens dopaminergic response to systemic amphetamine: an in vivo microdialysis study in the rat. Brain Research, 605(2), 332-337. https://doi.org/10.1016/0006-8993(93)91761-G
Wise, R. A. (1998). Drug-activation of brain reward pathways. Drug and Alcohol Dependence, 51(1–2), 13-22. https://doi.org/10.1016/S0376-8716(98)00063-5
Wyvell, C. L., & Berridge, K. C. (2000). Intra-accumbens amphetamine increases the conditioned incentive salience of sucrose reward: enhancement of reward “wanting” without enhanced “liking” or response reinforcement. Journal of Neuroscience, 20(21), 8122-8130. https://doi.org/10.1523/JNEUROSCI.20-21-08122.2000
About the Author
Dr Andrew Young obtained a BSc degree in Zoology from the University of Nottingham, and his PhD in Pharmacology from the University of Birmingham. He then spent four years as a postdoc at Imperial College, London, studying glutamate release in the context of mechanisms of epilepsy, before moving to the Institute of Psychiatry (King’s College, London) for nine years to study dopamine signalling in models of schizophrenia and addiction. In 1997 he was appointed as Senior Research Fellow in the School of Psychology at the University of Leicester and is now Associate Professor in that department. His research interests focus mainly on neurochemical function, particularly dopamine, in attention and motivation, and in models of schizophrenia and addiction. He teaches topics in biological psychology and the biological basis of mental disease to both undergraduate and postgraduate students in the School of Psychology and Biology.
Learning Objectives
• Know the main symptom clusters associated with affective disorders, including bipolar disorder and major depression
• Be aware of the diagnostic criteria used
• Know the fundamentals of the monoamine theory of depression
• Understand the theoretical underpinning of current approaches to pharmacological therapy for depression, based on the monoamine theory, and appreciate the shortcomings of current approaches
• Understand the theoretical basis linking depression to abnormalities in stress responses in the brain and appreciate how this theoretical framework informs novel antidepressant drug development.
Overview of affective disorders
Affective disorders, or mood disorders, are a group of psychological disturbances characterised by abnormal emotional state, and generally manifest as depressive disorders. When considering depressive disorders, the two most prevalent conditions are unipolar (major) depression and bipolar disorder, characterised by alternating depression and mania, although other conditions including dysthymia, cyclothymic disorder, seasonal affective disorder and pre- and post-natal depression are also important (Figure 6.4). They occur across the lifespan, although incidence in pre-adolescents is low, and their characteristics are essentially the same across all ages and across cultures.
Depression is characterised by persistent feelings of sadness, loss of interest, feelings of worthlessness and low self-esteem. Major depression and dysthymia share similar symptoms, with prolonged bouts of depressed mood: the main difference is the severity of the symptoms, with dysthymia showing less severe, though typically more persistent, symptoms. Bipolar disorder, on the other hand, is characterised by similar periods of depression, but interspersed with periods of extreme euphoria, high activity and excitement and inflated self-esteem, termed mania. As with the depressive illnesses, the main difference between bipolar I, bipolar II and cyclothymia is the severity of the symptoms and the degree of interference with daily life.
Diagnosis of affective disorders uses diagnostic criteria laid down in the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5) or the World Health Organisation’s International Classification of Diseases, 11th Revision (ICD-11), which focus on the key features of the condition and give guidance to clinicians for diagnosing the conditions.
Bipolar disorder
Bipolar disorder, formerly called manic depression, is characterised by cycles of extreme mood changes, from periods of a severely depressed state, resembling major depression (see below), to periods of extreme euphoria, high activity and excitement (termed mania). During a manic period people may experience inflated self-esteem and poor judgement, which may lead to them undertaking risky and often destructive behaviours; and a reduced need for sleep and a general restlessness, accompanied by physical agitation and a reduced ability to concentrate. They often deny that there is anything wrong, and become irritable, particularly when challenged about dubious decision making. It is not clear what causes mania. Genetic factors are implicated, since bipolar disorder tends to run in families, although no specific genes have yet been identified that link to it. However, genetic factors only account for around half of the vulnerability, so clearly environmental and social factors are also important.
There are three levels of severity of bipolar disorder. Bipolar I is the most severe form, and is characterised by manic episodes which last at least a week, while depressive episodes last for at least two weeks. The symptoms of both can be very severe and often require hospitalisation. Bipolar II is similar, but less severe: in particular, the manic episodes are less intense and less disruptive (often termed hypomania) and do not last as long. People with bipolar II are normally able to manage their symptoms themselves, and rarely require hospitalisation. Without correct treatment there is a risk of progressing to bipolar I disorder, but with appropriate management this risk can be kept to around 10%. The least severe category is cyclothymic disorder, where people experience repeated and unpredictable mood swings, but only to mild or moderate degrees.
Diagnosis
Diagnosis of bipolar disorders requires the presence of both depressive symptoms (as below) and three or more of the listed features of mania (extreme euphoria, high activity, inflated self-esteem, poor judgement: see ‘Diagnostic criteria for bipolar I disorder’ box below). The main difference between the diagnostic criteria for bipolar I, bipolar II and cyclothymia is the degree of severity and the time course of the expression of symptoms.
For a diagnosis of bipolar I disorder, it is necessary to meet the following criteria for a manic episode. The manic episode may have been preceded by and may be followed by hypomanic or major depressive episodes.
Manic episode
A distinct period of abnormally and persistently elevated, expansive, or irritable mood and abnormally and persistently increased goal-directed activity or energy, lasting at least 1 week and present most of the day, nearly every day.
During the period of mood disturbance and increased energy or activity, 3 (or more) of the following symptoms (4 if the mood is only irritable) are present to a significant degree and represent a noticeable change from usual behaviour:
• Inflated self-esteem or grandiosity
• Decreased need for sleep
• More talkative than usual or pressure to keep talking
• Flight of ideas or subjective experience that thoughts are racing
• Distractibility
• Increase in goal-directed activity or psychomotor agitation
• Excessive involvement in activities that have a high potential for painful consequences
• The mood disturbance is sufficiently severe to cause marked impairment in social or occupational functioning, or to necessitate hospitalisation to prevent harm to self or others, or there are psychotic features.
• The episode is not attributable to the physiological effects of a substance (e.g., a drug of abuse, a medication, or other treatment) or to another medical condition.
Source: The Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM–5; American Psychiatric Association, 2013)
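Expressed as a simple rule, the symptom-count part of the criteria above (three or more symptoms, or four if the mood is only irritable) can be sketched as follows. This is only an illustration of the counting logic, not a diagnostic tool: the duration, impairment and exclusion criteria listed above must also be met, and diagnosis is a clinical judgement.

```python
# Illustrative sketch of the symptom-count rule in the manic episode criteria above.
# Not a diagnostic instrument: duration, impairment and exclusion criteria also apply.

MANIC_SYMPTOMS = [
    "inflated self-esteem or grandiosity",
    "decreased need for sleep",
    "more talkative than usual / pressure to keep talking",
    "flight of ideas / racing thoughts",
    "distractibility",
    "increase in goal-directed activity or psychomotor agitation",
    "excessive involvement in activities with painful consequences",
]

def meets_symptom_count(present: set[str], mood_only_irritable: bool) -> bool:
    """Three or more listed symptoms are required, or four if the mood is only irritable."""
    threshold = 4 if mood_only_irritable else 3
    return len(present & set(MANIC_SYMPTOMS)) >= threshold

example = {
    "decreased need for sleep",
    "flight of ideas / racing thoughts",
    "distractibility",
}
print(meets_symptom_count(example, mood_only_irritable=False))  # True (3 of 7 present)
print(meets_symptom_count(example, mood_only_irritable=True))   # False (4 would be needed)
```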
Incidence of bipolar disorder
Bipolar disorder is present in around 2% of the population, with bipolar I more common than bipolar II: lifetime prevalence is 1% and 0.4% respectively. Unlike major depression (see below), bipolar disorders are equally prevalent in males and females. Bipolar disorder can occur at any stage in the lifespan, although it is rare in pre-adolescents. Peak age of onset is between 15 and 25 years, although diagnosis may be considerably later, with the average age of onset of bipolar I disorder (18 years) a little earlier than for bipolar II disorder (22 years). It is a major cause of cognitive and functional impairment and suicide in young people.
Pathology of bipolar disorder
A number of brain abnormalities have been described in bipolar disorder, some of which overlap with those seen in unipolar depression (see below), but others appear to be specific to bipolar, and may represent changes responsible for the episodes of mania. Although the underlying neuronal abnormality causing mania is not well understood, changes in a number of chemical markers related to the regulation of pathways modulating neurotransmitter function and neurotrophic pathways have been described in cortex, amygdala, hippocampus and basal ganglia, suggesting compromised intracellular chemical signalling. Notably, there is evidence for dysregulation of intracellular signalling pathways which regulate the function of a number of neurotransmitters, most notable of which are dopamine, serotonin, glutamate and GABA. This in turn may lead to the dysregulation of these transmitters which has been reported in mania. The decreased brain tissue volume reported in bipolar disorder, reflecting reduced number, density and size of neurones, may link to the compromised neurotrophic pathways leading to mild neuro-inflammatory responses and neurodegeneration reported in localised brain regions in mania. Therefore, although the pathology of mania seen in bipolar disorder is not well understood, it appears most likely that it derives from abnormalities in intracellular signalling cascades, perhaps related to localised neurodegeneration through decreased neurotrophic factors.
Treatment
First line treatment for bipolar disorder is antipsychotic medication: haloperidol, olanzapine, quetiapine or risperidone. These drugs target dopamine and serotonin signalling in the brain, and are likely to act downstream of the primary abnormalities associated with mania. If antipsychotic treatment is ineffective, then mood stabilisers, including lithium, valproate or lamotrigine, may be prescribed, either alone or in combination with antipsychotic drugs. Lithium has been widely used in the treatment of mania since its introduction in 1949, but the mechanisms through which it has its mood-stabilising effects are still poorly understood. However, recent evidence has linked it to modulation of intracellular signalling pathways, particularly involving adenyl cyclase, inositol phosphate and protein kinase C: by competing with other metal ions which normally regulate these reactions (e.g. sodium, calcium, magnesium), but which may have become dysregulated, it is able to reverse instabilities in these reactions. Interestingly, other drugs which also have mood-stabilising effects, including valproate and lamotrigine, also modulate these same intracellular signalling cascades. Therefore, the actions of lithium and other mood-stabilising drugs on these pathways provide supporting evidence for abnormalities in these intracellular signalling mechanisms in mania, perhaps opening novel routes for pharmacological therapy, but also provide a plausible mechanism through which the drugs exert their therapeutic action.
In addition to pharmacological treatment, psychotherapy has an important role to play in treatments of bipolar disorder. This may include cognitive behaviour therapy, which helps the individual to manage stress, and replace unhealthy negative beliefs with healthy positive beliefs; and well-being therapy which aims to help the individual manage stress, replace negative beliefs with positive beliefs and improve quality of life generally, rather than focusing on the symptoms. Psychotherapy is particularly important in managing cyclothymia, to minimise the risk that it will develop into bipolar I or II disorder.
Major depression
Major depression is characterised by persistent feelings of sadness, which are enduring and pervasive, ‘blocking out’ all other emotions. Associated with this is a loss of interest in aspects of life (termed anhedonia), which may start as general lethargy, but in its extreme is a complete loss of interest in all aspects of daily life, including health and well-being. In addition to these emotional symptoms there is also a spectrum of physiological and behavioural symptoms, including sleep disturbances, psychomotor retardation or agitation, catatonia, and fatigue or loss of energy. There are also cognitive symptoms, including poor concentration and attention, indecisiveness, worthlessness, guilt, poor self-esteem, hopelessness, suicidal thoughts and delusions with depressing themes. Dysthymia, also termed persistent depressive disorder (DSM-5), essentially relates to similar symptoms, but less severe and with a more chronic time course. An individual can suffer from both major depression and dysthymia, which is termed double depression.
Diagnosis
The diagnostic criteria for major depression according to DSM-5 require the occurrence of feelings of sadness or low mood and loss of interest in the individual’s usual activities, occurring most of the day for at least two weeks (see box below). Importantly, the symptoms must cause the individual clinically significant distress or impairment in social, occupational, or other important areas of functioning, and must not be a result of substance abuse or another medical condition. Diagnosis of dysthymic disorder is similar to that for major depression, but less severe: symptoms in all domains are at the mild to moderate level.
Five (or more) of the following have been present during the same two-week period and represent a change from previous functioning; at least one of the symptoms is either (a) depressed mood or (b) loss of interest or pleasure:
• Depressed most of the day, nearly every day
• Markedly diminished interest or pleasure in all, or almost all, activities most of the day, nearly every day
• Significant weight loss when not dieting or weight gain or decrease or increase in appetite nearly every day
• Insomnia or hypersomnia nearly every day
• Psychomotor agitation or retardation nearly every day
• Fatigue or loss of energy nearly every day
• Feelings of worthlessness or excessive or inappropriate guilt nearly every day
• Diminished ability to think or concentrate, or indecisiveness, nearly every day
• Recurrent thoughts of death, recurrent suicidal ideation without a specific plan, or a suicide attempt or a specific plan for committing suicide
• The symptoms cause clinically significant distress or impairment in social, occupational, or other important areas of functioning
• The episode is not attributable to the physiological effects of a substance nor to another medical condition
• The occurrence of the major depressive episode is not better explained by schizoaffective disorder, schizophrenia, schizophreniform disorder, delusional disorder, or other specified and unspecified schizophrenia spectrum and other psychotic disorders
• There has never been a manic episode or a hypomanic episode.
Source: The Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM–5; American Psychiatric Association, 2013)
Incidence
The overall prevalence of depression worldwide is estimated at around 5% of the population. Although there are some regional variations, prevalence rates world-wide are fairly similar, with women around twice as likely (5 to 6%) as men (2 to 4%) to be affected. As with bipolar disorder, incidence in pre-adolescents is very low, but the condition begins to emerge in adolescence, peaks in late middle age and then declines in old age. World-wide, depression is the leading cause of loss of functionality at the population level, including absence from work and treatment costs, and major depression is the most prevalent mental disorder associated with the risk of suicide.
Causes of depression
Like many mental illnesses, the underlying cause is not yet known. It is likely that genetic, environmental and social factors contribute, and the exact origin may be different in different people. The main risk factors for an individual developing depression are a family history of depression, particularly if they experience severe or recurrent episodes, a history of childhood trauma, and major stressful life changes. In addition, some physical illnesses and medications can bring on a depressive episode.
Evidence suggests that offspring of people who suffer major depression are 2 to 3 times more likely to suffer from major depression themselves compared to the rate in the population as a whole. This figure rises to 4 or 5 times greater risk if we consider only offspring of parents with recurrent depression or depression which developed early in life. Studies on identical twins suggest that major depression is around 50% heritable, although this may be higher in the case of severe depression. Although there is clearly a genetic link, there is no one gene which is responsible for this vulnerability. Rather, a vulnerability for depression is promoted by combinations of genetic changes. In adoption studies, a higher risk of an adopted child developing depression has been found if an adoptive (unrelated) parent has depression than if they are unaffected. This indicates that, as well as genetic influences, parents also have a social influence.
Stress seems to be the most important environmental factor involved in the incidence of depression. The stress-diathesis model puts forward the notion that it is the interaction between stress and the individual’s genetic background which determines the expression of depression. Studies on childhood trauma show that children who have experienced emotional abuse, neglect or sexual abuse have an approximately three-fold increased likelihood of developing depression in the future, and around 80% of depressive episodes in adults are preceded by major stressful life events. Therefore it is likely that stressful life events, be they in the distant past or more recent, are both a vulnerability factor and a precipitatory factor in the origin of depression.
Beck’s cognitive triad provides a mechanism through which stressful life events may alter cognition, producing a tendency to interpret everyday events negatively and so leading to the development of depression. Essentially, Beck proposed that the combination of early life experiences and acute stress leads to negative views of oneself, the world and the future (the cognitive triad), which in turn create negative schema with a cognitive bias towards negative aspects of a situation, an overemphasis on negative inferences and an overgeneralisation of negative connotations to all aspects of a situation. While these factors may in themselves be sufficient to invoke a depressive episode, it becomes more likely in those with a genetic predisposition (Figure 6.5).
Learning Objectives
• Know the main symptom clusters associated with schizophrenia, including positive, negative and cognitive symptoms
• Understand how genetic, biological and social factors interact to create a vulnerability to schizophrenia, and how precipitatory factors such as stress can trigger a psychotic episode
• Appreciate the evidence for neurodevelopmental and neurodegenerative accounts of the disorder, including the disconnection hypothesis
Overview of schizophrenia
Schizophrenia is a severe and persistent mental disorder which causes profound changes in social, emotional and cognitive processes, with a major impact on daily lives. It has a prevalence of 0.3 to 0.7% worldwide, and is characterised by disturbances in thought processes, perception, behaviour and cognition. These symptoms normally emerge in late adolescence and early adulthood (17 to 30 years), with males generally showing earlier onset than females.
Although the term schizophrenia was only coined in the early twentieth century, there are descriptions of symptoms resembling schizophrenia from as early as ancient Egypt, Greece and China, and examination of the manifestations of people who were termed “possessed” or “mad” suggests that they probably suffered from schizophrenia. In the Middle Ages, mental illnesses were separated into four main categories: idiocy, dementia, melancholia and mania. At the time, mania was a general term describing a condition of insanity where the individual exhibited hallucinations, delusions and severe behavioural disturbances, rather than the more specific diagnostic term used today (see Affective Disorders).
The German psychiatrist Emil Kraepelin (1856 – 1926) found these categories unhelpful in understanding the presentation, progression and outcome of mental diseases. In the 1890s, he put forward the idea of grouping together symptoms associated with similar outcomes, which he believed provided different manifestations of a single progressive disease, which he called dementia praecox, or early dementia, characterised by dementia paranoides, hebephrenia and catatonia. However, at the time, Kraepelin’s views were not widely accepted, and indeed were ridiculed by many clinical professionals.
The widespread adoption of these ideas across the psychiatric community can be put down to the work of the Swiss psychiatrist Eugen Bleuler (1857 – 1939), published in 1911. He developed Kraepelin’s diagnostic ideas, but he conceptualised the condition as a more psychological disorder, rather than the neuropathological disorder conceived by Kraepelin. He regarded hallucinations and delusions, the key features described by Kraepelin, as accessory symptoms, suggesting that the core (cardinal) symptoms related more to anhedonia and social withdrawal aspects of the condition (negative symptoms: see below), which were present in all cases. He also coined the term schizophrenia, as he believed the term dementia praecox was misleading. The name refers to a splitting of the mind, derived from the Ancient Greek, schizo – split and phren – the mind. This refers to dissociated thinking and an inability to distinguish externally generated stimuli from internally generated thoughts: notably, he was not referring to dual personality, which is a completely different psychological phenomenon.
In the 1950s, another German psychiatrist, Kurt Schneider (1887 to 1967), asserted that hallucinations, thought disturbances and delusions (positive symptoms), which he termed ‘first rank’ symptoms, were the most relevant for diagnosis. The diagnostic principles laid down by Kraepelin, Bleuler and Schneider form the basis of the systems of diagnosis used today, the Diagnostic and Statistical Manual of Mental Disorders (DSM: American Psychiatric Association) and the International Classification of Diseases (ICD: World Health Organisation).
Aetiology of schizophrenia
It is now clear that no one cause underlies schizophrenia, but that it is determined by the interaction between genetic, biological and social factors.
Genetic factors
Studies from the early twentieth century showed that relatives of people with schizophrenia were more likely to develop schizophrenia than the population as a whole, suggesting some familial, genetic influence. More recently, studies on fraternal (dizygotic) and identical (monozygotic) twins showed a substantially higher incidence of schizophrenia in twins whose co-twin had the condition.
Twin studies
Concordance (percentage) in schizophrenia compares the incidence of the condition between the two members of a twin pair. In dizygotic twins concordance is 15 to 25% – that is, if one twin has schizophrenia, there is a 15 to 25% chance that the other twin will also have it. In monozygotic twins, the concordance rate is 40 to 65%.
This compares to a rate in the population as a whole of approximately 0.3 to 0.7%, indicating a clear hereditable, genetic component to vulnerability. However, given that dizygotic twins share 50% of their genes and monozygotic twins share 100%, the concordance rates are considerably lower than would be expected if schizophrenia were entirely genetically determined (50% and 100% respectively). Adoption studies showed that the concordance rates in twins raised separately were similar to this, even when they were unaware that they were twins. Moreover, children of healthy biological parents who were adopted by foster parents, one of whom later developed schizophrenia, did not themselves have a higher risk of developing schizophrenia. Therefore, it is not the social factors of upbringing that are influencing the concordance rates in twins, but rather the genetic influence.
Importantly, these data do not suggest that being a twin is a risk factor for developing schizophrenia: the concordance rate amongst twins is the same as in the population as a whole. Rather, the data show that if one twin has schizophrenia, the other twin has a higher than normal probability of also having it.
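To make the concordance figures concrete, the sketch below computes a simple pairwise concordance rate from hypothetical twin-pair records: the proportion of pairs, among those with at least one affected twin, in which both twins are affected. The data are invented, and real twin studies typically use probandwise concordance and much larger, systematically ascertained samples.

```python
# Hypothetical illustration of computing a (pairwise) twin concordance rate.
# Data are invented; real studies use larger samples and usually probandwise concordance.

# Each tuple records (twin_a_affected, twin_b_affected) for one twin pair.
mz_pairs = [(True, True), (True, False), (True, True), (False, True), (True, False)]

def pairwise_concordance(pairs):
    """Proportion of pairs with both twins affected, among pairs with at least one affected twin."""
    affected_pairs = [p for p in pairs if any(p)]
    both_affected = [p for p in affected_pairs if all(p)]
    return len(both_affected) / len(affected_pairs)

print(f"Concordance in this made-up sample: {pairwise_concordance(mz_pairs):.0%}")  # 40%
```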
It is thought that genetic factors contribute approximately 50% of the vulnerability for schizophrenia, and molecular genetic approaches over the last three decades have aimed to identify specific genes or groups of genes involved in this susceptibility. A number of candidate genes have been identified, although their precise involvement in the development of schizophrenia is still uncertain. However, it is unlikely that any one single gene is responsible for the vulnerability, but rather a combination of genes across the genome. This may account, to some extent, for the variation in presentation of the disease across different individuals, if the combination of ‘vulnerability’ genes is different in different individuals.
Biological factors
A number of biological risk factors have been suggested, including pregnancy and birth complications, maternal infection during pregnancy, and possibly infections and/or exposure to toxins during development. However, there is still considerable uncertainty as to the relative contribution of any of these, or how exactly they influence the progression of the disease.
Looking first at pregnancy complications, there is evidence that maternal infection, particularly during the second trimester of pregnancy, is correlated with a raised chance of developing schizophrenia. Children born in the spring (March and April in the northern hemisphere and September and October in the southern hemisphere), where the second trimester coincides with the winter months during which viral infections are at their peak, have a higher incidence than children born outside these months. In addition, there is a high incidence of the disease in children born shortly after major influenza epidemics: it remains to be seen what impact the COVID-19 pandemic will have on the incidence of schizophrenia in the future. This effect may be mediated by pro-inflammatory cytokines, which have been shown to alter foetal neurodevelopment, particularly during the period of high proliferation and specialisation in the second trimester. Similarly, food shortage or malnutrition, particularly in early pregnancy, and maternal vitamin D deficiency during pregnancy are reported to increase the risk.
Birth trauma has also been identified as a risk factor. Premature labour and low birthweight are both associated with an increased risk, although both these may be a result of pregnancy complications rather than birth complications per se. However, asphyxiation during birth has also been identified as a risk factor, and there is a high incidence in babies born with forceps delivery: this could be due to the trauma of the use of forceps, or it could be the outcome of the delivery complication which necessitated the use of forceps.
Social factors
There has been considerable focus on whether childhood trauma, such as dysfunction of the family unit, neglect or sexual, physical or emotional abuse increases the probability of developing schizophrenia in the future. While such trauma undoubtedly increases the severity of a schizophrenic episode and the distress caused in sufferers, and predicts a worse long term outcome, it is debatable whether there is a causal connection between childhood trauma and schizophrenia.
The urban environment has also been suggested as a risk factor: an increased incidence of schizophrenia has been reported in people who grew up in urban surroundings, suggesting that social conditions such as social crowding, social adversity, social isolation and poor housing may have an influence on the incidence of the condition. However, urban surroundings are often associated with poverty and poor diet, which may provide a more biological and less social account for the increased incidence. Similarly, people are more likely to have higher exposure to toxins (e.g. lead) in an urban environment than in a more rural environment. So, although social factors cannot be ruled out, further studies, particularly using longitudinal designs, are required to identify specific relationships.
Precipitatory factors
People with genetic, biological and social vulnerabilities, even those in the high-risk group, do not necessarily go on to experience a schizophrenic episode. Rather, precipitatory factors are triggers which evoke schizophrenia in people who are at risk (Figure 10). The main trigger that has been identified is stress, often brought about by traumatic life events, such as bereavement, accident, break up of a relationship, unemployment, homelessness or abuse. Importantly these are not sufficient to trigger a schizophrenic episode in themselves, but can trigger them in people who already have a vulnerability. It is possible also that premorbid changes occurring before the first psychotic episode (see below) may alter the individual’s perception of traumatic events or their ability to deal with them, and so pre-diagnosis schizophrenia may exacerbate the impact of life-changing events, rather than the other way around.
Neurodegeneration/neurodevelopment
We have seen that the main vulnerabilities for schizophrenia are laid down very early in development, during pregnancy, at birth and during early childhood, yet the outcome – a schizophrenic episode – normally occurs in early adulthood, some 15 to 20 years later. This suggests a neurodevelopmental aetiology, with abnormal development ‘unmasked’ by changes occurring in adulthood. The neurodevelopmental hypothesis is also supported by structural evidence from the brains of people with schizophrenia, where cortical volume is seen to be less than in control participants. Importantly, this apparent loss of cortical tissue is not accompanied by significant increases in glial cells (which occurs with neurodegeneration), indicating that the reduction is not due to degeneration, supporting a neurodevelopmental explanation.
Brain development is a rapid and extremely complex process and is susceptible to damage from many sources. The main period of vulnerability starts in the second trimester of pregnancy, when neurogenesis and neuronal migration are at their peak, and continues through later pregnancy, birth and early childhood, when synaptogenesis occurs. Stress during this period, including inflammation, malnutrition or drugs, has a major impact on foetal brain development, which can lead to developmental abnormalities and hence a vulnerability to schizophrenia. However, we do not yet have a good understanding of what these neurodevelopmental changes are, how they are triggered, how genetic, biological and social factors influence them, nor how they progress to leave the individual vulnerable to schizophrenia.
Although the wealth of evidence suggests a neurodevelopmental basis for the brain abnormalities in schizophrenia, there is also some evidence for neurodegeneration. Psychotic episodes appear to increase in severity over time, and the response to antipsychotic medication reduces over time, suggesting a progressive, neurodegeneration mechanism. It has been proposed that having a psychotic episode may be damaging to the brain, accounting for the increased likelihood and severity of subsequent episodes. If this is the case then it emphasises the importance of early intervention to prevent psychotic episodes developing. In this context, it is interesting that recent evidence suggests that both negative and cognitive symptoms may pre-date positive symptoms, and may act as a premorbid marker for people who are at risk. This then opens the possibility for psychological interventions before the onset of a psychotic episode.
The disconnection hypothesis of schizophrenia (Friston & Frith, 1995)
Many studies have shown both structural and functional abnormalities in the brains of schizophrenic patients.
Classical theories of schizophrenia suggest that impaired function is explained by pathological changes in specific localised brain areas, and that the type of dysfunction exhibited (i.e. the symptoms) depends on the particular areas damaged.
The disconnection theory, on the other hand, proposes a dysregulation of connectivity between regions in neural networks. Although individual areas may appear both structurally and functionally normal, their interactions within the neural networks controlling behaviour are abnormal, through a failure to establish a proper pattern of synaptic connections. This idea is also consistent with the neurodevelopmental processes occurring during the second trimester, when vulnerability to damage seems to be highest, since this is the main period of neuronal migration and marks the start of synaptogenesis.
Disconnection, therefore, causes a failure of appropriate functional integration between regions, rather than specific dysfunctions of the regions themselves. Thus the abnormality is expressed as an output from certain regions of the brain which are dependent on activity in other areas, and abnormal responses would only be seen when the specific activity involved interactions with other parts of the brain: so, for example, behaviours controlled in the frontal cortex are modulated by incorrect information coming from the temporal cortex. The important distinction is summed up as the distinction between ‘the pathological interaction of two cortical areas and the otherwise normal interaction of two pathological areas’ (Friston, 1998, p. 116).
Symptoms
The presentation of schizophrenia is very diverse, meaning that two individuals, both with a diagnosis of schizophrenia, may exhibit a very different spectrum of symptoms. Even within an individual, the symptoms may change over time, so that they may present differently from one month to the next.
Symptoms were originally divided into two groups: positive symptoms (type 1), characterised by exacerbation of normal behaviour; and negative symptoms (type 2), characterised by a suppression of normal behaviour. However, more recently it became clear that there is actually a third cluster, cognitive symptoms, characterised by changes in cognitive executive function. Notably, negative and cognitive symptoms likely predate the onset of psychotic (positive) symptoms and are stable across the duration of the illness in most patients: they generally do not respond well to treatment and often persist after recovery from an acute psychotic episode.
Positive symptoms
Positive symptoms are symptoms that manifest as an enhancement or exaggeration of normal behaviour, in which the patient loses touch with reality (psychosis). The most common symptoms in this domain are hallucinations, delusions, and abnormal and disorganised thoughts. These are essentially the symptoms referred to by Schneider as ‘first-rank symptoms’, and are often the first noticeable sign of illness.
Hallucinations (sensing things that are not there) are the most common symptom, and are experienced by around 75% of people suffering from schizophrenia. They are most commonly experienced in the auditory modality, where sufferers ‘hear voices’, which may be people commenting on what they are doing and/or giving commands. Voices frequently appear angry or insulting and often demanding, although at times they can be neutral. People can also experience hallucinations in visual, somatosensory or olfactory modalities, where they see, feel or smell (respectively) objects or people that are not actually present. Although not necessarily debilitating in terms of everyday life, experiencing hallucinations can be frightening and distressing for the individual.
Delusions are false beliefs that persist even in the face of evidence that they are not true. They are often associated with delusional perceptions (delusions of reference), where a normal perception takes on a specific, erroneous meaning. This may in turn reflect disturbances in salience attribution, that is, assigning inappropriate importance to unimportant stimuli. The most frequent type of delusion is the persecutory delusion, in which the individual believes that people, including close friends or family, are working against them, commonly in association with large organisations such as the government, MI5 or the CIA. This leads to extreme mistrust of people, often the very people who are trying to help them. However, patients may also experience grandiose delusions, where they believe they are exceptionally talented or famous; somatic delusions, where they believe they are ill or deformed; and delusions of control, where they believe that their thoughts are being controlled by an outside force (thought insertion, thought removal, thought broadcasting). The specific form of the delusion is often influenced by the person’s own lifestyle, life events and social surroundings.
Disorganised thought, or formal thought disorder, is normally manifest as disorganised speech, where discourse is fragmented and lacks logical progression. This often includes non-existent words (neologisms), and disorganised behaviour, where the individual exhibits unusual and unpredictable behaviour and may show inappropriate emotional responses. At its worst, narrative becomes a completely incoherent jumble of words and neologisms, sometimes referred to as ‘word salad’, reflecting severely disorganised thought processes and poverty of thought content.
There is strong evidence that positive symptomatology is related to temporal lobe dysfunction, and with abnormalities in dopamine function in the basal ganglia, particularly in the mesolimbic pathway, perhaps reflecting a dysregulation of glutamate-dopamine actions in the output from temporal cortex. Positive symptoms respond reasonably well to antipsychotic medication, which reduces dopamine function, emphasising the importance of dopamine systems in their generation.
Negative symptoms
Negative symptoms manifest as general social withdrawal, reduced affective responsiveness (emotional blunting), a lack of interest (apathy), desire (avolition) and motivation (abulia) and reduced pleasure (anhedonia). In its extreme this can lead to mutism (not speaking) and catatonia (immobility), often for extended periods of time. Negative symptoms may be present in the premorbid phase, before the first psychotic episode, but may also emerge during or after a psychotic episode. The brain mechanisms underlying negative symptoms are currently not well understood, although frontal cortex abnormalities are implicated, and, as they do not respond well to treatment, they are a particularly debilitating component of the condition.
Cognitive symptoms
Cognitive symptoms encompass a number of difficulties with learning, memory, attention, planning and problem solving. Cognitive impairments occur in the majority of patients, if not all, and can be extremely severe and persistent. Indeed, the degree of cognitive impairment contributes substantially to long-term debilitation and is a good predictor of outcome. Cognitive changes normally occur before the first psychotic episode, and may contribute to the individual’s abnormal perceptions and attributions which subsequently manifest as positive symptoms. As with negative symptoms, cognitive symptoms probably originate from frontal cortex dysfunction, and do not generally respond well to antipsychotic treatment, emphasising a clinical need for better treatment strategies.
Diagnosis of schizophrenia
The diagnosis of schizophrenia primarily uses one of two diagnostic tools:
1. Diagnostic and Statistical Manual of Mental Disorders (DSM: American Psychiatric Association)
2. International Classification of Diseases (ICD: World Health Organization)
Both have had several iterations.
In the UK, ICD is the most widely used. Early versions of ICD and DSM exhibited a number of important differences in the conceptualisation of schizophrenia and diagnostic criteria, leading to different diagnoses between countries using ICD (e.g. UK) and DSM (e.g. USA). However, the most recent versions of each, ICD-11 (2019) and DSM-5 (2013) have very similar diagnostic criteria, improving consistency and applicability to clinical practice.
Diagnosis of schizophrenia (ICD 11)
At least two of the following symptoms must be present (by the individual’s report or through observation by the clinician or other informants) most of the time for a period of 1 month or more. At least one of the qualifying symptoms should be from items a) through d) below (a simple, illustrative sketch of this counting rule follows the list):
a. Persistent delusions (e.g., grandiose, reference, persecutory).
b. Persistent hallucinations (most commonly auditory, but may be any sensory modality).
c. Disorganized thinking (formal thought disorder). When severe, the person’s speech may be so incoherent as to be incomprehensible (‘word salad’).
d. Experiences of influence, passivity or control (i.e., the experience that one’s feelings, impulses, actions or thoughts are not generated by oneself).
e. Negative symptoms (e.g. affective flattening, alogia, avolition, asociality, anhedonia).
f. Grossly disorganized behaviour that impedes goal-directed activity (e.g. bizarre, purposeless or unpredictable behaviour; inappropriate emotional responses), or psychomotor disturbances such as catatonic restlessness, posturing, negativism, mutism or stupor.
g. Symptoms are not a manifestation of another medical condition and are not due to effects of a substance or medication on the central nervous system, including withdrawal effects.
Source: World Health Organization (2019). International Statistical Classification of Diseases and Related Health Problems (11th ed.). https://icd.who.int/
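To make the counting rule above concrete, the sketch below applies it to a hypothetical set of symptom labels. This is a toy illustration only, not a clinical instrument: the symptom names and the function are invented, the thresholds are taken from the wording of the criteria, and real diagnosis involves clinical judgement that cannot be reduced to a checklist.

```python
# Toy sketch of the ICD-11 symptom-counting rule described above.
# Symptom labels are invented for illustration; this is not a clinical tool.

CORE_ITEMS = {"delusions", "hallucinations", "disorganised_thinking", "passivity_experiences"}  # items a)-d)
OTHER_ITEMS = {"negative_symptoms", "disorganised_behaviour", "psychomotor_disturbance"}         # items e)-f)

def meets_symptom_rule(symptoms, duration_months, other_cause=False):
    """Return True if the symptom-count part of the rule is satisfied."""
    if other_cause:            # item g): symptoms explained by another condition or a substance
        return False
    if duration_months < 1:    # symptoms must be present for 1 month or more
        return False
    qualifying = symptoms & (CORE_ITEMS | OTHER_ITEMS)
    return len(qualifying) >= 2 and len(symptoms & CORE_ITEMS) >= 1

# Example: persistent delusions plus negative symptoms for three months satisfies the rule.
print(meets_symptom_rule({"delusions", "negative_symptoms"}, duration_months=3))  # True
```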
Changes in brain structure in schizophrenia
Although many changes in brain structure have been reported in schizophrenia, no single signature abnormality has been identified. The most consistent changes that have been observed are a reduction in overall brain volume, particularly in grey matter, cortical thinning and increased ventricle size. However, these changes are also seen in normal ageing and in other brain diseases, so they are not unique to schizophrenia.
In particular, cortical thinning has been seen in the prefrontal cortex, an area important for logical thinking, inference, problem solving and working memory, perhaps explaining the prevalence of disorganised thoughts and disrupted executive function and working memory in schizophrenia sufferers. This idea is supported by observations from functional imaging studies, showing reduced prefrontal cortex activity in schizophrenics during execution of cognitive tasks.
Medial temporal lobe structures, including entorhinal cortex and hippocampus, are also important in attention and working memory function, and imaging studies have also shown reduced tissue volume in these areas. In addition, both the temporal cortex and prefrontal cortex have substantial connectivity to the basal ganglia, including the nucleus accumbens (ventral striatum), an area critically involved in salience detection and response selection.
Thus, it is feasible that disruption of dopamine transmission in nucleus accumbens, perhaps under abnormal direction from temporal and frontal cortex inputs, may underlie salience attribution deficits (inappropriate attribution of importance to irrelevant stimuli) seen in schizophrenia. Notably, dopamine is a critical neurotransmitter in the nucleus accumbens, which may account for the efficacy of dopamine antagonists in treating these symptoms.
Therefore, although we do not yet know precisely what structural changes there are in the brains of schizophrenic patients, there is substantial evidence from recent imaging studies, which is beginning to give us clues as to how subtle changes in brain connectivity may translate into the sorts of dysfunctions seen in schizophrenia.
Biochemical theories of schizophrenia
There is strong evidence that certain illicit drugs, including cocaine, amphetamine, phencyclidine and LSD, can cause acute and transient changes which resemble aspects of schizophrenia. In addition, these drugs exacerbate symptoms in schizophrenia sufferers and can trigger a relapse in people recovering from a previous episode. These drug effects give pointers to neurochemical changes which may underlie schizophrenia, and have formed the basis of biochemical theories. Indeed, they have also provided animal models for studying the mechanisms underlying schizophrenia and development of better drugs.
The dopamine theory posits that schizophrenia is caused by an increase in sub-cortical dopamine function, particularly in the mesolimbic dopamine pathway, projecting from the ventral tegmental area in the midbrain to limbic forebrain areas, primarily the nucleus accumbens, but also the hippocampus and amygdala. The dopamine theory is based on three main observations:
• first, that drugs which increase dopamine function, including amphetamine, cocaine and L-DOPA (used in the treatment of Parkinson’s disease), cause schizophrenia-like symptoms;
• second, that the first generation of drugs used in treating schizophrenia (typical antipsychotics) are dopamine receptor antagonists;
• and third, there is some evidence for perturbation of dopamine signalling in post-mortem schizophrenic brains, although these may derive from the body’s adaptive changes in response to long-term treatment with dopaminergic drugs.
Whilst there is a great deal of evidential support for the dopamine theory, there are fundamental limitations which indicate that, although dopaminergic systems are involved, the dopamine theory cannot provide a complete explanation of schizophrenia.
Looking at this in more detail, dopaminomimetic drugs like amphetamine cause behavioural changes in normal individuals which resemble some aspects of schizophrenia. However, the changes are limited to behaviours resembling positive symptoms only, including hallucinations, delusions and thought disorder, but do not evoke changes resembling negative or cognitive symptoms. Therefore, although increasing dopamine function does evoke behavioural changes resembling schizophrenia, it does not cause the full spectrum of symptomatology, but only those resembling positive symptoms. Similarly, typical antipsychotic drugs which target dopamine receptors alone are moderately effective at treating positive symptoms, but are very poor at treating negative or cognitive symptoms: indeed there is some evidence that these typical antipsychotic drugs may exacerbate negative and cognitive symptoms, possibly through actions in frontal cortex where dopamine signalling has been reported to be reduced in schizophrenia.
Dopamine signalling as the final common pathway
In its original iteration, the dopamine theory of schizophrenia posited a general overactivity of dopamine in schizophrenia. In a later refinement, only subcortical dopamine was thought to be overactive, with dopamine underactivity in the prefrontal cortex, accounting for the observation that dopamine receptor antagonists (typical antipsychotic medication) exacerbate negative and cognitive symptoms. However, even this conceptualisation has shortcomings, and does not explain how risk factors translate into the symptoms and time course of schizophrenia.
In their reconceptualisation of the dopamine hypothesis, which they name ‘version III: the final common pathway’, Oliver Howes and Shitij Kapur (2009) bring together recent data from genetics, molecular biology and imaging studies to provide a framework to account for these anomalies.
Molecular imaging studies show increases in activity in the dopamine neurones in schizophrenia, implying that the abnormality lies in the input to dopaminergic neurones rather than the output from them. Notably, dysfunction in both frontal and temporal cortex has been shown to increase mesolimbic dopamine release, suggesting that core abnormalities in these areas can modify critical dopamine function.
Thus, abnormal function in multiple inputs leads to dopamine dysregulation as the final common pathway: the different behavioural manifestations seen in schizophrenia may be due to the actual combination of dysfunctional inputs to the dopaminergic system in each individual. Moreover, within this framework, the underlying damage could be in the brain areas sending projections to the dopamine neurones, or in the connections themselves (see Disconnection hypothesis).
Notably, mesolimbic dopamine systems are implicated in salience attribution, therefore dysregulation of activity in this pathway would result in abnormal salience attribution which may underlie positive symptomatology.
Therefore an important goal for future drug development is to target the mechanisms converging on the dopamine systems, which are abnormal in schizophrenia, rather than on dopamine systems themselves, which are the target of current antipsychotics. This in turn relies on a fuller understanding of what systems are involved.
Glutamate is the most prevalent excitatory neurotransmitter in the mammalian brain and acts at several different receptor types. Non-competitive antagonists at one of these receptor types, the NMDA receptor (e.g. phencyclidine, ketamine and dizocilpine (MK-801)), cause behavioural changes in healthy people which resemble schizophrenia. In addition, when given to schizophrenia sufferers, they exacerbate the symptoms, providing evidence that the drug action mimics the disease state. This implies that a glutamate underactivity, particularly at NMDA receptors, may underlie schizophrenia. Importantly, unlike dopaminergic drugs, which provoke behaviours resembling positive symptoms only, NMDA-receptor antagonists generate behavioural changes resembling symptoms in all three domains – positive, negative and cognitive – implying that glutamate dysregulation is the core deficit in schizophrenia, and that dopamine abnormalities are downstream of this core deficit. There is also a body of evidence showing changes in glutamate function in schizophrenic brains, including reduced levels of glutamate and increased cortical glutamate binding in post-mortem brains, and increased glutamate receptor density in living brains.
Other transmitters which have been implicated in schizophrenia are serotonin and gamma-aminobutyric acid (GABA). Lysergic acid diethylamide (LSD), an agonist at serotonin receptors, is an illicit drug taken recreationally which causes reality distortions and hallucinations resembling the positive symptoms of schizophrenia, implicating serotonin overactivity in schizophrenia. This is consistent with the pharmacological action of atypical (second generation) antipsychotic drugs, many of which are 5HT2 receptor antagonists. However, there is little or no direct evidence for abnormalities in serotonin function in the brains of people with schizophrenia. Cortical GABA signalling has also been shown to be dysfunctional in the brains of patients with schizophrenia, but it is not clear how this impacts on cortical function to produce the symptoms of schizophrenia.
Treatment
Social and clinical outcome
Without pharmacological intervention, around 20% of schizophrenia sufferers recover well, although it is likely that they never actually show full recovery, hence the term ‘near full recovery’ is often used.
With pharmacological intervention, this figure rises to around 50% showing near full recovery and able to live independently or with family. A further 25% show moderate recovery, but still require substantial support: these generally live in supervised housing, nursing homes or hospitals. The remainder show little or no improvement.
In particular, negative and cognitive symptoms do not respond well to treatment, and often form the most debilitating long-term dysfunctions.
(Data from Torrey, 2001)
Until relatively recently, there were no effective treatments for schizophrenia. In the nineteenth and early twentieth centuries, sufferers were usually confined to asylums, with little or no treatment offered and little or no communication with the outside world. Where treatments were offered, these included shock treatment (insulin shock, pentylenetetrazol [Metrazol] shock, and electroconvulsive shock [ECT]) and even frontal lobotomy (a severing of the neurones connecting the frontal lobes to the remainder of the brain), both of which were severely debilitating and had limited efficacy in treating the disease. In this situation, patients rarely showed any sort of recovery: indeed their condition often worsened during confinement.
Typical antipsychotic drugs
The drug chlorpromazine is a powerful tranquilliser, used in managing recovery after surgical anaesthetic. People who took it reported a feeling of well-being and calm. On this basis, during the 1950s, it was tried on schizophrenia patients, who often exhibited extreme agitation. It was found to alleviate some of the symptoms of schizophrenia, notably the hallucinations, delusions and disorganised thought – all symptoms within the positive symptom domain – even at a much lower dose than that required for tranquilliser action.
Pharmacologically, chlorpromazine is a dopamine receptor antagonist, with some selectivity for D2-like receptors (D2, D3, D4) compared to D1-like (D1, D5), although at the time nobody knew what the pharmacology of the drug was: indeed it was not for another decade that dopamine was realised to be a neurotransmitter. It was also not effective in all patients, or against all symptoms, and was associated with some debilitating side effects. Nevertheless, at the time (mid 1950s), it formed a major breakthrough as the first pharmacological treatment for schizophrenia.
Following the discovery of the antipsychotic effect of chlorpromazine, many other dopamine D2-like receptor antagonists were tested as potential antipsychotic drugs. This led to the development of a whole class of antipsychotic drugs: the typical or first-generation antipsychotics. Of these, haloperidol is now the typical antipsychotic of choice, although several other typical antipsychotic drugs are also licensed for use in the UK (e.g. flupentixol, pimozide, sulpiride); these became the mainstay of pharmacological treatment for schizophrenia during the 1970s and 1980s. Originally these drugs were called neuroleptics, as they induced neurolepsis (immobility associated with their major tranquilliser action). Now they are called antipsychotics, reflecting their reduction of psychotic symptoms at doses much lower than those used to induce neurolepsis. Their antipsychotic efficacy is a direct result of their antagonist action at dopamine D2 receptors.
However, treatment with typical antipsychotic drugs has a number of drawbacks. Firstly they are not very effective: around 25% of patients fail to respond to treatment at all, and others (around 25%) show some improvement, but still show substantial symptomatology. In particular, typical antipsychotic drugs show little or no efficacy at treating negative or cognitive symptoms; they are mainly effective only on positive symptoms. Therefore, while treatment may alleviate positive symptoms, sufferers are left with residual and potentially severely debilitating negative and cognitive symptoms.
Another main drawback of typical antipsychotic drugs is that they produce sedative and motor side effects in the majority of patients. The most debilitating of these are the motor side effects, including resting tremor and akathisia (similar to those seen in Parkinson’s disease), and tardive dyskinesia: each occurs in around 25% of patients taking typical antipsychotic medication. These are caused by D2 receptor antagonism in the dorsal striatum (caudate nucleus and putamen) resembling the dopamine depletion seen in these areas in Parkinson’s disease. Notably, the parkinsonian side effects recover on withdrawal of the drugs, but tardive dyskinesia does not and motor function will progressively deteriorate irreversibly if the medication is continued. Finally, the antipsychotic effect of these drugs is not immediate, but takes several weeks to establish, creating a substantial delay between initiation of treatment and control of symptoms.
Atypical antipsychotic drugs
In the search for antidepressant drugs similar to the tricyclic antidepressant imipramine, several drugs were discovered which had antipsychotic properties: one of these was clozapine. In sharp contrast to other antipsychotics used at the time, clozapine had good antipsychotic potency but minimal motor side effects, and for this reason it was called an atypical antipsychotic (also known as a second generation antipsychotic). Subsequently it was found that, as well as positive symptoms, it is at least somewhat effective at treating negative and cognitive symptoms, and it is effective in some patients who do not respond to other antipsychotic drugs. Pharmacologically, too, it is rather different from typical antipsychotics, which are D2 receptor antagonists: clozapine has a wide-ranging pharmacology with effects at dopamine, serotonin, acetylcholine, noradrenaline and histamine receptors. Clozapine was introduced as an antipsychotic medication in the early 1970s, but was withdrawn a few years later after a Finnish study reported a high incidence of severe, and potentially fatal, blood disorders: agranulocytosis and leukopenia. However, after extensive studies, it was concluded that the occurrence of agranulocytosis (1%) and neutropenia (3%) in patients taking clozapine is relatively low, particularly beyond 18 weeks after the start of treatment, and it was reintroduced to the market in the 1990s with strict monitoring controls in place. Thus, in the UK, patients need to have blood tests every week for the first 18 weeks of treatment, then fortnightly up to the end of the first year of treatment, and every four weeks thereafter. If there is any sign of agranulocytosis or leukopenia, the drug has to be withdrawn permanently. This monitoring adds substantially to both patient inconvenience and financial cost, and therefore, although clozapine is still the most effective antipsychotic available, it is only used in cases where other medications have not worked.
The discovery of the effectiveness of clozapine initiated a new approach to developing novel antipsychotic drugs. Rather than focussing on D2 receptor antagonists, drugs with much wider pharmacology were tested. Several more atypical antipsychotics derived from this approach, including olanzapine (currently the first-line treatment), quetiapine, risperidone and lurasidone. Although they mostly have a range of pharmacological effects, the common action of these drugs, and of clozapine, is a potent antagonist effect at both D2 and 5HT2 receptors: this dual action is believed to underlie their antipsychotic actions.
Although these drugs are not much more effective at treating positive symptoms – even clozapine is only effective in around 85% of patients – they do have some limited efficacy at treating negative and cognitive symptoms, and they cause few or no motor side effects. However, their use is still limited by other side effects, including substantial weight gain and excessive salivation. In addition, their effects can be quite variable, and are normally significantly slower in onset than those of typical antipsychotics.
Third and fourth generation antipsychotics
Third generation antipsychotics, for example aripiprazole, brexpiprazole and cariprazine, are D2 receptor partial agonists, rather than full antagonists, which means that where endogenous dopamine levels are high, the drugs reduce its effect, but when they are low the drugs enhance its effect. They also have actions on second messenger pathways to modulate the actions on D2 receptors. Therefore they have a dopamine ‘stabilising’ effect. Some of them also have 5HT partial agonist actions. They are generally as effective as other antipsychotics, but with reduced side effects and are better tolerated. However, they are still not very effective at treating negative and cognitive symptoms. Adequate control of negative and cognitive symptoms, which are arguably the most pervasive and disruptive symptoms of schizophrenia is, at present, an unmet clinical need, and several alternative therapeutic approaches are at the experimental stage, either in preclinical testing or in clinical trials, aiming to target actions beyond dopamine and serotonin receptors. Among these are drugs which modulate glutamate function, drugs acting on acetylcholine systems and drugs targeting a group of regulatory compounds called trace amines.
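To illustrate the ‘dopamine stabilising’ idea described above, the toy model below assumes simple competitive occupancy of the D2 receptor by dopamine (treated as a full agonist) and a partial agonist drug with an arbitrary intrinsic activity of 0.3. The binding constants and concentrations are illustrative numbers, not real pharmacological values.

```python
# Toy model of D2 'stabilisation' by a partial agonist: dopamine and drug compete
# for the receptor, and the drug produces only a fraction of the full response.
# All constants are illustrative, not measured pharmacological parameters.

def d2_signal(dopamine, drug, k_da=1.0, k_drug=1.0, drug_efficacy=0.3):
    """Fractional D2 signalling under competitive binding of dopamine and drug."""
    denom = 1 + dopamine / k_da + drug / k_drug
    occ_da = (dopamine / k_da) / denom      # fraction of receptors bound by dopamine
    occ_drug = (drug / k_drug) / denom      # fraction of receptors bound by the drug
    return occ_da * 1.0 + occ_drug * drug_efficacy

for da in (0.2, 1.0, 5.0):  # low, intermediate and high dopamine tone
    print(f"dopamine tone {da}: no drug {d2_signal(da, 0):.2f}, with drug {d2_signal(da, 2.0):.2f}")
```

With these illustrative values the drug raises signalling when dopamine tone is low (0.17 to 0.25) and lowers it when dopamine tone is high (0.83 to 0.70), pulling output towards an intermediate level – the qualitative ‘stabilising’ behaviour described in the text.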
Psychological therapy
There are a number of psychological therapies available for treating schizophrenia, of which the most important are cognitive behavioural therapy (CBT) and family therapy. Although these therapies are not effective in all people or situations, they are showing great promise for future refinement. CBT primarily focusses on helping the patient understand their abnormal perceptions and work to overcome them, while family therapy involves working with the patient and their family to achieve a less stressful and more supportive environment.
Psychological therapy is often not effective during an acute psychotic episode, as the presence of psychotic symptomatology makes meaningful communication difficult and also makes patients suspicious of caregivers. The most success has been achieved with people who have first been stabilised pharmacologically, where psychological therapy has been successful in maintaining stability, allowing reduction or even cessation of medication. Interestingly, there has also been some success in using psychological therapies in individuals who have shown a vulnerability, or have shown evidence of existing negative or cognitive symptoms, but have not yet experienced a full psychotic episode. In this case therapy looks at adverse life events and the individual’s reactions to them: this approach has shown some success in preventing the development of a psychotic episode. Given the evidence that psychotic episodes may in themselves cause damage, this is valuable in managing vulnerable patients, and emphasises the importance of being able to identify vulnerable individuals in the premorbid stage.
Current treatments
The current first line treatment is generally an atypical antipsychotic drug, normally olanzapine, alongside individual CBT and family therapy, although acutely symptomatic patients rarely respond well to psychological therapy: patients require stabilisation pharmacologically before psychological therapy becomes effective. If the first drug is not effective at controlling symptoms, or has unacceptable side effects, a second drug would be tried, normally another atypical antipsychotic drug, but for some patients a typical antipsychotic is more appropriate. Clozapine is only considered after two other antipsychotics have been tried, one of which must be an atypical drug.
In the post-acute period, following a schizophrenic episode, both pharmacological and psychological therapies are generally continued in order to prevent relapse, although it is sometimes possible to slowly reduce the drugs, with careful monitoring to guard against relapse, particularly when psychological therapy is effective. However, in the post-acute phase, many patients choose not to take the drugs, believing that they are cured, or even preferring the risk of relapse to the side effects of the drugs. This alone is estimated to account for a relapse rate of around 20% of patients. In some cases, where adherence to oral preparations is unreliable, it is beneficial to give patients slow-release ‘depot’ preparations, known as long-acting injectable drugs, or LAIs. Mostly these are typical antipsychotics (haloperidol, flupentixol or fluphenazine), but LAI preparations of atypical antipsychotics, including olanzapine, risperidone and aripiprazole, are now available for clinical use.
Key Takeaways
• Schizophrenia occurs in approximately 0.5% of the population, with peak onset in early adulthood. It is characterised by a variety of symptoms, which cluster into three types: positive (psychotic), negative and cognitive. Although positive symptoms are the most noticeable, and indeed it is usually the emergence of positive symptoms that alerts people to the problem, negative and cognitive symptoms may occur before a psychotic episode, and often endure long after recovery from a psychotic episode, causing substantial long-term debilitation. Vulnerability to schizophrenia depends on genetic, biological and social factors, which influence neurodevelopment, although little is known about the precise mechanisms. A psychotic episode is triggered in a vulnerable individual by precipitatory factors, the most prominent of which seems to be stress, particularly from adverse life events.
• Biochemical theories posit critical roles for glutamate and dopamine in the pathology of schizophrenia, although other transmitters, notably serotonin and GABA, have also been implicated. It is thought that the primary deficit may lie in abnormal cortical glutamate function, supported by physiological and imaging studies showing decreased cortical volume and changes in markers of cortical glutamate function in schizophrenic brains. Negative and cognitive symptoms may be a result of abnormalities in frontal and/or temporal cortices, or in the communication between them, while dysregulated glutamate-dopamine signalling, particularly in the basal ganglia, may account for positive symptomatology.
• Current treatments rely heavily on drugs which act as antagonists at dopamine and serotonin receptors, the typical and atypical antipsychotics. They are reasonably effective at treating positive symptoms, perhaps reflecting the critical dopaminergic element in the expression of positive symptoms, but have little or no effect on negative or cognitive symptoms: they are also not effective in around 25% of sufferers, and cause unpleasant and debilitating side effects. Therefore there is a real clinical need for drugs which offer better control of symptoms in all three domains, with fewer side effects.
References and further reading
Bowie, C. R., & Harvey, P. D. (2006). Cognitive deficits and functional outcome in schizophrenia. Neuropsychiatric disease and treatment, 2(4), 531–536. https://dx.doi.org/10.2147/nedt.2006.2.4.531
Chen, Z., Fan, L., Wang, H., Yu, J., Lu, D., Qi, J., … & Wang, S. (2022). Structure-based design of a novel third-generation antipsychotic drug lead with potential antidepressant properties. Nature Neuroscience, 25(1), 39-49. https://doi.org/10.1038/s41593-021-00971-w
Egerton, A., Modinos, G., Ferrera, D., & McGuire, P. (2017). Neuroimaging studies of GABA in schizophrenia: A systematic review with meta-analysis. Translational psychiatry, 7(6), e1147. https://dx.doi.org/10.1038/tp.2017.124
Ellenbroek, B. A. (2012). Psychopharmacological treatment of schizophrenia: What do we have, and what could we get? Neuropharmacology, 62(3), 1371-1380. https://dx.doi.org/10.1016/j.neuropharm.2011.03.013
Friston, K. J. (1998). The disconnection hypothesis. Schizophrenia Research, 30(2), 115-125. https://dx.doi.org/10.1016/S0920-9964(97)00140-0
Friston, K. J., & Frith, C. D. (1995). Schizophrenia: A disconnection syndrome? Clin Neurosci, 3(2), 89-97
Henriksen, M. G., Nordgaard, J., & Jansson, L. B. (2017). Genetics of schizophrenia: Overview of methods, findings and limitations. Frontiers in human neuroscience, 11, 322. https://doi.org/10.3389%2Ffnhum.2017.00322
Howes, O. D., & Kapur, S. (2009). The dopamine hypothesis of schizophrenia: Version III–the final common pathway. Schizophr Bull, 35(3), 549-562. https://dx.doi.org/10.1093/schbul/sbp006
Jauhar, S., Johnstone, M., & McKenna, P. J. (2022). Schizophrenia. The Lancet, 399(10323), 473–486. https://dx.doi.org/10.1016/S0140-6736(21)01730-X
McCutcheon, R. A., Abi-Dargham, A., & Howes, O. D. (2019). Schizophrenia, dopamine and the striatum: From biology to symptoms. Trends Neurosci, 42(3), 205-220. https://dx.doi.org/10.1016/j.tins.2018.12.004
McKenna, P. J. (2013). Schizophrenia and related syndromes. Routledge.
Morgan, C., & Fisher, H. (2007). Environment and schizophrenia: Environmental factors in schizophrenia: Childhood trauma – a critical review. Schizophrenia bulletin, 33(1), 3–10. https://dx.doi.org/10.1093/schbul/sbl053
Orsolini, L., De Berardis, D., & Volpe, U. (2020). Up-to-date expert opinion on the safety of recently developed antipsychotics. Expert Opinion on Drug Safety, 19(8), 981-998. https://dx.doi.org/10.1080/14740338.2020.1795126
Seeman, P. (2013). Schizophrenia and dopamine receptors. European Neuropsychopharmacology, 23(9), 999-1009. https://doi.org/10.1016/j.euroneuro.2013.06.005
Tandon, R., Nasrallah, H. A., & Keshavan, M. S. (2009). Schizophrenia, “just the facts” 4. Clinical features and conceptualization. Schizophrenia Research, 110(1), 1-23. https://dx.doi.org/10.1016/j.schres.2009.03.005
Torrey, E.F. (2001) Surviving Schizophrenia: A Manual for Families, Consumers, and Providers (4th Edition); HarperCollins.
World Health Organization (2019). International Statistical Classification of Diseases and Related Health Problems (11th ed.). https://icd.who.int/
About the Author
Dr Andrew Young obtained a BSc degree in Zoology from the University of Nottingham, and his Ph.D in Pharmacology from the University of Birmingham. He then spent four years as a post doc at Imperial College, London, studying glutamate release in the context of mechanisms of epilepsy, before moving to the Institute of Psychiatry (Kings College, London) for nine years to study dopamine signalling in models of schizophrenia and addiction. In 1997 he was appointed as Senior Research Fellow in the School of Psychology at University of Leicester and is now Associate Professor in that department. His research interests focus mainly on neurochemical function, particularly dopamine, in attention and motivation, and in models of schizophrenia and addiction. He teaches topics in biological psychology and the biological basis of mental disease to both undergraduate and postgraduate students in the School of Psychology and Biology. | textbooks/socialsci/Psychology/Biological_Psychology/Introduction_to_Biological_Psychology_(Hall_Ed.)/06%3A_Dysfunction_of_the_nervous_system/6.03%3A_Schizophrenia.txt |
Learning Objectives
• To gain knowledge and understanding of the biological and cognitive changes that occur with ageing
• To understand methodological approaches to studying the effects of ageing and strategies which may exist to promote healthy (cognitive) ageing
• To understand the subtle sensory changes which may occur with ageing.
Ageing can be defined as a gradual and continuous process of changes which are natural, inevitable and begin in early adulthood. Globally, the population is ageing, resulting in increasing numbers and proportion of people aged over 60 years. This is largely due to increased life expectancy and has ramifications in terms of health, social and political issues. The ageing process results in both physical and mental changes and although there may be some individual variability in the exact timing of such changes they are expected and unavoidable. Whilst ageing is primarily influenced by a genetic process it can also be impacted by various external factors including diet, exercise, stress, and smoking. As humans age their risk of developing certain disorders, such as dementia, increases, but these are not an inevitable consequence of ageing.
Healthy ageing is used to describe the avoidance or reduction of the undesired effects of ageing. Thus, its goals are to promote physical and mental health. For humans we can define our age by chronological age – how many years old a person is – and biological age, which refers to how old a person seems in terms of physiological function and the presence of disease. All of the systems described elsewhere in this textbook undergo changes with ageing, and here we have focused on the main cognitive and sensory ones. Gerontology is the study of the processes of ageing and of individuals across the life span. It encompasses study of the social, cultural, psychological, cognitive and biological aspects of ageing using distinct study designs, as described in Table 6.1, and specific methodological considerations (see insert box).
Table 6.1. Advantages and disadvantages of different methodological approaches to study ageing
Longitudinal – Data is collected from the same participants repeatedly at different points over time. Advantages: easier to control for cohort effects, as only one group is involved; requires (relatively) fewer participants. Disadvantages: participant dropout rates increase over time; resource intensive to conduct studies over long periods of time; practice effects.
Cross-sectional – Data is collected at a single time point for more than one cohort, with cohorts separated into age groups. Advantages: efficient – all data collection is completed within a relatively short time frame, and studies can be easily replicated. Disadvantages: difficult to match age groups; differences may reflect cohort/historical differences in environment, economy etc.
Sequential longitudinal/sequential cross-sectional – Two or more longitudinal or cross-sectional designs, separated by time. Advantages: repeating or replicating helps separate cohort effects from age effects. Disadvantages: complex to plan; can be expensive.
Accelerated longitudinal – A wide age range is recruited, split into groups, and each group is followed for a few years. Advantages: longitudinal data is collected from the same participants over time; cross-sectional data collection occurs within a shorter time frame. Disadvantages: does not completely avoid cohort effects.
Credit: Claire Gibson
Methodological considerations for ageing studies
There are a number of issues that are important to consider when studying ageing. Many of these occur because it is difficult, or impossible, to separate out the effects of ageing from the effects of living longer or being born at a different time;
• Older people will have experienced more life events. This will be true even if the chances of experiencing something are the same throughout life and not related to age. This means that older people are more likely to have experienced accidents, recovered from disease, or have an undiagnosed condition.
• People of different ages have lived at different times, and thus experienced different social, economic and public health factors. For example, rationing drastically changed the health of people who grew up in the middle of the twentieth century. It is likely that the Coronavirus pandemic will also have both direct and indirect effects in the longer term.
• The older people get, the more variable their paths through life become. It is often found that there is more variability in data from older people, however we also know that environmental factors have a strong effect on behavioural data.
• Non-psychological effects can directly impact on psychology. A good example of this is attention. Often as we age our range of movement becomes limited or slower. Since attention is often shifted by moving the head or eyes, a reduction in the range or speed of motion will directly impact shifts of attention. Of course, not all attention shifts involve overt head or eye movements.
• Generalised effects such as slowing need to be controlled for or considered before proposing more complex or subtle effects. One way to do this is to use analysis techniques such as z-scores, ratios or Brindley plots to compare age groups and to ensure there is always a within age group baseline or control condition.
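As a minimal sketch of the z-score and ratio approaches mentioned in the last point, the example below standardises reaction times within each age group and expresses the cued condition as a proportion of that group’s own baseline. The reaction-time values are invented for illustration.

```python
# Minimal sketch: two ways to compare age groups while controlling for general slowing.
# Reaction times (in ms) are invented for illustration.

from statistics import mean, stdev

young = {"baseline": [420, 450, 430, 460], "cued": [390, 410, 400, 420]}
older = {"baseline": [560, 600, 580, 620], "cued": [540, 560, 555, 585]}

def z_scores(rts):
    """Standardise reaction times within a single condition and age group."""
    m, s = mean(rts), stdev(rts)
    return [(rt - m) / s for rt in rts]

def cueing_ratio(group):
    """Cued RT as a proportion of the group's own baseline (values below 1 = cueing benefit)."""
    return mean(group["cued"]) / mean(group["baseline"])

print("young cueing ratio:", round(cueing_ratio(young), 3))   # ~0.92
print("older cueing ratio:", round(cueing_ratio(older), 3))   # ~0.95
print("older baseline z-scores:", [round(z, 2) for z in z_scores(older["baseline"])])
```

With these invented numbers the two groups show similar proportional cueing benefits despite very different absolute reaction times, the kind of pattern that raw difference scores would obscure.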
Biological basis of ageing
Ageing per se involves numerous physical, biochemical, vascular, and psychological changes which can be clearly identified in the brain. As we age our brains shrink in volume (Figure 6.9), with the frontal cortex area being the most affected, followed by the striatum, and including, to a lesser extent, areas such as the temporal lobe, cerebellar hemispheres, and hippocampus. Such shrinkage can affect both grey and white matter tissue, with some studies suggesting there may be differences between the sexes in terms of which brain areas show the highest percentage of shrinking with ageing. Magnetic Resonance Imaging (MRI) studies allow the study of specific areas of brain atrophy with age and show that decreases in both grey and white matter occur, albeit at different stages of the lifespan (for further information see Farokhian et al., 2018). Shrinkage in grey and white matter results in expansion of the brain’s ventricles in which cerebrospinal fluid circulates (Figure 6.10). The cerebral cortex also thins as we age and follows a similar pattern to that of brain volume loss in that it is more pronounced in the frontal lobes and parts of the temporal lobes.
Brain plasticity and changes to neural circuits
During normal ageing humans, and other animals, experience cognitive decline even in the absence of disease, as explained below. Some of this cognitive decline may be attributable to decreased, or at least disrupted, neuroplasticity. Neuroplasticity refers to the brain’s ability to adapt and modify its structure and functions in response to stimuli – it is an important process during development and thought to underlie learning and memory. In the young, the potential for brain plasticity is high as they undergo rapid learning and mapping of their environment. As we age, this capacity for learning, and therefore plasticity, declines, although it may be argued that we can retain some capacity for learning and plasticity through practice and training-based techniques (see insert box on ‘Strategies to promote healthy cognitive ageing’). Synaptic changes, thought to be a major contributor to age-related cognitive decline, involve dendritic alterations in that dendrites shrink, their branching becomes less complex, and they lose dendritic spines. All in all, this reduces the surface area of dendrites available to make synaptic connections with other neurons and therefore reduces the effectiveness, and plasticity, of neural circuitry and associated cognitive behaviours (for review see Petralia et al., 2014).
Cellular and physiological changes
Certain pathological features, in particular the occurrence of beta amyloid (Aβ) plaques (sometimes referred to as senile plaques) and neurofibrillary tangles, are typically associated with dementia-causing diseases such as Alzheimer’s. Such features are described in detail in the Dementias chapter. However, it is important to note that they also occur with ageing, albeit in smaller amounts and more diffusely located than in disease pathology, and may also contribute to the cell death and disruption in neuronal function seen in ageing.
Other physiological changes which occur with ageing, all of which have been suggested to result in cognitive impairment, include oxidative stress, inflammatory reactions and changes in the cerebral microvasculature.
Oxidative stress is the damage caused to cells by free radicals that are released during normal metabolic processes. However, compared to other tissues in the body, the brain is particularly sensitive to oxidative stress, which causes DNA damage and inhibits DNA repair processes. Such damage accumulates over the lifespan resulting in cellular dysfunction and death. Ageing is associated with a persistent level of systemic inflammation – this is characterised by increased concentration in the blood of pro-inflammatory cytokines and other chemokines which play a role in producing an inflammatory state, along with increased activation of microglia and macrophages. Microglia are the brain’s resident immune cells and are typically quiescent until activated by a foreign antigen. Upon activation, they produce pro-inflammatory cytokines to combat the infection, followed by anti-inflammatory cytokines to restore homeostasis. During ageing, there appears to be a chronic activation of microglia, inducing a constant state of neuroinflammation, which has been shown to be detrimental to cognitive function (Bettio et al., 2017; Di Benedetto et al., 2017). Finally, the ageing brain is commonly associated with decreased microvascular density, vessel thickening, increased vessel stiffness and increased vessel tortuosity (or distortion, twisting) which all result in compromised cerebral blood flow. Any such disruption to cerebral blood flow is likely to result in changes in cognitive function (e.g. see Ogoh 2017).
Neurotransmitter changes
During ageing the brain also experiences changes in the levels of neurotransmitters and their receptors in different regions of the brain largely, but not exclusively, involving the dopamine, serotonin and glutamate systems. Neurochemical changes associated with ageing are important to understand as they may be relevant when considering therapeutic targets aimed at stabilising or enhancing those brain functions which typically deteriorate with age.
• Dopamine is a monoamine neurotransmitter which plays a neuromodulatory role in many CNS functions including executive function, motor control, motivation, arousal, reinforcement and reward. During ageing, dopamine levels have been reported to decline by about 10% per decade from early adulthood onwards and have been associated with declining motor and cognitive performance (Berry et al., 2016; Karrer et al., 2017). It may be that reduced levels of dopamine are caused by reductions in dopamine production, dopamine producing neurons and/or dopamine responsive synapses.
• Serotonin, also known as 5-hydroxytryptamine (5-HT) functions as both an inhibitory neurotransmitter and a hormone. It helps regulate mood, behaviour, sleep and memory all of which decline with age. Decreasing levels of different serotonin receptors and the serotonin transporter (5-HTT) have also been reported to occur with age. Areas particularly affected by loss of serotonin neurones include the frontal cortex, thalamus, midbrain, putamen and hippocampus (Wong et al., 1984).
• Glutamate is the primary excitatory neurotransmitter in the CNS synthesised by both neuronal and glial cells and high levels of glutamate, causing neurotoxicity, are implicated in a number of neurodegenerative disorders including multiple sclerosis, amyotrophic lateral sclerosis, Alzheimer’s disease and schizophrenia. Neurotransmission at glutamatergic synapses underlies a number of functions, such as motor behaviour, memory and emotion, which are affected during ageing. Levels of glutamate are reported to decline with ageing – older age participants have lower glutamate concentrations in the motor cortex along with the parietal grey matter, basal ganglia and the frontal white matter (Sailasuta et al., 2008).
Ageing and genetics
The genetics of human ageing is complex and multifaceted, and rests on the assumption that the duration of lifespan is, at least in part, genetically determined. This is supported by evidence that close family members of a centenarian tend to live longer, and that genetically identical twins have more similar lifespans than non-identical twins. The genetic theory of ageing is based on telomeres, repeated segments of DNA (deoxyribonucleic acid) present at the ends of our chromosomes. The number of repeats in a telomere determines the maximum life span of a cell, since each time a cell divides, multiple repeats are lost. Once telomeres have been reduced to a certain size, the cell reaches a crisis point and is prevented from dividing further; thus the cell dies and cannot be replaced. Many believe this is an oversimplified explanation of the genetics of ageing and that, in reality, a number of genetic factors, in combination with environmental factors, contribute to the ageing process (see reviews for further information – Melzer et al., 2020; Rodriguez-Rodoro et al., 2011).
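The telomere account above can be illustrated with a very small numerical sketch in which a fixed number of repeats is lost at each cell division and division stops once a critical length is reached. All of the numbers are illustrative rather than measured values.

```python
# Toy model of telomere shortening: each division removes some repeats, and the
# cell stops dividing once the telomere falls below a critical length.
# The initial length, loss per division and critical length are illustrative.

def divisions_until_crisis(initial_repeats=10_000, loss_per_division=100, critical_length=4_000):
    repeats, divisions = initial_repeats, 0
    while repeats - loss_per_division >= critical_length:
        repeats -= loss_per_division   # repeats lost at this division
        divisions += 1
    return divisions

# With these numbers the cell line stops after 60 divisions, loosely analogous to the
# finite replicative capacity of human cells.
print(divisions_until_crisis())  # 60
```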
Changes in cognitive systems
The changes in the biological systems and processes above translate into changes across all psychological processes. Some changes are found across all systems, but, as the sections below describe, changes with age are not uniform and there are specific changes in cognitive systems such as memory and attentional control and inhibition.
One of the strongest findings in ageing research is that there is a general slowing of processes (Salthouse 1996). In virtually every task, older people respond more slowly, on average, than younger people, reflecting many of the biological changes described above. This is such a ubiquitous finding that it is important to control or account for slowing before considering any further theories of cognitive change. Furthermore, slowing can have more subtle effects than simple changes in reaction times. If a cognitive process requires more than one step, a delay in processing the first step can mean that the entire processing stream cannot proceed, or information cannot synchronise between different sub-processes (see ‘Methodological considerations’ text box).
Memory
Ageing appears to have greater effects in some types of memory more than others. Memory for personally experienced events (i.e. episodic memory) undergoes the clearest decline with age (Ronnlund et al 2005). Within this, the decline after age 60 seems to be greater for recall tasks that require the participant to freely recall items, compared to tasks where they are asked to recognise whether the items were seen before (La Voie & Light, 1994). In a meta-analysis of studies where participants were asked whether they remembered the context or detail of a remembered event, Koen and Yonelinas (2014) found that there was a significant difference in participants’ recollection of context, but much less reduction in the ability to judge the familiarity of a prior occurrence.
Changes in other types of memory are less clear. Semantic memory, that is the memory for facts and information, shows less decline (Nyberg et al., 2003), as does procedural memory and short-term memory (Nilsson, 2003). Working memory is distinct from short-term memory in that items in memory typically have to be processed or manipulated, and this also declines with age (Park et al., 2002).
When considering these studies on memory it should always be remembered that there is considerable variability in memory performance, even in the domains where studies find consistent evidence of decline. Some of these differences are likely to be due to differences in the rate of loss of brain structure (sometimes termed ‘brain reserve’). Other variability is likely to be due to differences in how well people can cope or find alternative strategies to perform memory tasks. For example, it has been argued that some older people interpret the sense of familiarity or recognition differently to younger people and are more likely to infer they recalled the event. Higher levels of education earlier in life, as well as higher levels of physical or mental activity later in life, are associated with better memory. This variability illustrates the importance of considering participant sampling and cohort differences in ageing research.
Strategies to promote healthy cognitive ageing
Recently, application of behavioural interventions and non-pharmacological approaches has been demonstrated to improve some aspects of cognitive performance and promote healthy cognitive ageing. This is of particular relevance in older age when cognitive performance in particular domains declines. Such approaches, including cognitive training, neuromodulation and physical exercise are thought to improve cognitive health by rescuing brain networks that are particularly sensitive to ageing and/or augmenting the function of those networks which are relatively resilient to ageing. However, although such approaches may be supported by relatively extensive psychological studies, in terms of improvement or stabilisation of cognitive ability, evidence of underlying changes in biological structure/function, which is needed to support long term changes in cognitive behaviours, is more limited.
• Cognitive or ‘brain’ training – a programme of regular mental activities believed to maintain or improve cognitive abilities. It is based on the assumptions that practice improves performance, that similar cognitive mechanisms underlie a variety of tasks, and that practising one task will improve performance in closely related skills and tasks. Such training encompasses both cognitive stimulation and strategy-based interventions; it is typically administered via a computer or other electronic medium and aims to restore or augment specific cognitive functions via challenging cognitive tasks that ideally adapt to an individual’s performance and become progressively more difficult. Recent meta-analyses of randomised controlled trials of cognitive training in healthy older adults and patients with mild cognitive impairment (MCI) report positive results on the cognitive functions targeted (Basak et al., 2020; Chiu et al., 2017).
• Neuromodulation – non-invasive brain stimulation techniques, such as transcranial direct current stimulation (tDCS) and repetitive transcranial magnetic stimulation (rTMS), have been shown to moderately improve cognitive functioning in older people (Huo et al., 2021) and to improve cognitive performance in patients with MCI (Jiang et al., 2021). tDCS is approved as a safe neuromodulatory technique which delivers a weak electrical current via electrodes placed on the scalp to directly stimulate cortical targets. rTMS uses an electromagnetic coil to deliver a magnetic pulse that can be targeted at specific cortical regions to modulate neuronal activity and promote plasticity. Although some studies have combined cognitive training and neuromodulation approaches to enhance cognitive performance, there is limited evidence of enhanced performance beyond that reported for either approach used in isolation.
• Physical activity – structured physical activity, in the form of moderate to vigorous aerobic exercise, has been reported to preserve and enhance cognitive functions in older adults. In particular, it moderately improves global cognitive function in older adults and improves attention, memory and executive function in patients with MCI (Erickson et al., 2019; Song et al., 2018). However, the mechanisms by which exercise has these effects are not fully understood and it is likely that the mechanisms of exercise may vary depending on individual factors such as age, affective mood and underlying health status.
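The adaptive element described in the cognitive training item above, where tasks become harder as performance improves, is often implemented with a simple staircase rule. The sketch below is purely illustrative: the function names, parameters and the two-up/one-down rule are assumptions made for this example, not taken from any published training package.

```python
import random

def run_adaptive_session(p_correct_at, start_level=1, n_trials=40):
    """Toy two-up/one-down staircase: difficulty rises after two successive
    correct responses and falls after any error, tracking the level at
    which the participant succeeds most of the time."""
    level, streak, history = start_level, 0, []
    for _ in range(n_trials):
        correct = random.random() < p_correct_at(level)
        history.append((level, correct))
        if correct:
            streak += 1
            if streak == 2:          # two correct in a row -> make task harder
                level, streak = level + 1, 0
        else:                        # any error -> make task easier
            level, streak = max(1, level - 1), 0
    return history

# Hypothetical participant whose accuracy falls as difficulty rises.
session = run_adaptive_session(lambda lvl: max(0.1, 0.95 - 0.1 * lvl))
print(session[-5:])   # final trials hover around the participant's threshold
```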
Attentional control
One example of slowing is that responses to a cue become slower with age. The size of the age difference in response times depends on the type of cue. A cue can be used to create an alerting response, which increases vigilance and task readiness; older people are slower to show this alerting response (Festa Martino et al., 2004). A cue can also symbolically direct attention to a specific location. One commonly used symbolic cue is an arrow pointing to a particular location, and several studies have shown that older people are comparatively slower at responding to this type of cue (see Erel & Levy, 2016 for an extensive and useful review). Cues can also capture attention automatically, for example a loud noise or bright light. In contrast to alerting and symbolic cues, numerous studies show that older people maintain this automatic orienting response (e.g. Folk & Hoyer, 1992). These attentional effects are likely to be due, in part, to other effects of ageing, such as sensory change (see the section on sensory changes with age). For example, in the study by Folk and Hoyer (1992), older people were slower to respond to arrow cues only when the cues were small, not when they were larger. This illustrates that the effects of changes in perception need to be considered when interpreting ageing effects on attention.
Attentional processes are also often measured with visual search tasks, in which participants search for a particular target among distracters. The target might be specified in advance (‘search for the red H’) or be defined by its relation to the distracters (‘find the odd one out’). The distracters can be similar to the target, typically leading to slower search (Duncan & Humphreys, 1989), or sufficiently different that the target ‘pops out’ (Treisman & Gelade, 1980). The distracters might share no features with the target, allowing people to search on the basis of a single feature (e.g. a red H among blue As), or share some features with it, which is usually termed conjunction search (e.g. a red H among red As and blue Hs). Visual search performance can be used to test multiple attentional mechanisms such as attentional shifting, attention to different features, response times and attentional strategies.
Older people show slower performance than younger people for conjunction search, compared to relatively preserved performance in feature-based search (Erel & Levy 2016). This can be considered analogous to the differences in cueing above. The quick performance in feature search is often considered to be automatic and involuntary (Treisman & Gelade 1980) and the slower performance in conjunction search to require more complex and voluntary processes. Older people have slower performance when they are required to make more attentional shifts to find the target (Trick & Enns 1998). However, the role of declines in other processes such as discrimination of the target and distractors, inhibition and disengagement from each location and general slowing are also likely to play a part.
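The difference between feature and conjunction search is often summarised as a search slope: the extra response time added for each additional item on screen, with older adults typically showing steeper conjunction slopes. The snippet below is a toy illustration only; the reaction times are invented for this example, not data from the studies cited above, and the slope is estimated with an ordinary least-squares fit.

```python
import numpy as np

set_sizes = np.array([4, 8, 16, 32])            # number of items on screen

# Hypothetical mean reaction times in ms: feature search is nearly flat,
# conjunction search grows with set size, more steeply for older adults.
rt_feature_young     = np.array([420, 425, 430, 428])
rt_conjunction_young = np.array([450, 520, 650, 900])
rt_conjunction_older = np.array([520, 640, 880, 1350])

def search_slope(set_sizes, rts):
    """Slope (ms per item) of a straight line fitted to RT versus set size."""
    slope, _intercept = np.polyfit(set_sizes, rts, 1)
    return slope

for label, rts in [("feature, young", rt_feature_young),
                   ("conjunction, young", rt_conjunction_young),
                   ("conjunction, older", rt_conjunction_older)]:
    print(f"{label}: {search_slope(set_sizes, rts):.1f} ms/item")
```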
Inhibition
In contrast to directing attention towards a target, we also need to ignore, avoid or suppress irrelevant stimuli and actions. In psychology this is often referred to as ‘inhibition’ and it can refer to the ability to ignore distracting items or colours on screen, or to resist the urge to make a specific repeated or strongly cued action. In ageing research, it is often measured by the Stroop task (where participants must name the ink colour of a word while ignoring the word itself), a go/no-go task (where participants must press a button in response to a target on most trials and withhold that press on a few trials with a different stimulus), or flanker and distracter tasks (where participants’ performance with and without a distraction is measured).
Hasher and Zacks (1988) proposed that an age-related decline in inhibition underlies many differences in performance between older and younger people. Deficits can indeed be found in many tasks that appear to depend on inhibition. For example, Kramer et al. (2000) asked people to search for a target item among distracters and found that older people’s reaction times were more affected (slowed) when a particularly salient (visible) distracter was also on the screen. Older people also show weaker ‘negative priming’ than younger people. Negative priming is the slowing of responses that occurs when a distracter on a previous trial becomes the target on the current trial; this carry-over effect is attributed to the distracter having been inhibited, so if inhibition is reduced then the negative priming effect is reduced.
On the other hand, some of the findings of inhibitory deficits can be attributed to other factors. For example, differences in Stroop task performance can be partly due to age-related differences in the speed of processing colours and words (Ben-David & Schneider, 2009), and to inhibition of responses rather than the sensory profile of the distracter itself (Hirst et al., 2019). A meta-analysis by Rey-Mermet and Gade (2018) suggested that older people’s inhibitory deficit is likely to be limited to the inhibition of dominant responses.
Sensory changes with age
Most of us will have noticed someone using reading glasses, or turning up the TV to better hear the dialogue in a favourite film or drama. Although these are commonly assumed to be the main effects of ageing, the sensory changes with age are varied and some are quite subtle.
Vision
With age there are changes both in the eye and in the visual pathways of the brain. A reduction in the eye’s ability to focus at near distances means that almost everyone will need reading glasses at some point. The lens also becomes thicker and yellows, reducing the amount of light reaching the retina and, in turn, the signal passed along the optic nerve to the brain. This affects how well we can see both colour and shape. For colour vision, the yellowing of the lens has a greater effect on the shorter wavelengths of light, i.e. the blues and greens (Ruddock, 1965; Said, 1959). So, for example, when matching red and green to appear the same brightness, more green has to be added for older, compared to younger, participants (Fiorentini et al., 1996).
For shape perception, the reduction in the amount of light passing through the eye reduces people’s ability to resolve fine detail (Weale, 1975; Kulikowski, 1971). Furthermore, neural loss and decay with age also contribute to declining visual ability, especially for determining the shape of objects. In general, for coarser patterns (lower spatial frequencies), the differences in performance between older and younger people are likely to be due to cortical changes. For finer detailed patterns (high spatial frequencies) the loss is more likely to be due to optical factors. Note that glasses correct for acuity, which is mostly driven by the ability to detect and discriminate between small and fine detailed patterns, but visual losses are far more wide-ranging and subtle than this.
Older people also have a reduced ability to see motion. Older individuals tend to misjudge the speed of moving items (Snowden and Kavanagh, 2006), and the minimum speed required to discriminate the direction of motion is higher for older, compared to younger, people (Wood & Bullmore, 1995). On the other hand, other studies have found that age-related deficits in motion processing are absent or specific to particular stimuli (Atchley & Andersen, 1998), and there are also reports of improvements in performance with age. For instance, older adults are quicker than younger adults at discriminating the direction of large moving patterns (Betts et al., 2005; Hutchinson et al., 2011). This illustrates an important point about ageing and vision: many age-related deficits in motion tasks are not due to deficits in motion processing per se, but to sensitivity deficits earlier in the processing stream. When presenting stimuli to older adults, it is worth noting that slight changes in the details of the stimuli (for instance, their speed or contrast) might make dramatic differences in visibility for older, compared to younger, adults (Allen et al., 2010).
Hearing
As with vision, the effects of age on hearing include declines in the ear as well as in the brain. Also similar to vision there is a reduction in sensitivity to high frequencies. In hearing, high frequencies are perceived as high notes. The ear loses sensitivity with age and this loss starts with the detection of high tones and then goes on to affect detection of low tones (Peel and Wingfield 2016).
Although a loss of the ability to hear pure tones is very common in older people, one of the most commonly reported issues with hearing is a loss of ability to discriminate speech when it is in background noise (Moore et al 2014). This causes trouble with hearing conversations in crowds, as well as dialogue in films and TV. The deficit is found both in subjective and objective measures of hearing in older people. Interestingly, there is an association between ability to hear speech in noise and cognitive decline (Dryden et al 2017). One suggestion is that this reflects a general loss across all systems of the brain, but another interesting suggestion is that the effort and load of coping with declining sensory systems causes people to do worse on cognitive tasks.
Touch
Touch perception is perhaps the least well understood sense when it comes to ageing. We use our hands to sense texture and shape, either pressing or stroking a surface. Beyond this, our entire body is sensitive to touch to some degree; for example, touch and pressure on the feet affect balance, and touch on our body tells us whether we are comfortable. We know that ageing affects the condition of the skin as well as the ability to control our movements. Skin hydration, elasticity and compliance are all reduced with increased age (Zhang & Duan, 2018), and these changes in the skin alter how well it can sense differences in texture and shape. There are also changes in the areas of the brain that process touch, and in the pathways that connect the skin and the brain (McIntyre et al., 2021), both affecting basic tactile sensitivity (Bowden & McNulty, 2013; Goble et al., 1996).
Taste and smell
Taste and smell are critically important for quality of life and health and also show decline with ageing. Loss of appetite is a common issue for older people, and loss of smell and taste contribute to this. Loss of taste or smell is unpleasant at any age, making food unpalatable, but also making it difficult to identify when food is ‘off’ or when dirt is present. The change in smell and taste is gradual over the lifespan, but by the age of 65 there are measurable differences in the ability to detect flavours or smells (Stevens, 1998).
Key Takeaways
• Ageing causes natural and inevitable changes in both brain structure and function – however, we don’t yet fully understand the rate of change and the processes involved.
• Changes to the brain which may affect cognitive and sensory functions occur at molecular, synaptic and cellular levels – some, but not all, of these changes are driven by genetic factors.
• Understanding the mechanisms of ageing is important as this may identify approaches to try and alleviate age-related decline in cognition and sensory functions, along with identifying psychological and lifestyle factors which may help promote healthy cognitive ageing.
About the Authors
Professor Claire Gibson obtained a BSc degree in Neuroscience from the University of Sheffield and her PhD from the University of Newcastle. She then gained a number of years’ experience researching the mechanisms of injury following CNS damage – initially focusing on spinal cord injury and moving on later to cerebral stroke. She is now a Professor of Psychology at the University of Nottingham whose research pursues the mechanisms of damage and investigates novel treatment approaches following CNS disorders, focusing primarily on stroke and neurodegeneration. She regularly teaches across the spectrum of biological psychology to both undergraduate and postgraduate students.
Professor Harriet Allen received her BSc and PhD in Psychology from the University of Nottingham. She then worked in Montreal, at McGill University, and the University of Birmingham, UK before returning to the University of Nottingham where she is now a Professor of Lifespan Psychology. She researches how sensory processes interact with attention over the lifespan and teaches research methods, cognitive psychology and perception to undergraduates and postgraduates.
Learning Objectives
• To gain an overview of the symptoms and main causes of dementia along with approaches used to diagnose dementia
• To understand the symptoms and pathological consequences of the main causes of dementia – focusing on Alzheimer’s disease, vascular dementia and dementia with Lewy bodies
• To gain an understanding of the various pharmacological and psychological approaches to treat Alzheimer’s disease
Dementia is a syndrome associated with a progressive decline in brain functioning, most commonly affecting memory. Symptoms of dementia can be wide-ranging and have huge individual variability which may include, but are not limited to:
• loss of memory
• apathy
• difficulties in language
• difficulties in judgement
• difficulties in motor control
• reduced speed of (cognitive) processing
A person with dementia may also experience paranoia, hallucinations, and find it challenging to make decisions and live independently. There are currently no cures for dementia. However, depending on the type of dementia, and the underlying cause, treatments may exist which can stabilise symptoms and slow the progression of the disease.
There are many different causes of dementia (see Table 6.2) with Alzheimer’s disease (AD) being the most common (see Figure 6.11). Whilst some symptoms of dementia, such as memory loss, might be expected with the normal ageing process, dementia is a syndrome in which the deterioration in cognitive function is beyond that which might be expected from the usual consequence of biological ageing and symptoms are usually severe enough to interfere with daily activities.
Table 6.2. Common causes of dementia
Type of dementia – Brief description of cause
Alzheimer’s Disease – Progressive degeneration of brain tissue
Vascular Dementia – Blockage or reduction of blood flow to the brain
Mixed Dementia – Several types of dementia contribute to symptoms
Dementia with Lewy Bodies – Abnormal aggregates of protein that develop inside neurons
Frontotemporal Dementia – Progressive degeneration of the temporal and frontal lobes of the brain
Parkinson’s Disease with Dementia – Development of dementia symptoms as the disease progresses
Other – May include conditions such as Creutzfeldt-Jakob disease, depression, multiple sclerosis and Down’s syndrome
Credit: Claire Gibson
Risk factors for dementia
Although age is the strongest risk factor known for dementia it does not occur as an inevitable consequence of biological ageing. Additionally, dementia does not exclusively affect older people – young onset dementia, typically defined as onset of symptoms before the age of 65, accounts for up to 9% of all cases of dementia. There are various risk factors which have been identified to increase the risk of developing dementia, including:
• smoking
• excessive alcohol use
• low levels of physical activity
• high cholesterol
• atherosclerosis
• social isolation
• obesity
• mild cognitive impairment (MCI)
There are also a number of known genetic risk factors for developing dementia, in particular Alzheimer’s Disease (see below). It is likely that the development of dementia occurs due to a combination of various risk factors – some of which are modifiable (e.g. diet, physical activity) and some which are not (e.g. genetic).
Mild Cognitive Impairment
MCI is typically an early stage of memory loss, or other cognitive ability loss, such as language or visual/spatial perception. Individuals diagnosed with MCI are able to maintain the ability to live independently and perform most activities of daily living. Importantly, people with MCI exhibit a decline in memory and/or other cognitive areas beyond a level we would expect to see during normal ageing. MCI is not a type of dementia but it is associated with a higher risk of developing dementia, in particular AD (Boyle et al., 2006).
Alzheimer’s Disease
AD is a neurodegenerative disorder that leads to cognitive decline and memory loss. AD is characterised pathologically by the accumulation of extracellular beta amyloid (Aβ) plaques, neurofibrillary tangles, and neuroinflammation.
First described by Alois Alzheimer in 1907, AD is the most common form of dementia, accounting for between 60-80% of total dementia cases. Currently it is estimated that 30 million people worldwide have AD, which is predicted to rise to up to 90 million by 2050. In the UK there are over 500,000 people living with AD and, if the prevalence remains the same, this is forecast to rise to over 1 million by 2025 and 2 million by 2050 (Prince et al., 2014). The risk of AD increases with age affecting 1 in 20 people under the age of 65, 1 in 14 over the age of 65 and 1 in 6 over the age of 80. In England and Wales, AD was one of the leading causes of death accounting for over 10% of deaths registered in 2021 (Office of National Statistics, 2022).
There are two types of AD: early onset (familial, EOAD) or late onset (sporadic, LOAD), which are diagnosed before or after the age of 65 respectively. EOAD is rare and accounts only for up to 5% of all AD cases. It is thought to be caused by mutations in one of three genes: amyloid precursor protein (APP), presenilin 1 (PSEN1) or presenilin 2 (PSEN2), that lead to increased production of Aβ plaques. No single gene mutation is thought to be the cause of the more common LOAD, but it is suggested to be driven by a complex interplay between genetic and environmental factors. However, genetic mutations have been identified in LOAD that increase the risk of developing AD, the strongest being the apolipoprotein E4 gene (APOE4). More recently, genome-wide association studies implicate genes associated with the innate immune system and microglia (the resident immune cells of the brain), including the phagocytic receptors CD33 and TREM2 (triggering receptor expressed on myeloid cells 2) (Griciuc & Tanzi, 2021).
Symptoms
AD develops slowly over several years, sometimes decades, so the symptoms are not always obvious at first and also depend on the stage of the disease. The predominant symptoms of AD are progressive and irreversible impairment in memory and cognitive function. One of the first signs of cognitive decline in AD is the generalised disruption of declarative memory, including the inability to learn and remember new facts (semantic memory deficit) and recall past experiences (episodic memory deficit) and this is often characterised by an abnormally rapid rate of forgetfulness (Holger, 2013). Symptoms that occur early in the disease include forgetting the names of objects and places, misplacing items (losing your house keys), and repetition such as asking the same question several times. Deficits in episodic memory are one of the best indicators of early AD compared to other forms of dementia and have been reported in the pre-clinical stage of the disease.
As AD progresses, other cognitive deficits manifest, including disruptions in language (aphasia), spatial orientation (e.g. judging distances), attention and executive functions. In contrast, procedural memory (habits and skills) remains relatively unaffected until the late stages of the disease when there are significant problems with both short- and long-term memory.
AD is also associated with various behavioural and psychological symptoms including depression, anxiety, apathy, irritability, aggression, disinhibition and reduced curiosity. These changes all form part of the behavioural and psychological symptoms of dementia (BPSD), which are commonly seen in people with AD in either the early or late stages of the disease but which can fluctuate throughout its progression. Research also indicates that BPSD might contribute to cognitive decline as the disease progresses (Gottesman & Stern, 2019).
Disturbances in sleep-wake patterns are also a common feature, with people with AD displaying increased sleepiness during the day and increased wakefulness at night. AD patients often exhibit a shift in their body clock, tending to wake up later in the day and going to sleep later than non-demented controls. Similarly, circadian shifts in eating patterns are observed in people with AD who show a tendency to have their biggest meal during breakfast and display increased preference for sweet food. However, considerable weight loss is also a common symptom which can lead to frailty and weakness. Over time, the ability to perform everyday activities becomes increasingly impaired and eventually leads to permanent dependence on caregivers.
Neuropathology
Macroscopic features
Several pathological features can be seen macroscopically in the brain of someone with AD (DeTure & Dickson, 2019). These features get worse with disease progression and can be visualised using imaging (e.g. magnetic resonance imagining, MRI) and post-mortem analysis. For example, cortical atrophy (thinning), which is characterised by enlarged sulcal spaces and atrophy of the gyri, is seen prominently in the frontal and temporal cortices of people with AD. As a result of this atrophy there is a reduction in brain weight and ventricular enlargement (see Figure 6.12). The hippocampus, a crucial region for learning and memory, also shows atrophy thought to be due to neuronal loss. However, while these features suggest that someone has AD, they can sometimes be seen in other dementias and also in clinically normal people.
Microscopic features
The key neuropathological hallmarks that define AD are the presence of Aβ plaques (or senile plaques) and neurofibrillary tangles (DeTure & Dickson, 2019). It is worth noting that the presence of such plaques and tangles is not unique to AD, as they are also seen during the ageing process, but their density and location are distinct in AD.
These features are initially located in temporal lobe structures, e.g. the hippocampus and entorhinal cortex, but can spread to other areas as the disease progresses. Aβ plaques consist of insoluble aggregates of Aβ that are found in the brain parenchyma. The genetic mutations seen in people with EOAD affect the processing of amyloid precursor protein (APP), which subsequently leads to a build-up of Aβ plaques. APP is processed by three enzyme complexes known as α, β and γ secretase. APP is normally cleaved by α-secretase then γ-secretase but, in people with EOAD, APP is processed by β- and γ-secretase, which results in different species of Aβ being produced that are far more prone to aggregation.
The other characteristic sign of AD is the presence of neurofibrillary tangles composed of hyperphosphorylated tau. Normally, tau’s role is to regulate elements of the microtubule cytoskeleton, including stabilisation and facilitation of axonal transport. In AD, tau becomes hyperphosphorylated and forms neurofibrillary tangles inside neurons. This disturbs microtubule structure, causing major problems in neuronal function and ultimately leading to neuronal cell death.
Other features of AD pathology include synaptic loss that precedes neuronal loss and strongly correlates with cognitive decline (Terry et al., 1991). There is also an inflammatory response that is observed in the brains of people with AD, including microglia and astrocyte activation around Aβ plaques that is thought to contribute to disease pathogenesis.
Diagnosis
Currently there is no simple and reliable test for diagnosing AD. If a person is suspected to have AD, their cognitive ability will be evaluated using tests that assess memory, concentration and attention, language and communication skills, orientation, and visual and spatial abilities.
The most commonly used test to measure cognitive impairment is the Mini Mental State Exam (MMSE), which was first introduced in 1975 by Marshal Folstein and colleagues. The MMSE is a 30-point assessment involving a series of questions and tests that each score points if answered correctly. Specifically, the MMSE measures short-term memory (e.g. memorising an address and recalling it a few minutes later), attention and concentration (e.g. spelling a simple word backwards), language (e.g. identifying common objects by name), orientation to time and place (e.g. knowing where you are, and the day of the week) and comprehension and motor skills (e.g. copying a slightly complicated shape such as a pair of intersecting pentagons). Scores of 24 or higher generally indicate normal cognition, while lower scores can indicate mild (19-23), moderate (10-18) or severe (9 or below) cognitive impairment. The MMSE can therefore be used to indicate how severe a person’s symptoms are and, if repeated, to assess changes in cognitive ability and how quickly the AD is progressing. However, the onset of cognitive symptoms indicates that severe neurodegeneration has already taken place, so diagnosis at an earlier stage is needed.
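To make the scoring bands concrete, here is a minimal sketch (a hypothetical helper, not part of any clinical software) that maps a raw MMSE total onto the conventional severity categories described above.

```python
def mmse_category(score):
    """Classify a 0-30 MMSE total into the conventional severity bands."""
    if not 0 <= score <= 30:
        raise ValueError("MMSE scores range from 0 to 30")
    if score >= 24:
        return "normal cognition"
    if score >= 19:
        return "mild cognitive impairment"
    if score >= 10:
        return "moderate cognitive impairment"
    return "severe cognitive impairment"

for s in (28, 21, 14, 7):
    print(s, "->", mmse_category(s))
```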
Furthermore, while cognitive impairment is a symptom of AD, several other conditions associated with dementia (e.g. vascular dementia) can lead to cognitive decline and a reduction in MMSE score, so in order to determine the likelihood of AD, patients might also undergo a brain scan. Neuroimaging techniques such as MRI have dramatically advanced the ability to diagnose people with AD at an earlier stage (Kim et al., 2022). Structural MRI is used to detect changes in brain structure such as cerebral atrophy and ventricular enlargement. Positron emission tomography (PET) imaging can detect other characteristic hallmarks of AD such as brain hypometabolism, characterised by decreased brain glucose consumption, and Aβ burden, but these types of scans are more commonly used in research rather than as a clinical diagnostic tool.
Treatments
Despite large scientific research efforts, there is no cure for AD and treatment options are limited, with a few pharmacological treatments and non-pharmacological interventions available. Examples of non-pharmacological therapies to improve memory, problem-solving skills, mood and wellbeing include cognitive stimulation therapy, cognitive rehabilitation and reminiscence/life story work (see insert box). In the UK there are four pharmacological treatments licenced for AD: the three acetylcholinesterase (AChE) inhibitors donepezil (Aricept), rivastigmine (Exelon) and galantamine (Reminyl), and the N-methyl-D-aspartate (NMDA) receptor antagonist memantine (Namenda). However, these drugs only provide symptomatic effects by reducing the severity of common cognitive symptoms, and while they can increase the quality of life, they do not alter the course and progression of the disease. There are also some drugs that might be prescribed for the symptoms of BPSD, including the antipsychotic medicines risperidone or haloperidol, or antidepressants if depression is suspected as a cause of anxiety. While there are no disease-modifying treatments licenced in the UK, in 2021 the United States Food and Drug Administration (FDA) approved the use of aducanumab (Aduhelm), a monoclonal antibody designed to bind and eliminate aggregated Aβ, for treatment in AD, although there are still uncertainties around the benefits it may bring. Therefore, effective interventions to halt or reverse the neurodegeneration seen in AD are still needed.
Psychological approaches for dementia
Psychological approaches are aimed at improving cognitive abilities (e.g. cognitive training/stimulation), enhancing emotional well-being (e.g. activity planning, reminiscence therapy), reducing behavioural symptoms (e.g. music therapy) and promoting everyday functioning (e.g. occupational therapy). Whilst such approaches do not prevent or delay the progression of the underlying cause of dementia, they can improve the quality of life for the patient and their caregivers (Logsdon et al., 2007; Woods et al., 2018). Some specific examples:
• Cognitive Training/Stimulation – adapted from rehabilitation programmes designed for individuals with neurological disorders (e.g. stroke, traumatic brain injury), its goal is to improve memory, attention and general cognitive function. It typically involves strategies such as memory training, general problem solving (including games and puzzles), use of mnemonic devices and/or use of external memory aids such as notebooks and calendars.
• Reminiscence Therapy – involves discussing events and experiences from an individual’s past aiming to stimulate memories, mental activity and improve well-being. It is usually supported by external aids such as photographs, music, objects and may involve direct discussion with the individual or involve a wider family or social group.
• Music/Art Therapy – aimed at improving the mood, alertness and engagement of individuals with dementia. Through engagement with music and/or art this can help trigger memories, stimulate communication and build confidence – all of which impact positively on the quality of life for individuals with dementia. This type of approach allows for self-expression and engagement of individuals which has been shown to reduce agitation and distressing behaviour.
Cholinesterase inhibitors
Acetylcholine (ACh) is a neurotransmitter produced in cholinergic neurones that has a role in memory, thinking, language and attention. In AD there is a loss of cholinergic neurones, particularly in the hippocampus, cortex and amygdala, which leads to a reduction in ACh. The enzyme AChE breaks down ACh, and AChE inhibitors (donepezil, rivastigmine and galantamine) are therefore believed to treat the cognitive symptoms of AD by increasing the levels of ACh in the brain. These drugs are used to treat the symptoms of mild to moderate AD and can lead to an improvement in thinking, memory, communication or day-to-day activities. In some people a noticeable improvement is not seen, but their symptoms do not worsen as quickly as expected. Some common side-effects of AChE inhibitors are diarrhoea, feeling or being sick, trouble sleeping, muscle cramps and tiredness.
Memantine
Memantine is the most recent drug to be approved for the treatment of AD in the UK. Pharmacologically, it is a non-competitive antagonist of the NMDA receptor. In AD, NMDA receptor over-activity due to an excess of glutamate is thought to result in neuronal cell death as well as calcium-dependent neurotoxicity, therefore memantine is thought to prevent these toxic effects of glutamate and reduce the symptoms of AD. Memantine is recommended for people with severe AD, or those with moderate AD who are unable to use AChE inhibitors, but is often used in combination with these drugs. Common side effects include drowsiness, dizziness, constipation, headaches and shortness of breath.
Vascular dementia
Vascular dementia is the second commonest cause of dementia after AD and it occurs as a consequence of reduced blood flow to the brain. All cells within the brain require a constant supply of oxygen and nutrients in order to function and these are delivered via the blood supply within the brain’s vascular network. Any interruption or reduction in the blood flow within the brain, for example as a consequence of stroke, can result in impaired function of brain cells, cell death and disruption of cognitive and motor processes.
The symptoms of vascular dementia vary considerably between individuals because they depend on the location of the damage, and they may develop suddenly, for example following a stroke, or more gradually, as with small vessel disease. Some symptoms of vascular dementia may be similar to those of other types of dementia; however, although memory loss is typical of the early stages of AD, it is not usually the main early symptom of vascular dementia. The most common cognitive symptoms in the early stages of vascular dementia include problems with planning, organising and decision making, slower speed of cognitive processing, inattention and short periods of confusion. There are three main types of vascular dementia:
• Subcortical vascular dementia – reported to be the most common type of vascular dementia, it occurs as a consequence of disease of the very small arteries that lie within subcortical regions of the brain, termed small vessel disease (Tomimoto, 2011). Over time the walls of these vessels thicken, narrowing the vessel lumen and reducing blood flow (Figure 6.14), which produces areas of brain damage termed ‘infarcts’. Subcortical structures of the brain are important for processing complex activities such as memory and emotions. Subcortical vascular dementia can be distinguished from AD because it is associated with more extensive white matter infarcts and less severe atrophy of the hippocampus.
• Multi-infarct dementia – occurs when an individual experiences a series of mini-strokes, sometimes referred to as transient ischaemic attacks. Such mini-strokes cause a temporary reduction in blood flow to the brain and, while the patient may only experience temporary symptoms at the time of the mini-stroke, they can result in the generation of infarcts. Over time, if a number of infarcts develop, the cumulative damage may be sufficient for the individual to develop symptoms of dementia (McKay and Counts, 2017).
• Post-stroke dementia – about 20% of individuals who experience an ischaemic stroke will develop dementia within the following 6 months. An ischaemic stroke is caused by the presence of a clot in a blood vessel which reduces blood flow within that cerebral blood vessel, resulting in tissue loss and brain dysfunction (Mijajlović et al., 2017). Factors which increase the risk of cardiovascular disease and ischaemic stroke, such as hypertension and high cholesterol, also increase the risk of cognitive decline post-stroke.
Dementia with Lewy Bodies
Dementia with Lewy bodies (DLB) is a progressive disease associated with abnormal deposits of a protein called alpha-synuclein in neuronal and non-neuronal cells within the brain (Outerio et al., 2019). These deposits, termed Lewy bodies after F.H. Lewy, the German doctor who first identified them, affect neurotransmitter functioning, in particular ACh and dopamine, which in turn disrupts cognitive functioning, movement, behaviour and mood. DLB causes a range of symptoms, some of which are shared with AD and some with Parkinson’s disease, and as a result DLB is commonly misdiagnosed (see insert box for methods used to diagnose dementia). However, symptoms more commonly associated with DLB than with other causes of dementia include sleep disturbances, visual hallucinations and motor symptoms. Having a family member with DLB may also increase a person’s risk, though DLB is not considered a genetic disease. Variants in three genes, APOE, synuclein alpha (SNCA) and glucocerebrosidase (GBA), have been associated with an increased risk, but for the majority of DLB cases the cause is unknown.
Testing for dementia
There is no single test for dementia but medical doctors use information from a variety of approaches (listed below) to determine if an individual is experiencing dementia and, if so, what the underlying cause is. Understanding the underlying cause of a patient’s dementia is important as it will inform treatment approaches and determine the likely progression of the disease and associated symptoms.
• Medical history to ascertain how any symptoms are affecting daily life along with ensuring any other existing medical conditions (e.g. hypertension) are being treated appropriately.
• Tests of cognitive ability – typically involve neuropsychological tests of memory, attention, problem solving and awareness of time and place.
• Blood tests to check for other conditions which may be causing symptoms which mimic those seen in dementia. Such tests may typically check the function of the liver, kidneys and thyroid.
• Brain scans can detect signs of brain damage which may help identify the underlying cause of dementia. For example, MRI scans can provide detailed information about blood vessel damage that might indicate vascular dementia, or show atrophy in specific brain areas: hippocampal atrophy is a strong indicator of AD, whereas atrophy in the frontal and temporal lobes is more typical of frontotemporal dementia. Other types of scan, e.g. CT scans, may be used to rule out the presence of a brain tumour. Clinical research studies make greater use of scans such as PET to identify markers of interest, e.g. glucose metabolism, in relation to disease progression and/or the evaluation of potential new therapeutics. The majority of individuals are unlikely to receive a brain scan if the various other tests and assessments show that dementia is a likely diagnosis.
Other dementias
Mixed dementia may be diagnosed when a person has more than one underlying cause of dementia – most commonly this is co-occurrence of AD and vascular dementia, although other combinations are possible, such as AD and DLB. Mixed dementia tends to be more common in older age groups (over 75 years of age), and is reported to account for 10% of all dementia diagnoses.
Frontotemporal dementia (FTD) is a rare form of dementia sometimes referred to as Pick’s disease or frontal lobe dementia. It typically occurs at a younger age than other forms of dementia, with 60% of cases occurring in people aged 45 to 64 years old. FTD occurs as a consequence of selective degeneration within the frontal and temporal lobes. In the early stages of FTD individuals tend to display changes to their personality and behaviour and/or aphasia. Aphasia is when a person has difficulty with their language or speech and, in FTD, is usually caused by damage to the left temporal lobe. Compared to patients with disorders such as AD, patients with FTD tend to have good memory performance in the early stages, although memory does become progressively worse as the disease progresses.
Key Takeaways
• Dementia is a complex syndrome with different underlying causes and huge variability in symptom presentation.
• Symptoms associated with dementia typically involve cognitive processes, in particular memory, but can also affect other behaviours, motor control, sleep and mood.
• There are no cures for dementia which is progressive and symptoms worsen over time. However, some of the more common causes of dementia, i.e. AD, do have treatments which have been shown to be effective in slowing and stabilising the progression of symptoms.
• Psychological approaches for dementia are also important as they have been demonstrated to improve wellbeing and quality of life not only for the patient but also for those involved in caring for dementia patients.
• A huge amount of research is invested in further understanding the pathology underlying dementia and identifying novel treatment approaches.
About the Authors
Professor Claire Gibson obtained a BSc degree in Neuroscience from the University of Sheffield and her PhD from the University of Newcastle. She then gained a number of years’ experience researching the mechanisms of injury following CNS damage – initially focusing on spinal cord injury and moving on later to cerebral stroke. She is now a Professor of Psychology at the University of Nottingham whose research pursues the mechanisms of damage and investigates novel treatment approaches following CNS disorders, focusing primarily on stroke and neurodegeneration. She regularly teaches across the spectrum of biological psychology to both undergraduate and postgraduate students.
Dr Catherine Lawrence obtained a BSc degree in Pharmacology and her PhD from the University of Manchester. She then gained over two years’ experience in the commercial sector working as a Clinical Research Associate in the pharmaceutical industry but, returned to academic research at the University of Manchester on a post-doctoral position (funded by AstraZeneca). In 2004 she secured a position as a Senior Research Scientist at AstraZeneca, but in 2005 was awarded an RCUK fellowship at the University of Manchester. In 2010 she became a lecturer and a senior lecturer in 2015. Her current research interests are Alzheimer’s disease and stroke and in particular understanding how diet can influence these disorders and the involvement of inflammation. | textbooks/socialsci/Psychology/Biological_Psychology/Introduction_to_Biological_Psychology_(Hall_Ed.)/06%3A_Dysfunction_of_the_nervous_system/6.05%3A_Dementias.txt |
Learning Objectives
To gain an understanding of the following:
• The definition of a placebo effect
• The biological and psychological mechanisms of the placebo effect
• The importance of placebos in clinical trial design and their ethical considerations
• The contribution of placebos to our understanding of complex disorders i.e. pain, depression
Definition of a placebo
The term placebo, derived from the Latin for ‘I shall please’, is used in modern medicine to describe a dummy substance or other treatment that has no obvious or known direct physiological effect. The most common examples of a placebo include an inert tablet (e.g. sugar pill) or injection, via intramuscular or intravenous routes, of a control solution (typically saline), but can also include a surgical procedure. However, such inert treatments can have measurable effects and benefits in patient groups due to the context of their administration and expectation. Such effects are not limited to the individual’s subjective evaluation of symptom relief but can include measurable physiological changes such as altered gastric secretion, blood vessel dilation and hormonal changes.
The placebo effect
A medical treatment or procedure is associated with a complex psychosocial context that might affect the outcome of the therapy (see Figure 6.15). To determine the effects of this psychosocial context on the patient, it is necessary to eliminate the specific action of the treatment while replicating the context in which the active treatment would be administered. Thus, a placebo is given: the patient believes they are receiving an effective therapy and therefore expects to experience its benefits, such as symptom relief. The placebo effect, or placebo response, is the outcome that follows this administration of a placebo. It is essential that placebo administration takes place within the design of a clinical trial (see insert box) when evaluating the potential effectiveness of new treatments, in order to eliminate the influence of patient expectation on outcome, as drug effects may be influenced by the patient’s history and beliefs/expectations about the drug or treatment being developed.
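As a minimal sketch of the trial-design point above, the snippet below illustrates random, balanced allocation of participants to active and placebo arms. The function and variable names are invented for this example; a real trial would use a pre-registered randomisation schedule, with the allocation concealed from participants and assessors (double-blinding).

```python
import random

def allocate(participant_ids, seed=2024):
    """Randomly assign participants to balanced 'active' and 'placebo' arms.
    In practice the returned mapping would be held by an independent party so
    that neither participants nor assessors know the allocation."""
    rng = random.Random(seed)
    n = len(participant_ids)
    arms = ["active"] * (n // 2) + ["placebo"] * (n - n // 2)
    rng.shuffle(arms)
    return dict(zip(participant_ids, arms))

allocation = allocate([f"P{i:03d}" for i in range(1, 11)])
print(allocation)
```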
In part, the effect of a placebo may be explained as an outcome of classical conditioning. In the case of pain relief (see insert box), for example, a patient may have a history of injections that produced pain relief (e.g. morphine). By association, the syringe and the context of the injection can then acquire some pain-relieving capacity, i.e. an association forms between the procedure (conditioned stimulus), the drug (unconditioned stimulus) and the pain relief (unconditioned response). However, conditioning cannot fully explain the placebo effect in all scenarios; for example, if a person is simply told that pain relief is to be expected, there can be some tendency for it to be experienced. A wealth of neuroimaging and neurobiological studies report changes in brain activity and brain function following placebo administration, supporting the notion of a biological basis for the placebo effect.
Pain
Pain is a highly complex and individual experience which results in behavioural, chemical, hormonal and neuronal responses. Humans may experience occasional pain which activates the autonomic, central, and peripheral nervous systems as well as chronic pain over a number of months and even years. Chronic pain substantially impacts the quality of life of affected individuals and there is a demand to develop new and effective therapies. Various treatment approaches for pain exist, including medicines, physical therapies (for example, heat/cold treatment, exercise, massage) and complementary therapies (for example, acupuncture and meditation). Placebo effects have been reported to act as pain relievers in certain groups of patients and may offer a viable therapeutic option (Miller and Colloca, 2009).
Functional brain imaging studies show that opioids and placebos activate the same brain regions and that both treatments reduce the activity of brain regions responding to pain, including the cingulate cortex (Wager et al., 2004). A consistent finding is that some people experience relief from a placebo and others do not. People who respond to placebo show a greater activation of brain regions with opioid receptors than do non-responders, further implicating endogenous opioids in the placebo effect. Opioids have often been reported as inducing relaxation which may account for the feelings of pain relief following placebo treatment. However, there is also substantial evidence that placebos are able to alleviate pain through the reduction of negative emotions (i.e. feelings of fear and anxiety) associated with pain rather than acting to reduce the sensation of pain itself. For example, placebo treatment decreases activation of the cingulate cortex but not the somatosensory cortex. Similar to pain itself, the relief from pain symptoms is complex and placebos can play an important aspect in the therapeutic approach to treat pain in certain individuals.
Mechanisms: psychological mechanisms
From the psychological perspective, the placebo effect has traditionally been attributed either to conscious cognition—for example, the expectations of the patients—or the action of automatic basic learning mechanisms like classical or Pavlovian conditioning. The evidence accumulated over recent decades suggests that conscious cognition and conditioning shape different instances of the placebo effect, and that they can interact to determine the effect (e.g., Stewart-Williams & Podd, 2004). Here, we explore two versions of the conscious cognition approach, the most prevalent Expectancy Theory, and a promising approach that characterises some instances of the placebo effect as a particular type of error in decision making. We will then explore how conditioning accounts for the placebo effect, by reference to the research done with non-human animals and how it translates to clinical practice in humans.
Conscious cognition: expectancy theory
A placebo produces an effect because the patient expects it to produce such effect. The expectancy account considers several factors known to shape the recipient’s expectations, including the therapeutic relationship and the authority of the professional that administers the placebo. Other factors known to contribute to the development of expectancies include the branding and cost of the medication. For example, the use of a placebo was more effective in reducing headache when the use of brand name was used to label the tablets than when a generic label was used; also, fewer side effects were attributed to tablets with the brand name (Faase et al., 2016). Similarly, the colour of the pills can also contribute to shape the expectancies of the recipient: red and orange are associated with a stimulant effect, while blue and green tend to be associated with sedative effects (de Craen et al., 1996).
The key question from this perspective is how expectancies contribute to the placebo effect. Different mechanisms have been proposed. Lundh (2000) suggested that positive expectancies contribute to reduce the anxiety of the placebo recipient. It is well established that stress and anxiety have an adverse effect in a diversity of physiological processes and increase the number and intensity of the symptoms reported by the patients. The use of placebos, by reducing anxiety levels, can contribute to easing symptomatology (see Stewart-Williams & Podd, 2004). Expectancies can also contribute to the placebo effect by changing other cognitions: the placebo-induced expectancy of improvement, by promoting a sense of control, may enable the recipient to face pain more positively; the patient may be more likely to disregard negative thoughts and interpret ambiguous stimuli more favourably. Another way in which positive expectancies can mediate the placebo effect is by changing the actual behaviour of the recipient: the expectation of an improved condition may lead the patient to resume their daily routines which would improve the mood and distract them from the symptoms reducing the pain experience (Peck & Coleman, 1991; Turner et al., 1994).
Conscious cognition: decision making
It is suggested that patients treated either with an active therapeutic agent or a placebo are left with a binary decision: was the symptom alleviated or not (Allan & Siegel, 2002)? This might be a tricky question in some situations where the symptom, for example periodic pain, emerges from the brain’s interpretation of the input received from sensory receptors and, as discussed above, is mediated by psychological factors. The relative intensity of pain would fluctuate over time depending on whether the patient is distracted or fully focused on the symptom, for example. To decide whether an improvement is experienced or not, patients need to consider whether the average pain intensity has decreased. Perhaps it has, or perhaps the level of pain is similar to what they felt before treatment was administered. In judging the relative intensity of the sensation, the patient is facing an ambiguous situation. The reduction of symptomatology is a signal presented against a noisy environment (the changing intensity of symptoms over time). We can apply the principles of the Signal Detection Theory (SDT, Tanner & Swets, 1954) to characterise this instance of the placebo effect. We can summarise all the possible outcomes by reference to ‘The Patient’s Decision Problem’ (see Table 1).
The outcome would depend on the criterion used, which can be liberal (any change would be identified as the signal, and therefore the patient is likely to incur a False Positive and experience alleviation) or conservative (the signal will not be easily detected, and the patient is likely to incur a False Rejection—experiencing the absence of effect). The adoption of a liberal or conservative criterion would depend on the perceived consequences of each of the possible errors: high risk of false rejections leads to a liberal criterion; high risk following a false positive contributes to the adoption of a more conservative criterion. In the clinical context, a false rejection could be equivalent to claiming that an effective, tested drug is non-effective. This would challenge the accepted wisdom as well as the authority of the physician that administers the treatment. To avoid this potentially embarrassing situation, the recipient might adopt a liberal criterion, which increases the probability of a false positive. In many instances, a patient given a placebo would rather make the decision or ‘mistake’ that is deferential to the established wisdom (the science) and pleases the doctor and their family, experiencing relief of their symptoms in the absence of an active therapeutic agent. This would lead to an instance of the placebo effect.
Treated with an ‘active agent’ or a ‘placebo’, the patient can experience ‘alleviation’ of the symptom, or ‘no effect’. The outcome can be termed a Correct Positive when an active agent has been administered and the patient experiences alleviation. Similarly, in the absence of an active therapeutic agent (placebo), the absence of effect would be a Correct Rejection. In this situation, the patients can also make mistakes; failure to experience alleviation when treated with a therapeutic agent would produce a False Rejection; on the other hand, experiencing alleviation of the symptoms in the absence of an active agent (treatment with a placebo) would be a False Positive. Incurring a False Positive, a common and in some cases a desirable mistake, constitutes the placebo effect.
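To make the role of the decision criterion concrete, the toy simulation below (all values are arbitrary assumptions for illustration) treats the felt change in symptoms as a noisy quantity and counts a report of relief whenever that change exceeds the patient's criterion. A liberal criterion yields many false positives, i.e. reported relief after a placebo, while a conservative criterion suppresses them.

```python
import random

def report_relief(true_effect, criterion, noise_sd=1.0, n_patients=10000):
    """Proportion of patients who report relief: the felt change is the true
    treatment effect plus Gaussian noise, compared against a decision criterion."""
    hits = sum(random.gauss(true_effect, noise_sd) > criterion
               for _ in range(n_patients))
    return hits / n_patients

for criterion, label in [(0.5, "liberal"), (1.5, "conservative")]:
    placebo = report_relief(true_effect=0.0, criterion=criterion)   # no active agent
    active  = report_relief(true_effect=1.0, criterion=criterion)   # real drug effect
    print(f"{label:12s} criterion: placebo {placebo:.2f}, active {active:.2f}")
```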
Conditioning: learning-mediated placebo
In this section, we focus on a different instance of the placebo effect that emerges when pairing two events: the situational cues where a treatment is taking place (physical context, the form of administration, the health worker that administers the treatment, etc.) and the active agent that has a therapeutic effect. The therapeutic effect is an automatic, unconditioned response to the active agent or drug. Repeated experience of the drug in the presence of the situational cues promotes the development of Pavlovian conditioning, whereby the situational cues acquire the capacity to elicit a conditioned therapeutic response: in the presence of the situational cues, even in the absence of the active agent, the patient will experience alleviation of the symptoms. This therapeutic conditioned response to the situational cues is an instance of the placebo effect that can be used to reduce the dose of the active agent (especially relevant for drugs with undesirable side effects) in different contexts. We will briefly describe a couple of examples of the use of the conditioned placebo effect in the treatment of auto-immune diseases and the treatment of pain.
The conditioning of the pharmacological effects of drugs has a long history. Pavlov (1927, p. 35) described early experiments by Krylov in which dogs were repeatedly injected with morphine, which produces nausea, salivation, vomiting and sleep. After 5 or 6 injections of morphine in a particular experimental setting, the preliminaries of the injection sufficed to produce all these symptoms in response, not to the effect of the drug in the blood stream, but of the exposure to the external stimuli that previously preceded the morphine injection. Conditioned pharmacological responses have been successfully used in humans subject to immunodepression treatment. Giang et al. (1996) treated 10 patients of multiple sclerosis (MS) with cyclophosphamide, an effective immunosuppressant that helps control the symptoms of MS but has serious side effects (e.g., increased risks of infection and cardiovascular disease and depletion of the bone marrow). The participants ingested an anise-flavoured syrup prior to each administration of the immunosuppressive drug. Later, they were given the syrup with a small, ineffective dose of the drug. Eight of the ten participants displayed a clear conditioned immunosuppressive response, suggesting that it is possible to reduce the dose of the drug administered during the treatment to keep the side effects at bay.
Conditioned pharmacological responses can also be used in the treatment of pain. Opioids are used to block pain signals between the brain and the body and are typically prescribed to treat moderate to severe pain. However, a second set of responses (undesirable side effects) is activated that contributes to the development of tolerance, which reduces the effectiveness of the drug and requires increased doses to achieve the desired therapeutic effect. Used in high doses, opioids can lead to the development of addiction and of opioid-induced hyperalgesia (OIH), which worsen the patients’ wellbeing (e.g., Holtman, 2012). When an analgesic drug (like an opioid) is administered to an individual in pain, the drug results in a reduction of pain, a therapeutic effect which is highly rewarding. Repeated presentations of the active therapeutic agent in a particular context would allow the context to activate a conditioned therapeutic response that reduces pain in the absence of the active therapeutic agent. This could help reduce the dose of the drugs used to treat pain, keeping opioids effective at low doses and without side effects.
Persuasive evidence has been presented for the development of conditioned analgesia in mice. Guo et al. (2010) treated mice with either morphine or aspirin before placing them on a hotplate. Animals exposed to the hotplate at 55 °C display a paw withdrawal response; treatment with an analgesic significantly delays the paw withdrawal response—evidence of the analgesic properties of the drug. Following training with either morphine or aspirin, the animals showed evidence of a conditioned analgesic response by delaying the paw withdrawal response when they were exposed to the hotplate following the injection of a saline solution—an instance of the conditioning-mediated placebo effect. Interestingly, when animals were treated with an opioid antagonist (naloxone), the conditioned analgesia disappeared in the animals initially treated with morphine, but not in the animals treated with aspirin. This is consistent with the observation that, in humans, placebo analgesia is associated with the release of endogenous opioids (Eippert et al., 2009), indicating the importance of opioidergic signalling in pain modulation and the placebo effect. It is worth mentioning that the psycho- and pharmaco-dynamics of opioids are very complex and not yet fully understood. In some cases, pairing situational cues with opioids can lead to the development of a conditioned response which is opposed to the desired therapeutic response (a conditioned hyperalgesia response; see Siegel, 2002, for a full review). The development of conditioned hyperalgesia is beyond the remit of this chapter, but the reader should be aware of the need to identify the parameters that promote the development of therapeutic conditioned responses and prevent the development of conditioned hyperalgesia that could worsen the condition of patients in clinical settings.
Biological mechanisms of the placebo effect
In order to explain the changes seen in the function of certain brain areas following placebo administration, a biological mechanism of action must exist. Placebo effects are typically described as occurring via opioid or non-opioid mechanisms. The role of opioids in the placebo effect was established by the observation that, under some conditions, the effect is abolished by prior injection of the opioid antagonist naloxone. During the placebo effect, dopamine and opioid signalling are activated in various brain regions (e.g., the nucleus accumbens) corresponding to the expectation of beneficial effects, and across individuals, high placebo responsiveness is associated with high activation of these neurochemical systems. Opioid receptors are found in regions of the pain neuromatrix whose activation is reduced during the placebo effect, e.g., the anterior cingulate cortex and the insula (Kim et al., 2021). Opioid-mediated placebo responses also extend beyond pain pathways. It is reported that placebo-induced respiratory depression (a conditioned placebo side effect) and decreased heart rate and β-adrenergic activity can be reversed by naloxone, demonstrating the involvement of opioid mechanisms in other physiological processes, such as respiratory and cardiovascular function.
However, the opioid system is not the only pathway involved in the placebo effect. Placebo administration also increases the release and uptake of dopamine, and dopamine receptors are activated in anticipation of benefit when a placebo is administered. This suggests the dopamine system may underlie the expectation of reward following placebo administration (Scott et al., 2008). In addition, placebo effects that are non-opioid mediated can be blocked by a CB1 cannabinoid receptor antagonist (Benedetti et al., 2011), suggesting a role for the endocannabinoid system. Genetic factors are also reported to play a part in the biological explanation of the placebo effect in that they can influence the strength of the effect. For example, patients with opioid receptors that are less active are less likely to be placebo responders, whereas patients with reduced dopamine metabolism, and therefore higher dopamine levels in the brain, are more likely to experience a strong placebo effect (Hall et al., 2015). Placebo treatments can also affect hormonal responses that are mediated via forebrain control of the hypothalamic-pituitary hormone system.
Although other medical conditions have been investigated from a neurobiological perspective, the placebo mechanisms in these conditions are not as well understood as those for pain and analgesia. For example, placebo administration to patients with Parkinson’s disease induces dopamine release in the striatum, and changes in basal ganglia and thalamic neuron firing. In addition, changes occur in metabolic activity in the brain following placebo administration in depression (see insert box) and following expectation manipulations in addiction.
Depression
Placebo effects in clinical trials exploring potential therapies for the treatment of depression are extensively reported, with many trials failing to report a significant benefit of a novel therapeutic treatment compared to that seen in the placebo group. This can be attributed to positive benefits of the placebo treatment rather than simply being due to an ineffective treatment. In fact, it has been reported that in clinical trials for major depression approximately 25% of the benefit reported by patients is due to the active medication, 25% to other factors such as spontaneous remission of symptoms, and 50% to the placebo effect.
Insight originally gained from pain studies has helped to reveal how the endogenous opioid system, important in regulating the stress response and emotional regulation, is an important mediator of the placebo effect. As this system is dysregulated in depression, it is plausible that opioids are responsible for mediating the placebo effect seen in depression. Studies have shown that individuals with higher opioid receptor activity in areas of the brain such as the anterior cingulate cortex, nucleus accumbens and amygdala, all areas implicated in emotion and stress regulation and in depression, are more likely to experience relief of depressive symptoms following placebo treatment (Zubieta et al., 2005).
Ethics and the nocebo effect
In clinical trials the placebo is essential to the design of experiments evaluating the effectiveness of new medications because it controls for the influence of expectation on the part of the patient. This control group is identical to the experimental group in all ways, yet the patients, and the medical staff administering treatment, are blinded to whether they are receiving active or placebo treatment. Such precautions ensure that the results of any given treatment will not be influenced by overt or covert prejudices on the part of the patient or the observer. The assumption is that to truly examine the potential biological effects of a treatment and exclude the influence of the psychosocial context, the patient must be deceived as to whether they are receiving an active treatment or placebo.
In terms of the placebo response an individual can often be characterised as a responder or non-responder. It is important to consider the ethical implications of this characterisation. For example, it may be appropriate to consider targeting responders with placebo treatments that result in a positive response for them, whereas it may also be appropriate to consider excluding responders from clinical trials to ensure results are not compromised. The design of clinical trials is important (see insert box): if, for example, a new treatment is found to be effective during a clinical trial, then it would be considered unethical to deny any participants in that trial access to the effective treatment. Thus, clinical trials are often designed as blocks, where patients receive alternating blocks of active treatment and placebo to ensure all patients have an equal chance of receiving the benefit of a new treatment.
A less well understood phenomenon is the nocebo effect, in which negative expectations of a treatment decrease the therapeutic effect experienced or increase the experience of side effects. Administration of the peptide cholecystokinin (CCK) has been shown to play a role in nocebo hyperalgesia through inducing anticipatory anxiety mechanisms, while blocking CCK reduces nocebo effects (Benedetti et al., 1995; https://pubmed.ncbi.nlm.nih.gov/9211474/). A deactivation of dopamine has been found in the nucleus accumbens during nocebo hyperalgesia, and brain imaging studies have demonstrated activation of brain areas, different to those activated during a placebo effect, including the hippocampus and regions involved with anticipatory anxiety (Finniss & Fabrizio, 2005).
Placebo role in clinical trial design
The ‘discovery’ of a new drug or treatment usually occurs in one of three ways: the rediscovery of usage of naturally occurring products, the accidental observation of an unexpected drug effect, or the synthesizing of known or novel compounds. In all cases a substance must progress through various stages in order to meet the licencing arrangements within the relevant country to allow that treatment to be approved and subsequently marketed. The initial stages of drug/treatment development tend to involve extensive synthesis (if relevant) of the drug, chemical characterisation, and a series of preclinical or animal studies to establish the potential effectiveness and/or safety of the treatment. Drugs/treatments deemed worthy of clinical investigation progress through three phases of clinical trial, and it is important to consider the role of placebo within the design of a clinical trial:
• Phase I involves healthy volunteers and aims to specify the human reactions, in terms of physiology and biochemistry, to a drug along with determining safety.
• Phase II involves patients with the disorder the new drug/treatment is targeting and aims to determine the effectiveness of such a drug.
• Phase III expands Phase II by increasing the number of patients in the trial. These trials are typically less well controlled than phase II as they tend to occur across multiple sites and even multiple countries.
Clinical trials normally occur by randomly allocating patients into treatment groups which may vary in terms of the dose received and whether the patient is receiving active or placebo treatments. Such trials are termed randomised controlled trials (RCTs). A double-blind study is one in which neither the patient nor the medical staff knows into which group (i.e. active treatment or placebo) a patient has been allocated. Treatments and placebos are made to look identical and coded to obscure their identity. Both the subject’s and the experimenter’s expectancies may influence the effects of the drug that the subject experiences. Whereas the simplest design is to assign patients either active or placebo treatment, due to ethical considerations most trials are based on a block design, with each subject receiving blocks of active or placebo treatment. Such designs can help unpick placebo from treatment effects; however, as a clinical trial progresses, the observed response in the placebo group may occur due to other factors, such as the natural course of the disease and fluctuations of symptoms, making it harder to discern a genuine placebo response.
Conclusions
Strong evidence supports the notion that placebo effects are real and that they may even have therapeutic potential. Placebo effects arise through diverse processes, which can include learning, expectations and social cognition, and are mediated by biological mechanisms. It is important to consider the contribution of placebo effects in the design and interpretation of clinical trials. Placebos may have meaningful therapeutic effects and should continue to be studied to fully understand their potential.
Key Takeaways
• Whilst placebos do not contain an active substance to produce a biological effect, they can produce a response. Thus, they are important to consider in the design of clinical trials to determine the true effect of a biologically active drug or treatment.
• The placebo effect can be psychological or physiological in nature and can be observed in humans (typically in medical settings) and in non-human animals (typically in a research context). For example, pharmacological conditioning elicits strong placebo effects both in humans (e.g., Amanzio & Benedetti, 1999; Olness & Ader, 1992) and animals (e.g., mice; see Guo et al., 2010).
• The placebo effect has been extensively researched and the picture that emerges suggests there is not a single placebo response but many, with different mechanisms at work across a variety of medical conditions, interventions, and systems (see Benedetti, 2008, for a full review).
About the Authors
Professor Jose Prados has a PhD in Psychology from the University of Barcelona (Spain) and has been working in Higher Education for more than twenty-five years in a diversity of institutions. He is now a Professor of Psychology at the University of Derby. His primary research interests concern learning and memory from a comparative and evolutionary perspective. He uses Pavlovian and instrumental tasks to ascertain whether animals from different phyla (vertebrates and invertebrates like snails or flatworms) learn according to the same principles and test the power of associative learning theory to explain abilities traditionally considered out of its scope (e.g., navigation; perceptual learning).
Professor Claire Gibson obtained a BSc degree in Neuroscience from the University of Sheffield and her PhD from the University of Newcastle. She then gained a number of years’ experience researching the mechanisms of injury following CNS damage – initially focusing on spinal cord injury and moving on later to cerebral stroke. She is now a Professor of Psychology at the University of Nottingham whose research pursues the mechanisms of damage and investigates novel treatment approaches following CNS disorders, focusing primarily on stroke and neurodegeneration. She regularly teaches across the spectrum of biological psychology to both undergraduate and postgraduate students.
Introduction
The previous chapter likely convinced you of the importance of sleep, but how do you get that sleep? Identifying and establishing the behavioral changes necessary to improve sleep can be elusive, so I will guide you through a simple methodical approach for success. Rest assured, most people start sleeping better after making a few small adjustments to their routines or environment. This chapter will help you create habits for boosting your sleep.
To begin, identify your current level of sleep wellness with the SATED questionnaire, published by Daniel Buysse in 2014.1 (In the article, scroll down to Figure S1 for the questionnaire.) Part of the motivation in the development of SATED was to help researchers and clinicians move away from a sleep disorder–centric model of thinking and provide a way to assess and promote sleep health. This is because if we view wellness as an absence of disease, we are missing opportunities to increase health in our communities. By defining, and thus being able to evaluate, a person’s sleep health, we have a better opportunity to prevent disease, maximize wellness, and have an impact on entire communities. Concerns can be addressed, and educational interventions taken, to prevent the tragic effects of sleep debt. If, in addition to the traditional programs focused on disease treatment, political and health-care policies support health practitioners in assessing the well-being of individuals and communities to determine educational targets, we could take a significant leap toward increasing vitality and preventing disease.
In his article, Buysse stresses the difficulty of conducting meaningful research and making health policy changes without a clear understanding of sleep health, pointing to the lack of a clear definition of this term in the scientific literature and the field of sleep medicine. The SATED questionnaire is part of his attempt to provide a better understanding of what constitutes healthy sleep. He considers sleep health to have five dimensions:
• Satisfaction with sleep
• Alertness during waking hours
• Timing of sleep
• Efficiency of sleep
• Duration of sleep
Determining Sleep Need
Before diving into detail about how to get good sleep, let’s agree on how much is enough. For most adults, it is around eight hours, and for many adults, a little more than eight. Even if you get that much, you may wonder how to verify if it is of good quality. That is easier to ascertain than you might imagine.
Here are questions to ask to determine if you are getting adequate sleep:
1. After being up for two hours in the morning, if you were to go back to bed, would you be able to fall asleep?
2. If you did not set your alarm, would you wake up automatically at the desired time, feeling refreshed?
3. Without caffeine or nicotine during the day, would you easily stay awake and alert?
4. When you go to bed at night, do you fall asleep “when your head hits the pillow”?
5. Do you doze off during a boring meeting, conversation, or TV show? (Figure 1.1).
Answers to these simple questions reveal if you’re getting enough good-quality sleep. If it is adequate, your answer to questions 2 and 3 would be yes but no to questions 1, 4, and 5. Question 4 is the only one that may not be obvious: some of you likely believe it is a healthy sign to fall asleep immediately upon getting in bed, but that in fact is a sign of an extreme lack of quality sleep. It should take about fifteen minutes to fall asleep if a person is getting enough good sleep each night. Similarly, regarding question 5, a person might assume they are getting ample sleep and that it is normal to doze off if they had an exhausting day and are watching a TV show in the early evening. However, these situations are actually unmasking sleep debt and are a signal that more sleep is needed.
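For readers who like a compact restatement, the short sketch below encodes the scoring rule just described. The abbreviated question wording in the comments is mine, and the example answers are placeholders to be replaced with your own.

```python
# Adequate-sleep check based on the five questions above.
# Healthy pattern: "no" to questions 1, 4, and 5; "yes" to questions 2 and 3.

answers = {          # example answers; replace with your own (True = yes)
    1: False,        # could you fall back asleep after two hours awake?
    2: True,         # would you wake at the desired time without an alarm?
    3: True,         # can you stay alert all day without caffeine or nicotine?
    4: False,        # do you fall asleep the moment your head hits the pillow?
    5: False,        # do you doze off during boring meetings or TV shows?
}

healthy = {1: False, 2: True, 3: True, 4: False, 5: False}
flags = [q for q, good in healthy.items() if answers[q] != good]
print("Signs of sleep debt on questions:", flags if flags else "none")
```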
What about sleeping too much? This concern can often be traced to a misinterpretation of research showing a correlation between nine or more hours of sleep a night and a shorter life-span. However, there is no evidence that more good-quality sleep is the cause. Rather, having a disorder such as obstructive sleep apnea (OSA) can cause a person to stay in bed nine or more hours a night (see chapter 6). In this case, they will report they are “sleeping” nine or more hours, but unbeknownst to them, they are not actually getting quality sleep during those nine hours, and that is why they end up staying in bed so long. After the eighth hour in bed, their body is still trying to get sleep because they may have been awakened, without knowing it, hundreds of times during the night due to breathing issues. So untreated OSA is what increases the risk of an earlier death, not excessive sleep. Someone without OSA who spends nine hours each night going through healthy sleep cycles and feels refreshed throughout the day would not have an increased risk of an earlier death. Please use this content to deliberate with a classmate about correlation versus causation.
One factor that must be included in discussions of the ideal amount of sleep is sleep opportunity. Going to bed at 11:00 p.m. and arising at 7:00 a.m. does not mean a person has slept eight hours. This means the person was providing themselves a sleep opportunity of eight hours (the time they spent in bed) with the time of actual sleep still to be determined. This is often an area of confusion in interpreting population studies of sleep. In questionnaires, people likely report that they sleep eight hours if they are in bed from 11:00 p.m. to 7:00 a.m. However, if you have those same individuals wear an actigraphy device or polysomnography equipment, the results may show that during those eight hours, they sleep less than four hours or, under the best of circumstances, seven and a half hours (see chapter 2 for a discussion of actigraphy and polysomnography). You may be wondering why, under the best of circumstances, eight hours of sleep would not be obtained after eight hours in bed. This is because it is normal to take fifteen minutes to fall asleep (as mentioned at the start of this chapter) and to have a few tiny awakenings during the night (most of which we are usually unaware). If we recommend a person get eight hours of sleep, we are referring to actual sleep, which requires being in bed for eight hours plus the time it takes to fall asleep and any additional time for awakenings during the night. This means most people need to give themselves a little over eight hours in bed each night.
One fascinating study that received considerable press—press that misrepresented the scientists’ conclusions—was regarding hunter-gatherer tribes and their sleeping less than seven hours a night. Understanding this study will help you comprehend the difference between sleep efficiency and sleep opportunity as well as encourage you to think critically when hearing news stories. Consider, for example, how a headline in popular media that tells people “You do not really need 8 hours of sleep” will sell magazines. Even though that was likely not the intention of the scientists who conducted the study, this distortion of the results made for profitable press. But what actually happened?
The researchers studied people from three tribes: the Hadza (Tanzania), Tsimané (Bolivia), and San (Kalahari; Figure 1.2). The idea was that since these are preindustrial tribes, the way they sleep is how we city dwellers should too. The members of the tribes wore actigraphy devices that showed an average of 6.75 hours of sleep per night for the duration of the study. A layperson’s interpretation of this could be that the tribal member was in bed for 6.75 hours; consequently, that layperson may believe they achieve optimal sleep health if they go to bed at 1:15 a.m. and get up at 8:00 a.m. However, for a sleep efficiency of 85 percent (the low end of the healthy range), a person would have to be in bed 7.9 hours to get 6.75 hours of sleep (see chapter 2 for a discussion of sleep efficiency).
But wait—how long does it take the person to fall asleep? Under ideal circumstances, a person falls asleep in 15 minutes (0.25 hour), so add 0.25 hours to the 7.9 hours to get 8.15 hours (8 hours and 9 minutes). This means that if a person has both healthy sleep efficiency and sleep latency (time to get to sleep), they need to be in bed 8.15 hours to get 6.75 hours of sleep. It is doubtful that most people interpreted the popular-press headlines (boasting we need less than 7 hours of sleep) of this research as guidance to be in bed for over 8 hours; rather, many people probably ended up getting less than 6 hours a night, thinking they were on track because they allowed themselves 6.75 hours of time in bed as their new healthy goal. The actual study supports this as well: the tribespeople were giving themselves between 7 and 8.5 hours of sleep opportunity a night.2
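The same arithmetic can be written as a short calculation. The sketch below reproduces the chapter's numbers (6.75 hours of actual sleep, 85 percent sleep efficiency, 15 minutes of sleep latency); the function name and default values are illustrative choices, not part of the original study.

```python
def sleep_opportunity_needed(actual_sleep_h: float,
                             efficiency: float = 0.85,
                             latency_h: float = 0.25) -> float:
    """Hours in bed needed to obtain `actual_sleep_h` hours of sleep:
    time asleep divided by sleep efficiency, plus time to fall asleep."""
    return actual_sleep_h / efficiency + latency_h

# Hunter-gatherer example from the text: 6.75 h of sleep at 85% efficiency
hours = sleep_opportunity_needed(6.75)
print(f"{hours:.2f} hours in bed")  # roughly 8.2 hours, i.e. a little over 8 hours in bed
```

The same function can be applied to any sleep target and efficiency, which is the point of the exercise: reported "hours of sleep" and hours of sleep opportunity are not the same quantity.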
As a science student, the lesson for you in this is to think critically when reading news stories, ask yourself if the reporter has an agenda, and most importantly, look for the source of the data and find the original article. See if that article is in a peer-reviewed scientific journal, and read the article itself critically as well.
In his book Why We Sleep, Matthew Walker, PhD, adds further fuel to the argument that these tabloid headlines are harmful and misguided. He points out that the life expectancy for people in these tribes is fifty-eight years, a number very close to the projected sixty-year life-span of an adult in an industrialized country who gets 6.75 hours of sleep a night. He also refers to animal studies indicating that the cause of death in sleep-deprived animals is the same lethal intestinal infection that is the cause of death for many of the tribespeople of the study. He reasons that the tribespeople may be sleeping 6.75 hours, but they might live longer if they were to sleep more. He then postulates that the reason they sleep less is due to a lack of sufficient calories; they border on starvation for a significant part of each year. There are physiological cascades that shorten sleep if the body needs to spend more time acquiring food. This is clearly not the goal for a person looking for the ideal amount of sleep to get each night for the sake of health optimization and longevity.
Napping
Napping makes us stronger, faster, smarter, and happier, and it helps us sleep better at night. From the prophet Muhammad, who recommended a midday nap (qailulah), to the Mediterranean concept of a siesta, napping has spanned cultures and the ages (Figure 1.3). The word siesta derives from Latin: hora sexta, meaning “sixth hour.” Here is why that makes sense: the day begins at dawn, around six in the morning; consequently the sixth hour would be around noon—siesta time! Only recently have modern North Americans, on a larger scale, embraced the practice of napping, thanks to extensive research showing the mental and physical health benefits of a brief amount of sleep shortly after midday. This is the time we have a genetically programmed dip in alertness—the signal to nap—that is a function of our human circadian rhythm, regardless of ancestry.
If we take the sleep wellness advice, adjust our routines, and start getting eight hours of sleep a night, it can feel disappointing to still feel drowsy in the afternoon. However, it is time to create a new habit of celebrating that afternoon slump as a healthy response in the body, even after sleeping well the night before. Drowsiness at this time is a valuable reminder to take a ten- to twenty-minute nap. Remember to set an alarm to train the body to limit the nap’s duration, and with practice, you will wake up just before the alarm sounds. The groggy feeling upon awakening from a nap might be a deterrent for even an ardent napper. This is sleep inertia, and with a more regular napping routine, it will be easily managed. Knowing you will have that sensation and that it will pass, usually within ten minutes, will make it easier to settle down for the nap. Some people who enjoy caffeine, and have tested to be sure it is not affecting their nighttime sleep, might want to have some right before their nap or immediately upon waking to help manage sleep inertia, but this isn’t always necessary. The quality of clear and relaxed energy that takes you well into the evening, as opposed to the energy crash from not napping and using caffeine or nicotine in place of a nap, is usually enough to motivate someone to maintain a napping habit. Since the body has not been on a roller coaster of drowsiness during the day, thanks to the missed nap and possibly the use of stimulants, it approaches bedtime in a more even and restful state, and a better night’s sleep should follow.
Students hoping to optimize their study efforts would be wise to close the book and take a short nap (Figure 1.4). Naps increase memory performance, and scientists have documented particular types of brain activity that are associated with enhanced learning during naps, including sleep spindles (see chapter 2). When working on a homework set, the skill of restructuring—viewing a problem from various perspectives and creating a novel vision—is an ingredient for success that can be obtained via a brief afternoon snooze. Combine this with the amplification of creativity after napping and we see why students at colleges around the world are finding ways to get a nap on campus (see chapter 7).
When trying to avoid a cold or the flu, people are willing to spend a lot of money on immune support supplements and vitamins, but one of the strongest ways to provide powerful immune support is free (Figure 1.5). After a night of poor sleep, antiviral molecules such as interleukin-6 drop and reduce immune system power. However, a nap can bring those levels back to normal. Researchers have also found increased levels of norepinephrine, the “fight or flight” molecule, after reduced nighttime sleep. Sustained high levels of norepinephrine, associated with the stress response, have harmful effects on blood glucose balance and cardiovascular health. Napping brought the norepinephrine levels back within their normal range. Similarly, since one in three adults in the US have high blood pressure, it is welcome news that a daily nap can bring that down as effectively as medications and other lifestyle changes.
Athletes have been converting en masse to napping based on research showing the benefits it has for athletic performance as well as increased motor learning, even after a nap as short as ten minutes (Figure 1.6). Athletes, such as sprinter Usain Bolt, have shared stories of napping earlier in the day before a record-breaking performance. Adam Silver, National Basketball Association commissioner, cautions those who want to contact athletes during siesta time, “Everyone in the league office knows not to call players at 3 p.m. It’s the player nap.”
If you are still not convinced of the importance of napping, consider the distressing consequences when many healthy Greeks gave up napping. This occurred when business owners in many Greek communities began deciding to keep the businesses open rather than shut down for a siesta as they always had. Around that time, Harvard researchers examined over twenty thousand Greek adults with no cardiovascular disease. When they followed up after several years, the individuals who had given up napping had seen a 37 percent increase in their risk of dying from heart disease. For working men, it was an over 60 percent increase. A more hopeful way to view this is that the risk of dying was reduced by these significant amounts for those who continued napping.
Sleep Wellness Guidelines: Daytime, Before Bed, In Bed
Refer to the Sleep Wellness Guide below and take inventory, noting areas that you need to address. Prioritize each of those problem areas based on the significance of its impact on sleep, the feasibility of making change, and the value its implementation would have for the individual. This increases success by helping people see the flexibility of the approach and how they can control the process.
• Significance: If a person is having caffeine late in the day and it is keeping them awake, the caffeine is having a significant impact on their sleep. Therefore, avoiding caffeine—or having it earlier—may solve the problem without the need to address less significant areas. Putting the effort toward changing behavior in less significant areas while continuing something significantly disruptive, such as caffeine intake, might not result in any improvement. In addition to being ineffective, it is frustrating because you feel like you are putting in effort and not getting results.
• Feasibility: Not all items on the Sleep Wellness Guide are feasible for everyone. If someone is a shift worker or caring for a family member, it may not be possible to get to bed at the same time every night. Determine a way to address this item that recognizes the reality of the situation. For example, could you go to bed and get up at the same time four days a week and maintain a different sleep schedule the other three days?
• Value: Do you really enjoy that bowl of ice cream when watching a movie right before bed? Sleep wellness is not about giving up life’s pleasures. One sleepless client I worked with told me he had addressed all the items in his sleep wellness inventory but was still not getting quality sleep. We went over his daytime and evening routines, and I found out he truly treasured his ice cream, a generous serving of it, shortly before bed. I did not want him to deprive himself of this pleasure. He agreed to instead switch from a large to a small bowl to reduce the serving size. I asked him to choose an artistically pleasing little bowl, hoping to tap into an additional pathway to the reward center of his brain. We discussed the concern of heightened sugar levels right before bed and decided to offset this by, in addition to reducing the portion, rolling up a piece of sliced turkey and eating it while he scooped his ice cream into the bowl. This would help balance out the sugar-to-protein ratio of his late-night snack. Now, he still gets to sit and enjoy his bowl of ice cream, but thanks to those adjustments, his sleep is now satisfactory.
Sleep Wellness: Beyond the Guide
After reviewing each of the items in the Sleep Wellness Guide, synthesize the content with a deeper understanding of the science behind the practices.
Light
Chapter 3 explains the role of light in regulating your sleep-wake cycle, while this section provides details about how to use the timing and quality of light exposure to improve sleep health. Sunlight or bright indoor light on the face in the morning is helpful to correct the circadian rhythm of someone who is not sleepy until late at night—a night owl—or has a difficult time waking up at the desired hour. Then, in the evening, establish a routine with reduced (or preferably, no) blue/white light exposure two hours before bedtime. Light so close to bedtime disrupts the circadian rhythm and interferes with sleep quality. In the evening, use solely amber- or orange-colored lights for illumination (Figure 1.7). For the phone, computer, and TV, utilize apps that filter blue light (the display will appear slightly orange). Alternatively, donning a pair of amber eyeglasses that block blue light will carry you into the bedtime hours, reassured that your melatonin secretion will not be disrupted by, for instance, preparing tomorrow’s lunch in a well-lit kitchen (Figure 1.8). Consider switching to an orange night-light, in place of bright vanity lights, to use while brushing your teeth before bed. When sleeping, keep the bedroom as dark as possible for the soundest sleep.
If someone is a lark, we use an alternate approach. Falling asleep early in the evening and awakening before sunrise, a lark is often an elder, although a small percentage of younger people fit this rhythm. Light therapy is used with a different schedule to shift the lark circadian rhythm. Upon arising, the light levels are kept low, including filtering blue light, thus sustaining melatonin levels for those predawn hours. If the lark engages in early morning outdoor activities or a morning commute, sunglasses are essential. Late in the afternoon and into the early evening, bright light is used to keep melatonin levels from building. This will often shift the lark’s schedule closer to the desired rhythm.
Exercise
A commitment to movement, especially if it is enough to get a little sweaty or elevate the heart rate—even slightly—helps us sleep better. Consider something that you can make a regular part of almost every day for twenty to thirty minutes. Movement and consistency, more so than the time of day or type of activity, are key. If gardening is pleasurable, let that be your sport. If the convenient time is in the evening, it is better for most to have the evening workout than to skip it due to worries that it is too close to bedtime. It may take several weeks to have an impact on sleep, but research suggests exercise increases sleep quality.
Nighttime Urination
There are several possible ways to eliminate nighttime urination. (This refers to people who interrupt their sleep to get up to urinate, as opposed to bedwetting, a different problem discussed in chapter 6.) Maybe you are thinking, “I only get up once during the night to urinate and go right back to sleep, so it isn’t a problem.” However, when we understand sleep architecture, the importance of its components, and how our eight hours of sleep must be uninterrupted in order to get the proper balance of each stage, we will see how even just one interruption each night can be a significant problem (see chapter 2). Let’s help people eliminate nighttime urination so they get the benefits of a full night’s sleep.
During the day, fluid accumulates in the legs in varying amounts depending on physical activity level. By elevating the legs during sitting and taking breaks to get movement in the legs, some of this fluid is moved from the legs up toward the kidneys to be urinated out during the day. Otherwise, upon lying down in bed at night, the fluid in the swollen legs, now elevated, moves up into the kidneys, producing more urine than the bladder can contain during the night. When working on a computer or watching television, prop up your legs above the level of your hips, being sure to provide support for the lower back (Figure 1.9). If sitting for long periods, get up occasionally, and while standing, lift the heels to put weight on the toes, then lift the toes so weight is on the heels. (Hold on to something if support is needed.) Repeating this several times helps move fluid out of the legs.
Fluid intake during the day and the evening has an impact on sleep. Stop drinking fluids ninety minutes before bed to give the kidneys time to filter the excess water from your blood. Then urinate immediately before bed to empty your bladder. For some people, herbal tea causes increased urination; however, in other people, it is no different than water. If you enjoy herbal tea before bed, determine if this is an influence by not drinking it within five hours of bed. After your nighttime urination is resolved, reintroduce the evening herbal tea and, if sleep is sound and uninterrupted, enjoy your tea (as long as it contains no caffeine). Alcohol also increases urination and is best avoided five hours before bed for this reason (and also due to its sleep architecture–disrupting properties). Eliminate caffeine entirely after noon, as it is a bladder irritant. Also examine nutritional supplements and any protein or workout powders to check for ingredients with diuretic effects.
An enlarged prostate is associated with nighttime urination. This is a gland surrounding part of the male urethra, the tube that carries urine and semen (Figure 1.10). As men age, there is normal age-related prostate enlargement that squeezes the urethra to varying degrees. This makes it difficult to completely empty the bladder before bed, making it crucial to put all the other strategies in place to minimize the need to disrupt sleep for urination. Some men also decide to talk to their medical doctor regarding various prescription medications or surgical procedures to treat the symptoms.
If a person addresses the various concerns and is practicing all these strategies to eliminate nighttime urination but finds they are still getting up to urinate, there is a possibility that the nervous system is responding to a trigger of awakening—a snoring partner, an outdoor noise, a warm room—and perceiving a need to urinate even though the bladder is not full. Depending on a range of factors, including age, the bladder holds around two cups of urine and, for most people, even more at night. An easy way to determine if the bladder truly needs emptying is to collect the urine and measure the output. Upon arising in the middle of the night, urinate into a container such as a pitcher placed in the bathroom. In the morning, determine the volume of urine. If it is a small amount of urine, just a few ounces, perhaps the body and mind need to be trained to go back to sleep and not respond to the trigger to get up and urinate. However, if well over a cup of urine is produced after putting in place all the strategies mentioned, take this information to a doctor and discuss what could be causing the urine production. Knowing the amount of urine produced during the night will be helpful in the course of diagnostics.
If you still must urinate at night, be safe by lighting the way, and at the same time, preserve melatonin levels by using orange lights for illumination from bedside to the toilet.
Caffeine and Stimulants
Individual responses to caffeine vary widely, but if someone is getting poor sleep, advice about when to end consumption remains standard. Avoid caffeine in all its forms after noon until healthy sleep is achieved and sustained for at least a week. The same is true for guarana, a stimulant found in a range of sources, including energy drinks (Figure 1.11). Some folks need to give up caffeine, guarana, and any other stimulants (e.g., theobromine, which is found in chocolate) entirely until they get good sleep. After a satisfying sleep rhythm is maintained for a week, you could consider reintroducing stimulants. However, many will find getting good sleep for a week without stimulants provides such an increase in vitality that there is no need for any stimulants. If you are still craving a boost from caffeine or another stimulant, first reintroduce it before noon and notice if there are changes to sleep quality or the refreshed feeling upon awakening in the mornings. From there, determine the latest time in the day your body can clear out the caffeine/stimulant and allow you to sleep well at night.
Alcohol
Under the influence of alcohol, the brain is not able to construct a proper night’s sleep. Being relaxed and falling asleep is not the same as creating health-promoting sleep architecture (see chapter 2). For example, having as little as one serving of wine, beer, or spirits close to bedtime can cause increased awakenings during sleep (even though the person may not be aware of them), decreased rapid eye movement (REM) sleep in the first half of the night, and disturbing REM sleep rebound in the latter half. Alcohol on its own is not the challenge to sleep; rather it’s the timing of its consumption. Avoid alcohol at least five hours prior to bed so the sleep-disrupting chemicals get mostly metabolized out of the body before it’s time to tuck yourself in for the night. This is a wiser approach than the close to bedtime “nightcap” that is sure to hijack a sound night’s sleep.
Nicotine
The double bind of nicotine is that it is a stimulant that will keep us awake if used too close to bedtime, but if a person stops nicotine earlier in the evening, they will have subtle awakenings during the night due to nicotine withdrawal. However, the latter is preferable, so cease nicotine use at least five hours before bed.
To support sleep wellness and overall health, seek a local or online smoking cessation program, preferably one with scientifically proven mindfulness training, which has shown significant success. During the process of quitting, practice self-compassion for two reasons. The first is that smoking is one of the most difficult habits to change, so it is important to be kind to yourself throughout. The second is that neuroscience research has shown self-compassion to be an effective component of habit change. Many communities have a resource such as the Hawaiʻi Quitline.4 Nationally in the US, there is also smokefree.gov or 1-800-QUIT-NOW (1-800-784-8669).
Nap
In the early afternoon, take a ten-to-twenty-minute nap. (See the napping section for details.)
Medications
Sleep is disrupted by many medications, such as some antidepressants, over-the-counter sleep aids, pain medications, antihistamines, and even prescriptions marketed to promote sleep. Just because a medication puts someone to sleep does not mean it creates natural restorative sleep. Check with your health-care provider to determine whether any medications you take may impact your sleep and for guidance about pros and cons associated with sleep disruption and each course of treatment.
Sleep Diary
A sleep diary’s purpose goes well beyond keeping track of how you sleep. By keeping a good sleep diary, you will notice how daytime habits—exercise, alcohol, caffeine, TV viewing—and their timing have an impact on sleep. By keeping track of your sleep habits along with how you feel during the day, you will also establish a connection between sleep quality and daytime mood and performance. Record your data in a sleep diary for two weeks. In addition to providing clear motivation to make changes, this type of biofeedback also fuels the brain for habit-changing behavior. Use this fillable sleep diary5 created by one of my sleep science students at Kapiʻolani Community College. You may also try one of the many phone apps for tracking daytime activities and sleep quality. Daytime activity and mood data are essential to the process, so be sure whatever you use also tracks that information. People are often surprised by their findings after making use of a sleep diary. It shines a light on several potential areas for change to improve sleep.
Ritual
The brain can be rewired to associate behaviors and sensory input with falling asleep. Decide on a before-bed ritual, such as taking a shower, using a soothing naturally scented lotion, reading a book you read only at bedtime, meditating, singing, practicing a relaxing breathing technique, or listening to an audio book or podcast (Figure 1.12).
Leg Cramps
If you experience leg cramps at night, talk with your health-care practitioner to determine if you have any electrolyte imbalances or if they can suggest any supplements, vitamins, or electrolyte drinks. Maintain sufficient hydration. Incorporate daily exercise. Gentle early evening stretching, from head to toe, helps relieve lower leg cramps because they can be triggered by tension elsewhere (including up much higher) in the body. Consider a warm bath with Epsom salts (magnesium sulfate) before bed. During the cramp, applying an ice or heat pack or standing and holding a stretch might alleviate some of the pain.
Snack
Our tūtū (the way we say “grandparents” in Hawaiʻi) and tias (Spanish for “aunts”) knew what they were talking about when they advised us to have warm milk with honey before bed. Although there is a small amount of tryptophan in milk, which is associated with the cascade that puts us to sleep, and the carbohydrates in honey clear the way to allow more of the tryptophan to get into the brain, our sound sleep is probably more due to the calming ritual and the balanced nutrition of that bit of nourishment. The general guideline is to have a little snack close to bedtime and to include a small amount of fat and protein and balance that with carbohydrates, but no high-sugar items, which cause a stress response that keeps you awake. Examples of healthy bedtime snacks would be milk (can be dairy, almond, etc.) with whole-grain cereal (low in sugar) or nut butter with crackers (Figure 1.13). A small serving is best because digestion slows down with sleep. If you have gastroesophageal reflux disease, it is best to skip having food too close to lying down. Time it so it does not aggravate your symptoms.
Sleep in Bed
Use your bed only for sleeping, having sex, reading, or listening to a relaxing audio file. Avoid emailing, engaging in social media, or watching television in bed, all of which condition the brain to associate the bed with a different level of alertness, interfering with sleep. If you have spent what feels like twenty minutes trying to fall asleep, get out of bed, do something relaxing like reading a book on the couch or listening to a relaxing audiobook until sleepy, and then return to bed.
Temperature
While most people can sleep in a range of temperatures, I have had several clients find cooling the bedroom was the one thing needed to fix their sleep. Research shows the ideal sleeping temperature is a surprisingly cool 65–68 degrees Fahrenheit (18–20 degrees Celsius). In the wild, the natural drop in temperature each evening triggers the hypothalamus (see chapter 2) to launch the cascade that ultimately releases melatonin, telling our bodies it is time to sleep. Taking a warm bath or shower before bed promotes this cooling by bringing the blood flow to the skin in response to the heat. Then, after stepping out of the bath, the blood on the skin surface works like a radiator to cool the body temperature and send you into a relaxing sleep. To investigate this phenomenon, researchers developed a bodysuit with a layer containing a mesh of tiny tubes of water, precisely controlled for temperature and region of flow. When wearing the suit, participants’ skin surface was exposed to heat, yet remained dry. These experiments showed bringing blood flow to the body surface via temporary superficial warmth provided core-temperature body cooling and thus reduced the time participants needed to fall asleep and improved their sleep quality.6 Warming the feet and/or hands with a warm soak or heating pad is also a quick trick if taking a shower or bath is too time-consuming or not practical.
Timing
Most adults need around eight hours of sleep every night, and it is best to go to bed and get up in the morning at the same time each day, even on weekends.
Clocks
Do not have a clock within view of the bed; being aware of the time triggers a loop of thinking that keeps you awake. When awakening in the middle of the night, resist the urge to look at the clock or your phone (both of which should not be near your bed or visible) and train your brain to let go of the curiosity about the time.
Noise
If it is not possible to make the bedroom quiet, use noise-reducing earplugs. There are also phone apps and audio files that create relaxing white noise, such as rain sounds. Running a fan in the room is sometimes enough to mask intrusive noises. However, the brain still processes white noise information, so minimizing it is preferable when outside noises are low enough that you can still sleep.
Cognitive Behavioral Therapy for Insomnia
Cognitive behavioral therapy for insomnia (CBTI) involves meeting with an individual or a group once a week for four to eight weeks. The client is advised on how to change thoughts and behaviors to increase healthy sleep. The National Institutes of Health (NIH) claims CBTI is safe and effective.7 Many insurance companies cover CBTI, and research shows it is more effective than sleep medications. CBTI does not have medications’ harmful side effects and also has been shown to have beneficial effects extending beyond the treatment period, which is not the case with medications. One of the paradigms for CBTI involves five pillars: sleep wellness, sleep restriction, stimulus control, sleep diary, and actigraphy.
1. Sleep wellness: Refer to the Sleep Wellness Guide for instructions on this step.
2. Sleep restriction: Research shows this works better than medications and has longer-lasting effects. In general, the concept is to be in bed only when sleeping and not to spend hours lying there trying to sleep. Here are the steps (a short worked example follows this list):
a. Spend only five hours in bed. Figure out what time you have to get up and count back five hours. Go to bed at the same time every night.
Example: Do you have to get up at 7:00 a.m.? Then go to bed at 2:00 a.m.
b. After five days, you will be very tired in the evening due to sleep deprivation, but your circadian rhythm is closer to being set, so you can go to bed fifteen minutes earlier on that fifth night.
c. After five more days, go to bed fifteen minutes earlier, and continue with this adjustment every five days until you are going to bed around eight hours before having to wake up.
During the program, a person may feel worse because they are so tired. Pay extra attention to light. Use dim or orange light at night and bright light in the mornings.
Be safe. Put help in place before beginning. In the early days of the program, due to the high level of sleep debt, do not do dangerous work, drive, take care of children, or do anything else that requires your full attention to do safely.
3. Stimulus control: A stimulus is something that causes a specific reaction. If you hear your phone (sound from phone = stimulus), you walk toward it (walking = response). Stimulus control involves separating sleep-related activities in the bedroom from wakeful activities in the rest of the home: for example, do not watch TV or email in bed, or sleep part of the night on the living room couch. Here are the instructions:
a. If you are not sleepy, do not go to bed.
b. If you cannot fall asleep within what feels like twenty minutes, leave the bedroom.
c. Listen to an audiobook or do some gentle reading by a dim or blue-light-filtered light in a chair or on the couch. Do not fall asleep there. When you start to fall asleep, move to your bed.
d. Use the bed only for sleep, sex, and gentle reading or relaxing audio books.
4. Sleep diary: Refer to the sleep diary section in this chapter for instructions on this step. This helps pinpoint areas from the sleep wellness list that need to be addressed. For example, someone could report in their sleep diary that they were texting in bed or having a glass of wine before bed, but they did not realize those things could affect sleep.
5. Actigraphy: This is not necessary but can be helpful. Some clinicians use medical actigraphy devices, while laypersons might use mobile phone apps that monitor sleep. If using a phone app, temper your connection to the results and do not become fixated on the data, especially given the significant limitations of such phone apps as of the writing of this textbook. I have met people who became obsessed with their phone app sleep data to the point that it caused them anxiety and poor sleep. Also keep in mind that the movement of a sleeping partner may appear as your movement during a night’s recording, depending on the placement of your device and how easily movement is translated across your mattress. Both actigraphy and sleep-related phone apps use an accelerometer to detect changes in velocity, providing a record of physical activity. The movement patterns are processed by a computer algorithm that translates those movements as a state of sleep or waking (a toy scoring sketch follows this list). All this is in an attempt to verify four things:
a. Circadian rhythmicity: Going to bed between 9:00 and 11:00 p.m. and getting out of bed early in the morning or around midmorning. These times are part of a healthy circadian rhythm.
b. Consolidation: One major block of sleep, as opposed to something like three hours at midnight and three hours in the afternoon.
c. Sleep schedule regularity: Going to bed and getting out of bed at the same time every day.
d. Napping: When and for how long the nap is taken.
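As a worked example of the sleep restriction schedule in step 2, the sketch below computes a bedtime for each five-day block from a fixed wake time. The five-hour starting window and fifteen-minute increments come from the steps above; the function name and output format are illustrative choices rather than part of any published protocol.

```python
from datetime import datetime, timedelta

def sleep_restriction_schedule(wake_time: str, target_hours: float = 8.0,
                               start_hours: float = 5.0,
                               increment_minutes: int = 15,
                               block_days: int = 5):
    """Yield (block number, bedtime string) pairs following the steps above."""
    wake = datetime.strptime(wake_time, "%H:%M")
    hours, block = start_hours, 1
    while hours <= target_hours:
        bedtime = wake - timedelta(hours=hours)
        yield block, bedtime.strftime("%I:%M %p").lstrip("0")
        hours += increment_minutes / 60
        block += 1

# Example from the text: a 7:00 a.m. wake time, starting with five hours in bed
for block, bedtime in sleep_restriction_schedule("07:00"):
    first_day = (block - 1) * 5 + 1
    print(f"Days {first_day:3d}-{first_day + 4:3d}: go to bed at {bedtime}")
```

The first block prints a 2:00 a.m. bedtime, matching the example in step 2a, and the schedule ends once the time in bed reaches roughly eight hours.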
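For step 5, the following toy sketch shows the general idea of how an algorithm can turn accelerometer activity counts into sleep/wake labels by smoothing and thresholding. The threshold, window size, and example counts are invented for illustration; validated actigraphy algorithms used in research and clinical devices are considerably more sophisticated.

```python
# Toy actigraphy scorer: label each epoch (e.g., one minute) as sleep or wake
# by smoothing activity counts and applying a threshold. Illustrative only.

def score_epochs(activity_counts, threshold=20, window=5):
    """Return 'S' (sleep) or 'W' (wake) for each epoch of activity counts."""
    labels = []
    for i in range(len(activity_counts)):
        lo = max(0, i - window // 2)
        hi = min(len(activity_counts), i + window // 2 + 1)
        smoothed = sum(activity_counts[lo:hi]) / (hi - lo)
        labels.append("S" if smoothed < threshold else "W")
    return labels

# Example: high movement while awake, low movement asleep, a brief awakening
counts = [80, 60, 45, 10, 5, 2, 0, 1, 3, 50, 40, 4, 2, 1, 70, 90]
print("".join(score_epochs(counts)))
```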
Additional Support during Pregnancy
The National Sleep Foundation’s “Women and Sleep” poll in 1998 showed that 78 percent of women had more difficulty with sleep during pregnancy than any other time. Their 2007 follow-up survey indicated that the primary factors disturbing women’s sleep during pregnancy were getting up to urinate; back, neck, or joint pain; leg cramps; heartburn; and/or dreams. Even with all these challenges, there is good news, because most women can mitigate pregnancy-related sleep problems by implementing strategies listed in the Sleep Wellness Guide along with the following advice. This section will address the importance of sleep during pregnancy, and how to improve sleep by addressing challenges particular to pregnancy.
There are many reasons for pregnant women to be concerned about their sleep. Kathy Lee—a University of California, San Francisco, nursing professor and specialist on pregnancy and sleep—advises pregnant women to remember that in addition to “eating for two,” they are also “sleeping for two.” One of her studies reported that pregnant women who get less than six hours of sleep a night have more difficult labors and are over four times more likely to need a cesarean. A study by another group, which controlled for other factors associated with preterm birth, indicated that poor sleep during pregnancy is associated with a higher incidence of preterm birth (when a baby is born too early). Scientists suggest that preterm labor and births may be related to the increase in prostaglandins found in people getting inadequate sleep.
One of the disruptions to sleep in pregnancy is snoring. Because even a small increase in weight raises the chance of snoring, a woman who never snored could begin snoring during pregnancy, even with the minimum expected weight gain. University of Michigan researchers recommend screening and treatment for this, as they found that snoring that begins during pregnancy is associated with a higher risk of developing high blood pressure during the pregnancy (gestational hypertension) and preeclampsia. Hypertensive disorders during pregnancy can have serious consequences, so we must make an effort to educate people about the importance of screening pregnant women for snoring.
Polls show that a small percentage of pregnant women drink alcohol before bed in hopes of improving their sleep, even though there is solid research on alcohol’s damaging effects on the fetus. Additionally, as stated earlier, while alcohol induces what feels like sleep, it is not healthy, normal sleep. It is essential for women to seek support to eliminate alcohol during pregnancy and lactation due to the damaging impact of alcohol on fetal and infant development. Infant sleep is significantly disrupted by even small amounts of alcohol in breast milk. If giving up alcohol during the breastfeeding months/years is not feasible, some people use different strategies like considering the timing of alcohol consumption and “pumping and dumping” breast milk until it is clear of alcohol before nursing. Please contact a lactation consultant or health-care provider for guidance.
Strategies for healthy sleep during pregnancy begin with the list of items on the Sleep Wellness Guide combined with these additional practices: Sleeping on the side, compared to on the back, reduces lower-back strain and takes the weight of the enlarging uterus off the large blood vessels vital to baby’s and mom’s circulation. This is also helpful for the digestive system, freeing it from the pressure of being beneath the uterus. As often as is comfortable, sleep on the left side, which is slightly preferred because it takes the weight of the uterus off the liver, located on the right side of the body. Left-side sleep also provides the best position for blood flow to the heart and the rest of the body. Early in the pregnancy is a time to practice building the habit of sleeping on the side. However, sleeping all night on the side, especially the same side, is not necessary and likely would cause discomfort in the hips and shoulders. Remember that while this is the optimal position theoretically, the position itself is not something for the pregnant woman to worry about. The priority is to get sleep. During the night, you may awaken to find yourself on your back, or when falling asleep, you might feel better in something other than this prescribed side-sleeping position. Get comfortable as you wish, and rest assured that your body will give you a sign when a move is in order.
Here are some suggestions for increasing your comfort when side sleeping. Lying on your side, place a pillow between your bent knees and extend that pillow to the feet (Figure 1.15). The cushion between the knees squares the hip alignment, and its placement between the feet prevents the rotation of the top of the thigh bone (femur) in the hip socket. All this diminishes back strain. As the uterus increases in size, a cushion beneath the abdomen in this position is often comforting. Body-length pillows may also be a satisfying luxury. If you experience heartburn, use pillows to slightly elevate the head and shoulders in addition to following your health practitioner’s general heartburn treatments.
Regarding other common pregnancy-related sleep disturbances, see, for example, previous sections on treatment for frequent nighttime urination, leg cramps, and unsettling dreams. If there are still challenges, seek out a cognitive behavioral therapy for insomnia (CBTI) practitioner. CBTI is the most effective proven technique for insomnia and does not have the risks and side effects of medications.
Family Sleep and Bed Sharing
The baby has arrived—but now, where do they sleep? Babies sleeping in the same bed with parents is normal in a vast array of cultures all over the world, yet in the US, there continues to be fervent debate (Figure 1.16). Could it be our litigious society, where legal advisors caution medical groups against suggesting cosleeping on the off chance that something could go wrong, or are there legitimate safety and medical concerns? In the following discussion, the terms family sleep, family bed, bed sharing, and cosleeping will be used to refer to the practice of having a baby or child in the bed or in the immediate sleeping space of the parent.
Using research from the fields of medicine and anthropology, Dr. James McKenna, at the University of Notre Dame, provides resources to guide families in safe cosleeping practices. He emphasizes the need for an infant to be in contact with the mother’s body during sleep in order to properly regulate itself, as it did when in the womb. He is also very clear that bed sharing involves much thought, discussion, and a commitment from the parent and also the additional parent—if there is one—and that bed sharing is not suitable for everyone. A misperception associated with family sleep is that the child will grow to be clingy and more dependent, but sociologists and psychologists explain the opposite to be true. When a child senses the strong emotional bond of a parent, the child more easily grows to be independent and emotionally secure. One concept behind cosleeping is that it fosters an environment where a child more confidently differentiates from the parent.
Safe family sleeping requires certain precautions and arrangements such as these:
• Infants should sleep on their back.
• The sleeping surface must be firm and not a pillow.
• The mattress should be as close to the floor as possible, preferably on the floor.
• There must be no potential for a covering, such as a blanket or sheet, to fall over their face.
• There must be no exposure to cigarette smoke or nicotine in utero or as an infant.
• There must be no stuffed animals, pillows, or sheepskins (fluffy items).
• Do not use water beds, beanbags, or couches.
• There must be no gap between the mattress and frame or the mattress and wall.
• Parents must not use alcohol, drugs, or medication that may interfere with their ability to easily awaken.
• Parents with long hair need to fix it so it cannot wrap around the baby’s neck.
• Parents should ensure that they still experience a good night’s sleep. For parents who do not feel they will sleep well with the baby in the bed, there are certified-safe cosleeping bed attachments to consider.
• Breastfeeding helps reduce death from SIDS (sudden infant death syndrome) and other diseases and is highly recommended in conjunction with cosleeping. If the baby is not sleeping with their breastfeeding parent or if the parent is extremely obese, it is safer for the baby to be on a separate surface from the parent’s bed, but still adjacent to it (such as in a cosleeping bed attachment).
Social Justice and Sleep Wellness
Who has the luxury of putting these sleep wellness practices in place? Who is able to dedicate eight hours each night to sleep while managing work and family responsibilities; to attend a school or workplace where a nap is possible; to make time for exercise; to sleep in a comfortable bed in a dark, quiet room at the desired temperature? By now, you are likely clear on the importance of good sleep and its connection to how healthy you will be, how good you feel emotionally, and even how long you will live. But due to economic injustices and lack of equity around things like race and sexual orientation, many people cannot get adequate sleep. Please consider your part in working to help yourself and everyone get better sleep by reading “Your Next Actions for Justice” and chapter 7.
1 Daniel J. Buysse, “Sleep Health: Can We Define It? Does It Matter?,” Sleep 37, no. 1 (January 2014): 9–17, https://doi.org/10.5665/sleep.3298.
2 Gandhi Yetish et al., “Natural Sleep and Its Seasonal Variations in Three Pre-industrial Societies,” Current Biology 25, no. 21 (November 2015): 2862–68, https://doi.org/10.1016/j.cub.2015.09.046.
3 Sheryl Shook, “Sleep Diary,” Google, accessed December 3, 2021, https://docs.google.com/document/d/1zigrkIEwmCLq5oMAkZ-bQIajdwhA9mQezfNAervgIoE/copy.
4 “Hawai‘i Tobacco Quitline,” accessed December 3, 2021, https://hawaii.quitlogix.org/en-US/.
5 Sheryl Shook, “Sleep Diary,” Google, accessed December 3, 2021, https://docs.google.com/document/d/1zigrkIEwmCLq5oMAkZ-bQIajdwhA9mQezfNAervgIoE/copy.
6 Roy J. E. M. Raymann, Dick F. Swaab, and Eus J. W. Van Someren, “Cutaneous Warming Promotes Sleep Onset,” American Journal of Physiology: Regulatory, Integrative and Comparative Physiology 288, no. 6 (June 2005): 1589–97, https://doi.org/10.1152/ajpregu.00492.2004.
7 “NIH State-of-the-Science Conference Statement on Manifestations and Management of Chronic Insomnia in Adults,” NIH Consensus and State-of-the-Science Statements 22, no. 2 (June 2005): 1–30, https://consensus.nih.gov/2005/insomniastatement.htm.
Learning Objectives
After you read this chapter, you will be able to
• provide an introductory overview of neuroanatomy
• identify and describe functions of sleep-related brain structures
• illustrate both directions of the shift between sleep and waking states
• list the components of polysomnography (PSG)
• describe rapid eye movement (REM) sleep and non-REM (NREM) sleep
• determine sleep stages from PSG data
• construct a diagram of healthy sleep architecture for eight hours of sleep
• explain actigraphy, including its limitations compared to PSG
Introduction
Thanks in part to the availability of phone apps claiming to measure and analyze sleep, as well as an epic assortment of sleep analysis devices for the layperson to wear, we are experiencing a much-needed increase in the desire to deconstruct and explore our own sleep. To help understand the intricacies of sleep, let’s first get a fundamental understanding of the brain, at least in the context of how it functions when it’s awake versus sleeping. This will also be valuable in later chapters, which will make reference to assorted brain structures.
Brain Anatomy and Physiology
What molecules in your brain had to be released for you to make the decision to study this chapter? And how are you managing to hold your head up or read the words on the page? The nervous system carries signals through the body via neurons.1 These signals cause activity in muscles, glands, and other neurons. Some of the neurons are in the brain and the spinal cord, which together make up the central nervous system. Others travel throughout the rest of your body and comprise the peripheral nervous system (Figure 2.1). Sensory information from things we see, hear, feel, taste, or smell flows into the body and is processed by the central nervous system. After the brain has put us to sleep, it has a simple way of keeping most of that sensory information from awakening us. And while we are sleeping, the brain is actively creating the elaborate sleep architecture that carries us through the different stages and cascades necessary to secure the myriad benefits of a healthy night’s sleep.
Note to reader: A more comprehensive review of brain anatomy and physiology is beyond the scope of this book, but this chapter will provide enough context and detail to give an understanding of sleep-related brain structures and functions. For additional brain anatomy and physiology, see The Brain from Top to Bottom, a website developed by Bruno Dubuc, hosted by McGill University in Canada, and labeled “copyleft” as a part of their desire to encourage people to freely copy and use their site’s content.2
The nervous system has two classes of cells: glial cells and neurons. Glial cells provide metabolic (metabolism = chemical reactions of the body) and physical support, while neurons carry the nervous system’s signals. Glial comes from the Greek for “glue.” Scientists chose this term when they noticed how numerous these cells were in the brain and mistakenly thought they had no purpose other than holding the neurons together. Later, it became clear that these cells are much more than brain glue and play a crucial role in preventing neurologic disorders through their sleep-related housekeeping activities. The misinformation surrounding glial cells did not end with their name. For ages, scientists believed glial cells immensely outnumbered neurons in the brain. Several studies suggested glial cells were ten times more numerous than neurons. However, in 2016, researchers from the Universidade Federal do Rio de Janeiro and University of Nevada School of Medicine used a new counting method and proposed that there are actually fewer glial cells than neurons in the brain. In their paper, they also provided a history of the techniques used to count glial cells, along with a discussion of the problems with the methods used that led scientists to the wrong conclusions for so many years.3 However, there are still some neuroscientists who debate this conclusion.
In contrast to glial cells, neurons use electrical activity and chemicals to carry signals throughout the body. The basic parts of a neuron are the dendrites, cell body, and axon (Figure 2.2). Dendrites carry information toward the cell body. From there, the signal travels to the axon to be transmitted to a muscle, gland, or another neuron. The functional connection between the neuron and the cell of its destination is called a synapse. Here, chemicals (neurotransmitters) or sometimes charged particles (ions) move from the first cell (presynaptic) to the second cell (postsynaptic). In this way, a signal, such as one triggered from the aroma of your roommate’s cooking, can make you aware of a delight to come. Meanwhile, another pathway, triggered by that same aroma, may cause you to salivate and activate your muscles to get you moving swiftly toward the kitchen so you can eat and fuel your brain for further studying.
The four major parts of the brain are the brainstem, cerebellum, diencephalon, and cerebrum (Figure 2.3). The brainstem is continuous with and superior to (above) the spinal cord. Within the brainstem are the medulla oblongata, pons, and midbrain. Posterior to (behind) the brainstem is the cerebellum. The diencephalon—which includes the thalamus, hypothalamus, and epithalamus—sits on top of the brainstem. The cerebrum, the largest part of the brain, rests on top of the diencephalon.
Brainstem
With groups of neurons that control breathing, heart rate, and blood vessel diameter, the brainstem coordinates movements such as swallowing, coughing, sneezing, and much more. Pathways of sensory and motor information pass through and sometimes make connections in various regions of the brainstem. The reticular activating system (RAS)—a network of connections, primarily originating in the reticular formation—contains brainstem circuits that send signals to the cerebral cortex directly and also via the thalamus to contribute to consciousness (Figure 2.4). Sensory signals along this pathway keep you alert and oriented to your surroundings. The RAS is activated during awake states and is inactivated as part of initiating and maintaining sleep. However, when someone is sleeping, a strong enough sensory stimulus, such as a loud noise, will awaken the person via RAS activation. People differ from one another in the threshold required to activate the RAS during sleep: thus there are “heavy” and “light” sleepers. Signals from the eyes, the ears, and most of the rest of the body (e.g., temperature, touch, pain) travel through the RAS, but odors do not. This is why smoke detectors are important in sleeping areas. A person may die inhaling smoke from a fire while they are sleeping because the smell of smoke will not travel through the RAS and awaken them. If a person is unable to hear a fire alarm, they may consider smoke detectors that utilize extremely bright flashing lights or strong pillow vibrations to activate RAS pathways and increase their chances of awakening.
Cerebellum
Although the cerebellum is only one-tenth of the weight of the brain, it contains almost half of the brain’s neurons. Many of these neurons are dedicated to coordinating and optimizing movement, as well as maintaining posture and balance. While the preliminary motor signal to make a move, such as throwing a ball or saying a word, originates in the motor area of the cerebral cortex, that signal will loop into the cerebellum and back to the cerebral cortex. The benefit of the cerebellar input is that the movement will be smoother and more precise. There are also nonmotor functions of the cerebellum, such as learning and information processing, and a number of sleep-related functions. Research shows cerebral cortex and cerebellar interactions are crucial for memory consolidation, and some of these interactions occur particularly during sleep.4 Cerebellar activity also changes depending on the specific stage of sleep. Scientists continue to debate the exact role of the cerebellum in sleep, but it is clear that its dysfunction can cause sleep problems. In the presence of abnormal cerebellar function due to damage or a neurologic disorder, the sleep-wake cycle can be disrupted, and sleep disorders may be present. Of interest is that clock genes—regulators of the circadian rhythm—are expressed by cerebellar cells, but their function in this region remains to be elucidated.
Diencephalon
The thalamus—the largest part of the diencephalon—is a relay station, transmitting sensory information from the spinal cord and brainstem up into the sensory areas of the cerebral cortex (Figure 2.5). Additionally, by conveying information from the cerebellum and other brain structures up to the motor regions of the cerebral cortex, the thalamus is instrumental in creating coordinated movement. There are also thalamic functions associated with learning, memory, emotions, and consciousness. This consciousness is maintained in part by the thalamus transmitting some of the RAS signals up to the cerebral cortex. In contrast, during some components of sleep, the thalamus sends oscillatory signals to a large area of the cerebral cortex, in effect interfering with the cerebrocortical reception of sensory input that would normally travel up from the RAS. Oscillatory signals in this setting refer to neuronal electrical activity that is regular and synchronized, as opposed to neuronal sensory activity while awake, which would be irregular and not synchronized in a widespread manner.
Posterior and superior to the thalamus, the epithalamus contains the habenular nuclei, which associate emotions with smells—for example, the reaction you may have to the fragrance of your ipo (Hawaiian for “sweetheart”).
The other structure in the epithalamus is the pineal gland, a pea-sized structure that releases the hormone melatonin. Hormones are molecules that flow through the blood to their target structure, where they have an effect. This is the mechanism of action of the endocrine system.5 Therefore, the pineal gland, though it is in the brain, is a part of the endocrine system. During the darkness of night, the pineal gland releases its highest levels of melatonin, thereby regulating the circadian rhythm (see chapter 3).
The hypothalamus (hypo = under) is made up of several nuclei with a vast array of functions. You may be familiar with the nucleus (plural, nuclei) as the part of a cell that contains the genetic material. However, in the brain, nucleus refers to a group of neuronal cell bodies such as those comprising the hypothalamus (Figure 2.7). To get a sense of the range of functions of the hypothalamus, they include—but are not limited to—regulating body temperature, generating the feeling of being satisfied after eating, being sexually aroused, changing heart rate, and controlling the circadian rhythm.
Of the many nuclei in the hypothalamus, the suprachiasmatic nucleus (SCN) is the one that orchestrates the circadian rhythm. As covered in chapter 3, the light- and dark-dependent signals from the eyes are one of the driving forces of the SCN, which regulates the pineal gland’s release of melatonin.
The posterior hypothalamus (posterior hypothalamic nucleus) is a nucleus that contributes to an elaborate network of structures involved with maintaining the awake state. One of the molecules the posterior hypothalamus releases to sustain wakefulness is histamine. This explains, in part, the drowsiness experienced when taking an antihistamine, found in many allergy medications, which blocks the effects of histamine. In fact, one of the wake-producing pathways of caffeine is associated with activating the release of histamine from these neurons. The posterior hypothalamus also releases gamma-aminobutyric acid (GABA) to maintain wakefulness. It does this by inhibiting neurons that would normally inhibit cerebral cortex activity. If you are thinking, “That sounds like a double negative,” you are correct. Think of it this way: The awake cerebral cortex is actively processing information, but that processing can be inhibited by neural pathways, thus resulting in sleep or drowsiness. But if those drowsiness-inducing pathways are inhibited by GABA from the posterior hypothalamus, then the brain will remain alert.
To understand one of the mechanisms for falling asleep, let’s consider what would happen if the posterior hypothalamus, and its wake-promoting effects, were inhibited. Since one of the posterior hypothalamus’s roles is to facilitate the transmission of information up to the cerebral cortex, inhibiting the posterior hypothalamus would support sleep onset by reducing cerebral cortex information processing. The anterior hypothalamus (anterior hypothalamic nucleus) pulls this off via GABA. When the anterior hypothalamus is activated by the neurotransmitter serotonin, and if the timing is right in terms of circadian rhythm, the posterior hypothalamus is inhibited by the anterior hypothalamus, helping bring about the sleep state. The RAS is also inhibited by the anterior hypothalamus’s GABA activity, further reducing the likelihood that sensory information will have alerting effects on the cerebral cortex. Now the brain can fall asleep, mostly undisturbed by the outside world.
Cerebrum
Singing a song, writing a story, playing a sport, and planning the day are made possible by our cerebrum. It is divided in half, with discrete regions that connect the left and right hemispheres. Deep inside the cerebrum are structures associated with an array of functions including memory, emotions, and motor control. The more superficial neurons of the cerebrum comprise the cerebral cortex, which is divided into four lobes: frontal, parietal, occipital, and temporal (Figure 2.8). The insula is another section of the cerebral cortex but is best visualized by creating a space between the meeting of the frontal and temporal lobes (Figure 2.9).
The frontal lobe contains areas for motor control, speech generation, odor identification, reasoning, personality, judgment, understanding consequences, learning complicated concepts, and more.
The parietal lobe receives sensory information, such as touch, temperature, pain, and itch. It also associates sensory data with other information, enabling you to identify a previously encountered item, such as your favorite fruit, entirely by touch. Part of the ability to understand language is also in the parietal lobe.
The occipital lobe processes visual information, including giving meaning to images. For example, image shapes coming from the eyes are combined in the occipital lobe in a manner that allows you to recognize your shoes solely by looking at them.
The temporal lobe receives and processes sounds and has areas for recognizing faces and perceiving smells.
The insula, previously one of the least understood brain regions, is now known to process taste, smell, sound, visceral and body surface sensations, and emotional responses such as empathy.
The limbic system includes part of the cerebral cortex and contains groups of neuronal cell bodies and pathways that interconnect cerebral cortex regions and other brain structures (Figure 2.10). It creates emotions such as pleasure, anger, and rage while also sparking drives for hunger and sex. The hippocampus, a vital structure for memory, is in the limbic system. The hippocampus has received more attention in recent decades because studies have suggested that the adult hippocampus produces new neurons, something previously deemed impossible anywhere in the adult brain. However, with further research, neuroscientists began questioning the existence of hippocampal neurogenesis. The debate has continued, with 2019 research swinging the view back in favor of neurogenesis in adult humans up to ninety years of age.6
In later chapters, we will revisit assorted aspects of brain anatomy, such as when learning about the creation and qualities of different types of dreams or how dreams can help us heal from trauma. For now, our discussion of brain activity will turn to how its characteristics are used to classify different waking and sleep states.
Polysomnogram
The polysomnogram (PSG) is the scientific tool for verifying sleep and is also used clinically to analyze sleep for disorders. While phone apps and actigraphy (see Actigraphy section) are commonly used to report sleep data of varying value, the scientific community has agreed to physiologically define sleep in humans as a set of stereotypical electrical signals from the brain, eyes, and skeletal muscles. Together, these three measurements—electroencephalogram, electrooculogram, and electromyogram—comprise the polysomnogram (poly = many, somno = sleep, gram = recording; Figure 2.11).
Electroencephalogram
During an electroencephalogram (EEG; electro = electricity, en = inside, cephalo = head, gram = recording), electrical activity in the brain travels through the skull and skin and can be detected by pasting tiny electrodes to the scalp (Figure 2.12). Viewing the voltage changes across time gives an indication of sleep onset and offset as well as the stage of sleep (such as REM or NREM, covered in the following section). The voltage change is measured vertically along the y axis, and the time change is measured horizontally along the x axis. This axis orientation is typical for all three types of polysomnogram recordings (electroencephalogram, electrooculogram, and electromyogram), but the scale on the y axis may vary.
The PSG electrical wave characteristics are amplitude, frequency, and morphology (Figure 2.13). Wave amplitude is exactly what it sounds like: the size of the wave—a y axis measurement of voltage. Frequency describes how fast the waves are coming, so it is measured along the x axis, by timing (Figure 2.14). Frequency is measured in hertz (Hz), also known as “cycles per second,” with a cycle being an entire wave; it refers to how many whole waves arrive every second. (The unit is named after Heinrich Hertz, a physicist who studied electromagnetic waves.) Morphology (morph = form) is a way to look along the recording for unique shapes, such as a sleep spindle or K-complex, which are discussed in relation to NREM 2 in the Sleep Stages section (Figure 2.15). Different physiological states, such as sleeping or thinking, can be identified by EEG (Figure 2.16).
• Beta: awake, alert, thinking; 14–40 Hz
• Alpha: awake, resting the mind, eyes closed; 8–13 Hz
• Theta: drowsiness, daydreaming, sleep; 4–7 Hz
• Delta: sleep; 1–4 Hz
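To make those band boundaries concrete, here is a minimal sketch that estimates the strongest frequency in a short EEG-like signal and maps it onto the bands listed above. It is an illustration only; the sampling rate, the simulated signal, and the exact band edges are assumptions for teaching purposes, not a clinical scoring method.

```python
import numpy as np

# Illustrative band edges (Hz), matching the list above.
BANDS = {"delta": (1, 4), "theta": (4, 7), "alpha": (8, 13), "beta": (14, 40)}

def dominant_band(eeg_signal, sampling_rate_hz):
    """Return the EEG band containing the strongest frequency component."""
    # Power spectrum of the recording.
    spectrum = np.abs(np.fft.rfft(eeg_signal)) ** 2
    freqs = np.fft.rfftfreq(len(eeg_signal), d=1.0 / sampling_rate_hz)
    # Ignore frequencies below 1 Hz (slow drift) and above 40 Hz.
    mask = (freqs >= 1) & (freqs <= 40)
    peak_freq = freqs[mask][np.argmax(spectrum[mask])]
    for name, (low, high) in BANDS.items():
        if low <= peak_freq <= high:
            return name, peak_freq
    return "unclassified", peak_freq

# Example: a simulated 10 Hz (alpha-range) signal, sampled at 256 Hz for 5 seconds.
t = np.arange(0, 5, 1 / 256)
simulated = np.sin(2 * np.pi * 10 * t)
print(dominant_band(simulated, 256))  # ('alpha', 10.0) approximately
```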
Electrooculogram
Different parts of the sleep cycle have particular eye movements that can be recorded by pasting electrodes on the skin beyond the outer corner of each eye for an electrooculogram (EOG; electro = electricity, oculo = eye, gram = recording). The anterior (front) region of the eyeball is positively charged compared to its posterior (back) region. This charge difference is utilized to generate a voltage trace for each eye, indicating if the eye is moved toward or away from the electrode, as well as the speed and size of the movements (Figure 2.17).
Electromyogram
Body movement during sleep can be categorized to determine sleep stages. Electrodes are typically placed below the chin and on the leg for an electromyogram (EMG; electro = electricity, myo = muscle, gram = recording). If you are sitting up reading this chapter and start to fall asleep, your head would fall slightly forward because the postural muscles below the chin relax. This change in muscle tone is picked up by an EMG. During a night’s sleep, it is normal to change position, twitch, and even have periods of paralysis. The EMG displays the type and timing of this movement (or lack of movement) so that data can be combined with the EOG and EEG to provide details about a person’s sleep.
Additional Clinical Measures
The EEG, EOG, and EMG are useful in research, but a clinical sleep study relies on additional physiological data. The sleep technician will connect the patient to devices to measure heart activity (electrocardiograph), blood oxygen (pulse oximeter), breathing effort (chest and abdominal expansion measurement instruments), and breath movement at the mouth and nose (oral/nasal airflow sensors). See the Apnea section of chapter 6 for a further discussion of these clinical measures.
Sleep Stages: REM Sleep and Non-REM Sleep
Sleep is divided into five major stages, each with an assortment of characteristics that distinguishes one stage from the other. However, they are named simply in reference to the presence or absence of rapid eye movement (REM). Curiously, REM sleep has only brief periods of rapid eye movement, but that name has persisted through the decades. Non-REM (NREM) sleep is further divided into four stages: NREM 1, 2, 3, and 4. Each of the five sleep stages occurs and repeats during different parts of a night’s sleep, comprising the full sleep cycle. The order, timing, and duration of the stages are referred to as sleep architecture. We will see that the brain has quite a job to do if it is to build a healthy night’s sleep according to the sleep architectural blueprint, which has been perfected over millennia.
REM Sleep
During REM sleep, we have vivid and emotional dreams while the body is paralyzed and not apparently regulating several physiological functions such as body temperature, heart rate, and blood pressure. REM sleep is composed of phasic and tonic components. Phasic REM sleep is easily recognized due to the “phases” when the eyes are darting back and forth. Tonic REM sleep, while still considered REM sleep, does not have eye movements but has similar brain activity to phasic REM sleep. Unless otherwise noted, in this textbook, REM will refer to REM in general without differentiating between phasic or tonic.
The purpose of those rapid eye movements may surprise you, especially if, like many others, you assumed the movements were associated with dream content (which they are not). Research by a Columbia University ocular physiologist suggests that rapid eye movement during sleep may be a way to keep the aqueous humor in the eyeball swirling in order to transport oxygen from the blood vessels of the iris to the cornea, which lacks blood vessels.8 During sleep, if the eyes did not move, the lack of aqueous fluid movement could result in corneal suffocation and cell death. When a person is awake, with the eyes open, there is a temperature difference on either side of the cornea that creates convection currents, causing the aqueous humor to move and transport the oxygen (Figure 2.18). The story gets more interesting when we try to understand why periods of REM sleep get longer throughout the night. The Columbia researcher’s group theorized that this lengthening of the REM sleep periods is necessary for oxygen transport, as the cumulative time (NREM + REM) the closed eye remains motionless increases from the first to the last hour of sleep.
Looking at the EEG of a person in REM sleep may lead you to believe they are awake because the electrical activity is asynchronous—it looks messy. This asynchronous activity is typical of the waking state, when the brain is processing myriad sensory input and thoughts.
The flaccid paralysis of skeletal muscles during REM sleep leaves the person motionless except for breathing, rapid eye movement, and the occasional twitch, perhaps in a leg, finger, or facial muscle. There are also tiny skeletal muscles in the middle ear—providing protection from loud noises—that are not paralyzed during REM sleep, but this is certainly not observable to the casual viewer. Figure 2.19 shows the EEG, EOG, and EMG of the waking state and different stages of sleep: S1 (NREM 1), S2 (NREM 2), SWS (slow-wave sleep; NREM 3 and NREM 4), and REM.
NREM Sleep
NREM 1 is how you enter sleep and is a light stage of sleep. Light sleep means a person is easily awakened. Many of us have been on one end of this experience: You wake up your friend, who is obviously sleeping, and you thoughtfully mention, “Sorry to wake you, but—” and they interrupt, “I was not sleeping!” and look at you like you said something ludicrous. If your friend happened to be hooked up to PSG, you would be able to show them they were in fact asleep. They may report they could not have been asleep because they were thinking about something, although usually something quite mundane. These “thoughts” are in fact the dull dreams of NREM 1. Another experience of NREM 1 can be when we lie down to sleep, and after a few moments, wonder why we were thinking something slightly absurd or illogical. We likely fell into NREM 1, easily awakened with no impression of being asleep, and then recalled the NREM 1 dream as a “thought.”
Here are some more facts about NREM 1 sleep:
• The EEG of NREM 1 is characterized by theta activity, with its lower frequency compared to the awake state.
• Although you may freak out your roommate by staring at them while they fall asleep (the sleep scientist’s folly), you can note when they drift into NREM 1 because their closed eyes show easily observed slow rolling movements.
• Occasionally, we see someone lying down, gently readjusting their position, and we conclude they are not sleeping. We may say something to them, only to find they startle a bit and ask why we awakened them. These seemingly wakeful movements are normal in NREM 1 and are seen in the EMG. This stage can also include hypnic jerks, where the entire body or body parts have a large twitch, and there is often a sensation of falling. There is speculation that hypnic jerks are a vestigial response that prevented our ancestors who slept in trees from falling to the ground.
The term light sleep often refers to NREM 1 and NREM 2, but NREM 2 sleep, where you will spend almost half of your night, is more difficult to awaken from than NREM 1. This is when things in the body start to slow down:
• The unique EEG morphology—sleep spindles and K-complexes—of NREM 2 makes it easy to differentiate this stage of sleep from the others (Figure 2.20). Sleep spindles may be associated with learning and with transferring information from short-term to long-term memory. K-complexes are generated in response to a stimulus, such as touch or sound, and may help us stay asleep during those potential disruptions.
• The eyes do not have any noticeable movements during NREM 2.
• There may still be some body movements during NREM 2, such as shifting position.
NREM 3 and NREM 4 together are often referred to as “deep sleep” because awakening from these stages is difficult and results in a fierce feeling of grogginess.
• The EEGs for NREM 3 and NREM 4 both contain large-amplitude, slow waves—delta waves—giving both of these sleep stages the name “slow-wave sleep.” NREM 4 consists almost entirely of these slow waves, while NREM 3 has intermittent periods of the slow waves. Because this percentage of slow-wave sleep is the most noticeable difference between NREM 3 and NREM 4, many scientists have abandoned use of NREM 4, stating that NREM has only three stages: 1, 2, and 3. For simplicity and clarity in this text, we will use slow-wave sleep (SWS) to refer to NREM 3 and NREM 4 collectively, collapsing NREM 4 into NREM 3 when discussing the NREM stages.
• The eyes do not have any noticeable movements during SWS.
• Some body movement may occur during SWS, but it is minimal.
Some researchers debated the use of the word deep when referring to slow-wave sleep, so occasionally an article may seem contradictory to the convention. Which would you consider deep sleep: SWS, during which the body may be moving slightly and is still regulating many of its physiological functions, such as temperature and blood pressure, or REM sleep, when the body is paralyzed and not highly regulating some physiological functions, such as temperature and blood pressure? Ultimately, most have landed on considering SWS deep sleep due to the synchronous slow-wave brain activity and the difficulty of awakening a person from this stage, compared to the asynchronous brain activity of REM sleep and the relative ease of awakening from REM sleep.
Sleep Architecture
Sleep architecture is the timing and order of each of the sleep stages: REM and NREM 1, 2, and 3. Your brain and body are building something complex while you are lying there, and even something seemingly minor like a glass of wine shortly before bed is enough to disrupt your brain’s ability to create all the elements of sleep. Alcohol, a central nervous system depressant, is one of the many substances that can prevent the brain from generating some of the sleep stages, such as REM, and can wreak havoc on the body’s ability to organize the stages in a manner necessary to receive the benefits of a healthy night’s sleep.
Sleep begins with NREM 1 and then moves through NREM 2 and 3 before going into the first period of REM, and this completes the first sleep cycle. On the way from NREM 3 to that first REM period, there may be some time in NREM 2 and 1. This first cycle takes about ninety minutes and will repeat throughout the night around five times, resulting in around 7.5 hours of sleep (consider doing that math to convince yourself it makes sense). The hypnogram in Figure 2.21 shows sleep architecture. Around midnight, this person took a few minutes to fall asleep (sleep-onset latency), went into NREM 1 (stage 1), and then went through each of the night’s sleep cycles before ultimately awakening fully at 6:30 a.m.
Within each ninety-minute cycle, as it repeats during the night, REM increases and NREM 3 and 4 decrease. Another way to think of this is that during the beginning of the night, you are getting more NREM 3 and 4, and during the last part of the night, you are getting more REM. Putting it all together, we also see that almost half the night is spent in NREM 2. (Note how the example hypnogram differentiates between NREM 3 and NREM 4 [as stages 3 and 4], while in this textbook, those two stages are typically merged into NREM 3.)
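To check the arithmetic behind that architecture, the toy example below lays out five ninety-minute cycles and totals the minutes per stage. The specific stage durations are invented for illustration; they simply follow the trends described above (REM growing and slow-wave sleep shrinking across the night, with NREM 2 close to half the total).

```python
from collections import Counter

# A toy hypnogram: five 90-minute cycles, each listed as (stage, minutes).
# These durations are illustrative only, not a clinical recording.
cycles = [
    [("NREM1", 5), ("NREM2", 35), ("NREM3", 40), ("REM", 10)],  # early night
    [("NREM1", 5), ("NREM2", 40), ("NREM3", 30), ("REM", 15)],
    [("NREM2", 45), ("NREM3", 20), ("REM", 25)],
    [("NREM2", 45), ("NREM3", 10), ("REM", 35)],
    [("NREM2", 45), ("REM", 45)],                                # late night
]

total_minutes = sum(minutes for cycle in cycles for _, minutes in cycle)
print(f"Total sleep: {total_minutes} minutes = {total_minutes / 60:.1f} hours")
# Total sleep: 450 minutes = 7.5 hours

# Time spent in each stage across the night.
per_stage = Counter()
for cycle in cycles:
    for stage, minutes in cycle:
        per_stage[stage] += minutes
print(dict(per_stage))  # NREM 2 dominates, close to half the night
```

Running it gives 450 minutes, or 7.5 hours, with NREM 2 accounting for roughly 47 percent of the night in this made-up example.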
Actigraphy
Cell phones contain a tiny instrument, an accelerometer, that changes the view on the phone display—the screen rotation—depending on how the phone is being held. In general, an accelerometer detects a change in the speed, direction, and size of a movement. Actigraphy utilizes accelerometers in small, watch-like devices to record a person’s physical activity, and consequently, in combination with computer algorithms, can be used to examine sleep in clinical and research studies (Figure 2.22).
The idea behind actigraphy is that during long enough periods of inactivity, a person must be sleeping, so that period would be labeled as sleep. Usually, the device will have a button that can be pressed when the person goes to bed and awakens. That context is helpful because sitting for two hours watching television could also seem a lot like sleep to an accelerometer. Polysomnography, with the three physiological measures of EEG, EOG, and EMG, has been used to validate actigraphy. However, it is important to understand actigraphy’s limitations. In actigraphy, we are using a device to measure movements and then making a leap utilizing computer programming to label different periods as sleep, while PSG is measuring the actual elements (EEG, EOG, and EMG) used in defining sleep.
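Commercial scoring algorithms are proprietary and validated against PSG, but the underlying idea can be sketched with a simple hypothetical rule: long runs of low activity counts are labeled as sleep. Everything below, including the one-minute epoch length, the threshold, and the run length, is an illustrative assumption rather than any real device’s method.

```python
def score_sleep(activity_counts, threshold=20, min_quiet_epochs=10):
    """Label each one-minute epoch as 'sleep' or 'wake' from accelerometer counts.

    A hypothetical rule: an epoch is 'quiet' if its activity count is below
    `threshold`; a run of at least `min_quiet_epochs` quiet epochs is scored
    as sleep. Real actigraphy algorithms are more sophisticated, so treat
    this as a teaching sketch only.
    """
    quiet = [count < threshold for count in activity_counts]
    labels = ["wake"] * len(activity_counts)
    i = 0
    while i < len(quiet):
        if quiet[i]:
            run_start = i
            while i < len(quiet) and quiet[i]:
                i += 1
            if i - run_start >= min_quiet_epochs:
                for j in range(run_start, i):
                    labels[j] = "sleep"
        else:
            i += 1
    return labels

# Example: 30 minutes of fidgeting, then 60 quiet minutes, then movement again.
counts = [150] * 30 + [5] * 60 + [120] * 15
labels = score_sleep(counts)
print(labels.count("sleep"), "minutes scored as sleep")  # 60
```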
The different measures and derivatives from actigraphy are as follows:
• Sleep latency: how long it takes to fall asleep
• Wake after sleep onset (WASO): how much time, after falling asleep, was spent awake
• Total sleep time: from sleep onset to final awakening, with WASO subtracted
• Sleep efficiency: total sleep time divided by the total time between sleep onset and final awakening; often referred to as sleep quality
Sleep latency should be at least fifteen minutes, as discussed in chapter 1, but certainly, much beyond that can begin to be frustrating. Sleep efficiency should be between 85 and 95 percent. To make this relatable, imagine that during the eight hours between falling asleep and waking up in the morning, you were awake for a few minutes enough times in the night that it added up to one hour of being awake (one hour of WASO). That would equate to seven hours of sleep during that eight-hour period. Dividing seven by eight gives a healthy sleep efficiency of 88 percent. Upon seeing their actigraphy data for the first time, many of my sleep science lab students are shocked by how many times they woke up during the night and even more surprised that it is considered normal and healthy. We are rarely aware of any of these awakenings.
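Here is the same worked example expressed as a short calculation, using the definitions from the list above; the numbers are the textbook’s illustration (480 minutes between sleep onset and final awakening, 60 minutes of WASO).

```python
def sleep_metrics(sleep_onset_to_final_wake_min, waso_min):
    """Total sleep time and sleep efficiency, using the definitions above."""
    total_sleep_time = sleep_onset_to_final_wake_min - waso_min
    efficiency = total_sleep_time / sleep_onset_to_final_wake_min
    return total_sleep_time, efficiency

# Eight hours (480 minutes) between falling asleep and the final awakening,
# with 60 minutes of wake after sleep onset (WASO).
tst, eff = sleep_metrics(480, 60)
print(f"Total sleep time: {tst / 60:.1f} h, sleep efficiency: {eff:.0%}")
# Total sleep time: 7.0 h, sleep efficiency: 88%
```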
What about a sleep efficiency of 100 percent—and why is that not included in the healthy range? With normal sleep architecture and a reasonable amount of sleep debt, a person would still occasionally awaken, as noted previously. However, if a person has a sleep disorder or an extreme amount of sleep debt, they may not awaken at all during their night’s sleep and have a sleep efficiency close to 100 percent.
1 Gordon J. Betts et al., Anatomy and Physiology (Houston: OpenStax, 2013), 12, available at https://openstax.org/books/anatomy-and-physiology/pages/12-introduction.
2 Bruno Dubuc, The Brain from Top to Bottom (blog), last modified May 4, 2021, https://thebrain.mcgill.ca/index.php.
3 Christopher S. von Bartheld, Jami Bahney, and Suzana Herculano-Houzel, “The Search for True Numbers of Neurons and Glial Cells in the Human Brain: A Review of 150 Years of Cell Counting,” Journal of Comparative Neurology 524, no. 18 (June 2016): 3865–95, https://doi.org/10.1002/cne.24040.
4 Cathrin B. Canto et al., “The Sleeping Cerebellum,” Trends in Neurosciences, regular ed., 40, no. 5 (May 2017): 309–23, https://doi.org/10.1016/j.tins.2017.03.001.
5 Gordon J. Betts et al., Anatomy and Physiology (Houston: OpenStax, 2013), 17, https://openstax.org/books/anatomy-and-physiology/pages/17-introduction.
6 Elena P. Moreno-Jiménez et al., “Adult Hippocampal Neurogenesis Is Abundant in Neurologically Healthy Subjects and Drops Sharply in Patients with Alzheimer’s Disease,” Nature Medicine 25, no. 4 (March 2019): 554–60, https://doi.org/10.1038/s41591-019-0375-9.
7 “Waikiki Health,” accessed on December 3, 2021, https://waikikihc.org/.
8 “New Research Suggests REM Is about Eyes Not Dreams,” Columbia University Irving Medical Center, Columbia, accessed May 5, 2021, https://www.cuimc.columbia.edu/news/new-research-suggests-rem-about-eyes-not-dreams.
Introduction
Why are you lying awake, staring at the ceiling? It’s 10:00 a.m. and you stayed up all night to study for an exam, arrived at your early morning class thinking about nothing more than how quickly you could get back to your bed, completed the exam in spite of the occasional head bob, then rushed home to at last jump into bed (Figure 3.1). Yet there you are, not only having flashbacks from the exam pages but also criticizing yourself for something silly you said to your crush as you were leaving the classroom. In other words . . . you are wide awake! You can thank your circadian rhythm.
Circadian Rhythm and Sleep Pressure Don’t Always Agree
There are brain cells that drive your body to go through an activity cycle that is roughly twenty-four hours in duration. The cycle is your circadian rhythm, and those brain cells are like a clock inside your body. Almost every creature on earth has a similar cycle; even plants and insects exhibit these rhythms.
The 2017 Nobel Prize in Physiology or Medicine was awarded to researchers who figured out how genes in the fruit fly create a rhythm of cell activity that is approximately twenty-four hours. They also clarified how similar mechanisms are utilized in human cells to create our biological clock. The internal clock provides the daily timing for sleep, body temperature, blood pressure, mental clarity, bowel movements, hormones, athletic performance, and more (Figure 3.2). And while light and dark have a significant impact on this circadian rhythm, the cycle will persist even if a creature is in total darkness for days.1
Your body has another process that controls whether or not you are sleepy: sleep pressure, the drive to sleep depending on how long you have been awake. Your brain breaks down adenosine triphosphate (ATP) to get energy (Figure 3.3). This reaction causes an accumulation of adenosine. Every hour you are awake, adenosine builds up, binds to adenosine receptors, and activates sleep-promoting regions of the brain, while at the same time, adenosine inhibits alert-promoting brain regions. Through these pathways, adenosine puts “pressure” on the brain to go to sleep. During sleep, adenosine will get broken down, recycled, and removed from the brain, so your sleep pressure drops to its lowest point during the final minute of your sleep. Then with each waking moment, sleep pressure continues to build, and the cycle continues.
As you likely guessed from our all-nighter scenario that left you wide awake at 10:00 a.m., circadian rhythm and sleep pressure have an interaction. By staying up all night, sleep pressure builds continuously until we have to struggle heartily to stay awake. But circadian rhythm drives the brain and body to be alert in the midmorning hours, even if we are sleep deprived. You likely have also experienced this circadian rhythmicity on a day when, after a perfectly sound night’s sleep, you find yourself feeling quite drowsy around 2:00 p.m. This is the internal clock of your circadian rhythm giving you the healthy body signal that it is nap time (Figure 3.4).
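Sleep researchers often describe this push and pull with a “two-process”-style picture: a homeostatic sleep-pressure curve that rises with time awake and a roughly twenty-four-hour circadian alerting curve. The sketch below is only a cartoon of that idea; the curve shapes, constants, and times are arbitrary illustration values, not measured physiology.

```python
import math

def sleep_pressure(hours_awake, rise_rate=0.08):
    """Homeostatic pressure grows the longer you are awake (toy exponential)."""
    return 1 - math.exp(-rise_rate * hours_awake)

def circadian_alertness(clock_hour, peak_hour=10, amplitude=0.4):
    """A crude 24-hour alerting rhythm peaking in midmorning (illustrative)."""
    return amplitude * math.cos(2 * math.pi * (clock_hour - peak_hour) / 24)

def sleepiness(hours_awake, clock_hour):
    """Net drowsiness: pressure pushing toward sleep minus circadian alerting."""
    return sleep_pressure(hours_awake) - circadian_alertness(clock_hour)

# The all-nighter scenario: awake since 8:00 a.m. the previous day.
for clock_hour, hours_awake in [(4, 20), (10, 26), (14, 30)]:
    print(f"{clock_hour:02d}:00  sleepiness = {sleepiness(hours_awake, clock_hour):.2f}")
# In this toy run, net sleepiness at 10:00 a.m. dips below the 4:00 a.m. value
# even though more hours have passed without sleep, because the circadian
# alerting signal is near its midmorning peak.
```

That dip mirrors the wide-awake-at-10:00-a.m. scenario that opened the chapter, and the 2:00 p.m. value rises again, echoing the afternoon drowsiness described above.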
How Many Hours Are in a Day?
In 1938, two sleep science pioneers were so intrigued by the sleep-wake cycle that they spent a month 140 feet (over 42 meters) below ground in Mammoth Cave in Kentucky. There was no outside light, and the temperature remained at 54 degrees Fahrenheit (12 degrees Celsius) in the ana (Hawaiian for “cave”).
One of their primary interests was what we now call the circadian period—the time it takes to complete one cycle of the circadian rhythm. In other words, away from the influence of light and other cues that tell us when a day begins and ends, how long would it take for the body to go through a cycle of its natural biological rhythms before starting over for the next “day.” These University of Chicago researchers, Professor Nathaniel Kleitman and his student Bruce Richardson, recorded, among other things, fluctuations in body temperature, hoping to gain insight into the body’s internal connection to the twenty-four-hour day (Figure 3.6). “Internal” in this case refers to something that would drive the circadian cycle without any external cues, such as daylight. Based on sleep-wake cycles and body temperature fluctuations, they found their biological rhythms were in fact longer—by one to four hours—than twenty-four hours. We now know they were on track with this conclusion: in a setting not influenced by external cues such as light, the human circadian period is about twenty-four hours and fifteen minutes. This means that left to our own devices, each night, we would fall asleep fifteen minutes later. After just eight days of this, rather than falling asleep at midnight and arising at 8:00 a.m., those times would shift to 2:00 a.m. and 10:00 a.m. The time shift would continue this way forever. Eventually, you would be falling asleep in the late afternoon and awakening several hours before sunrise. We will turn our attention to sunlight to explain why we are saved from that daily shift in our schedule.
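The drift arithmetic is easy to verify: at roughly fifteen extra minutes per night, bedtime slides by two hours over eight nights. A tiny sketch (the midnight start time and the fifteen-minute drift are the illustrative values from the text; the calendar date is arbitrary):

```python
from datetime import datetime, timedelta

drift_per_day = timedelta(minutes=15)   # circadian period of about 24 h 15 min
bedtime = datetime(2024, 1, 1, 0, 0)     # start by falling asleep at midnight

for night in range(9):
    if night in (0, 8):                  # show the first and eighth night
        print(f"Night {night}: fall asleep around {bedtime:%I:%M %p}")
    bedtime += drift_per_day
# Night 0: fall asleep around 12:00 AM
# Night 8: fall asleep around 02:00 AM
```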
Sunlight, Larks, and Night Owls
Thankfully, sunlight has a strong influence on our circadian rhythm. Exposure to light in the morning synchronizes it with our planet’s solar cycle, thus trimming those fifteen minutes off our circadian period. Even artificial light, social activity, noise, temperature, and food impact our internal clock (Figure 3.7). These cues are called zeitgebers, from the German for “time givers.” Part of the success of Kleitman and Richardson’s work in the cave was due to their being away from major zeitgebers, so they could experience what the internal clock would do in the absence of most external influences. Being isolated from zeitgebers puts a person in a “time-free” environment. They do not know the time of day or night or even how many days have passed.
In the decades since Kleitman and Richardson, circadian rhythm studies have often emphasized the importance of time-free settings in order to substantiate the theory of the internal clock working on its own. In some protocols, male participants are directed to shave their faces at varied intervals so their “five-o’clock shadow” will not provide any clues about the time of day or number of days passing. In the absence of these types of zeitgebers, numerous investigations have verified that our clock signal is generated inside of us (endogenous), but where exactly is its control center?
Animal studies have demonstrated that the suprachiasmatic nucleus (SCN), a tiny structure in the brain, is necessary and sufficient to create the circadian rhythm (see chapter 2). Scientists removed the SCN from animals that previously exhibited healthy circadian rhythms, and their rhythmicity disappeared, suggesting the SCN is necessary to generate the circadian rhythm. Another procedure involved transplanting the SCN from one animal to another. The SCNs from animals with long circadian periods (more than twenty-four hours) were transplanted into animals with short circadian periods (less than twenty-four hours), and vice versa. Consequently, the animals’ circadian rhythms shifted to be aligned with their new SCN, implying the SCN is sufficient for generating circadian rhythm. But the SCN is only one structure along the pathway of signals that keeps the body on the approximately twenty-four-hour rhythm.
In the dark, a signal from the paraventricular nucleus of the hypothalamus (PVH) activates a circuitous pathway to the melatonin-releasing cells of the pineal gland (Figures 3.8 and 3.9). The signal travels from the PVH, down into the upper thoracic region of the spinal cord, through the superior cervical ganglion (a little ball of neurons in the neck), and finally up to the pineal gland, causing it to release melatonin, which is a circadian rhythm–setting molecule that tells your brain it is time to sleep. Then when light shines on the eyes, an electrical signal travels along the optic nerve to the SCN—our internal clock—which is also a part of the hypothalamus. In the presence of light, especially sunlight and blue light, the SCN sends a signal to the PVH and inhibits the melatonin-producing pathway. This lets our brain know it is time to be awake. We are diurnal (active during the day), so this pathway keeps us alert during the daytime hours. In a nocturnal (active at night) animal, the melatonin release/inhibition pathway is similar to ours, except opposite in one regard: the response to light is to induce sleep and the response to dark is to bring about alertness and activity.
Since this pathway of disrupting melatonin secretion begins with light shining on the eye, it can be surprising to find that some blind people do have their circadian rhythm entrained to the sunlight. This is because the cells of the retina, a layer of tissue lining the inside of the back of the eye, do not all serve vision in the same way: most retinal ganglion cells carry light-signal information from other retinal cells to the brain for processing visual images, but another type of retinal ganglion cell contains melanopsin, a light-sensitive pigment, and in addition to responding to light themselves, these cells carry the light information to the SCN for circadian rhythm light entrainment.
In other words, there are different types of retinal ganglion cells, which perform different functions. If a person’s blindness was caused by something that left the melanopsin-containing ganglion cells functioning, they will be able to maintain their circadian rhythm in sync with the sun. However, it is important for these individuals to wear dark sunglasses in daylight because they will not have the pupillary constriction (the shrinking of the pupil) that would protect their retina from the sun’s damaging rays. For some blind people, their melanopsin-containing ganglion cells do not function, so sunlight does not regulate their circadian rhythm and thus they struggle more with maintaining a twenty-four-hour circadian rhythm. The US Food and Drug Administration and the European Medicines Agency have approved a drug that activates melatonin receptors as a treatment, and some research also suggests that melatonin supplements, when dosing and timing are appropriate, improve circadian rhythmicity in blind people.
In some parts of the world with short periods of daylight in winter, circadian rhythm disruptions are understandably more common and are often associated with poor sleep quality (Figure 3.10). Complicating the situation is seasonal affective disorder (SAD), a type of depression that most often begins when the weather becomes cloudier (blocking the sun) and/or daylight periods get shorter. There are interactions between the circadian rhythm pathway and pathways that involve the release of molecules like thyroid hormone and serotonin, which affect mood. The association of depression with poor sleep further compounds the challenge of SAD. In these regions with darker days, it is helpful to incorporate various forms of light therapy, including working in front of light boxes and installing classroom lights that simulate a bright spring day at noon (Figure 3.11).
Regardless of where they live, some people find their circadian rhythm is naturally shifted so they fall asleep in the early evening and wake up before dawn. These folks are sometimes referred to as larks (“morning people”), while their counterparts, the night owls (“night people”), fall asleep after midnight and wake up much later in the morning, maybe as late as noon (Figure 3.12). These are two different chronotypes (a word that comes from khronos, the Greek word for time). While sometimes these bedtime patterns are age-related, such as the elderly lark or teenage night owl, chronotype is also a gene-based timing pattern for when a person naturally feels sleepy. The genetically determined chronotype usually persists regardless of age.
About 40 percent of humans are larks, about 30 percent are night owls, and the remaining 30 percent fall in the middle. This genetic variation within our species is quite inconvenient in our modern world, which operates mostly around a nine-to-five workday. It is especially difficult for the night owls, who have to start engaging their brain several hours before they are metabolically ready every morning. It is unfortunate that our society does not accommodate different chronotypes, especially considering the health consequences (increased heart disease, diabetes, brain disorders) and accidents associated with disrupting a person’s natural circadian rhythm. Considering all this may make you wonder why the different chronotypes exist. Yet looking to our tribal ancestors, these different chronotypes make perfect evolutionary sense. If it is time for the tribe members to sleep, the safer outcome in terms of avoiding attack during sleep would be to have a few members awake late at night and a few alert and functioning before dawn. Thus the larks and owls were the revered sentinels (Figure 3.13). This is not much solace for night owls who have to catch the bus at 6:30 a.m. to get to an 8:00 a.m. class, but there are coping strategies. Chapter 6 contains a discussion of these circadian rhythm disorders, referred to as advanced sleep-wake phase (in larks) and delayed sleep-wake phase (in night owls). Tips for working with each of these disorders are in chapter 1, which also includes details about orange versus blue light (such as from a computer screen) and their effects on the circadian rhythm.
Derailing the Circadian Rhythm
For those who are neither larks nor night owls, there still are plenty of challenges to maintaining the circadian rhythm. And deviating from the earth’s rotational rhythm and natural daylight hours comes at a great cost. Even a once-a-year shift forward in the clocks can be deadly, as seen in the US with the significant increase in deaths from heart attacks and accidents on the Monday after the beginning of daylight saving time. Russian researchers also claim their country had an enormous increase in heart attacks and suicide rates on that day, and for that reason, Russia and many other countries are abandoning the daylight saving time shift. But if it causes so much harm, why and where did it begin?
This shifting of the clock time—in the US, setting it ahead by an hour in March and then returning to standard time in November—originated in different periods of history, and independently in many countries for varied reasons. For example, a New Zealand entomologist in the late 1800s wanted more evening hours to find insects, and the Germans during World War I hoped it would help their war effort. Currently, only a little more than a third of countries in the world engage in this practice. Many nations—based on science and as a reflection of the value they place on health, safety, and productivity—are making the move to ditch the practice of shifting the clock.
Regardless of time zone, many people have to live by a different clock because of their work hours. While some have the luxury of a 9:00 a.m. to 5:00 p.m. workday, a shift worker may have to work through the middle of the night (Figure 3.14). More challenging still, some shift workers have weekly rotations in their shifts, from daytime to nights to mornings. A night shift might be from 5:00 p.m. to 1:00 a.m., with the morning shift from 1:00 a.m. to 9:00 a.m. Shift work is associated with devastating health problems such as increased rates of cardiovascular disorders, depression, diabetes, and cancer. The World Health Organization has classified shift work as a probable carcinogen, attributing the link to the health damage that comes with disrupting the circadian rhythm.
One of the recommendations to help shift workers is to eliminate the weekly rotation between shifts so the body does not have to experience the equivalent of traveling through eight time zones every week and never settling on any circadian rhythm. Science indicates that if rotation is necessary, the shifts should be rotated clockwise: from day shift, to night shift, to morning shift. This movement, while not at the same magnitude, is at least in the same direction as our internal circadian rhythm, which runs about fifteen minutes longer than twenty-four hours, making us naturally want to go to bed and wake up a little later each day. The other advice is to rotate every three weeks, not every week. Protecting the eyes from sunlight and blue light with tinted glasses in the two hours before sleep is also helpful for shift workers, especially those driving through bright sunlight as they head home for their much-needed sleep.
The time zone change experienced with airline travel has some difficulties in common with shift work (Figure 3.15). Scientists have shown how jet lag can cause digestion problems, menstrual cycle irregularities, feelings of depression, and foggy thinking. Left to its own devices, the SCN adjusts to a new time zone by only an hour a day, so thankfully, there are several effective jet lag strategies. One of the most surprising protocols is fasting to reset our internal clock. Studies have shown that if an animal is not getting enough food, light takes a back seat as the strongest zeitgeber (time cue). The driving factor for circadian rhythm becomes all about the best time to get food. See chapter 6 for approaches to minimize jet lag.
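To put the one-hour-per-day figure in perspective, here is a minimal back-of-the-envelope sketch in Python. The function name and the assumed adjustment rate are illustrative only; they simply restate the rule of thumb above, not the results of any particular study.

```python
# Rough estimate of unaided jet lag adjustment, assuming the SCN shifts
# by about one hour per day when no special strategies are used.
ADJUSTMENT_RATE_HOURS_PER_DAY = 1.0  # assumed average rate from the text

def days_to_adjust(time_zones_crossed: int) -> float:
    """Approximate days for the internal clock to realign with local time."""
    return abs(time_zones_crossed) / ADJUSTMENT_RATE_HOURS_PER_DAY

# Example: a trip crossing ten time zones would take roughly ten days
# of unaided adjustment.
print(days_to_adjust(10))  # -> 10.0
```

By this rough arithmetic, a long intercontinental flight can leave the internal clock out of step for a week or more, which is why the strategies in chapter 6 are worth the effort.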
Jet lag and demanding work and school schedules motivate many people to turn to caffeine in an attempt to fight off sleepiness and override the circadian rhythm (Figure 3.16). But we pay for that short-term boost. The adenosine activity that creates sleep pressure gets intercepted by caffeine, which blocks adenosine receptors in the brain so they cannot be activated. Theobromine, a constituent of chocolate, works in a similar manner to caffeine, as it also blocks adenosine receptors. Having the adenosine receptors blocked, whether by caffeine or theobromine, creates a short-term illusion that we are not sleepy. However, adenosine is still building up at its normal rate. Then, when the caffeine (or theobromine) is broken down and the adenosine receptors become available again, the extra-high level of adenosine in the brain activates them strongly, causing the typical crash experienced several hours after caffeine use. For some, the next step is another cup of coffee, continuing the cycle—which will have disastrous effects on sleep that night. Depending on factors such as age and genetics, caffeine is metabolized at different rates: it may take around thirty minutes for caffeine to kick in, but it can still be in the system eight to ten hours later.
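To see why caffeine can still disturb sleep so many hours later, here is a small, hedged sketch in Python that models caffeine clearance as simple exponential decay. The five-hour half-life and the 200 mg dose are assumptions chosen for illustration, not values from this chapter, and real clearance varies widely from person to person.

```python
import math

# Illustrative only: caffeine clearance modeled as simple exponential decay.
# A five-hour half-life is an assumed, middle-of-the-road value; actual
# half-lives vary with age, genetics, pregnancy, and medications.
HALF_LIFE_HOURS = 5.0

def caffeine_remaining(dose_mg: float, hours_elapsed: float) -> float:
    """Milligrams of caffeine still circulating after the given number of hours."""
    return dose_mg * math.exp(-math.log(2) * hours_elapsed / HALF_LIFE_HOURS)

# A 200 mg coffee at 2:00 p.m. still leaves roughly 66 mg at 10:00 p.m.
# and about 50 mg at midnight, plenty to interfere with falling asleep.
print(round(caffeine_remaining(200, 8)))   # ~66
print(round(caffeine_remaining(200, 10)))  # ~50
```

Under these assumed numbers, a quarter of an afternoon dose is still on board ten hours later, which matches the chapter's point that caffeine lingers long after the pick-me-up fades.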
Polyphasic versus Biphasic
If there is a group of people with the resources and will to hack the circadian rhythm and reduce the hours of sleep needed, it would be the National Aeronautics and Space Administration (NASA). Every hour an astronaut is in space is expensive, and every hour they are sleeping is an hour they could be working (Figure 3.17). Yet even NASA scientists have not figured out a way to get around the fact that most of us need around eight hours of sleep and a ten-minute nap every day. This is biphasic sleeping: two sleep periods every twenty-four hours. Solo sailors are also interested in finding a way to sleep less and still perform optimally, as they may be at sea for days and must keep their sailboat safe and on course.
Consequently, Claudio Stampi, a sleep specialist and expert round-the-world sailor, was a leader in the development of polyphasic sleeping for sailors, athletes, and others in extreme situations, including outer space. One form of polyphasic sleeping is taking a thirty-minute nap every four hours, for a total of three hours’ sleep in a twenty-four-hour period (Figure 3.18). However, even Stampi himself, a self-proclaimed biphasic sleeper, is clear that polyphasic sleeping is only for extreme events and is an unhealthy practice long-term.
Despite this, the concept of gaining additional waking hours each day became irresistible for laypeople, who see it as a way to potentially add years of awake time to a person’s life. Based on sleep debt research, however, the health damage caused by this sleep schedule would likely take several years off a person’s life and lessen the quality of their living years by diminishing cognitive function, lowering mood, reducing physical abilities, and more. In spite of the contradiction, the practice has taken off and gained a following. It is an unfortunate misinterpretation of Stampi’s research.
Another probable misinterpretation related to circadian rhythm research concerns reports of a bygone sleeping pattern. According to this theory, the schedule in the past was to sleep several hours at night, awaken around midnight for a couple of hours, and return to sleep for several more hours. Some reported that this is the biphasic sleeping pattern we are all meant to follow. The practice seems to have originated around the 1700s among a group of Western Europeans: after sleeping several hours, they would wake up for singing, sex, praying, or storytelling, then finish the rest of their slumber until morning.
While it made popular headlines, evidence indicates this was an isolated practice and that there is no biological justification for it. The jury is in: based on the scientific examination of human circadian rhythm over the ages and in current times, we have indeed evolved to be biphasic sleepers—that is, sleeping around eight hours each night with a ten-to-twenty-minute nap in the afternoon.
1 “The 2017 Nobel Prize in Physiology or Medicine—Press Release,” Nobel Prize, accessed May 28, 2021, https://www.nobelprize.org/prizes/medicine/2017/press-release/.
Introduction
Animals, such as lions, that eat many pounds of meat in a single meal can spend fifteen hours a day sleeping. Compare this to plant-eating animals, like giraffes, which sleep an hour or less each day (Figure 4.1). If you were to make a hasty survey, it would appear that predators sleep more than other animals. However, there are exceptions to that generalization and to others like it (such as the idea that smaller animals sleep less than larger ones). There are a multitude of theories about the sleep duration differences between animals, but currently, there is no clear front-runner. Even within groups of animals that are similar genetically, there are sometimes more pronounced sleep duration differences than between vastly dissimilar animals. We do know that all creatures studied so far have a period of something similar to sleep. Even unicellular organisms have stretches of time, linked to the earth’s light-dark cycle, when they barely move and have a decreased reaction to stimuli.
Since it is not possible to hook up a bug to EEG, EOG, and EMG to verify sleep physiologically, for many life-forms, we must rely on this more behavioral definition of sleep—for example, a stereotypical resting posture combined with reduced response to the external environment. And let’s be sure to add that sleep is a reversible state—one of the hallmarks of sleep, thank goodness. In the absence of polysomnography, an additional factor to add to the equation for verifying sleep is to note the strong drive the organism exhibits to return to that “sleeping” state when deprived of it. But how would you know a sleep-deprived insect is trying harder to sleep? Scientists record the baseline level of stimulation required to awaken the sleeping creature when it is left to its normal rhythm for a few days. Then the creature is kept awake during what would be its sleeping time. In this sleep-deprived condition, more intense stimulation is required to rouse the creature from sleep. Imagine your roommate awakening you by softly tapping you on the arm after you’ve had several nights of full and comfortable sleep. Compare that to the jab required if you have at long last dozed off after pulling an all-nighter. This exemplifies one aspect of sleep rebound: sleeping more deeply after being kept awake too long. The other aspect is falling asleep during what are normally waking hours.
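The arousal-threshold comparison described above can be boiled down to a tiny sketch. Everything in it, including the function name and the intensity values, is hypothetical and meant only to make the logic of the test concrete, not to represent data from any real experiment.

```python
# Toy illustration of the behavioral test for sleep rebound described above.
# The stimulus intensities are hypothetical numbers in arbitrary units.

def shows_rebound(baseline_thresholds, deprived_thresholds):
    """Return True if a stronger stimulus is needed to wake the animal after deprivation."""
    baseline_avg = sum(baseline_thresholds) / len(baseline_thresholds)
    deprived_avg = sum(deprived_thresholds) / len(deprived_thresholds)
    return deprived_avg > baseline_avg

# Several days at the animal's normal rhythm, then one night kept awake.
baseline = [2.1, 1.9, 2.0]        # a gentle tap is enough
after_deprivation = [4.8, 5.2]    # a much firmer jab is needed

print(shows_rebound(baseline, after_deprivation))  # True, consistent with rebound
```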
In lieu of physiological measurements to verify that an organism is sleeping, the behavioral definition proves helpful. Upon observation of behaviors, even insects will provide additional clues—beyond just a resting posture—to let us know they are sleeping.
Insects
Like humans, fire ants have different stages of sleep. They start with their mouths open and antennae retracted or drooping (Figure 4.2). Then the antennae begin quivering as the fire ant moves into rapid antennal movement (RAM) sleep. Might they be dreaming? Could RAM sleep be an equivalent to REM sleep—even though much research suggests that insects have no equivalent to REM? At this point, it remains an enigma, motivating further investigations.
Another insect, the fruit fly, often turned to when there is a need to decipher genetics, is a favorite for sleep research. Fruit flies have many genetic and physiological similarities to humans, including getting a buzz from caffeine that keeps drowsy flies awake. As with humans, if fruit flies do not get sufficient sleep, they have poor memory, have reduced learning function, and die earlier. They have also been used to demonstrate the reduction in sleep associated with starvation. If a creature is not getting enough calories, the body, much to its detriment, will reduce the time spent sleeping to instead seek food. One of the most compelling sleep-related findings in fruit fly research was the discovery of a gene that controls circadian rhythm (see chapter 3). The scientists behind that discovery were awarded the 2017 Nobel Prize in Physiology or Medicine for their work.1
Fish
For ages, scientists believed that sharks do not sleep because their eyes are open when they settle into their quiet resting posture. Upon further investigation, it became apparent that sharks were sleeping, but they do not close their eyelids during sleep (Figure 4.3). Some sharks have a clear membrane covering their eyes, and others have eyelids partially covering them. The purpose of shark eyelids is not sleep related; it is to protect their eyes when fighting or attacking. Great white sharks, which do not have eyelids, have to roll their eyes back to protect their eyes when attacking. This Discovery Channel video captured a great white shark sleeping.2 You may have heard that some sharks need to move while sleeping to get oxygen across their gills, and this is true. But other sharks have spiracles that draw in water and move it over their gills so they do not need to move during sleep.
Reptiles
As late as 2016, many believed that reptiles have NREM sleep but not REM sleep. However, when researchers at the Max Planck Institute for Brain Research in Frankfurt began to study the Australian bearded dragon—a type of lizard that is a popular pet in Germany—a surprising story emerged (Figure 4.4). The scientists set out to study visually guided behaviors and were continuously recording the lizards’ brain activity using electrodes. They did this for several days at a time, also using infrared cameras to record nighttime behavior. Although sleep was not the focus of the study, they found the lizards had the typical NREM slow-wave sleep activity—and also REM brain activity, combined with tiny twitches in the eyelids during the REM phases. Part of the significance of this finding is related to how REM sleep is also found in birds and mammals, creatures that evolved separately and much later than reptiles. Until this bearded dragon research, the conventional wisdom was that evolutionary pressure along two isolated evolutionary pathways (those of birds and mammals) resulted in the emergence of REM sleep—people thought REM sleep did not exist before birds and mammals. Now it is clear that REM sleep existed much earlier in the evolutionary line and was likely handed down to both birds and mammals.
REM sleep in other reptiles remains to be examined, but reptilian behavioral sleep patterns have been observed for ages. The beloved honu (green sea turtles of Hawaiʻi) sleep in the ocean for several hours, usually close to the surface, holding their breath (Figure 4.5). Near the shore, they doze several feet under water, cozy under the edge of a coral reef.
It can be difficult to determine if snakes are sleeping or simply not moving. Since the thin membrane covering their eyes is clear, they appear to sleep with their eyes open.
Some geckos do actually sleep with their eyes open, but they constrict their pupils down to protect the retina. Other geckos are fortunate enough to have eyelids they close, just as humans do when sleeping.
Birds
Birds have evolved to sleep with one hemisphere of their brain at a time, keeping an eye on things while they snooze. This leaves the awake half of the brain alert and able to process information coming from its associated eye, which remains open during sleep. Ducks and many other birds use this skill not just for their own self-preservation but also for that of their community. When the birds sleep in a row, the sentinels on each end keep one eye open, sleeping with only one hemisphere of their brain. The birds in between the sentinels enjoy a completely restful night of shut-eye, with both hemispheres sleeping at once (Figure 4.6). At either end, each sentinel bird’s open eye faces out, so after a period of sleep in this position, the bird arises and turns around to face the opposite direction, opening the closed eye, and letting the previously active brain hemisphere and eye get some sleep. Even though they are processing information with only half of their brain, it takes less than a fifth of a second for the guard birds to react to a predator. Although REM sleep is normal for birds, it seems both hemispheres must be engaged in sleeping to generate REM. Consequently, these vigilant guardians on the end of the row are only able to get NREM sleep when on duty.
Humans have a rather subtle version of sleeping with one hemisphere for the sake of vigilance. Have you ever noticed you sleep a little lighter on the first night staying over at a friend or family member’s home? And then if you stay with them for a few more nights, you notice your sleep feels more satisfying. During NREM in new surroundings, half of our brain will have a lighter version of deep sleep; the other half will have its normal restorative depth. This allows us to keep watch, ever so slightly, in our less-than-familiar setting until we have settled in for a few more nights and feel entirely comfortable.
Birds have another fascinating sleep-related adaptation. Due to their need to migrate thousands of miles over the ocean, they have evolved to safely fly nonstop for hours and hours, seemingly without sleep . . . or are they sleeping while flying? Yes, they are, and it is a unique sleep pattern. Frigate birds (ʻiwa in Hawaiian) fly for months straight and so will sleep about ten seconds at a time, in flight, getting less than half an hour of sleep each day.
Some birds, such as the white-crowned sparrow, have even attracted attention from groups such as the US Department of Defense. This sparrow can stay awake for two weeks at a time during its migratory period and apparently not suffer the usual deleterious consequences of sleep deprivation. During these phases, the bird also remains capable of proficiently responding to stimuli. The US military, with its history of pressuring troops to use various forms of stimulants such as amphetamines (with deadly consequences), is highly motivated to determine a way to keep people such as pilots awake for long stretches at a time without compromising their judgment or damaging their health. At this point, the science suggests it is not possible. Let’s hope for the sake of the troops, their families, and the world community that there are also researchers actively investigating revolutionary and innovative solutions to minimize the need to put people in harm’s way.
Mammals
Imagine swimming through the ocean with half of your brain asleep, or catching z’s while dangling like a ripe mango from a tree (Figures 4.8 and 4.9). The unihemispheric sleep of dolphins allows them to swim and communicate—during sleep—with other dolphins. Up in the trees, bats, with their unique wing structure, are unable to create the rapid vertical takeoff mastered by birds. The best way for a sleeping bat to escape a hungry raccoon lumbering toward its roost is for the bat to drop from the tree and take flight midair. In the sea, dozens of sea otters come together and wrap themselves in seaweed, creating a sea otter raft for safety in numbers and to keep from drifting away while snoozing. Occasionally, they even hold hands (Figure 4.10). In open grassy areas, cows, horses, zebras, and elephants can sleep standing up, able to quickly flee if attacked. They have a “stay apparatus” that allows them to essentially lock their legs, minimizing muscular effort to remain standing. At times, these big mammals also lie down to sleep in order to complete their sleep architecture.
Noticing the range of adaptations and behaviors that make it possible for animals to sleep and survive, we see that sleep has persisted even in the face of environments where it seems it would have been simpler to just eliminate it from the mix. As Alan Rechtschaffen, a sleep science trailblazer, has said, “If sleep does not serve an absolutely vital function, then it is the biggest mistake the evolutionary process has ever made.”3 Then how can we resist the question, “Why is sleep so vital?” Let’s further investigate animal sleep and get some answers.
Although they live and sleep in the water, whales, seals, and dolphins are mammals and so must breathe in the air. If they fell fully asleep underwater, they would drown, so like some birds, only half of their brain sleeps at a time. When one hemisphere is sleeping, the other hemisphere guides the animal to the surface and activates the body to take a breath. The visual system of the awake hemisphere is vigilant for danger and stays connected to other animals in its group, such as companions or offspring. These elaborate evolutionary adaptations suggest that sleep must provide a crucial function, since sleep does not make it to the bargaining table when evolutionary pressure looks for behaviors to remove. At first blush, it seems it would be easier to evolve to not sleep than to evolve the mechanisms necessary to sleep while swimming. In other words, sleep is indispensable!
Getting back to unihemispheric sleep in aquatic mammals, there are exceptions: true seals have bihemispheric sleep underwater (they hold their breath), and sperm whales sleep with both hemispheres too, seemingly dangling in the water, tail down, until they awaken to swim to the surface to take a breath (Figure 4.11).
Dolphins and some whales do not show obvious signs of REM sleep, but scientists speculate that they may experience transient REM sleep or REM brain activity in deeper structures than the cortex. This motivation to not rule out dolphin and whale REM sleep is partially due to observed muscle twitching, penile erections, and eyelid movements during dolphin sleep. These behaviors are associated with REM sleep in land mammals but also occur during waking states, so REM sleep in dolphins remains an area of active research. Dolphins are so highly evolved that maybe we will see they have a unique form of REM that provides additional survival benefits beyond those given to us humans during REM sleep.
Fur seals do clearly have REM sleep, but they add a unique variation to its predictability. On land, the fur seal sleeps with both hemispheres at once and goes through REM and NREM stages, similar to most mammals (Figure 4.12). However, when a fur seal sleeps in the water, its sleep is similar to a dolphin’s: it sleeps with one hemisphere at a time, and NREM is the only obvious sleep stage. Because fur seals spend weeks at a time in the sea, they go for long stretches without REM. But why would a seal have two different patterns of sleep, depending on whether or not it was sleeping in the water?
A current theory about REM is that it increases brain metabolism and warms the brain and brainstem, balancing out the lower metabolic rate and brain temperature of NREM. When fur seals, dolphins, and whales sleep in the water, one hemisphere at a time, and exclusively in NREM, the theory is they would not need REM to warm up the brain, since half of the brain is always awake and warm. Then when the fur seal is sleeping on land, it reverts to the typical land mammal pattern of bilateral NREM interspersed with REM. We humans feel much more alert if we wake up shortly after or even during a REM period, as opposed to when our alarms go off in the middle of deep NREM sleep, when our brains are cool and sluggish.
If REM sleep did not provide an essential benefit, then it seems the fur seal would continue its unihemispheric NREM sleep when snoozing on land. However, it has evolved to incorporate REM sleep whenever it returns to its terrestrial home. With the myriad REM sleep–associated benefits—including emotional healing, cardiovascular system regulation, and more—it is tempting to believe REM sleep would be incorporated into a creature’s sleep cycle if at all possible.
Let’s look way back in time to compare variations in mammalian sleep. During the early stages of mammalian evolution, monotremes branched off from placentals and marsupials. Monotremes (e.g., platypuses) are egg-laying mammals. This is in contrast to placentals (e.g., humans), which carry the fetus in the uterus until a relatively late developmental stage, and marsupials (e.g., kangaroos), which give birth before the animal is completely developed, so after birth, it is usually carried in a pouch on the mother’s body (Figure 4.13). Although for decades, scientists believed monotremes do not experience REM sleep, there are now studies showing that platypuses not only have REM sleep but have a higher rate of it than placentals or marsupials.5 During REM, the platypus has rapid eye movements and twitches its bill. Its REM EEG is similar in many ways to newborn placental mammals, which have high rates of REM too. The brainstem EEG of a platypus shows that REM occurs at the same time as cerebrocortical slow-wave sleep, explaining why early investigators may have miscategorized its sleep pattern.
Hibernators
A common myth is that hibernation, which can last a few hours or as long as several months, is the same as sleep. However, although hibernation has evolved from sleep, there are fundamental differences between the two. In fact, some animals will bring themselves out of deep hibernation in order to get sleep and then return back to hibernation after a satisfying snooze. Also, an animal can be roused from sleep easily and rapidly, but it takes an hour (or more, depending on the animal) to rouse from hibernation. And what about the function of hibernation compared to sleep? A fundamental purpose of hibernation is to save energy. If sleep and hibernation had energy conservation as a common goal, it would not make sense to expend energy to warm up the body during hibernation in order to create conditions necessary for true sleep. Yet some animals, even in frosty conditions, do exactly that.
Ground squirrel body temperature can remain close to zero degrees Celsius, the temperature at which water freezes, during hibernation, which could last the entire winter (Figure 4.14). However, once a week, for around twenty-four hours, they bring themselves out of hibernation. It takes a while, and a lot of energy, for them to speed up their metabolism and activate their brain. When they are hibernating, their EEG is practically a flat line, as though they were not alive. Remember learning about dendrites in chapter 2? During the first day of hibernation, ground squirrels lose about one-fourth of their dendrites. Then, within hours of coming out of hibernation, the dendrites are restored. Other physiological functions, such as urine production, come to almost a halt during hibernation as well. But after rousing from their week of hibernation, during that twenty-four-hour period at their normal body temperature, they eat, pass waste, and sleep before returning to another week of hibernation.
If I say “hibernating animal,” what creature comes to your mind? If it is a bear, you are in good company, as this is the typical response (Figure 4.15). You may find it surprising that some scientists argue that bears are not true hibernators; others suggest that theirs is just a different form of hibernation. A bear’s body temperature will drop only a few degrees, even when outside temperatures are below freezing. This closer-to-normal body temperature allows bears to generate NREM and REM sleep during hibernation. They also stay in their state of torpor for the entire winter, not bothering to invest energy in the weekly rousing practiced by the ground squirrel. Lastly—and this is a significant difference from the ground squirrel, as intrepid hikers will tell you—a hibernating bear can be roused quickly and easily.
1 “The 2017 Nobel Prize in Physiology or Medicine—Press Release,” Nobel Prize, accessed May 28, 2021, https://www.nobelprize.org/prizes/medicine/2017/press-release/.
2 Discovery, “Great White Naps for First Time on Camera,” YouTube video, 2:33, June 28, 2016, https://www.youtube.com/watch?v=B7ePdi1McMo.
3 E. Mignot, “Why We Sleep: The Temporal Organization of Recovery,” PLoS Biology 6, no. 4 (April 2008), https://doi.org/10.1371/journal.pbio.0060106.
4 Ocean Conservation Research: Sound Science Serving the Sea, last modified May 2021, https://ocr.org/.
5 J. M. Siegel et al., “Monotremes and the Evolution of Rapid Eye Movement Sleep,” Philosophical Transactions of the Royal Society B: Biological Sciences 353, no. 1372 (July 1998): 1147–57, https://doi.org/10.1098/rstb.1998.0272.
Introduction
In the early moments of trying to fall asleep, we may experience a stunning hallucination that startles us back to reality, leaving us to wonder, “What was that?!” It is disorienting because we feel we were not yet sleeping. These hypnagogic hallucinations occur around sleep onset (Figure 5.1). They are a type of dream—if you define dreaming as something going through your mind while you are asleep—but some people refer to hypnagogic hallucinations as “sleep thinking.” We dream in all stages of sleep, NREM and REM, but during REM sleep, our dreams become more intense in their content and often bizarre in nature. Conversely, if awakened from NREM dreams, many report that it feels as though they were simply thinking about something rather boring. We know dreams occur throughout the night, but in this chapter, dreaming refers, unless stated otherwise, to the dreaming state associated with REM sleep. Let’s begin with a discussion of the importance of dreams to our mental well-being, because in the words of Nobel laureate Elias Canetti, “All things one has forgotten scream for help in dreams.”1
Emotional Healing
Dreams help us cope with, and better understand, our emotions. During the day, emotional events happen, but we rarely pause for reflection because we are pressed to continue with the business of the moment. When we are dreaming, it is an opportunity to take the emotions of the day and relate them to memories—even those from long ago—to see if we can make sense of the situation and be better prepared for the next time something similar occurs.
Imagine an emotional event during the day, such as a group activity in class in which you felt socially uneasy, like you did not fit in (Figure 5.2). Maybe you said something that was poorly received or were given a disapproving look by one of your classmates, but you had to continue with the work at hand. You may or may not have forgotten about it by the time you went to bed. Either way, that night, your dream might have an emotional theme of social rejection, but in a scene involving people you haven’t seen in years rather than your current classmates. Through dreams, your brain can create a mash-up of current and previous experiences to optimize your future behavior—the perfect harm-free dress rehearsal.
While our dreams are synthesizing such relationships between recent emotions and distant memories, the brain is at its lowest level of stress hormones of the entire twenty-four-hour day. One of these hormones, norepinephrine (also called noradrenaline), is present in the brain at various levels throughout the day and night—except during REM sleep. During our vivid emotional dreams, we can replay events without the stress response triggered by norepinephrine. Matthew Walker, a sleep scientist at the University of California, Berkeley, has led brain imaging research in this area to show how the brain takes advantage of this zero-norepinephrine condition to relate clear recollections of crucial events to previous memories without engaging the fight-or-flight brain circuits that would distract us from calm introspection. The result is that we are able to shed the emotionally painful layer of the memory and still retain details of the situation to help us be better prepared to face another day . . . or that judgy classmate!
Dreaming about emotional events brings us to a place where we are more comfortable with the situation. Psychologist Rosalind Cartwright, also a world-renowned sleep specialist and expert on dreaming, has published extensive research showing the benefit of dreaming for emotional recovery. Dreams mentally evolve us to a point where our daily activities, as well as our sleep, are less disturbed by feelings associated with challenging life events. She says a part of the purpose of dreaming is so that “negative mood [can be] down-regulated overnight.”2 She is quick to clarify, though, that recovery from difficult life events will take many nights, maybe months, of dreaming about them. Cartwright has done brilliant research and clinical work with patients experiencing despair at the time of an upsetting life event, such as a breakup with a partner (Figure 5.3). She found that people who dreamt of the event, especially around the time of its occurrence, experienced a significant amelioration of depression compared to those who did not dream of the event (even if they did still dream of other things).
This progression to recovery has a more complicated path for individuals faced with trauma and nightmares. If someone experiences a frightening or dangerous event, and feelings of being scared or stressed remain strong long after the danger has passed, they may have posttraumatic stress disorder (PTSD; Figure 5.4). People with PTSD have increased levels of norepinephrine in their brains during REM sleep. This is the opposite of the norepinephrine-free condition responsible for emotional healing during REM dreams, which is experienced by those without PTSD. For folks suffering from PTSD, the presence of norepinephrine during REM sleep disrupts the ability for dreams to reduce the emotional intensity associated with disturbing events. But because the mind still wants to work out the problem while dreaming, it will repeatedly attempt to do so with a dream, sometimes every night, resulting in recurring nightmares—one of the most common symptoms of PTSD.
Imagery rehearsal therapy (IRT) has been used successfully to help people with PTSD work with a therapist to transform nightmares into less disturbing dreams. The concept is to create a more comfortable version of the nightmare and retain enough nightmare details so the mind will slip into this new version. For example, if a person has a nightmare of being attacked by a shark, the new version will still have water splashing, a fin in the water, and the feeling of a strong bump against the body of an animal. However, in this new version, the splashing is from a dolphin playing nearby, not a shark thrashing; the fin is a dolphin fin, not a shark fin; and the bump to the body is gentle, from the friendly dolphin (Figure 5.5). People write out the new version of the dream, create an art piece depicting it, and tell the new dream as a story to another person. They meditate on the new dream before sleep to train the mind to shift over to this new set of details. Harvard psychologist Deirdre Barrett has written extensively about the benefit of working with dreams as a part of the recovery process for PTSD. She explains that dreams provide a “barometer” of a person’s mental state, delivering insight into a patient’s progress.3
Memory and Learning
Through dreams, our brains create connections between recent experiences and long-term memories. This equips us with new perspectives, allowing us to better respond to similar situations in the future. Our sense of self or identity also changes through dreams as we see our role in a recent situation through the lens of a memory from our distant past. Procedural memories—those for things like playing a song on the guitar or making your favorite cookies—are also processed and stored through dreaming. When we are learning a new procedure, let’s say a dance routine, we will notice that the first time we try it after a night of dreaming, we do much better than on the previous day. NREM and REM dreams both play a role in memory formation, but there is a difference. NREM dreams serve more to strengthen memories, and REM dreams restructure them, marrying fresh experiences to earlier ones.
Taking a deep look at how dream content has an impact on learning and memory, sleep science experts Erin Wamsley and Robert Stickgold studied navigation in a virtual maze (Figures 5.6a and 5.6b). Human participants trained on the virtual maze and then were allowed to sleep overnight. In prior animal studies on the subject, researchers had already shown that the animals’ brain firing patterns during sleep closely matched the patterns seen when they were learning a maze, but how could we know what the animals were dreaming about? Previous studies on animals and humans also provided evidence of improved performance being associated with sleeping after attempting a task. The novelty of Wamsley and Stickgold’s research was in asking participants about their dreams’ content during the night and establishing the clear relationship between maze-themed content in dreams and success in navigating the maze the next morning. They also found that the participants who did not perform well during practice sessions with the maze were more likely to see the maze in their dreams that night.5 If something is challenging for us, our brain knows we will be more likely to overcome the challenge if we dream about it. This fits well with Antti Revonsuo’s “threat simulation theory,” which posits that dreams help us develop better skills to behave successfully in the midst of difficult situations. He states that through dreams, we are able to rehearse threatening scenarios, with the result being improved outcomes in our next waking encounter with similar challenges.6
Knowing sleep has such a powerful influence on storing memories, University of California, Los Angeles, neuroscientist Gina Poe proposed a solution to help people with PTSD. The background for her theory is work from an assortment of scientists studying the relationship between sleep/dreams and memory. When trying to learn something, one way to increase memory capacity and accuracy is to sleep, and thus dream, shortly after exposure to the content. It turns out the timing of the sleep matters for optimal memory creation: the sooner, the better. This knowledge about timing is used to schedule sleep for someone who has experienced a traumatic event. Poe suggests it is beneficial to hold off—for around eight hours after the event—before sleeping.7 Delaying sleep onset diminishes the brain’s ability to store traumatic event details, so the person is less likely to create vivid and lasting memories that would haunt them.
With so much focus on dreams helping us create memories, we could overlook a theory from the 1980s stating one of the functions of dreaming is to “unlearn” information. Scientists Francis Crick and Graeme Mitchison presented a model of dreams as a mechanism for sorting through information in order to discard unnecessary memories from the day. Consider when you are ready to leave campus: it is important you remember where you parked your car or locked up your bike. The next day, it is best if the parking information is not crowding up your memory space, but you still need to remember how to get to class; dreams, in this theory, are a way that parking information is culled while the route to class is maintained.
Problem Solving and Creativity
Stories of creative inspiration arriving through dreams are pervasive: Dmitri Mendeleev’s periodic table of elements, Mary Shelley’s idea for Frankenstein, Elias Howe’s design of the needle for the sewing machine, Friedrich Kekulé’s vision of benzene ring structure, Keith Richards’ guitar riff in Satisfaction, and more all are said to have come from dreams (Figure 5.7). Rather than take space here to provide the details of these worn-out stories, you may Google the topics—and when you do, you will find Google itself was born in a dream!
There is one story worth telling here. It is about an inspiring role model, Sarah Breedlove Walker (a.k.a. Madam C. J. Walker), who was a civil rights activist and philanthropist up through the early 1900s (Figure 5.8). The wealth she built to provide the financial resources to support herself and her philanthropy came in small part from a hair-loss remedy recipe that came to her in a dream. I say “small part” because most of her success is likely due to her fortitude. She started life as the daughter of freed slaves, living in financial poverty in Louisiana, and spent years working as a single mother, since her parents and husband died by the time she was twenty. Then, after a hard day of work as a washerwoman, she had a dream in which a recipe came to her for a product to help restore her hair, which had fallen out; the recipe included ingredients she ultimately had shipped from Africa. She ultimately became a successful businessperson and multimillionaire, dedicating herself to helping others. W. E. B. Du Bois said of Walker, “It is given to few persons to transform a people in a generation. Yet this was done by the late Madam C. J. Walker.”8
With scores of stories about dreams providing creative solutions, it is no surprise there is a bounty of research that shows sleeping and dreaming on a problem makes us more likely to gain insight into a solution. Psychologist Ullrich Wagner and his colleagues scheduled cognitive testing sessions so that one group of participants would sleep in between their first and second testing sessions, while the other would not. The test consisted of tiresome math problems. Rules were provided for generating the solutions, but it was still a tedious and prolonged experience. But there was a secret shortcut that could be used if the participant had an epiphany about an abstract rule. Such epiphanies, in which the solution to a seemingly unsolvable problem suddenly becomes clear, happen more often if we dream about our problems. Wagner’s team found that the people who slept between their first and second attempts had significantly more revelations that led them to the secret shortcut than those without the opportunity to sleep.9
But what if the participants’ epiphanies about the secret shortcut were due simply to the fact that they’d slept and had nothing to do with the brain restructuring memories of the test problems during their dreams? To factor this in, they needed to do additional data collection on participants who took the test only once. They divided participants who had not yet seen the test into two groups, those who slept before the test and those who did not. There was no difference between the two groups in terms of discovery of the hidden rule: the sleepers and nonsleepers had the same rate of epiphanies upon taking the test for the first time. In other words, a participant had to have seen the test problems before sleep (as in the protocol with people taking the test twice) in order for the rate of epiphanies to increase (as they did when those participants saw the problems again). Notice how the researchers included this essential part of the scientific method: seek to disprove your hypothesis and look for alternate explanations, even if it seems you have proven the hypothesis you were seeking to prove.
Emotional Intelligence
One of the pillars of healthy interpersonal relationships is emotional intelligence, which includes the capacity to understand and control one’s own emotions and to recognize and interpret the emotions of others (Figure 5.9). One aspect of this is being able to distinguish different facial expressions, such as for fear, anger, or joy. Without the benefit of time in our dream state, this ability drastically deteriorates to the point where not only do we lose the ability to tell the difference between a friendly and an angry expression on someone’s face but we also tend to assume the more threatening option. This was discovered in Matthew Walker’s University of California, Berkeley, lab. Regarding how misinterpreting neutral facial expressions as threatening could cause harm, Andrea Goldstein, lead author on the study, urges, “Consider the implications for students pulling all-nighters, emergency-room medical staff, military fighters in war zones and police officers on graveyard shifts.” One of the fascinating aspects of the study was that the participants were studied in an MRI scanner, so in addition to the evidence reported by the participants themselves, the scientists could also see how well (or how poorly, in the case of less REM sleep) their brain structures were distinguishing between the different expressions.
How and Where Dreams Are Created
In the 1970s, Harvard medical doctors J. Allan Hobson and Robert McCarley postulated a neurobiological model of dreams. They described REM-associated bursts of electrical activity in cats as originating in the pons (a part of the brainstem), traveling through the thalamus, and arriving at the cerebral cortex (Figure 5.10). Along with this neuronal activity came twitches in whiskers and muscles, as well as jerky eye movements not typical of a cat tracking something visually, which therefore seemed random. The brainstem’s electrical activity patterns were not similar to those seen in animals processing real sensory information either; however, the cortex, influenced by this chaotic activation, was still trying to make sense of the activity and synthesizing the brainstem activity into a dream. This is the premise of Hobson and McCarley’s activation-synthesis model of dream creation. For some people, the activation-synthesis model makes a case for dreams being meaningless, since they are based on seemingly random activity. However, if we consider that the dreamer interprets dream content based on its relationship with their waking experiences, interpretations of dreams generated according to this model still provide an opportunity for personal insight.
Debate about the activation-synthesis model continues, but there is no debate about how the advent of magnetic resonance imaging (MRI) has increased awareness of many additional brain regions that are active during dreaming. Contemporary research has utilized MRIs to provide a window into dream-related brain activity, showing high levels of activation in brain regions associated with feeling emotions, creating movements, and comprehending visual scenery (Figure 5.11). Surprisingly, some of the brain structures, such as the amygdala, anterior cingulate gyrus, and hippocampus—parts of the “emotional brain” or limbic system—are even more active during dreaming than when we are processing information in our waking state.
In sharp contrast to the activation-synthesis model, neuropsychologist Mark Solms proposes that dreams are created in the cerebral cortex, specifically the ventromedial prefrontal cortex (Figure 5.12). This part of the brain is involved in a range of functions such as goal-seeking, regulating challenging emotions, recognizing facial expressions, and processing risk. It also provides connections between the limbic system and the frontal cortex. In the early half of the twentieth century, this was one of the brain regions destroyed in attempts to treat mental illness. Solms found that patients with damage to this area, whether from surgeries or other injuries, did not dream.
In this chapter, we are focused primarily on REM dreams, but let’s briefly consider the activity of the limbic system during NREM dreams, which we know are bland and lacking in emotion. So it follows that the limbic system is quite mellow during NREM dreams.
On the other hand, it turns out there are brain areas that become inactive during REM dreams as well. When we consider the bizarre and sometimes embarrassing things we do in REM dreams, the deactivation of one of these brain regions in particular makes perfect sense: the prefrontal cortex (particularly its dorsolateral region), which guides you to use good judgment and be sensible and socially appropriate, has exceptionally low activity during dreaming (Figure 5.13). Another area that seems to almost drop out of the game during REM dreaming is the primary visual cortex. This region of your brain is involved in consciously detecting visual stimuli from your eyes, so it is logical that, since your eyes are closed while dreaming, the primary visual cortex has almost no activity at that time. The brain is still able to generate visual content for dreams, as it still relies on the visual association cortex—the part of the brain that processes more complex visual information.
Rather than looking at particular brain structures for the source of dreams, Harvard Medical School sleep researcher Robert Stickgold looked to the events and emotions of participants’ days as a source of dream content. Participants kept diaries of their daytime activities and their dreams. The dreams were not a replay of daytime events. However, the worries and emotional themes of the day were often incorporated into the dreams. If we leave the activation-synthesis model behind in favor of theories such as Stickgold’s, suggesting dreams contain meaningful information, then it’s time to move to the next section for a discussion of how to interpret dreams.10
Interpretation of Dreams
Over five thousand years ago in Mesopotamia, people were having their dreams interpreted and looking to them for divine guidance (Figure 5.14). Throughout the years and around the world, dream interpreters and sometimes priests have been trusted to translate dream content into something meaningful for the dreamer. With the emergence of the discipline of psychology in the nineteenth century, the practice began to include psychologists as interpreters as well.
Sigmund Freud, an Austrian neurologist, believed dreams contained symbolic information requiring interpretation by an expert. He thought the true content of a dream would be too disturbing for the dreamer to face directly. For example, the dreamer may feel too ashamed to admit to a fantasy they are having, so the content has to come to the dreamer in symbols. (It was convenient for Freud that he, as a self-proclaimed expert, could charge people money for these interpretations.) According to his approach, dreams contain two categories of content: manifest and latent. Manifest content is the obvious material from the dream—the way the dreamer would describe the dream. Latent content is the hidden material that indicates the dreamer’s secret desires and fears. These repressed feelings, once revealed in a dream analysis session, could be used to identify and treat a person’s problems. There is great value in exploring and analyzing dream content, but Freud’s particular method of dream interpretation has been put to the test by scientific studies that have shown that different experts using his technique will come up with vastly different conclusions about the meaning of the same dream. These studies suggest a lack of reliability in his approach to dream analysis.
A psychiatrist colleague of Freud’s, Carl Jung, disagreed with Freud on the need for dreams to be deciphered. Jung was convinced that the same symbol means something different to each person, so there was no use in trying to create a book of dream symbols that could be applied as a part of dream analysis. Instead, Jung thought our instincts convey wisdom to our rational mind via dreams. He said we were disconnected from nature and our instincts because of modern society, so we should use our dreams to reconnect and be transformed.
At around the same time that Freud and Jung were placing emphasis on the deep psychological meaning of dreams, Mary Whiton Calkins, a pioneer in psychology, was developing an opposing theory. As a part of her project at Clark University in Massachusetts, she examined over two hundred dream reports and concluded that dream content is closely related to recent experiences, almost like a related replay of the day’s events and sensations, and that dreams do not contain hidden meaning. She said, “In fact, my study as a whole must be rather contemptuously set down by any good Freudian as superficially concerned with the mere ‘manifest content’ of the dream.” Calkins must have been courageous, not only in challenging conventional wisdom about dreams but also in attending psychology seminars at Harvard with special permission, since women were not often allowed at the then all-male college. She even fulfilled all the requirements for a doctorate in psychology at Harvard, receiving high recommendations from professors, including William James, but the institution still refused to grant her the degree since she was a woman. However, she went on to Wellesley College, where she created a psychology lab—one of the first in the nation—and became the first woman to serve as president of the American Psychological Association, soaring beyond Harvard’s discriminatory policy.
Dreams in Different Cultures
Looking around the world to consider diverse attitudes about dreams, a common theme is that many view dreaming as an opportunity to connect with the divine. In the Quran, one type of dream is called ru’ya. Rather than being created by the dreamer’s mind, a ru’ya comes from God or the angels, and therefore a ru’ya is believed to have a purpose and meaning. There is a parallel in Hawaiian culture in that dreams from spirits are thought to have significance, in contrast to dreams the dreamer created, which are thought to be meaningless and pupule (crazy). In Hawaiian tradition, dreams are part of the bond between the spirits of those who are living and those who have passed. While a living person is dreaming, the spirit leaves the body through the luaʻuhane (“pit for the spirit,” which is our tear duct) and travels to receive guidance from ʻaumākua (ancestral guardian spirits) and akua (gods).
Ancient Egyptians also used dreams to travel in their dream body and connect with gods and the spirits of the departed. People would visit the temple of a god or goddess, such as the goddess Isis (who has nothing to do with the Islamic State terrorist group) for dream interpretation or incubation (Figure 5.16). The dreamer might spend days preparing for dream incubation—a time to encourage dreams rich in guidance—by purifying themselves through ritualistic bathing, fasting, and praying before sleeping in a temple. After awakening from the dream, an oracle would be available to interpret the dream, which was especially valuable, since one of the strategies was to interpret dream content as the exact opposite of its literal meaning. The temples were also available for visitors who had slept and dreamt at home to consult with priests on dream interpretation.
Lucid Dreaming
Bringing awareness that we are dreaming into the middle of a dream opens another world for learning, creativity, resolving trauma, and more. Lucid dreaming refers to the conscious realization that we are dreaming while still remaining asleep and deep in the dream. Taking things one step further, a person who is lucid dreaming can learn to control the content and progression of the dream. This control is particularly beneficial when working with persistent nightmares because dream scenery and outcomes can be resolved and transformed into something pleasant.
Here are some steps to take if you would like to learn to lucid dream:
Conducting a Dream Group
This is best done in a group of four to six people seated in a circle but is still satisfying and productive in groups of different sizes (Figure 5.18). It is reassuring to state at the beginning that nothing shared in the group will be discussed outside of the group. Everyone is responsible for keeping the group on track with the steps, but it is helpful to designate a leader to take that responsibility. The group should work on one dream at a time, going through all the steps and the final open discussion about interpretation before moving on to the next person and their dream; a different person then takes over the role of leader for the next dream. Revelations about each dream’s meaning will arise throughout the process, usually in pieces, and the interpretation discussion can continue without structure after the final step.
Introduction
While a disorder such as obstructive sleep apnea—not breathing during the night—is very serious and has lethal consequences, it is also essential that other, seemingly more subtle sleep disorders, such as periodic limb movements, get diagnosed and treated. For example, if someone has daytime sleepiness, and sleep apnea has been ruled out, the person still needs to find out what is leading to their sleepiness. In chapter 1, we have covered sleep debt and its serious consequences, such as depression, stroke, heart attack, obesity, diabetes, and more. Therefore, we must not take lightly any condition that disrupts sleep. In my workshops, I have met people all over the world who tell me they had undergone sleep studies in which, after apnea was ruled out, they were sent on their way with no further investigation or advice. Or worse still, they were given a prescription for sleeping pills, which are not meant as a long-term solution and can lead to countless other problems with no benefit or, at most, perhaps twenty additional minutes of sleep a night. We need to advocate for ourselves and loved ones to get health-care practitioners to persevere until we know what is causing daytime drowsiness and get it treated. Our lives depend on it.
Insomnia
Many people who suffer from poor sleep think they have a disorder called insomnia. However, most people who believe this actually have a particular factor causing their poor sleep, and such factors can usually be addressed—once they are, sleep improves and the insomnia goes away. It is rare for a person to have insomnia not caused by something else, such as stress, physical pain, medication, a psychiatric disorder, a physical illness, or poor sleep-health habits. These are what I mean by “factors” causing insomnia.
The most straightforward factors to address are poor sleep-health habits. Refer to chapter 1 to identify the habits disturbing your sleep and determine strategies to address them. That chapter also has detailed instructions for several techniques to alleviate insomnia as well as recommendations for effective treatments; the gold standard is cognitive behavioral therapy for insomnia. For many people, following the guidelines in chapter 1 will fix their sleep. If not, a sleep specialist can determine what other factors need to be addressed and develop a treatment approach. Once all these factors have been addressed, if the person is still not sleeping well, then a clinical sleep study may be necessary to identify an underlying sleep disorder causing the insomnia. But for most people, their sleep will be restored before they get to that stage.
Snoring
Have you ever noticed that if a snoring person rolls on their side, it sometimes brings even the most skull-shattering sound to a halt? Snoring can be caused by the architecture and muscle tone of the structures in and around the pharynx, which is made up of the nasopharynx, oropharynx, and laryngopharynx (Figure 6.1).1 During sleep, the waking-state muscle tone is lost and this tissue closes in to varying degrees and vibrates as the breath moves past. There are other locations, such as the nasal passageway or between the lips, that can cause snoring. It often occurs during inhalation but also happens with exhalation. Consuming alcohol, smoking cigarettes, or having nasal congestion from a cold worsens snoring, as can being overweight or pregnant. While some snorers have no idea they are snoring and will swear, “I never snore,” others will awaken themselves with the noise. Heavy snoring might be an indication of obstructive sleep apnea, but not always. Because of the potentially lethal consequences of obstructive sleep apnea, if a heavy snorer is also sleepy during the day, it is important to consider a sleep study to rule out apnea. This path of preventative medicine may save the snorer’s life.
Treatments for snoring include side sleeping, losing weight (if overweight), eliminating nicotine, and avoiding or reducing alcohol. There are also many devices that can help, from inexpensive over-the-counter gadgets to costly oral appliances designed by dentists trained in sleep medicine. The range of efficacy of these devices is broad, with the same device working well for one person and not at all for another.
One of the populations overlooked in regard to sleep-related breathing disorders is children, even though up to 15 percent of them may have one. It is disconcerting that 90 percent of such cases are undiagnosed. These disorders can be associated with headaches, irritability, bedwetting, and, of course, daytime sleepiness. Causes range from problems with tonsils to irregular facial bone development, so engaging a pediatric otolaryngologist (ear, nose, and throat physician) can be impactful.
Obstructive Sleep Apnea
The statistics surrounding obstructive sleep apnea (OSA) are alarming when we consider it occurs in more than one in four adults between thirty and seventy years old, with over 80 percent of cases undiagnosed. If a risk factor such as having posttraumatic stress disorder or being overweight is added to the equation, the likelihood of having OSA increases dramatically. OSA can cause diabetes, weight gain, stroke, heart attack, high blood pressure, and depression, so we must increase education and screening for OSA. But what is OSA exactly? The airway is obstructed during sleep, and the oxygen levels in the body and brain drop, with associated damage to tissue depending on the severity of the disorder. If oxygen levels drop low enough, small parts of the brain and the heart could die each night. The reason airflow gets blocked is usually because the tissue of the throat or the weight of the tongue closes off the opening in a manner more extreme than snoring. Snoring allows the air to pass through, despite the vibration of the tissue. In contrast, with OSA, the air is blocked for a varied amount of time, happening a few or hundreds of times each night, often without the sleeper having any idea. Heavy snoring can be an indication of OSA, but also people who do not snore at all might still have OSA. Waking up with headaches, feeling sleepy during the day, and having cognitive decline or unexpected weight gain are all OSA symptoms. OSA is diagnosed with a sleep study, and now there is also at-home equipment that can be used in many cases, making it even easier to take this crucial step to improve health.
Once OSA is diagnosed, there is an assortment of choices for treatment, including weight loss (if a person is overweight), quitting smoking and/or drinking alcohol, sleeping with an apparatus to keep the person on their side, and using oral appliances or devices that keep the airway open with air pressure. Continuous positive airway pressure (CPAP) consists of a piece that goes over the mouth and/or nose connected to a hose that supplies a flow of air to keep the airway open. There is an array of shapes and sizes, so if a patient is not comfortable wearing what they are given, it is important that they advocate for themselves to get a more comfortable device (Figure 6.2). There are also oral appliances that can hold the tongue or move the jaw forward, and these do not rely on an airflow machine. Some patients resort to surgeries, but they are typically not as effective as CPAP. Visit this Harvard Medical School website2 for apnea resources and a video of retired basketball player Shaquille O’Neal going through the process of being diagnosed and treated for his OSA.3
Central Sleep Apnea
Central sleep apnea (CSA) is a rare disorder compared to OSA and is associated with the brain not sending the signal to breathe.
Some cases of CSA are caused by problems with the heart or kidneys or from taking opioids for longer than two months. The concept of CSA is similar to OSA in that the person will be tired during the day because they are not receiving enough oxygen when sleeping, but treatments will vary depending on the cause.
Sudden Infant Death Syndrome
Sudden infant death syndrome (SIDS) is not a sleep disorder per se, but it is worthy of discussion in this chapter. SIDS is the sudden, unexplained death of an infant younger than one year old and is the leading cause of death in children one to eleven months old. The highest rates are in babies two to four months old. Research suggests it is linked to an abnormality in the brainstem. Studies are underway to further investigate the possibility of a hearing screening test to identify babies at increased risk of SIDS. The connection may be the hearing pathway traveling through the brainstem. While the cause of SIDS is not known, certain practices increase the risk and so are best avoided: inhaling secondhand smoke, sleeping on soft surfaces, overheating, or sleeping on the stomach. The current advice is to put babies on their backs to sleep, use a firm mattress, and put babies in sleep clothing or a sleep sack so covers are not necessary (Figure 6.4). Breastfeeding has also been shown to dramatically reduce the risk of SIDS.
Restless Legs Syndrome
Having restless legs might not sound too bad, but with an increased risk of depression and anxiety, as well as the myriad consequences of poor sleep, restless legs syndrome (RLS) has far-reaching repercussions on a person’s life. This arises from what is often an indescribable sensation—perhaps tingling or itching—that triggers an overwhelming urge to move the legs. The sensations tend to disrupt daily activities, such as riding in a car or sitting in a classroom, and they are deleterious to sleep.
Sometimes the cause is unknown, but anemia, diabetes, or pregnancy could give rise to RLS or make it worse. Medications such as antidepressants, allergy medications, over-the-counter sleep drugs, and antinausea medications can cause and aggravate RLS.
Exercise may relieve the symptoms of RLS, but interestingly, exercising with too much intensity can increase them. Stress-reducing and muscle-relaxing practices such as yoga, meditation, warm baths, and massages mitigate the symptoms and promote sleep. Eliminating nicotine, alcohol, and caffeine is crucial.
Periodic Limb Movements
Occasionally people confuse periodic limb movements (PLM) with RLS, but they are separate disorders. While RLS sensations cause the urge to move the legs, PLM is an unconscious and uncontrollable movement itself. The big toe or leg moves a couple of times a minute for up to an hour. Sometimes, though rarely, movements are also in the arms. These common leg movements often do not disrupt the sleeper and, if that is the case, would not be considered a disorder. In fact, a sleeping partner is the one who may have their sleep disrupted, while the person with PLM is snoozing peacefully. If the movements do disrupt the sleep of the person with PLM, at that point, it is considered a disorder and will have all the consequences of poor sleep.
Sleep Leg Cramps
Almost everyone will experience sleep-related leg cramps at least once in their life, but some individuals have several of these intense and painful muscle contractions every night. Both the cramp itself and the lingering pain make it difficult to sleep. Sleep leg cramps are more likely in the presence of diabetes, dehydration, electrolyte imbalance (including potassium, calcium, or magnesium), diuretics, and some medications. While strenuous exercise is sometimes listed as an aggravating factor, the association might be more about a lack of rehydration, stretching, or electrolyte replacement after the strenuous exercise rather than the exercise itself. In most cases, daily exercise, including stretching, helps prevent leg cramps (Figure 6.5). In addition to a daytime exercise program, light exercise—like a walk or gently riding a stationary bike—for a few minutes before bed can fend off cramps. During the cramp itself, stretching, walking, massaging, and heat provide relief. Health-care practitioners are able to determine if there are imbalances (such as an electrolyte imbalance) or other medical conditions that, when treated, will resolve the leg cramps.
Bruxism
Strongly clenching the jaw or grinding the teeth during sleeping or waking states is called bruxism. As a sleep disorder, the episodes happen from a few to hundreds of times each night. Depending on its severity, bruxism can damage teeth, disrupt sleep, and lead to headaches or pain similar to an earache. In many instances, people who have bruxism are wholly unaware. Risk factors are stress, anxiety, anger, frustration, extreme competitiveness, hyperactivity, medications (including antidepressants), nicotine, alcohol, caffeine, and some mental and physical health disorders (such as gastroesophageal reflux disease). To aid in resolving bruxism, consider cognitive behavioral therapy (for anxiety, stress, etc.) and relaxation strategies such as mindfulness, meditation, and yoga, as well as addressing the risk factors. Oral appliances—similar to mouth guards—protect the teeth during sleep but do not address the disorder.
Sleep Paralysis
Since it is normal to be paralyzed during REM sleep, “sleep paralysis” does not sound like a disorder, but it is. Perhaps it should be called “presleep paralysis” or “postsleep paralysis” because those are the times it occurs. An episode of a few seconds or minutes may happen several times a year or only once in a lifetime. A person is unable to speak and cannot move except to breathe and move their eyes. Most of the people I have worked with who have sleep paralysis have reported visual hallucinations, such as seeing a person at the foot of the bed, and also feelings of anxiety during the episode (Figure 6.6). Being sleep deprived or stressed or having an irregular sleep schedule increases the likelihood of having sleep paralysis. It is also associated with particular medications, narcolepsy, and psychiatric conditions, including bipolar disorder. Other than ruling out and addressing mental or physical health problems and narcolepsy, the treatment usually involves attending to stress and getting regularly scheduled eight-hour sleep sessions each night. To reduce their anxiety, I have coached people on meditation and breathing techniques to use during the paralysis. They have all reported to me that the practice makes them feel less apprehensive and fearful of the episodes, and consequently, their sleep quality has improved.
REM Sleep Behavior Disorder
When the normal paralysis of REM sleep does not take over, a person will act out their dreams by jumping, shouting, swinging their arm, or whatever happens to be taking place in the dream (Figure 6.7). This is REM sleep behavior disorder (RBD). Unlike sleepwalking, a person with RBD will usually have their eyes closed and rarely walk. Upon awakening, they swiftly become alert and are able to report their dream, which will contain activities that match their observed movements. This can happen four times a night, every night, or as rarely as once a month. The sleeper does not have awareness of the episode. Alcohol use (and withdrawal), certain medications, and sleep debt exacerbate RBD. Because more than one in three people with Parkinson’s disease also have RBD, health-care practitioners recommend monitoring RBD patients for signs of Parkinson’s so early treatments to slow the course of the disease can begin immediately. RBD patients are also at greater risk of experiencing other sleep disorders, such as narcolepsy and sleep apnea, so they require regular sleep studies so these other disorders can be diagnosed and treated. RBD itself is usually treated with medication.
Sleep-Related Eating Disorder
Getting up in the middle of the night for a snack might sound harmless, but that is not the only thing happening with sleep-related eating disorder (SRED). In this case, the person will typically binge eat quite rapidly and, since they are not completely alert, could cut or burn themselves cooking. The foods they eat are also sometimes inedible items such as raw meat, coffee grounds, or even cleaning supplies (Figure 6.8). Unlike a sleepwalker, who will likely be scared if you awaken them, a person with SRED tends to be angry and hostile when aroused from an episode. They may or may not have any memory of the event, so it can be frightening to arise in the morning to a messy kitchen and a stomachache. Using antidepressants, sleep prescriptions, and other drugs can cause SRED. Getting poor sleep increases the frequency of these episodes. Typically, a doctor will prescribe medication to treat the symptoms.
Sleepwalking
During slow-wave sleep in the first half of the night, a person may walk, or sometimes run, out of bed, with glazed-over and open eyes (Figure 6.9). They talk or engage in other behavior, sometimes elaborate and/or inappropriate. Episodes can occur just a few times in a year or several times each night, or even during a nap. Awakening someone from sleepwalking can be very scary and disorienting to the sleepwalker. While it is a myth that it is dangerous to awaken a sleepwalker because they may die from the fright, it is in fact dangerous to awaken a sleepwalker too suddenly because, in their confusion, they may attack and hurt you or themselves. If you feel completely comfortable, gently guide the person back to bed, touching them as little as possible, coaxing them in the right direction until they get into bed themselves. That approach is risky, so the other option is to get a safe distance away and make a noise, gradually increasing in volume, until the person awakens. They will startle, but at least you are out of harm’s way. Then gently explain to them that they are all right and were sleepwalking.
Alcohol, sleep prescriptions, stress, irregular sleep schedule, posttraumatic stress disorder, asthma, premenstrual syndrome, fever, certain drugs, and sleep debt can cause sleepwalking. Mindfulness, meditation, hypnosis, and stress-reduction strategies can treat it. It is also important to do a safety check in the bedroom and home to minimize the harm that may come during an episode. For example, make sure the sleepwalker will not have easy access to prescription drugs, scissors, or car keys, and put gates across stairways.
Now that you are familiar with sleepwalking, take a moment to compare it to REM sleep behavior disorder.
Bad Dreams, Nightmares, and Night Terrors
It is normal to have an occasional “bad dream,” slightly distressing in its feeling. However, when a dream is so upsetting that it causes you to wake up, it is called a nightmare (Figure 6.10). On their own, nightmares are not a sleep disorder unless they occur so often that they are making you lose sleep. One of the difficulties is that since nightmares occur during REM sleep, the dream is vivid, so upon awakening, and even throughout the next day, it can be a challenge to clear it from the mind.
In contrast, when a person is aroused from a night terror, which usually occurs during slow-wave sleep, with its associated dull dreams, there is typically no memory of the dream. However, there is nothing dull about the physiological response to night terrors. A person awakens from a night terror with an overpowering sense of fear and a pounding heart, shaking and perhaps even screaming, jumping out of bed, or striking out at someone. They are also usually disoriented and slow to respond to someone trying to soothe them. Night terrors are typical during the first third of the night, when we have the most slow-wave sleep, while nightmares usually occur during the latter third of the night, during our longer periods of REM.
Nightmares and night terrors have a range of causes including antidepressants, high blood pressure prescriptions, alcohol, posttraumatic stress disorder, exhaustion, mental disorders, and inconsistent sleep schedules. Treatments include addressing these factors as well as implementing stress-reduction and mindfulness practices. Imagery rehearsal therapy is a promising treatment as well and has also helped reduce daytime trauma symptoms (see chapter 5).
Bedwetting
A child might not be able to control their bladder during sleep until they are five years old, so unless there is bedwetting twice or more a week in a child over five years old, it is not considered a disorder. It is crucial that if a child wets the bed, their self-esteem is considered in the parent’s handling of it. In addition to being harmful to the child’s emotional health, shaming them for it is also known to make the bedwetting more severe and take longer to resolve.
If a child has gone six or more months without bedwetting and then suddenly begins again, it could be due to stress, a urinary tract infection, constipation, or another disorder. In elderly adults, bedwetting may occur with dementia, depression, or obstructive sleep apnea. Some forms of diabetes also cause bedwetting. Rarely, hormonal imbalances could cause bedwetting at any age. Normally, antidiuretic hormone (vasopressin) levels rise during sleep to keep the amount of urine produced low enough so the bladder holds it all night. If these levels are too low, the bladder may fill multiple times during the night, so the person would need to wake up repeatedly to go urinate in the bathroom and might eventually be too tired to awaken. Once mental and physical health issues have been ruled out, treatment should focus on minimizing any shame associated with bedwetting combined with behavioral therapies such as enuresis alarms and positive reinforcement.
Jet Lag
Traveling across time zones can be ruinous to your sleep schedule. You may find yourself waking up in the middle of the night, wide awake and with no ability to go back to sleep, and during the daytime, you may get hit with a strong and sudden wave of uncontrollable sleepiness (Figure 6.11). For many people, it takes one day for their circadian rhythm to shift one hour, so in the days before a trip, try shifting your bedtime closer to that of your destination. Stay hydrated and avoid or have only minimal caffeine and alcohol during the flight. Consider incorporating bright light in the morning or early evening, depending on the direction of the shift; daily exercise; and scheduled fasting. Some people find melatonin supplementation shortly before bedtime on the night of arrival or at the beginning of a red-eye flight to be helpful. Be cautious, and seek advice from your health-care practitioner regarding melatonin because it interacts with some medications and natural remedies. Also, researchers have found that some melatonin supplements carry dangerously high levels of the hormone (many times higher than what is stated on the bottle), and some products labeled “melatonin” contained no melatonin at all.4
Short Sleeper
Research suggests there is a genetic difference that changes the sleep need of a rare few—less than 1 percent of the population—so they need less than six hours of sleep a night. They never sleep longer than six hours, even on weekends, and they do not need naps. Every morning, they wake up feeling refreshed; they do not have any drowsy periods during the day and so do not need caffeine or any stimulants to stay alert. This sleep pattern begins in childhood, lasts throughout life, and tends to be accompanied by other characteristics like a generally upbeat mood, less of a reaction to painful stimuli, and a somewhat manic personality. It is not possible to teach yourself to be a short sleeper, and if you need to sleep in on weekends, wake up less than revitalized, feel drowsy during the day, or need caffeine to stay alert, you are not a short sleeper. Most people who sleep less than eight hours a night are sleep deprived and are causing harm to their bodies and minds.
Delayed or Advanced Sleep-Wake Phase
There are two separate disorders, delayed sleep-wake phase (DSP) and advanced sleep-wake phase (ASP), categorized as circadian rhythm disorders (see chapter 3). Someone with DSP might refer to themselves as a “night owl” or “night person” because their tendency is to stay up late and get up late. “Lark” and “early bird” refer to those with ASP, who go to bed early and are up before dawn (Figures 6.12a and 6.12b). If someone with either DSP or ASP is able to follow their natural rhythm and still sleep eight hours peacefully each night, their disorder may not cause problems in their life and may require no treatment.
If the person’s schedule does have to be changed—for example, due to school, family, or work commitments—research indicates that the use of melatonin, guided by a sleep specialist, is effective in shifting the sleep schedule. Bright-light therapy is also effective for both DSP and ASP, though with inverse timing. For DSP, bright lights and blue light from devices should be avoided in the two hours before one’s desired bedtime, and bright-light exposure (sunlight, if available) should be sought at the time one wishes to wake up. For ASP, light should be avoided in the morning, and sunglasses are recommended for those commuting in the bright morning sun. Then, in the afternoon and early evening, exposure to bright light is important. Because sleep quality is disrupted if DSP and ASP schedules are shifted, cognitive behavioral therapy for insomnia is helpful (see chapter 1).
Narcolepsy
Some films use narcolepsy as a joke, depicting those afflicted as having a sleep attack, suddenly falling asleep midconversation. I try to counter this in my classroom by humanizing narcolepsy, showing students interviews with people who have this disorder to demonstrate that it is debilitating and difficult, not funny at all. At my campus of around eight thousand students, I tell those in my classroom to look at the faces of their fellow students and know there could be four students on our campus suffering from narcolepsy, which affects one in every two thousand people. I have had several students with narcolepsy in my classes, and their stories inspire me. They have shared how they have coped and become outstanding students, pursued their academic dreams, and helped people in our communities.
The most notable symptom of narcolepsy is extreme daytime sleepiness—indeed, sometimes sleep attacks (sudden onset of sleep)—that may be accompanied by cataplexy, a loss of muscle tone. Cataplexy can be subtle, such as difficulty with speech, or as severe as total paralysis, causing the person to drop to the ground, sometimes causing serious injury. A person with narcolepsy might not be completely alert when they are going through their day—for example, while in the classroom, talking to someone, or reading a book—and thus may face memory problems as well.
Treatments for narcolepsy involve various medications and prescribed sleep schedules, including naps at regular times during the day. Exercising and avoiding alcohol, nicotine, and drugs are also helpful strategies.
Clinical Sleep Study
Once a person has gone through the Sleep Wellness Guide (see chapter 1) and put in place as many of its strategies as they can, if they are still experiencing daytime drowsiness, it is vital that they consider a clinical sleep study to rule out a sleep disorder. As we’ve seen throughout this chapter, untreated sleep disorders can lead to serious mental and physical health consequences. Thankfully, most insurance companies cover sleep studies, and the experience itself is not unpleasant: most places have created comfortable and private sleeping spaces that feel like a nice hotel (other than the tiny wires placed on your head and in a few places on your body).
That being said, there is one especially troubling thought: Who has insurance, and of those who do, who can afford the copay? If we know sleep debt causes strokes, heart attacks, Alzheimer’s, diabetes, obesity, depression, and more, then whoever cannot afford to fix their sleep is at a huge disadvantage in terms of their health, which should be a basic human right—especially in countries like the United States, where there is access to excellent medical treatment . . . for those who can afford it. I encourage you to consider how you can help bring sleep wellness education and access to clinical sleep studies to everyone who needs it.
1 See also Capital Otolaryngology Head and Neck Surgeons, “What Causes Snoring and Obstructive Sleep Apnea?,” YouTube video, accessed May 5, 2021, https://www.youtube.com/watch?v=i5p0I-Jvtss.
2 Division of Sleep Medicine, “Apnea: Understanding and Treating Obstructive Sleep Apnea,” accessed on December 3, 2021, http://healthysleep.med.harvard.edu/sleep-apnea.
3 Harvard Medical School, “Shaq Attacks Sleep Apnea,” YouTube video, 4:16, May 5, 2011, https://www.youtube.com/watch?v=4JkiWvWn2aU.
4 Madeleine M. Grigg-Damberger and Dessislava Ianakieva, “Poor Quality Control of Over-the-Counter Melatonin: What They Say Is Often Not What You Get,” Journal of Clinical Sleep Medicine 13, no. 2 (February 2017): 163–65, https://doi.org/10.5664/jcsm.6434.
Introduction
How would you react if you saw your bus driver, your surgeon, or your pilot drinking cocktails while performing their job? You would be appalled. Yet sleepiness can be worse than drunkenness in terms of its likelihood of causing an accident (Figure 7.1). Researchers have shown that sleep-deprived individuals drive more recklessly (hit more cones in driving courses) and have worse coordination and reaction time than those who are drunk. Sleepiness in fact causes as many deaths and injuries from car accidents as drunk driving. Those numbers are probably underestimates, since highway patrol officers do not have a test for sleep debt and also because people are often unaware of their degree of sleepiness. People can experience four seconds of sleep while driving, performing surgery, flying a plane—you name it—and not realize they are asleep. It is chilling to combine this information with the fact that one in three Americans admits that at least once in the previous month, they have put themselves in the driver’s seat even though they were finding it challenging to keep their eyes open. More than 40 percent of adults report that they rarely or never get enough sleep on weeknights. If legislators could see the deadly effects of drowsiness the same way they see those of drunk driving, perhaps we could motivate them to support an effective educational and health-care movement to address our national sleep debt emergency.
Economics
An effective approach may be to talk to people about the financial cost of sleep debt. One of my mentors in social justice and antiracism work told me, “We do this work because we know it is the right thing to do, but if we can show leaders how making these changes is a way for them to save or make money, then we get their attention.” What is the financial cost of sleep debt? $411 billion annually for the US. This comes from a RAND Corporation 2016 report that also listed the annual cost of insufficient sleep for Japan ($138 billion), Germany ($60 billion), the United Kingdom ($50 billion), and Canada ($21 billion) (Figure 7.2).1 If loss of life is not enough reason to justify the allocation of resources for sleep wellness education, saving hundreds of billions of dollars each year should do it.
Valiant efforts have been made to help change attitudes toward sleep in the US. William Dement, known as the father of sleep medicine, dedicated decades to the cause, in particular from 1991–94, when he served as chair of the US Congress–mandated National Commission on Sleep Disorders Research. Yet still, we find our country to be, in the words of US senator Mark Hatfield, a “vast reservoir of ignorance about sleep, sleep deprivation and sleep disorders” (Figure 7.3). It may surprise you to know that Hatfield made this remark all the way back in 1993, and yet sleep debt–related tragedies have been multiplying ever since.
Antiracism
As we consider the need for action to address the issue of sleep debt, we should keep in mind race-associated inequities in sleep wellness. Is healthy sleep a luxury, afforded only to “non-Hispanic whites”?
Before moving on, it is important to clarify that race is a social construct. There is no biological or anthropological evidence that humans come from different races. We are one race: the human race (Figure 7.4). But race labels, such as Black, are a part of this discussion due to the research, in which they are used to create groups for data analysis. Sometimes, these groups have to do with ancestry, such as in the case of Alaska Natives, so some of these terms are mixed into this section, depending on the studies being cited.
The US Centers for Disease Control and Prevention analyzed data from over four hundred thousand adults and found the prevalence of healthy sleep duration to be significantly lower in Native Hawaiians / Pacific Islanders, non-Hispanic Black people, multiracial non-Hispanics, and American Indians / Alaska Natives compared to non-Hispanic whites, Hispanics, and Asians. This study is just one of several that have provided evidence that there is racial inequality in sleep wellness. Harvard researchers reported that Black participants are five times more likely to have insufficient sleep compared to other groups. Even when socioeconomic status is factored out, the Black participants still get less sleep.2 This has enormous implications when we consider which groups have the highest rates of diabetes, obesity, high blood pressure, and other sleep debt–related disorders. For example, if Blacks and Native Hawaiians, two groups with higher rates of those disorders, are getting poor sleep, and we know poor sleep can cause these disorders, we have an extra layer of responsibility to address the racial inequalities around sleep health.
It is important to point out that the scientific community agrees that there are no innate biological reasons for the sleep differences based on race. Researchers suggest the experience of racism, even in its subtlest forms, impacts a person’s ability to sleep well and, in particular, to enter the deep and restorative sleep of NREM 3. This likely plays a role in the poor sleep reported by those experiencing discrimination based on sexual orientation as well, so consideration for sleep equity must go beyond race, to all groups experiencing discrimination and oppression. It makes sense that sleeping deeply would require the mind to be in a state of ease, knowing we are safe and free. The situation is exacerbated by the reported connection between lack of sleep and reduced opportunity for civic engagement, such as being able to safely and conveniently vote. Insufficient sleep is associated with reduced political participation and decreases in other measures of social capital (Figure 7.5).
Thus sleep inequality research adds one more justification, on top of the mountain of reasons, for fighting racism. It also illustrates the importance of developing targeted sleep wellness education and health services for these groups.
Business
Company leaders are in a strong position to make their mark, and increase profits, by addressing employee sleep debt. One study of four large companies in the US determined sleepiness was costing them—in lost productivity alone—around $3,000 annually per employee. For the four companies in the study, the yearly capital loss was over $50 million. On a national level, poor sleep causes on average, per person, eleven days of lost productivity in the US. In the United Kingdom, one in five workers report that they had recently arrived late to work or skipped work due to insufficient sleep. More than one in four employees in Canada take sick days because of sleepiness. Sometimes the reason we don’t get enough sleep is because we are staying at our jobs late into the evening in hopes of completing more work. The irony is that if we are low on sleep, it will take us longer to finish the work because of decreased cognitive and physical functioning. We, and our companies, would be better served to call it a day, get a good night’s sleep, and start new in the morning. But first, a company must develop a prosleep culture that supports this wise decision-making.
In Japan, as part of a response to survey results indicating that 90 percent of adults do not get enough sleep, some companies are paying their employees to sleep. One Japanese company uses a phone app to record hours of sleep, and if the employee reaches the target, they earn points to use for cafeteria purchases. In the US, Ben and Jerry’s, Google, Huffington Post, and Nike have places where staff can sleep while at work (Figure 7.7). Reboot, a marketing company in London, provides a peaceful room for napping. Many companies around the world are seeing the benefit of allowing their employees to work the hours better matched to their chronotype: for example, letting the night owls start their shift later in the morning. Considering the impact of poor sleep on cognitive function, productivity, accidents, and illness, companies could get an enormous return on their investment by supporting healthy sleep for their employees.
High Schools and Colleges
An international comparison found that among the fifty countries studied, the US has the most sleep-deprived students. One in three high school students fall asleep in class, and although teenagers need nine hours of sleep each night, most are sleeping around seven or fewer; less than 10 percent of them are getting enough sleep (Figure 7.8). African American and Hispanic students, as well as those from low-income households, get even less. In Japan, half of high school students are sleeping six or fewer hours on weeknights.
Adolescent sleep deprivation is an alarming epidemic. The American Academy of Pediatrics, the American Association of Sleep Medicine, and the American Medical Association have all identified insufficient sleep in adolescents as a serious public health issue and recommend that high schools should not start before 8:30 a.m., even though most of them still begin much earlier. Consider the short- and long-term impact of insufficient sleep on teenage mental and physical health, such as increased rates of depression, anxiety, high blood pressure, obesity, and diabetes. Research suggests teen suicide, violence, and accidents are reduced if teens are given the opportunity to get a healthy amount of sleep. In addition to educating families about the importance of sleep, convincing school districts to move to later start times would start a revolution with tremendous and far-reaching impact. Along with higher academic achievement, school officials could boast about reductions in their students’ rates of illness, depression, tardiness, and suicide.
If the traditional school start time is 8:00 a.m. and a student awakens at 6:30 a.m. to get ready and catch a bus, it is almost impossible that the teen could have gotten enough sleep: to get the nine hours most teens need, they would have to be sleeping by 9:30 p.m. Add to the equation their delayed circadian rhythm, a normal physiological part of being a teen, and it is even more unlikely they would be able to pull this off, even under the best of circumstances. For their bodies, the experience of getting up at 6:30 a.m. would be like an adult getting up at 4:30 a.m. every day for work. So it comes as no surprise that schools that shift to a later start time report a reduction in mental and physical health problems, alcohol and drug use, and traffic accidents, as well as increased academic success.
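To make the arithmetic above concrete, here is a minimal sketch in Python (purely illustrative; the helper names are mine, and the nine-hour sleep need, 6:30 a.m. wake time, and roughly two-hour circadian delay are simply the figures used in this paragraph, not universal values):

```python
from datetime import datetime, timedelta

def required_bedtime(wake_time: str, sleep_need_hours: float) -> str:
    """Clock time a sleeper must already be asleep by to meet their sleep need."""
    wake = datetime.strptime(wake_time, "%H:%M")
    return (wake - timedelta(hours=sleep_need_hours)).strftime("%I:%M %p").lstrip("0")

def felt_wake_time(wake_time: str, circadian_delay_hours: float) -> str:
    """Roughly what a wake-up time 'feels like' on a phase-delayed body clock."""
    wake = datetime.strptime(wake_time, "%H:%M")
    return (wake - timedelta(hours=circadian_delay_hours)).strftime("%I:%M %p").lstrip("0")

# Figures taken from the paragraph above (illustrative assumptions, not universal values):
print(required_bedtime("06:30", 9))  # 9:30 PM -- asleep by 9:30 the night before
print(felt_wake_time("06:30", 2))    # 4:30 AM -- what a 6:30 alarm feels like on a delayed teen clock
```

The sketch only confirms that the numbers in the paragraph hang together: a 6:30 a.m. alarm plus a nine-hour sleep need forces sleep onset by 9:30 p.m., which a delayed adolescent rhythm makes very unlikely.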
Japan, Australia, New Zealand, England, and Finland have had later school start times for decades, and each of these countries has higher achievement rates than the US on standardized exams. In the fall of 2019, California became the first US state to mandate later high school start times, reflecting the value it places on its children’s health. Since studies have shown that bus scheduling, after-school programs, student jobs, and sports activities are not affected by later start times, hopefully more states will follow California’s lead (Figure 7.9).
Get involved in your community by having discussions with local school administrators about the American Academy of Pediatrics 2014 policy statement3 and the Society of Behavioral Medicine position statement,4 which are calls to action, with compelling scientific evidence, for delaying school start times. You can also contact wise politicians such as US congresswoman Zoe Lofgren, who in 2017 introduced the ZZZ’s to A’s Act as a House Bill to “direct the Secretary of Education to conduct a study to determine the relationship between school start times and adolescent health, well-being, and performance.” An easy step for getting involved, and a way to find a range of resources, would be to visit startschoollater.net.
When you were in school, do you remember having lessons about healthy foods and sex education, as well as classes emphasizing the importance of physical fitness? Most people in the US would answer yes. However, what about lessons on the importance of sleep? Let’s encourage our teachers and school administrators to incorporate lessons on the importance of healthy sleep for academic and athletic performance, stable mood, safe driving, and physical health. Getting children and teens motivated to sleep well is a wise place to build momentum for this much-needed sleep revolution.
Students fortunate enough to make it to college are faced with further challenges. With the high cost of tuition and textbooks, there is considerable pressure on college students to work long hours and take too many credits at once to finish school early so they can get a job, leaving only a small amount of time for sleep. In a survey of industrialized nations, with the adult population sorted by age, college-age people get the worst sleep. In Korea, college students sleep on average 6.7 hours per night, and I imagine many college students reading this book wish they could get six hours. The connection between depression and poor sleep, along with the high rates of depression and suicide on college campuses, adds more urgency to the issue. Surveying college students about their sleep is one way to start conversations and increase awareness about sleep debt. This opens the door for us to share resources about how to improve sleep. Illinois State University students made up their faces to look like zombies and walked around campus handing out sleep kits as a part of their “Don’t Become a Zombie” campaign (Figure 7.10). Stanford University has a Refresh program that has been modified and implemented on many other campuses as well, including Dartmouth, the University of Chicago, and the University of Iowa. These programs teach students about the importance of sleep health and provide successful strategies for getting healthy sleep. Several campuses in the US, the United Kingdom, and Japan have also created napping spaces for students. Some have beanbags and others have cots in areas where students sign up for a napping timeslot. Students can reach out to their student government organizations and student health centers for opportunities to provide sleep wellness education activities and find resources to create napping spaces.
Health-Care Providers
The issue around sleep debt and health-care providers has three components. The first one is foundational: the lack of education on sleep wellness and sleep disorders provided to our doctors and nurses. Studies report that the total amount of time dedicated to sleep education in our doctors’ preclinical training is only fifteen minutes. If they received more education, we could expect a decrease in the current rate of sleep disorders that are left undiagnosed (95 percent).
The second component is the lack of sleep-health education and screening provided by health-care practitioners (kahuna lapaʻau in Hawaiian) to their patients.
Primary care physicians should administer a sleep-quality questionnaire and screen patients for sleep problems just as they screen everyone for high blood pressure (Figure 7.12). If a physician sees a patient for something as minor as a splinter, they still have the medical assistant slap on a blood pressure cuff to screen for hypertension (high blood pressure). We need to approach sleep-health screening in the same manner. Every patient should get surveyed; then the survey data should be used as talking points to emphasize the importance of sleep and address any problem areas. Drowsiness should be discussed and pursued. Patients should be asked to keep ten-day sleep diaries and submit those in follow-up appointments. A primary care clinic in Idaho surveyed a little over 1,200 patients who were coming to the clinic for a variety of reasons (besides sleep disorders) and found over 60 percent of them also had sleep disorder symptoms. At that point, all but two of those patients had not yet been diagnosed. Imagine if we could generalize this type of care and reduce illness, accidents, and deaths related to sleep debt and sleep disorders.
The third and final component is the demanding shift work required of our health-care providers and hospital workers. We must change the guidelines for this because there are too many deaths and accidents clearly documented and linked to health-care provider sleep debt. For example, physicians in their residency (the two to seven years they practice while learning their specialty) are working with such high sleep debt that one in twenty report that they have killed a patient due to errors they made because they had not gotten enough sleep (Figure 7.13). In a survey of residents in a San Francisco hospital, over 40 percent of residents disclosed killing at least one patient due to sleepiness. Stanford University researchers have used the multiple sleep latency test for years on numerous residents and nurses, and according to them, of all those respondents, only one person “was not in the twilight zone of extreme sleepiness.”5 Johns Hopkins released a study in 2016 stating medical errors are the third-highest cause of death in the US, making medical errors the reason for 10 percent of all US deaths.6 Knowing how sleep-deprived medical workers are, it is not a leap to consider lack of sleep playing a part in those medical errors and therefore deaths.
The medical establishment needs to be held accountable and revise the work schedules of our health-care providers. The National Academies of Science, Engineering, and Medicine gathered a group of medical and scientific experts to examine evidence and propose revised work schedules for medical residents. For example, with these revisions, they would get a five-hour break for sleeping after working sixteen of their thirty hours in a shift. However, the Accreditation Council for Graduate Medical Education (ACGME) has done too little to have much impact, and way too many sleep debt–related accidents and deaths continue to occur. To put it in perspective, in the US, the ACGME mandates that the maximum number of hours a resident can work per week is eighty, but many European countries, whose medical programs still have excellent success rates and train physicians in a similar number of years, set the maximum at forty-eight.7 We need to increase awareness of the tragic number of preventable deaths and injuries associated with the sleep deprivation imposed on our health-care providers and pressure the medical establishment to change.
Your Next Steps
The World Health Organization says we are in the midst of a “global epidemic of sleeplessness.” The Centers for Disease Control and Prevention report that over 40 percent of adults said they had fallen asleep during the day unintentionally at least once in the past month. In the US and Japan, more than 65 percent of adults are not getting enough sleep. The problem is not limited to industrialized societies. A study of people living in rural, low-income communities without the trappings of industry in eight African and Asian countries found that a large number of adults were not getting enough sleep. The authors used their study’s results to urge people to see the global nature of the sleep debt epidemic.8
There are many approaches to resolving this problem and decreasing its associated catastrophes. One place to start is to address the lack of awareness about sleep debt and the dearth of public policies promoting healthy sleep.
Let’s take a glance at previously successful campaigns that had impacts on public health. Thanks to scientific evidence about the dangers of cigarette smoking, we saw the rise of consumer warnings added to packaging as well as designated nonsmoking areas. After learning more about automobile accidents, we went from cars not even having seat belts to passing laws requiring that all passengers wear them. Vaccine awareness and access helped eradicate smallpox and almost eradicated polio and other diseases. Research on death in infants led to the Back to Sleep campaign to reduce the incidence of sudden infant death syndrome. Widespread distribution of posters provided education about reducing the spread of disease through handwashing. We know we can have an impact, and now is the time to act to increase sleep-health education.
After completing my course on the science of sleep, or simply reading this book, you are likely a sleep expert compared to most of the people in your community, so I ask you to take that knowledge and use it to make an impact on your community. You can read the previous sections for ideas, but here are a few more:
• Choose some sleep wellness and sleep disorders information (for potential content, see chapters 1 and 6 in this book) and put it in a format you like—a flyer, brochure, poster, or sheet of talking points—and go with a friend to do targeted sleep-health education in underserved neighborhoods. You might consider visiting a beauty salon, barbershop, church, or school to share your expertise (Figure 7.14). A good way to start the conversation is by asking people to tell you about their sleep and their early evening routines. People usually like to share their stories. Your first step is to encourage dialogue about sleep.
• Get resources from, or provide support to, a nonprofit such as Pajama Program9 and help children get sleep.
• Reach out to educate leaders in occupations known to have increased levels of sleep debt: health-care workers, airline employees, bus drivers, truck drivers, police officers, first responders, and military.
• Visit your campus health center and ask them to consider providing sleep wellness screenings and to discuss snoring, insomnia, apnea, and daytime drowsiness.
• Talk to colleagues at work about their sleep. Identify things at your workplace that could change to support healthy sleep. Approach an ally in a leadership role in your company and discuss the financial gains likely achieved if they adopted a prosleep culture. Ask them to consider creating a safe napping space, providing sleep disorder and insomnia screening, starting a healthy sleep awareness program, and adjusting shift hours based on chronotype.
• Start a petition or grassroots effort to eliminate daylight saving time.
Consider the successful business leaders, school administrators, and politicians mentioned earlier in this chapter, who have chosen to make healthy sleep a priority for large groups of people and achieved much along the way. Please find an arena where you have a natural interest—perhaps a school, a local political group, a veterans club, an eldercare facility, a health clinic for the underserved, your workplace or college campus—and begin a conversation with someone about how to raise consciousness about sleep wellness. Let’s work together to help people get the sleep they deserve so we can bring more equanimity, health, and peace to our communities and beyond.
1 Marco Hafner et al., “Why Sleep Matters—the Economic Costs of Insufficient Sleep: A Cross-Country Comparative Analysis,” Rand Health Quarterly 6, no. 4 (2017): 11, https://doi.org/10.7249/RR1791.
2 Yong Liu et al., “Prevalence of Healthy Sleep Duration among Adults—United States, 2014,” Morbidity and Mortality Weekly Report 65, no. 6 (February 2016): 137–41, http://dx.doi.org/10.15585/mmwr.mm6506a1.
3 Rhoda Au et al., “School Start Times for Adolescents,” Pediatrics 134, no. 3 (2014): 642–49, https://publications.aap.org/pediatrics/article/134/3/642/74175/School-Start-Times-for-Adolescents.
4 T. Trevorrow, E. S. Zhou, J. R. Dietch, and B. D. Gonzalez, “Start Middle and High Schools 8:30 a.m. or Later to Promote Student Health and Learning,” Society of Behavioral Medicine, November 2017, https://www.sbm.org/UserFiles/file/late-school-start-statement-FINAL.pdf.
5 Rafael Pelayo, C. William Dement, and Krystle Singh, Dement’s Sleep and Dreaming (self-published, 2016), 430.
6 Johns Hopkins Medicine, “Study Suggests Medical Errors Now Third Leading Cause of Death in the U.S.—05/03/2016,” Johns Hopkins Medicine-News and Publications, May 2016, https://www.hopkinsmedicine.org/news/media/releases/study_suggests_medical_errors_now_third_leading_cause_of_death_in_the_us.
7 Pelayo, Dement, and Singh, Dement’s Sleep and Dreaming, 428.
8 Saverio Stranges et al., “Sleep Problems: An Emerging Global Epidemic? Findings from the INDEPTH WHO-SAGE Study among More Than 40,000 Older Adults from 8 Countries across Africa and Asia,” Sleep 35, no. 8 (August 2012): 1173–81, https://doi.org/10.5665/sleep.2012.
9 “Pajama Program,” accessed on December 3, 2021, https://pajamaprogram.org/our-programs/.
Some things in life cause people to feel; these are called emotional reactions. Some things in life cause people to think; these are sometimes called logical or intellectual reactions. Thus life is divided between things that make you feel and things that make you think. The question is, if someone is feeling, does that mean that they are thinking less? It probably does. If part of your brain is being occupied by feeling, then it makes sense that you have less capacity for thought. [Saying "part of your brain" shows how feeling and thought take up the same space, or might use the same abilities or similar processes in the mind. It shows how you really can't do two things at once, especially since they are both cognitive processes (they both take up your memory and attention).] That is obvious if you take emotional extremes, such as crying, where people can barely think at all. This does not mean that emotional people are not intelligent; it just means that they might be dumber during the times in which they are emotional. Emotion goes on and off for everyone: sometimes people cry, and sometimes they are completely serious. [This could further mean that an emotional person might be less emotional if they are doing serious thinking.] In 1941 Hunt said that classical theories of the definition of emotion “concern themselves with specific mechanisms whereby current behavior is interrupted and emotional responses are substituted” (W. Hunt, 1941).
The previous paragraph explored the difference between and nature of emotion and thought (or intellect). Understanding the nature of emotion and thought might help explain Descartes’ statement “I think, therefore I am” because his statement implied that thought is the important element for existence. What role do feelings and thoughts play in determining if and how you exist?
Some things in life can identifiably cause more emotion than other things.
1. Color causes more emotion than black and white. So anything with more color in it is going to be more emotional to look at, whether it is the difference between a gold or silver sword, or a gold or silver computer. In both cases the gold is going to be more emotional. [That example with the sword makes it obvious that color is more emotional than things with less color, but it usually is hard to tell if each thing is more or less emotional just based on its color. It might be that something black is more emotional than something colorful if they are different objects. Also, it seems like color is a shallow source of emotion: you can identify that color causes more emotion, but if you have an attachment to something that has a black-and-white color instead of being colorful, or something else is going on, then the black-and-white object might be more emotional than its colorful version.]
2. Things that are personal are emotional: personal things that people like and that they feel are “close” to them, such as home or, really, anything someone likes. That is a definition of emotion after all: something that causes feeling. So if you like it, it is probably going to cause more feeling. Things aside from liking, such as curiosity, could also cause emotion, but liking is usually one of the stronger emotions. You could say that the two are directly proportional: the more you like something, the more feeling it is going to cause. [Or the more curious you are, or the more of any other emotion you have, the more feeling it would probably generate. If you are emotional about something, that is saying that it is causing you to feel more. This becomes clearer when the difference between emotion and feeling is explained later in this section. Aristotle, however, claimed that the core of emotions was beliefs and desires. That shows how strong beliefs and desires are emotionally. Desire is a less cognitive term than the word "like" because desire implies an automatic emotional response, whereas the word like means that you consciously like something. How much you like something comes from understanding your desires, because liking is your understanding of how much you desire something.]
But there are things that people like that cause thought. You could like something and it causes you to think, and we previously defined emotion as feeling, not thought. Thoughts are separate from emotions because thought is a period of thinking. What exactly is thinking then? You can think about emotions, “how did I feel then?” etc. So is thought just a period of increased attention? Or is it a sharp spike in attention focused on one particular thing that is clear? [Thought feels like you are paying clear attention to something, whereas you aren't always paying as clear attention to your feelings.] It is hard to focus that much if you are feeling a lot, however. This makes me conclude that there is an overlap of feeling and thought, like a Venn diagram. But there are still parts of thought that don’t have feeling or emotion in them, and parts of emotion that don’t have thought in them. [So thoughts are also going to influence feelings, since they overlap; it is not only feelings that influence thoughts.] That means that thought requires more concentration than feeling does, since we defined thought as a period of increased attention. You can be emotional and have more attention, but usually if you are emotional you are going to be less attentive than you would be if you were thinking more. [That ties into the idea that you can only do one thing at a time: if you are paying attention to your thoughts (or thinking more) it is going to be harder to pay attention to your feelings (or "feel" more), because you can only pay attention to a limited number of things at once.] Then again, if you are emotional you are being attentive to your emotions, whatever they may be, and if your emotions are on something like the sun, then when you see the sun you are going to be attentive to it, but not be thinking about it. So you can pay attention to something and not be thinking about it at the same time. [If you are paying attention to something but not thinking about it, what exactly is this increased attention doing? It could be helping you process and understand what feelings that thing causes in you, or just make you feel more about it, which would make you pay more or less attention to it. You could be feeling a lot about something and be paying attention to something else, but that is clearly going to be harder (usually, based on the circumstances) than if you didn't have that emotion. That is a clear example of how emotion can be a distraction (from thought and even other emotions).] But you aren’t going to be paying attention to anything else. [That further shows how emotion can take up your attention, especially if you are paying attention to the emotion, as in that example.] It seems that thought is more attention than emotion, however. If you try to “feel” your computer you still don’t give it as much attention as if you were thinking about your computer. Then again, it depends on what you are thinking about your computer: if you are thinking that your computer sucks, you are going to give it less attention than if you are thinking that it is great. It also depends on what your feelings are about that computer. If you feel that the computer is good, then you are going to give it more attention than if you feel that it is bad (possibly). [Does this mean that when you think about your computer your attention is on what it is you are thinking about your computer? Thinking about your computer might generate emotions, which would then cause you to be feeling and thinking about your computer.
The thought of the computer might just pull up the general feeling of the computer (the feeling you get from the computer when you usually interact with it or think about it, not some other feeling about it, which wouldn't then be "general"), not necessarily the feeling of the computer that corresponds with that particular thought. Those ideas raise the question, "when you have a feeling about something, what exactly is that feeling causing you to feel and think (consciously and unconsciously)?"] The thoughts and the feelings correspond, however. That is, if you are thinking it is bad, then you are going to feel that it is bad. Thus thought and feeling are really one and the same. [It might be that if you think it is bad, you feel that it is good, but that would only be if you are confused, like if you consciously think it is good but it really makes you feel bad.] But thoughts are really clearer than feelings. Thought and feeling may result in the same amount of attention to something, but thought is more precise. It is more precise for you to think that the computer is good than to feel that the computer is good. Who knows why you feel the computer is good, but if you were thinking the computer is good then you would know why you thought that. Emotions and feelings are more obscure.
So, the more you like something (or hate something, or have any strong emotional reaction to anything), [Something shallow that doesn't generate a lot of feeling might not be called "emotional".] the more emotional it is, but that doesn’t mean that it might not also cause you to think about it. One can’t label everything in life as either emotion or thought, however. Life isn’t a scale with emotion on one end and thought on the other. There are other factors involved, things like adrenaline and physical action, which might also cause increased attention that isn’t either emotional or thoughtful. [You could be more specific with that scale and mention which emotions, or which thoughts.] When you’re running you have a lot of attention on the fact that you are running, and you’re not thinking about it or being emotional about it. This means that just because you like something doesn’t mean that it is emotional. You might like running, but it doesn’t cause emotions in you. [But when you think about running it is going to cause more emotions in you since you like it, and you are probably going to be experiencing better emotions when you are running if you like it than if you don't, unless you enjoy pain, in which case you could like something that generates bad emotions in you (it could be generating negative short term emotions, but, since you like it, positive emotions over the long term, or positive emotions when you think about it (or even a mix of the emotions, since it is more complicated that you like it but it causes pain)).] What does emotion mean then? Emotions must be thoughts that you can’t identify: when you feel something, it must be that you are thinking about something unconsciously. You just have no idea what it is, usually. Emotions and feelings are thoughts then. By that I mean that they can be broken down into parts, and it can be figured out what those parts are. And thoughts are just really parts that you can identify. So the difference between emotions, feelings and thoughts is that you know what thoughts are about, but you don’t have as good an idea of what emotions and feelings are, as they are more obscure and harder to identify.
Thus once you find out what is causing the emotion, it is no longer an emotion but a thought. (That is, you now call the emotion a thought, though the thought is still probably generating emotion. In your mind there is still an emotion, but this emotion is now “part” of a thought: it becomes part of the thought associated with it because you created this link. You would call the emotion/thought just a thought because, while thoughts can generate emotions, emotions cannot generate thoughts by themselves, unless you realize what the emotion is, in which case you are generating the thought, not the emotion generating it. You are realizing it is a thought, not an emotion, so this realization takes over and the emotion becomes part of that realization, because you consider the emotion a part of you and you generated the realization, instead of the realization being a part of the emotion. Since it seems like the emotion belongs to the realization (you), instead of vice versa, you call it a thought instead of an emotion, because you generated the thought, and hence it also seems that you are now consciously generating the emotion, the emotion coming from the thought.) So that would mean that all emotions have roots in real things, and these real things can be explained with thoughts, so all emotions are really thoughts that you haven’t realized; an emotion would just be a thought that you haven’t identified yet, so the term “emotion” goes away when you realize it is a thought (because that is what it really was all along, a thought), though this thought might still be generating a feeling. So, since you perceive the emotion as belonging to you, and you generate thoughts consciously, you consider the emotion to be part of a thought, not vice versa (and hence call identified emotions “thoughts”). So when you identify an emotion, it is a thought because thoughts can generate emotions; if the emotion is still there after you identified it, you would say it falls under the category “thought”, because the thought is making it. [That brings up the question, "do thoughts about your emotions accurately represent what that emotion is?" If the thought doesn't accurately represent the emotion, then you would really need more thoughts to represent the entire emotion (to show what that emotion is). Also, can you ever really perfectly explain emotion with thought? Emotion seems infinitely complicated, finite and dynamic.] You might be lazy, however, and not want to spend time thinking, which is what emotions are for. “Ah, that gold sword is pretty” might be the emotion, but to your conscious mind you would have no idea that you like the sword because it is pretty; you might just know that you like the sword and that it is making you emotional. Therefore, emotional things are really any feelings that cause unconscious or conscious thought. Feeling is also another word for unconscious thought. That then leads to the conclusion that thought can be emotional (because thoughts are going to be about things that can cause emotion). I think that emotions can be more emotional than thought, however, because emotions can contain more than one thought (while thoughts are very slow consciously), therefore causing them to produce more feeling, or be more emotional.
[So thought is simpler than emotion and therefore thoughts might cause less feeling by themselves, but the feeling a thought brings up is probably going to be more complicated than the thought alone, since feelings are usually more complicated than thoughts.] While you can only express a few thoughts a minute, your emotions can contain endless numbers of thoughts per minute – they are not as exact and hence don’t make as much sense as thoughts do.
Since emotion is really thought, when you are experiencing emotion you could almost say that you are thinking. You really are thinking about emotion when you experience it, because thought is just paying attention to something in your mind. You also might learn (or unlearn) from processing or experiencing emotion, because emotions are similar to thoughts, or could be said to be a type of thought. You are probably going to learn more unconsciously if you are experiencing emotions than not, because then there is something occurring that causes you to learn, instead of just learning from nothing. This also explains Descartes’ statement “I think, therefore I am”, because if all emotion is really thought, then that shows how emotions contribute to your existence in a meaningful way. They do because you learn from them like you learn from thoughts; emotions are real things and meaningful because they are thoughts to you (or things (thoughts) that symbolize real things (what you are thinking or feeling about), which cause you to experience the world and learn).
So thought is just a lot of attention on one little thing. And emotion is attention on lots of individual things, or possibly one thing. So things that are emotional are things that cause you to think, consciously or unconsciously. [A conscious feeling would just be a feeling that you have identified (or recognized) more than an unconscious one.] And therefore they would cause you to feel, consciously or unconsciously. So the more you like something without being able to consciously identify why you like it, the more emotional it is; and the more you like something where you can consciously identify what it is, the more conscious thought it is going to cause, and the more logical that thing is going to be. Emotion is just unconscious thought.
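To make that identification rule concrete, here is a minimal toy sketch in Python. It is only an illustration of the claim above (an "emotion" is a thought whose cause you have not identified yet); the class name, the example feeling, and the cause field are all invented for this sketch and are not a claim about how the mind is actually implemented.

```python
# Toy illustration: a mental state counts as an "emotion" until its cause is
# identified, at which point (per the chapter's usage) it gets called a "thought".

class MentalState:
    def __init__(self, feeling, cause=None):
        self.feeling = feeling   # e.g. being drawn to the gold sword
        self.cause = cause       # None until you figure out why you feel it

    def label(self):
        # Identified cause -> "thought"; unidentified -> "emotion"
        return "thought" if self.cause is not None else "emotion"

state = MentalState("drawn to the gold sword")
print(state.label())                   # emotion
state.cause = "the sword is pretty"    # the cause gets identified
print(state.label())                   # thought
```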
How This Chapter shows how Intelligence is intertwined with Emotion:
• “Emotion goes on and off for everyone” – this statement shows how there are degrees to which someone can be focused on and feel thought, and degrees to which someone can be focused on and feel feeling. That then also explains the next statement in the chapter “some things in life can identifiably cause more emotion than other things”.
• Since there are parts of emotion that don’t have thought (assuming that emotion and thought overlap – but that is a logical assumption because thoughts generate feelings and are therefore less independent) then emotion (especially emotion without any thought) is going to need less focus or concentration, because emotion is a more pleasurable experience, but thought is one where concentration is usually used.
• Emotions can direct and control thoughts – if you are feeling that your computer is bad, then you might give it less or more attention, and conscious attention is a function of thought because you need to think to start to focus on something. Or, when you notice something, noticing it is a conscious experience because you “notice” it, and thoughts are things which you are aware of, which would then contribute to consciousness.
• Next mentioned is how emotions and feelings are just harder to identify than thoughts, and that therefore emotions and feelings are really thoughts themselves, or vice versa. If all thought is really emotion, and all emotion really thought, then all intelligence could vary and be dependent on emotions. This is further evidenced by the statement “thus once you find out what is causing the emotion it is no longer an emotion, but it is a thought”. That shows how an emotion is a thought that you just aren’t identifying. It is just a matter of definition of the terms. Thought is about concrete things which are real in the world, and emotion is something that you feel but can’t visualize. Therefore intelligence is just the ability to do things which are real, versus feeling something, which isn’t as “real” as thoughts are.
• If a thought is clear then it could be easy to understand. However that doesn't mean that it is a complicated thought. A complex idea or thought could be easy to figure out - and it could relate to its associated feelings.
• What would it mean for a thought or group of thoughts to be clear? An abstract thought could be an abstract concept, which could also be clear; however, it would also be more emotional or have feeling.
What is the difference between emotion, feeling, thought, logic, and intelligence? Use of any of them requires a lot of attention. Even when you are feeling something emotional your attention is directed toward that thing. The answer is that everything in life eventually results in a feeling. Even emotion results in a feeling. Emotion is unconscious thoughts about things, and thoughts are conscious thoughts about things. Thought results in feelings, so unconscious thought (emotion) is also going to result in feelings. [The question is, do the feelings come from the thoughts simultaneously, or later on, or both (and if later on, when exactly).]
If you think about it that way, thought and emotion are both in part feelings; that is, to some extent you feel them right away, in addition to them resulting in feelings later on. But that still means that feelings are always the end result. Then again, thoughts might be the result of current thoughts. That is like emotion: unconscious emotional thoughts are going to result in unconscious emotional thoughts later on. Even feelings could be called unconscious thoughts, because thought is just focusing on one thing for a brief period of time. [When thought about that way, what is the difference between an unconscious thought and an emotion? Is the unconscious thought stronger, more specific, or just something that has more of an influence on what you are thinking than feeling? Thoughts might have a better influence on other thoughts than they do on emotions (and emotions might have a better influence on emotion than they do on thought). Think of it this way: if you are doing something, but you "feel" like you don't want to do it, that isn't going to stop you from doing it as much as your thinking unconsciously, over and over, that you don't want to do it. The thinking in that instance seems like it is just more intense than the emotion, but not necessarily "felt" as much since it is just thought, not feeling. It has more of an influence over your actions, however, and maybe would generate anxiety instead of emotion. It is as if the unconscious thought in that instance comes from your understanding that you don't want to do that activity, and since understanding is a function of thought, you would say that your unconscious thoughts are stopping you from doing it more than your emotions are stopping you.]
Therefore emotion, thought and feeling are really just periods of focus on certain things. With thought you just recognize what it is that you are focusing on. With emotions you feel deeply about what you are focusing on, and with feelings you are focusing on it less. Physical stimulus also results in feelings, and then you focus on those feelings; you aren’t necessarily focused on what caused the feelings (the physical stimulus itself), however. [This ties into the idea that someone can only pay attention to a small number of things at once (including emotion and thought), because if you are focused on one thing, you are probably going to be less focused on something else.]
Thus life is really just different types of feelings; you could categorize all of life as feeling. Even when you think you are in a period when you’re not feeling anything, you really are feeling something; you just don’t recognize what it is that you are feeling. Remember that feelings are thoughts you can’t identify. And since a thought is going to be about something, another way to think about life is just stuff happening. Stuff happening results in feelings in your brain, where more stuff happens. It is all-concrete. [And stuff happens all the time, so you are probably going to be feeling something more than you can recognize when you are feeling something.]
The definition of intellect and thought is almost understanding (of those concrete things). Emotion is feeling, completely separate from facts or information. All facts and information are going to be about things that cause feeling, however, since all things that happen cause feelings and all facts and information are about things that happen. So facts and information are just feelings organized in a logical manner. [Unless the fact doesn't generate feeling, but most things cause feelings.] Intellect and thought also generate feelings when those thoughts are processed in your mind. Since thought is really only about feelings, it is logical that thought actually has roots in feelings. For example, all events are really feelings in the mind, so thoughts are actually just comparing feelings. You take two feelings and can arrive at one thought. Take the feeling of a frog moving and the feeling of a threat of danger. The two feelings combined equal the idea or thought that the frog needs to move when there is danger – the thought is actually just understanding how feelings interact. All thought is, is the understanding of how feelings and real events interact with each other. Feeling is what provides the motivation to arrive at the answer (the thought). If you just had the facts (there is a threat, and the frog can jump), you aren’t going to arrive at the conclusion that the frog should jump away. You need to take the feeling that there is a threat and the feeling that the frog can jump and then combine the two sensory images in your head to arrive at the answer. [It is like the feeling provides the motivation; without emotion, thought really wouldn't be possible because there would be no need to arrive at any conclusions.]
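The frog example can be put in a small Python sketch, purely as an illustration of the claim that a thought is the combination of two feelings. The function name, the two feeling flags, and the two outcomes are invented for this sketch; it is not a model of actual frog cognition.

```python
# Toy illustration of the frog example: two feelings (a sensed threat and the
# sense that jumping is possible) combine into the "thought" to jump away.
# Without the motivating feeling of threat, no conclusion to move is reached.

def combine_feelings(feels_threat: bool, feels_able_to_jump: bool) -> str:
    if feels_threat and feels_able_to_jump:
        return "jump away"
    return "stay put"

print(combine_feelings(feels_threat=True, feels_able_to_jump=True))   # jump away
print(combine_feelings(feels_threat=False, feels_able_to_jump=True))  # stay put
```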
That shows how all intellect is powered and motivated by emotion. It also shows that frogs have thoughts; the frog has to have the thought to jump away when it sees a threat, as a thought is just the combination of two feelings resulting in the feeling of wanting to move away. That process of feelings is like a thought process. Thoughts are a little different for humans, however, because humans have such a large memory that they are able to compare this experience to all the other experiences in their life, while the frog only remembers the current situation and is programmed (brain wiring) to jump away. The frog doesn’t have a large enough memory to learn from new information and change its behavior. That shows how humans are very similar to frogs in how they process data (in one way at least), and that one thing that separates a human from a frog is a larger memory which can store lots of useful information and potential behavioral patterns. [That brings up the question, exactly how good is a frog's memory? It can make its way around a pool without hitting the same spot over and over, so it lasts at least a few minutes. But that is memory for simple things; it isn't smart enough (remember from the logic chapter) to process more complicated data to remember. On the other hand, humans really don't do complicated stuff most of the time, so we are very similar to frogs.]
It is clear that emotion motivates thoughts to occur at all; for instance, you want something, and then that brings up the thought that you want that thing. That same thing happens almost instantaneously for some things, like the frog jumping away. It must have been an emotion that caused the frog to jump away, because that is similar to how a human would respond if there is a danger: the feeling of danger causes you to jump away. If emotions can influence behavior like that, it could be that a different emotion arises from word to word in your own thought process. For example, thinking of a loved one might bring up the emotion love, which might influence what you say next. So emotions have a clear motivating role for simple actions and thoughts, and that is because emotion is simple. One emotion cannot be a complicated thought process that you understand; it could bring up a complicated thought process, and you may have an emotion for the thought process - but the emotion itself isn't that informative. Izard said "Feeling in basic emotion affects action but not higher-order cognition, which has little or no presence in basic emotion processes." (Izard, 2009)
Thoughts, especially in humans, are not that independent – they can be much more complicated, and it can appear that nothing is as it seems. If someone says to you, “I know x”, he isn’t just saying that he knows x; there is a chain of other thoughts that also occur in your mind. You analyze the statement he made and it causes you to think automatically, “Do I know x too?” “Why does he think I care that he knows x?” “Is there anything else about x that is significant that I am missing?” “What if this other person is smarter than me?” That doesn’t just lead to a feeling of being dumb (it might); instead it leads to another concrete thing, “maybe I am stupid”, or the thought “maybe that person is stupid” interacting with the thought “because that thing he said was wrong”. So one simple thought for a human can mean much, much more than that one thought. That example shows another way in which humans are different from frogs – they are capable of more simultaneous thoughts. It is also the memory working hand in hand with that capacity for simultaneous thought: if you had no memory then you wouldn’t have information to compare and bring up those simultaneous thoughts. [Remember that thoughts can lead to and are sometimes emotions (unconscious ones), so that example of the unconscious thought process was really an emotion - worry. It was worse than just worrying, though; it was worrying about specific things, so the emotion was of a more specific type than just worry. Thus there isn't just one emotion of "worry" but there is "worry about your intelligence", etc.]
They can all be moving at the same time as well; not only does one thought follow another, but it occurs instantaneously. If the thing the person said was something you didn’t know, it might make you feel stupid; thus the thought results in a feeling. But that feeling can be translated to a thought. So it isn’t the feeling “I am stupid”, it is the thought “I am stupid”. Feeling stupid might make you feel bad, but it isn’t just that you are feeling bad; you are also thinking over and over “I am stupid” unconsciously, and that is what is making you feel bad. Or you are paying attention to the fact that you are stupid. Thus thought, feeling, and emotion are just paying attention to different things in your head. Concrete things. [In other words, all emotions are not only real things; it could also be said that all emotions have a source, since they are real things.]
It is a little more complicated than that, however. It is going to be a mix of a lot of concrete thoughts interacting with each other, not just the thought “I am stupid” repeated over and over but maybe also a less intense idea of “well I know x and y that that person doesn’t, maybe this was just one event”. So anything that is said or done is possibly followed by a long series of unconscious thoughts and thought processes. [Or, there might be many implications to any one thought. (there might not be, however, like you could relate almost anything to sex, but in reality that isn't necessary).]
There were two examples of thoughts: one was with the frog and the threat of danger, and the other was a questioning of one's intellect relative to someone else. The example with the frog was an example of a thought process that was simple, while the example with the person showed how some thought processes can be much more complicated than they appear.
A good example of how feelings are mixed in with emotions and physical reactions, and how feelings help motivate thoughts, can be found in this explanation of Wundt’s ideas (by organic they mean bodily): “Wundt starts with the unanalyzable feelings that alter the stream of ideas. For example, the unanalyzable feelings of “fear” or “joy” can influence the current stream of ideation, encouraging some, discouraging some, or inhibiting other ideas. This altered stream of ideas produces a secondary feeling as well as organic reactions. And the organic reactions produce sensory feelings that are added to or fused with the preceding feeling (or sensation) and thus intensify the conscious feeling.” (Mandler 2003)
How thoughts and feelings interact delicately was also pondered by Titchener, the following is an explanation of his theory: “Titchener postulated that a train of ideas need be interrupted by a vivid feeling, that this feeling shall reflect the situation in the outside world (as distinct from inner experience), and that the feeling shall be enriched by organic sensations, set up in the course of bodily adjustment to the incident. The emotion itself, as experienced, consists of the stimulus association of ideas, some part of which are always organic sensations.” (Mandler 2003) That shows how thoughts and feelings can occur in sequence or simultaneously, and that the feelings you have can also be physical ones that interrupt or encourage your thoughts.
How This Chapter shows how Intelligence is intertwined with Emotion:
• It is stated first that use of emotion and thought requires attention, and therefore they both cause feelings, and if they both cause feelings then they are going to be similar in nature. Your intellect (or ability to do things which are real) is going to generate feelings just like emotions do.
• Feelings can result in thoughts – this was shown with the frog example: the frog has the thought “jump”, which comes from the feeling of a threat of danger and the feeling of its understanding that it can jump. That shows how thoughts can be encouraged by feelings and mixed in with them.
• Thought is also powered by feeling in other ways: when you are nervous that you didn’t understand something, your feelings then cause you to think nervous things like “do I know that too?” and “does he think I care that he knows that?” Those thoughts are a function of intelligence, because they are causing you to think about real things, which is what intelligence is. [Therefore feeling is also a function of intelligence, since feelings are about real things, intellect causes you to think about real things, and feeling is unconscious thought.]
References
Izard, Carroll (2009). Emotion Theory and Research: Highlights, Unanswered Questions, and Emerging Issues. Annual Review of Psychology, 60, 7.
Mandler, George (2003). Emotion. In D. Freedheim & I. Weiner (Eds.), Handbook of Psychology: Volume 1, History of Psychology, p. 161.
One definition of emotion can be "any strong feeling". From that description many conclusions can be drawn. Basic (or primary) emotions can be made up of secondary emotions; love, for example, can contain feelings or emotions of lust, love and longing. Feelings can be described in more detail than emotions because you can have a specific feeling for anything; each feeling is unique and might not have a name. For instance, if you are upset by one person, that might have its own feeling because that person upsets you in a certain way. That feeling doesn't have a defined name because it is your personal feeling. The feeling may also be an emotion, say anger. "Upset" is probably too weak to be an emotion, but that doesn't mean that it isn't strong, in certain ways, the way emotions are strong. Cold is also just a feeling. There is a large overlap between how feelings feel and how emotions feel; they are similar in nature. So there are only a few defined emotions, but there are an infinite number of ways of feeling things. You can have a "small" emotion of hate, and you could say that you have the feeling hate then; if it is large you could say you are being emotional about hate, or are experiencing the emotion hate. You can have the same emotion of hate in different situations, but each time the feeling is going to be at least slightly different.
William James thought that emotions were not direct, similar to how I believe that they are slower than feelings and more subtle. He stated that emotional consciousness is “not a primary feeling, directly aroused by the exciting object or thought, but a secondary feeling indirectly aroused” (James, 1894, p. 516). He did, however, consider primary the “organic changes . . . which are immediate reflexes following upon the presence of the object” (p. 516). Organic there means more bodily. If you take that further, however, you can classify all feeling, not just bodily feeling, as being more shallow and more immediate than emotions, which could be considered to be deeper and more complicated. Wundt believed that a feeling was an unanalyzable and simple process corresponding to a sensation. Feelings corresponding to sensations are not complicated and therefore not deeply analyzable; however, there are also mental feelings which are more like emotions but still feel similar to feelings from sensations (like touching things) because they are both shallow in nature.
You can recognize any feeling; that is what makes it a feeling. If you are sad, that is a feeling, but if you are depressed, that isn’t a feeling; it is more like an emotion. You can’t identify why you are depressed, but you can usually identify why you are sad. Feelings are more immediate: if something happens or is happening, it is going to result in a feeling. However, if something happened a long time ago, you are going to think about it unconsciously, and that is going to bring up unconscious feelings. (The reason the things that happened previously are going to be more similar to emotion than things that are happening currently is that sensory stimulation, or things happening currently, is a lot closer to feelings than things that are less linked to direct sensory stimulation, such as emotions, which are therefore usually going to be about things which require memory to figure out, things like thoughts that are less like feelings and more like emotion.) So emotions are unconscious feelings that are the result of mostly unconscious thoughts (instead of feelings – a feeling can trigger an emotion, but it isn’t a part of it). Feeling is defined there as something you can identify. Also, you can’t identify the unconscious thought that caused the unconscious feeling, but you can identify the unconscious feeling itself (aka emotion). [Memory isn't the only thing that is going to be more similar to an emotion than a feeling. Any type of thinking (emotional or non-emotional) or using logic would be more like emotion, since thought is deeper than feeling. Those things still generate feelings, or are in part feelings. You can't say that when you bring up a memory you don't feel anything, but since this memory is less tangible than something which you are currently experiencing, it is going to be more like an emotion. It requires thought to bring up, so it is a deep experience because thought is deep, which makes it more like an emotion (because emotions are deep), but it is lacking a feeling of "realness" or reality. Thought probably also lacks that feeling, because thought is just something in your head, not something which you can feel like a physical object.]
Another aspect of unconscious thought, emotion, or unconscious feeling (all three are the same) is that it tends to be mixed into the rest of your system because it is unconscious. If it was conscious then it remains as an individual feeling, but in its unconscious form you confuse it with the other emotions and feelings and it affects your entire system. So therefore most of what people are feeling is just a mix of feelings that your mind cannot separate out individually. That is the difference between sadness and a depression, a depression lowers your mood and affects all your feelings and emotions, but sadness is just that individual feeling. So the reason that the depression affects all your other feelings is because you can no longer recognize the individual sad emotions that caused it. The feelings become mixed. If someone can identify the reason they are sad then they become no longer depressed, just sad. Once they forget that that was the reason they are depressed however, they will become depressed again. [It is like the depressed emotion transfers to a sad feeling. That makes sense since you can only concentrate on a few things at one time, so if you are feeling it as a feeling, you are going to ignore it as an emotion.]
The reason an initial event might make someone sad, and that sadness would later lead into a depression, is that you forget why you originally got sad. You might not consciously forget, but unconsciously you do. That is, it feels like you forget: the desire to get revenge on whatever caused the sadness fades away. When that happens it is like you are “forgetting” what caused it. You may also consciously forget, but what matters is how much you care about that sadness. It might be that consciously understanding why you are depressed or sad changes how much you care about your sadness, however. That would therefore change the emotion/feeling of sadness. The more you care about the sadness/depression, the more like a feeling it becomes and the less like an emotion. That is because the difference between feelings and emotions is that feelings are easier to identify (because you can “feel” them more easily). [And if you care about something, you are making it more important in your mind, so you are elevating that emotion into a feeling; the emotion might still be there, but you can also feel it as a feeling. In fact, if you focus on one of your emotions it becomes a feeling, because you are then feeling it better since you're focused on it. This idea can be applied to various degrees of focus: you can be focused long-term (hours, minutes, whatever) on an emotion, or be caring about that emotion (not just short term (seconds)), and you would "feel" it more. Or some circumstance could occur that is negative or positive, causing you to think about that emotion.]
The following is a good example of the transition from caring about a feeling to not caring about a feeling. Anger as an emotion takes more energy to maintain, so if someone is punched or something, they are only likely to be mad for a brief period of time, but the sadness that it incurred might last for a much longer time. That sadness is only going to be recognizable to the person punched, for a brief period of time, as attributable to the person who did the punching; after that the sadness would sink into their system like a miniature depression, affecting the other parts of their system like a depression does. [Depressions are so deep that they probably cause you to feel bad in many ways. Lowering of mood because of depression shows how it can affect all your emotions and "depress" them.]
In review, both feelings and emotions are composed of unconscious thoughts, but feelings are easier to identify than emotions. Feelings are faster than emotions in terms of response (the response time of the feeling, how fast it responds to real world stimulation), and it takes someone less time to recognize feelings because they are faster. Feelings are closer to sensory stimulation: if you touch something, you feel it, and that is a fast reaction. You care about the feeling, so you can separate it out in your head from the other feelings. “You care” in that sentence could be translated as: the feeling is intense, so you feel it and can identify it easily. That is different from consciously understanding why you are depressed or sad. You can consciously understand why you are depressed or sad, but that might or might not affect the intensity of that sadness. [That brings up the idea that although thought clearly affects how much you are feeling, how much can thought affect emotion? Since emotion is deeper, it is going to be harder to affect with just thought than feelings are. But if the thought is significant, or powerful, it could trigger strong emotions. Any thought can trigger a feeling, since feeling is shallow, but to pull someone's emotions it might take more.]
If the intensity of the sadness is brought up enough, then you can feel that sadness and it isn’t like a depression anymore; it is more like an individual feeling than something that affects your mood and brings your system down (aka a depression). Also, if you understand clearly enough what the sadness is, then it is going to remain a sadness and not affect the rest of your system. Otherwise, the feeling would get mixed in with the other feelings and start affecting them. The period of this clearer understanding of the sadness mostly occurs right after the event that caused the sadness, because then it is clear to you what it is. Afterwards the sadness might emerge (or translate from a depression to sadness) occasionally if you think about what caused it or just think about it in general. [So when someone says "I'm sad" that is different from saying "I'm depressed". Depression isn't like an emotion; it is something long term, where you notice a lowered mood or many individual instances of sadness, but you cannot "feel" a depression like you feel an emotion. It isn't as real in real time.]
The difference between emotion and feeling is that feelings are easier to identify because they are faster; a feeling is something you are feeling right then. An emotion might be a deeper experience because it might affect more of you, but that is only because it is mixed into the rest of your system. That is, a depression affects more of you than just an isolated feeling of sadness. In other words, people can only have a few feelings at a time, but they can have many emotions at the same time. Emotions are mixed in, but to feel something you have to be able to identify what it is, or it is going to be so intense that you would be able to identify what it is. Emotions just feel deeper because they are all your feelings being affected at once. [At least, that is what it feels like is happening. A feeling is isolated and strong, but an emotion is more complicated and broad and far reaching.]
Since emotion is all your feelings being affected at once, emotions are stronger than feelings. Feelings however are a more directed focus. When you feel something you can always identify what that one thing is. When you have an emotion, the emotion is more distant, but stronger. All your feelings must feel a certain way about whatever is causing the emotion. So that one thing is affecting your entire system. Feelings can then be defined as immediate unconscious thought, and emotions as unconscious thought.
How This Chapter shows how Intelligence is intertwined with Emotion:
• Feelings are more direct than emotions and thought because they are more sensory – when you touch something you get a feeling. That shows further how emotions are really about things in the real world, only it is more like you are thinking about them instead of feeling them in real time. Things that come from memory are going to be emotions and/or thoughts, not feelings, because feelings are more tangible; those memories might result in new feelings, but the memories themselves are not feelings because they are just thoughts. That shows how you can feel some things more than others, that thought and feeling are indeed separate, and that intelligence is sometimes driven by feelings and emotions and sometimes isn’t. You can think about things and not have feelings guiding those thoughts, or your feelings could be assisting your thoughts.
• If you care about a feeling then it becomes easier to identify it – that shows how your feelings can help you to identify other feelings, so your emotions contribute to your emotional intelligence.
• If a certain emotion is larger than others, then to your intellect it is going to be easier to recognize, and easier to think about (that is why a depression feels like it does: because you don’t know the individual emotions contributing to it, you cannot feel a specific emotion of sadness from it).
References
James, W. (1894). The physical basis of emotion. Psychological Review, 1, 516–529.
Feelings are more immediate than emotions; they are easier to identify and are “faster”. You can also have only a few feelings at a time, but your emotions are possibly composed of many more components. That is, you can have a feeling about a Frisbee, and you can have a feeling about a Frisbee game as well. But if you have emotions about the Frisbee game, then in order to get those strong emotions there would have to be many things you are feeling about the Frisbee game. [Since emotions are deeper, they are harder to get to than feelings. The stronger the emotional experience, the deeper the emotion it is going to evoke. So something like a Frisbee game might evoke emotions, but just sitting on a couch might not.]
So one could think of emotions as just more than feelings. Emotions are greater than feelings and therefore they must have more parts in order to cause that greater feeling. Feelings are easy to understand because they are simple, but emotions are harder to understand because they are more complicated. A moody person would be described as emotional because emotion is a component of mood. Emotion is something that affects your entire system like a depression does. A feeling such as sadness is only an individual feeling and can be identified as such. [So our person sitting on the couch might be feeling happy, but this happiness is going to be limited because they aren't doing anything intense, so they might not be as emotional.]
If something is intense, then it is a feeling; emotions aren’t intense, they are deep. They aren’t as intense as feelings, but you could call them intense. Feelings are more intense because that is how we define feelings: if you can feel something then it is a feeling because, well, you “feel” it. Emotion is just something that affects you, your mood, how you are, etc. That is why feelings are easier to identify: because they are more intense. Emotions are deeper, however; when someone becomes emotional you can’t just snap out of it instantly, it hangs around in your system. That is why they are probably made up of more parts than feelings are. [The simpler the emotion, the faster it would probably be to process. You could dwell on something simple, but you'd probably have to be more interested in it for it to stick, instead of it hanging around naturally because you are trying to figure it out.]
Wilhelm Wundt, in the 19th century, had a system which went from simple to complex feelings and then to true emotions. Complex emotions were analyzed in terms of various types of more minor feelings (Wundt, 1891). If you think about that, it makes a lot of sense. Since emotions are stronger than feelings, it should be possible to describe your emotions with the feelings that make them up. For instance, if you have the emotion hate, it is probably a result of many specific feelings of hate you have for whatever it is you are hating. The emotion hate is so strong that it must be made up of many smaller feelings that are all real and can be described. In fact, there is probably an overlap between various feelings and emotions all the time. If you are angry, you might have slight irritation, upset, depression, or any combination of other feelings and emotions mixed in. Also, if you are experiencing a deep emotion, you might be experiencing that emotion shallowly as well, in a different way.
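A toy Python sketch can illustrate this simple-to-complex idea: a strong emotion described as an aggregate of the smaller feelings that make it up. The component feelings, their intensities, and the summing rule here are invented purely for illustration; they are not drawn from Wundt or from any measured data.

```python
# Toy illustration: a complex emotion (here, anger at a friend) treated as the
# sum of several smaller, more identifiable feelings.

component_feelings = {
    "irritated at the broken promise": 0.6,
    "upset about the wasted afternoon": 0.4,
    "resentful of being ignored": 0.8,
}

# The emotion is stronger than any single feeling because it sums over many of
# them, but it is also harder to pin down to one cause.
emotion_strength = sum(component_feelings.values())
most_identifiable = max(component_feelings, key=component_feelings.get)

print(f"overall emotion strength: {emotion_strength:.1f}")
print(f"easiest feeling to identify: {most_identifiable}")
```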
The reason feelings are both more intense yet shallower than emotions is probably because your system can only handle so much intensity at a time, so you can only experience shallow things intensely. If you compare it to a river, emotions would have a lot of water and be going slowly, and feelings would have less water, but be going faster. The feeling is therefore going to touch more things in your mind shallowly, and the emotion is going to touch more things in your mind deeply.
Why then do some simple things cause us to become more emotional if emotion is a deeper experience? That is because the feeling must trigger emotions, the simple thing is actually a feeling itself, but it triggers emotions. Like how color can be more emotional than black and white. It is actually that color causes more feeling, and we become emotional then about that feeling. But while you are looking at the color it is a feeling which you are feeling, not an emotion. The feeling made you feel good, however, and that good feeling infects the rest of your feelings and emotions, and then you become emotional.
In fact, all feelings make someone more emotional. The only difference between feeling and emotion is that feeling is the immediate feeling you get from something. It is the thing which you are experiencing currently. Feeling is another word for current stimulation. You can only feel something that you are either thinking about or experiencing. Otherwise you aren’t really feeling it, and it is an emotion. That is why the word feeling is the word feeling, because you can feel it intimately, closely.
How is it, then, that emotions are generally considered to be deeper? That is because with emotions you are actually feeling more; you just aren’t as in touch with what it is that you are feeling. So you would experience the effects of having a lot of feeling, such as heavy breathing, crying, or laughing; they would be things that make all your other feelings and emotions feel the same way. However, your mind isn’t intensifying that experience, because it would be too much for you to handle. Therefore emotion is just many feelings (or one strong feeling) that are dulled down; it would actually be a stronger feeling or feelings, you just can only experience it fully as an emotion. You can also probably experience parts of that emotion as feelings, since parts of it are going to be less intense than the whole, and you can “feel” them then. [So if you're processing something complicated, you are not capable of separating out individual aspects of that easily to make them into feelings. You can't have feelings about everything in it since it is so intense, but you can have a dulled emotion of the entire thing, which would be like a summary of all the feelings of it. It might be that an isolated feeling from it arises, but too many can't arise at once because that just isn't possible. Humans can only feel so many strong feelings at one time.]
So people can basically only “feel” or focus on small amounts of feeling. If it is a feeling that is very large, it becomes an emotion with more parts. It isn’t that this emotion isn’t as deep as the feeling; it is actually deeper, but you simply cannot comprehend the entire emotion at once to “feel” it like you feel feelings. You can bring up feelings from memory (by thinking about sensory stimulation), but those types of feelings are going to be less direct and therefore more like emotions (less intense) than current, direct sensory stimulation that you are feeling in the real world. [Since it is easier to focus on feelings, they are probably going to be easier to identify too. Maybe all emotion is really feeling. Maybe when you think about your emotions, they become feelings because then you can feel them because you're thinking about them. And when you think about emotions you were having in the past (not current real time) then you feel them too, and have the misconception that they were feelings and you were feeling them, but really they were more dulled down because you weren't thinking about those emotions as much as you are now. So maybe emotion is more of an unconscious experience than feeling, which is more conscious. Since feeling is more conscious, it is more a function of conscious thought. Thought is a period of attention to something, and since you pay attention to feelings, it is almost like you think about your feelings consciously. That differs from emotions: since they are deeper and less "in touch" with your conscious mind, it is like you are thinking about them unconsciously. So for any feeling or emotion, you could say you "feel" it or you "think" about it; the two are almost the same. The difference is that when you are "thinking" about it you are slightly more consciously aware of it, because you are paying it more attention than when you are just "feeling" it. That shows how feelings are shallower than thought. However, emotions can be very deep and meaningful; they just aren't completely consciously understood. In fact, since emotions are harder to figure out than feelings, being more complicated and deeper, most of what people see when they look at you is probably emotions, since you are mostly made up of deep emotions; you're just not feeling them all the time. Someone would have an "emotional makeup" that determines who they are, not a "feeling makeup", because feelings are more short term and shallow, something like "I felt that" versus "That is an important part of me".]
Just as feelings can generate emotions, emotions can also generate feelings. For example, something like a fly buzzing might generate the feeling of annoyance, and this feeling might generate the emotion of sadness. You respond to the feeling first because feelings are faster and more immediate than emotions. An example of an emotion generating a feeling would be being sad that you are depressed. The depression is more of an emotion than the sadness because it is deeper and "slower", but the sadness is more like a feeling because it can be more immediate (it can also be an emotion, but in this example it is a feeling). [Feelings and emotions are going to be mixed in a lot too; most feelings probably feel emotional to some extent.]
How This Chapter shows how Intelligence is intertwined with Emotion:
• If emotions are dulled feelings then your mind is capable of taking feelings and making them into emotions, and vice versa. That means that a part of intelligence is your ability to control your own feelings and emotions and thoughts.
References
Wundt, W. (1891). Zur Lehre von den Gemüthsbewegungen. Philosophische Studien, 6, 335–393.
A thought is thinking about something specific. You can have a thought about an entire paragraph, but it is going to be just a thought; it is going to be about one thing, and that one thing might be a summary of the paragraph - but it is still a thought. So what we think of as thought is really just a short period of thinking - one unit of thinking that lasts for a short period of time. An essay is composed of many thoughts, but just one thought would be “I went to the store”.
Then again, “I went to the store, and Jason followed me” might be considered one thought as well. So how long exactly is a thought? If it is longer than “I went to the store, and Jason followed me” then it is probably going to be considered multiple thoughts. Thus humans use the word thought as just a short period of time in thinking.
Thoughts are in general talked about as being verbal; people rarely think of emotions and feelings as thoughts. But emotions and feelings are thoughts if you think about that emotion or feeling. The short period of time in which you think about the emotion or feeling is a thought. So thoughts can be about emotions and feelings. They are just harder to identify because they aren’t verbal.
The reason that verbal things are easier to identify is because they are distinct sounds (that we have definitions for). Distinct sounds, different sounds, are easy to separate. It is easy to identify one sound from another sound, and that is all words are, different sounds. So it could be that someone is talking and you don’t have any thoughts about them talking, or you are not thinking about them talking. In that case you just aren’t listening to them, or you are not paying attention to the sounds they are making.
So thought then is really just any short period of high attention. And thinking is long or short periods of high attention. So if you are thinking for more than a few seconds, then you are probably going to be thinking about several thoughts. Since you can think about emotions and feelings too, however, you can think about your emotions or feelings for long periods of time.
Just as thinking is made up of individual components of thought, feeling, or emotion, each of those components is made up of their own further components. In fact, when you think about an emotion or feeling you intensify that feeling or emotion a lot. Each emotion, however, is made up of experiences in the real world. The real world can include thoughts and feelings in your head as well.
So emotions, feelings, and thoughts are made up of real experiences. A thought isn’t just a thing in your head; it is something that has components that are real in the world. Those components might be sounds (when you think about someone speaking, you make that sound in your head). A sound in your head is much like a sound in reality: you are reproducing, on your own, the emotion that the real sound would cause, without the real sound being there. Just try it – think about any sound, and it produces much the same emotions as when the sound itself occurred outside your head.
So a thought in the end boils down to you thinking about sensations – any sensation: taste, touch, sound, smell, feeling, or emotion. How can a thought be of an emotion? Aren’t thoughts supposed to be specific and quantifiable? Well, a thought about an emotion is basically a summary of that emotion. If you played Frisbee and you got an emotion from playing Frisbee, then that emotion is a summary of the things you remember about playing Frisbee. The same goes for feelings. The feeling you have about something is really all the feelings that that thing causes in you, and when you focus on different aspects of that feeling, you are focusing on different aspects of the real experience which caused the feeling.
So when you think about an emotion you are intensifying the feeling of those real experiences. You have no conscious idea of which parts of the feeling you are thinking about, however. Maybe if you think directly about different parts of the real experience you can link them up to different parts of its emotion.
Thus any emotion or feeling can be broken down into the sensations and real events that caused it. And you can think about any of those things (with thoughts). You can also think about those things as individual thoughts. A thought isn’t just a short period of your attention; it is a short period of your attention during which you are trying to think about something (at least it feels like you are trying – you could also have a thought without trying). Your natural attention span varies, but if you think about something you can boost that attention, trying to focus it on something specific or something broad (like an emotion).
Emotions and feelings are so intense, however, that it is like you are trying to focus your attention on them. So emotions, feelings, and thoughts are all periods of focused attention. A thought is just more focused attention than a feeling or emotion (unless it is a thought about a feeling or an emotion, in which case it is going to be even more attention than the feeling or thought or emotion by itself since it is a combination).
So emotions, feelings, and thoughts are all related, they are all things that you pay more attention to. And since emotion and feelings are made up of stuff which occurs in the real world, you could label each one of those things which occurs in the real world a thought, and say that emotions are made up of thoughts, or are broad thoughts. That is, you pay attention to your thoughts, and you pay attention to your emotions, so you could say that emotions are just a bunch of individual thoughts squished into one thing.
What then is the difference between a thought and an emotion? Emotions are usually more intense and therefore last longer in your brain when you think about them, or “bring them up”. You usually can only bring them up by thinking about them, however. Other things might bring up an emotion, like other emotions or other feelings, consciously or unconsciously. The same with feelings and thoughts.
People "bring up" emotions, feelings and thoughts in various ways. One way to bring up an emotion would be using thought, such as thinking "I like my dog" would bring up the emotion of the dog. You could also think directly about the emotion of the dog without using the verbal discourse, however. This could also be described as just "feeling", "feeling out" or "being emotional about" your dog. A feeling could also bring up a thought (and all the other combinations of "bringing up" between thoughts, feelings and emotions). They might also be concurrent, that is, when you have one emotion there is an associated feeling with it (and the other combinations of that with feelings, thoughts and emotions). Don't forget that one of those combinations is that thoughts can also bring up or be concurrent with other thoughts (as with feelings and emotions).
How This Chapter shows how Intelligence is intertwined with Emotion:
• Since emotions are made up of many parts which are real, intelligence is ultimately just your ability to manipulate real things, and therefore your emotions are going to determine what it is that is in your mind, and give a larger pool of things for your intellect to explore.
I previously discussed how emotions are deeper than feelings, yet are “felt” less: because they are deeper and more intellectual, it isn’t as obvious that they are occurring. Emotions therefore involve more thought than feelings. Sensations are more related to feelings because they are simple things that don’t involve thought. So since feelings are less deep than emotions, could it be that certain emotions and feelings are more cognitive than others? Although feelings are more like sensations, they can be intellectual like emotions too. For instance, the feelings curiosity and frustration are both related to thought, but they are not deep enough to be emotions. Some emotions and feelings, however, are more primary (less related to thought) and more tied to instinctual reactions than others, which would make the rest comparatively more cognitive and intellectual. Since emotion, feeling, and thought are mixed – and some of those are sometimes more intense than the rest – it makes sense that some emotions might be consistently less intellectual than others. I could say that immediate, shallow feelings are more instinctual than deep, reflective emotions and thought.
Silvano Arieti categorized emotions into three orders, the first order being the simplest emotions and the third order being the most complicated. He listed five first-order emotions: tension, which he said was “a feeling of discomfort caused by different situations, like excessive stimulation and obstructed physiological or instinctual response”; appetite; fear; rage; and satisfaction, which he described as “an emotional state resulting from the gratification of physical needs and relief from other emotions” (Arieti). He characterized the first-order emotions as bodily, as elicited by stimuli perceived to be positive or negative, as having an almost immediate effect (any delayed reaction ranging from a fraction of a second to a few minutes), and as requiring a minimum amount of cognitive work to be experienced. Those emotions aren’t as simple as sensations, which consist of just feeling things without thought. To me those emotions also seem very strong, and perhaps they are strong because if someone is going to have an instinctual reaction, it has to be strong enough to interrupt their thought process. So those more instinctual emotions interrupt thought because they are so strong and almost physical. In fact, a small amount of any of those emotions would make it possible for the person to reflect on the emotion, because they aren’t being distracted by large amounts of it, making the emotion less of a first-order emotion and more like a complicated emotion. If you take rage and think about your rage, you make rage into a complicated emotion and less like a simple one. You also make it more of a feeling, since now it is shallower. So a full-blown rage would be much more instinctual than just having a little rage: the small amount of rage is more controlled and initiated by cognition, whereas the large rage was triggered instinctually (or more basically, emotion is more instinctual and powerful and distracts from thought).
Arieti thought that second-order emotions started not from an “impending attack on the system” but from cognitive processes, which he believed to be visual symbols or representations in the mind of real things (images). He explains how important images are to humans: “Image formation is actually the basis for all higher mental processes. It enables the human being not only to recall what is not present, but to retain an affective disposition for the absent object. The image thus becomes a substitute for the external object.” If the image is pleasant it acts as a motivator, and if it is unpleasant it has the opposite effect. He then explains how these images play a role in the higher-order cognitive processes of some second-order emotions. It is clear to me, however, that images are not the only things that play a role in thought; when people think of a word they don’t always see a strong image. There is going to be an image associated with practically everything, but you don’t bring up that image all the time. He lists the following second-order emotions:
• He said that anxiety is “the emotional reaction to the expectation of danger”, and that it isn’t the result of simple perceptions or signals (which would mean anything real that initiates a reaction) but the result of images which enable a human to anticipate danger and its consequences, and that anxiety is image-determined fear (fear is a first order emotion because it is the result of direct stimulus).
• He stated that anger is rage elicited by the images of stimuli. Rage leads to an immediate reaction, however anger lasts longer and that is possible because it is mediated by images in the mind. Rage is useful for survival, and anger is useful to retain a hostile defensive attitude.
• Wishing is “made possible by the recall of the image or other symbols of an object whose presence is pleasant”.
• The emotion security. He didn’t know if security as an emotion actually existed or was just the absence of unpleasant emotions. You can visualize an image of security, an “image-determined satisfaction”.
My take on this is that images make the second-order emotions higher cognitive processes. Without an image someone isn’t really thinking; they are just responding to a stimulus instead of conjuring up something in their mind, which takes longer. However, rage and the other first-order emotions are also going to bring up images immediately, in a more unconscious way (though some might be conscious, just very fast), before someone can respond to the stimulus. In that way rage can be intellectual. If you think about it, something in your own mind can cause you to be enraged, and therefore an intellectual process started the rage and is associated with it when the rage is being experienced. It isn’t as if rage is completely mindless; it is actually driven by anger, which is a second-order emotion. Rage is simply more related to direct stimulus, because direct stimulus is much easier to get upset about – it is real and requires less thought. So anger is a more intellectual emotion because it lasts longer than rage and is easier to maintain, since it only needs thought to be maintained, while rage is somewhat the opposite. Rage and anger also overlap to certain degrees. The same can be said of the other first- and second-order emotions. The important point is that real-world stimuli elicit more powerful, less cognitive responses in the case of first-order emotions than in second-order ones; however, both are cognitive (which means both might be assisted by images) and both might be assisted by events in the real world (stimuli). Things that happen in the real world are simply more likely to stimulate a stronger emotional reaction.
Arieti described how, with third-order emotions, language plays a greater role. This follows from his explanation that third-order emotions, “although capable of existing before the advent of the conceptual level, expand and are followed by even more complex emotions at the conceptual level”. That basically means that words are conceptual rather than visual or simply automatic responses to stimuli. He states that the important third-order emotions are depression, hate, love, and joy. Depression contrasts with anxiety because anxiety is usually caused by the thought that a dangerous situation is about to occur. Depression, on the other hand, is caused by factors from a while ago. I believe that shows how there are other emotions that could be placed as second-order emotions, like sadness. Basically, any emotion that isn’t a strong immediate reaction and isn’t a complicated emotion like the third-order emotions would be a second-order one. Anything that is caused easily by thoughts or images (like sadness) could be a second-order emotion. Third-order emotions, however, are going to be even more complicated, taking many factors over a longer period of time to generate the emotion.
Arieti thought that depression followed “cognitive thought processes, such as evaluations and appraisals”. For instance if someone is told of a death of a friend, what makes that person depressed is their ability to evaluate the news. Those ideas from Arieti make it clear that depression really is complicated and supported by thoughts, and therefore is a third-order emotion. Depression can bring up sad feelings at any time, so those sad feelings are still really second order emotions because they were generated by something real (unconscious depressive thoughts). The feelings of depression, however, are the third-order emotions because they are more complicated than simple feelings. Each feeling of depression is going to involve more complicated thoughts associated with it because it is going to involve more parts, like evaluations and appraisals. If looked at that way, sadness could have a lot of parts as well. However, for each circumstance of sadness you can usually identify why you got sad, even if you got sad because you were depressed. When you are depressed, however, it is often so complicated you don’t know all the factors leading to that depression.
Arieti said the following about hate: “…hate is the third-order emotion which corresponds to the second-order emotion anger and to the first-order emotion rage. The three together constitute hostility, but hate is the only one among the three which has the tendency to become a chronic emotional state sustained by special thoughts. Thus a feed-back mechanism is established between these sustaining thoughts and the emotion.” To me this shows how powerful third-order emotions can be – they really penetrate your consciousness for a long time. It shows how emotions are really also intellectual things: you might interact with someone, and this interaction could make you feel things for a long time afterward. That long-term feeling isn’t necessarily going to be just an emotion, however. If you think about it, you cannot sustain, and be able to identify, an emotion from just one interaction or one relationship for a long time. However, if you consider that the emotion is also an intellectual experience, then you realize that you can sustain it for a long time because you are aware at some level of the relationship you have with this other person, so it is emotional and intellectual. Don’t forget that the emotional/intellectual experience can be described with the thoughts and experiences that are supporting it. Albert Wellek said this about deep emotions: “Love, friendship, faithfulness, are emotions of the heart; they concern, involve, and engage a man in his very nature; they may move, touch, stir, or shake him and even change or transform him in his identity. On the other hand, anger aroused by a trifle, or by hurt vanity, is superficial and shallow, no matter how intense.” (Wellek)
Wellek also went on to show the difference between intensity and depth in emotions. That relates to Arieti’s orders of emotions because each of the higher-order emotions is deeper than the first-order ones. Wellek said this: “A man’s emotional disposition may tend predominantly or almost exclusively toward explosive affectivity or, on the other hand, may tend predominantly or almost exclusively toward profound experiences. When extreme, examples of the first type of disposition are said to demonstrate lack of sensitivity, toughmindedness, or even brutality; examples of the second type, sensitivity, emotional responsiveness, or tendermindedness.” That shows how some emotions are very deep, while others are very shallow. He also said, “…if we say that a man is emotional, the question is: do we mean that [he] is sensitive, excitable, or sentimental?” That shows how deep emotions may trigger those sentimental feelings. But remember, deep emotions aren’t just emotions; they are supported by thought processes, making them an intellectual experience. So it isn’t as if the person is emotional all the time – you could just as well say they are being intellectual all the time. What shows the nature of the difference between depth and intensity is two examples that aren’t really either deep or intense, yet are profound: aesthetic experiences and strongly held convictions.
Wellek also said this about the nature of depth and intensity: “Depth is characterized by breadth and continuity, intensity by its temporal limitation and resultant discontinuity. Intensive emotions are usually shallow and blow over quickly. For the very reason that too much vital energy is consumed in a comparatively short time, the emotion is quickly spent and little or nothing is left. No normal man can rage for hours on end – though a maniac may. Intensive emotions are shock-like, eruptive, explosive, volcanic; they show organic drive.” Those intense emotions relate to Arieti’s first-order emotions, and less to the third-order ones. The third-order emotions are deep rather than intense. I previously showed how feelings are intense but not deep, and emotions are deep but not intense. Feelings are more like those intense emotions described by Wellek because you can really “feel” them, while emotions are more intellectual and you might experience them in a more satisfying, sentimental, thought-provoking way.
References
Arieti, Silvano (1970). Cognition and Feeling. In M. Arnold (Ed.), Feelings and Emotions: The Loyola Symposium.
Wellek, Albert (1970). Emotional Polarity in Personality Structure. In M. Arnold (Ed.), Feelings and Emotions: The Loyola Symposium.
People respond negatively to pain or any negative emotion. Pain might also hinder development of emotions because it isn’t encouraging. The right factors need to be applied to someone in order to get them to experience the fullest potential of their emotions. This could simply mean having the right people around you who are supportive of you and your emotions. In fact, the words “thrive” and “support” are really key for emotion generation. That being said, it cannot be ignored that emotional events which feel painful in the short term may be beneficial in the long term, and even cause a person to thrive and experience good emotions.
It needs to be clarified what is significant about emotions, or how they are meaningful. There can be an individual emotional event, but this event might impact everything else that occurs in someone’s life. In that way everything is tied in. Even words, or therapy, might change how someone views the world and greatly influence how they experience emotion. For instance, consciously understanding that a loved one likes you – or loves you – would cause your emotions as a whole to change. So not just your understanding of that specific thing would change, but also your experience with that person. A cliché saying that explains this would be “once you let love in, the world becomes a beautiful and sunny place”.
That expression explains the importance of positive encouragement, the impact of one event or person on someone’s overall emotions all the time, and the importance therapy can have. That one statement might make someone realize they love someone else and what this love does for their life. [I apologize if this article is starting to sound cheesy, but it is important to realize that all emotions are tied into each other, and that small events or even your cognition (which could be influenced by therapy or words (as in the cliché example)) can greatly influence your life.] Conversely, if something very bad happens to someone, they might not care about their life anymore and start to experience all their other emotions less.
In fact, everything that happens to someone probably influences everything else that happens to that person. You could also just look at life as individual events that only have minor impacts on each other over the long term. I suppose I am asking the question, “What is everything, how does everything feel, and how does everything relate?” Is there a way to describe all emotion other than “you’re feeling something”? Certain activities bring up certain emotions; individual circumstances and their emotional parts can be described as action-reaction relationships. If all of life is described in that way, does that explain everything? Describing how everything feels individually would describe everything, provided that for each situation you take into account how all the other things that happened influence how you feel about that one thing. So that means how you feel most of the time – the general emotions you have that are mostly independent of what is happening – and also how you feel for each thing that happens.
Analyzing anything, however, has many levels of complication. A kid playing a video game generates the emotion fun. That could be the first level of analysis of an event, stating the obvious emotions involved. The next level would be asking, “what are all the emotions involved”. To do that you would have to understand that all emotions are mixed, that the emotion “fun” the boy has could be mixed in with the feeling anger or frustration if he lost a fight or something. Also, how a specific negative event playing the game (say losing a battle) influenced his feelings of fun after that event. Also, his cognition might play a role, did he say something to himself after he lost to make himself feel better? Did his therapy session talking about how to deal with defeat alleviate his pain at the loss?
To have a complete understanding of everything, you could analyze the degrees of fun the boy has during the game, when it rises and when it decreases. Is all of life like this video game, with variations of fun and anger and cognitive influences? If viewed simply, then yes; however, there are many, many things that happen in life that can be analyzed and their emotional components explained. It would be useful if I could describe a few principles that would apply to all of these events:
• Negative events generate fear, which causes people to either flee or shut down.
• Positive events generate pleasure, which results in encouragement and motivation.
That’s pretty much all I can think of. I suppose I could say that my theory has two parts, the pleasure instinct and the pain instinct, and that all emotions stem from these two instincts. Everything is going to generate some amount of pleasure and some amount of pain, causing reward and punishment – it is almost Pavlovian. It is more complicated than that, however: while my theory works on the small, individual, direct-event level (thing A causes you to be motivated to do thing B), it also works in small ways on everything, in that one event might motivate you for something else entirely. Freud believed in a death instinct and a sex instinct, which, if you think about it, is similar to my theory.
The pleasure and pain instincts apply whenever any emotion happens. Every emotion is going to be a certain amount painful and a certain amount pleasurable. Furthermore, the meaningful aspect of the emotion is going to be how pleasurable or painful it was. Learning emotionally could be viewed as long-term pleasure. So if an event is meaningful instead of just fun or pleasurable, it would still be placed under the category of pleasure, because this meaningful activity adds to your life overall, thus causing long-term pleasure. It is almost like intelligence is fun, only in a different, more long-term way. Also, an event that is fun is going to contribute to long-term intellectual and emotional development as well, because a fun event is itself going to contain information and be motivating and inspiring. That also explains why negative and painful events can be beneficial over the long run for both fun and emotional-intellectual development: the event itself might communicate information to the person, or help them understand something – almost like learning a lesson the hard way. The point is that pain or pleasure is the stimulus behind all fun, learning, and long-term fun and learning. In other words, the pain and pleasure you get from events help you out all the time, not just for those specific events. Pleasure is inspiring and encouraging, while pain is more of a learning experience. So every emotion is going to inspire in some ways if it is pleasurable, and you might learn from painful emotions.
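To make the action-reaction idea concrete, here is a minimal sketch in Python (the class name, parameters, and numbers are hypothetical illustrations, not anything from the text): pleasure from an event reinforces the motivation to repeat that specific event, pain discourages it, and every event also spills a little over onto your overall motivation for everything else.

```python
# Hypothetical toy model: pleasure acts as reinforcement, pain as punishment,
# and each event also leaves a small long-term trace on overall motivation.

class PleasurePainLearner:
    def __init__(self, learning_rate=0.3, carryover=0.05):
        self.motivation = {}        # per-activity motivation ("thing A -> do thing B")
        self.baseline = 0.0         # long-term motivation affected by all events
        self.learning_rate = learning_rate
        self.carryover = carryover  # how much one event spills over onto everything else

    def experience(self, activity, pleasure, pain):
        """Update motivation after an event that produced some pleasure and some pain."""
        net = pleasure - pain
        current = self.motivation.get(activity, 0.0)
        # direct effect: the event makes you more (or less) motivated to repeat it
        self.motivation[activity] = current + self.learning_rate * (net - current)
        # indirect effect: the event slightly raises or lowers motivation overall
        self.baseline += self.carryover * net

    def drive(self, activity):
        """Overall drive toward an activity = learned motivation + general baseline."""
        return self.motivation.get(activity, 0.0) + self.baseline


learner = PleasurePainLearner()
learner.experience("frisbee", pleasure=0.8, pain=0.1)          # mostly fun
learner.experience("being insulted", pleasure=0.0, pain=0.7)   # mostly painful
print(round(learner.drive("frisbee"), 3))          # positive: encouraged
print(round(learner.drive("being insulted"), 3))   # negative: discouraged
```

The sketch only restates the two-part claim above in mechanical form: a direct update for the specific event, plus a small carryover term for everything else.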
Pleasure and pain function in the mind in many ways. They influence emotions and thoughts, and they have a long-term conscious and unconscious impact on both. There are different types of emotion and thought that are influenced by different types of pain and pleasure:
• Different types of thought can vary in how emotional they are, for instance moral decisions could involve a lot of emotion compared to simple decisions. Important thoughts about emotional things (like loved ones) might also be very emotional.
• Emotional thoughts are more an intellectual pleasure than a regular pleasure (because they are thoughts instead of real events).
• The more emotional the thought, the greater its long term impact and significance might be on your emotions. Like the thought "I love person x". Of course, a non-emotional thought might also have a long term impact on how much pleasure and pain you experience.
• For each different type of emotion, you could have a thought that is emotional in that way.
• Every emotion is going to be a certain type of pain or pleasure. This pain or pleasure will vary between being intellectual and emotional. The more aware you are of the pain or pleasure, the more intellectual it will be. That shows how you might be suffering or in pleasure but not know it. If you don't know how much pain or pleasure you are experiencing, how much are you actually experiencing it? There is an unconscious element of pain and pleasure. Also, the pain and pleasure, or the emotion generating those feelings, might itself be of a more intellectual or a more emotional type. For instance, if you are picked on, it is your understanding that you are being insulted that results in the emotional pain. That makes the pain in part intellectual, because it stems from your understanding.
• Just like every emotion is going to be a certain type of pain or pleasure, every thought is going to be as well. Like emotional thoughts or non-emotional ones.
• An insult affects emotions because you understand that it is an insult, but normal events (like working or interacting with someone) generate emotion because you have a large unconscious emotional understanding of the significance of the event. At birth events might generate emotion because that is simply how you experience emotions; after a long time, however, the emotion that events generate is going to be based much more on your experience, and what your experience teaches you is how much you enjoy that event.
• The fact that thought can influence emotions, pain and pleasure is amazing if you think about it. Is a thought a real experience? Thoughts don't even last very long. However, you could think of thoughts as tied in with emotion (since thoughts can be emotional, that shows how they are real). For instance, if something bad happens, you are going to experience pain because of real reasons that could be thought about. You change the nature of the emotion by altering how you think it affected you because the emotion was really just thoughts about the event, so you change the emotion by changing the thoughts that make up the emotion.
• Since emotion is so tied in with thought, pain and pleasure can be long term because you are always thinking. Something bad might happen to you, but you unconsciously think about the event for a while after, causing you to experience pain.
• The type of pain and pleasure can be explained by explaining the thoughts that make up the emotion, or the emotions that make up the thoughts. Also, real events and their emotions can be explained with thoughts. It is like a real event causes a series of thoughts about the event that determine how you are going to feel about the event both during the event and after. The thoughts are so real (are based in emotion), yet only thoughts, so therefore you could control how you feel about events and how they affect you to some degree. That shows the importance of talking about your feelings. There are also learned responses which also show the importance of thoughts. The response might have been learned from thoughts or unconscious thoughts. Therefore, it could also be unlearned just by thinking.
• Thoughts can change the nature of emotion. For instance, if someone makes you happy, the more you highlight why they make you happy, the more the relationship will be enhanced. Also, thoughts can direct a negative emotional response. For instance, if something bad happens to you and you think that what happened was really bad, then you might feel even worse than if you had trained yourself not to care. In other words, your emotional response to events is really just an intellectual, learned response that is determined by your thoughts over the long term. If someone is insulted, they have learned over time that insults are bad, and that is why it makes them feel bad. It also causes them to think about the negative thing that was said and, if it is true, might make them think that they are a failure in some way. In that case, simply thinking about the insult and why it isn't true, or why it shouldn't affect your feelings, could make it so the insult doesn't carry weight the next time.
• Changing your thoughts in an attempt to change your emotions is almost like trying to change your programming because emotions are harder to control than thoughts. In the movie Terminator 3, the evil terminator changed the programming of the good terminator to kill the hero of the movie. When it was time to kill the hero, the hero tried to convince the terminator that it didn't want to kill him. The terminator struggled with back and forth switching between programming commands until it finally was able to not kill.
In review, by exploring the importance of pleasure and pain for emotion in general we gained insight into emotions, and that gave us insight into how they can be manipulated with thoughts, or how your thoughts can be manipulated by your emotions. So pain and pleasure function with individual thoughts as well as with emotions; that is obvious if you remember how tied in emotions are with thought – and I already explained the importance of pain for emotion. Also, thoughts can be emotional: when you think something, it can bring up pain. That pain could just be an enlarged version of the pain those thoughts cause unconsciously the rest of the time (the time you're not thinking consciously of them). Highlighting the pain by thinking about what is causing it might help you to change the thought, however, and therefore change the unconscious thoughts and emotions affecting how you feel at other times.
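As an illustration of the point that an emotional response is a learned response which thinking can change, here is a toy sketch (the names and numbers are hypothetical, and it is only a restatement of the claim, not a model taken from the text): the reaction to a stimulus is stored as a learned weight, and repeated reappraisal – thinking about why the insult isn't true or shouldn't matter – moves that weight back toward neutral.

```python
# Hypothetical sketch: an emotional reaction modeled as a learned weight on a
# stimulus, which repeated reappraisal gradually moves toward neutral.

learned_responses = {"insult": -0.8, "compliment": 0.5}  # negative = painful

def react(stimulus):
    """The felt reaction is just the stored, learned weight for that stimulus."""
    return learned_responses.get(stimulus, 0.0)

def reappraise(stimulus, adjustment=0.25):
    """Thinking about why the stimulus shouldn't hurt moves its weight toward neutral."""
    weight = learned_responses.get(stimulus, 0.0)
    if weight < 0:
        learned_responses[stimulus] = min(0.0, weight + adjustment)
    elif weight > 0:
        learned_responses[stimulus] = max(0.0, weight - adjustment)

print(round(react("insult"), 2))   # -0.8: the insult carries weight
reappraise("insult")               # one round of thinking it through
reappraise("insult")               # and another
print(round(react("insult"), 2))   # -0.3: the same insult now hurts less
```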
While my pain and pleasure instincts can be applied to almost every emotional situation, there are other principles which can be applied in many situations that are almost as important. For instance, the social aspect of the human experience is probably one of the most important generators of emotion. You could classify everything someone does as either social or non-social, and how important and emotional can interacting with inanimate objects really be? The important part of the social aspect, however, is personality. That is so because no matter what someone says or does, their personality is going to have a large impact on the people around them, because there is an unconscious emotional interaction going on between different personalities. Of course, what someone says and does is going to be reflective of their personality, but just by describing personality types it can be inferred what that type of person would do differently. It is important to note, though, that basic interactions are almost all the same; the only thing that varies is that people have different, individual personalities, and this changes the emotional interchange.
There are several things that determine what someone's personality is going to be. There are important factors and non-important ones. For the principles to be general and far-reaching, I am only going to talk about the important factors. Personality could be described, and the things listed could be important to what that person does and what type of intellect they have, but this would not be looking at the important aspects of personality. The important aspects of someone's personality are the ones that are going to affect how much emotion they experience, and those aspects are going to be the ones that influence their social emotional interchanges. However, non-important personality traits may be related to important ones. For instance, although "organized and hard working" is not an important factor (how hard someone works is not going to play a large role in the emotional interchange when this person interacts), how serious that person is, which might be shown in how hard working they are, might play a role in a social interaction. For instance, there might be a violent clash between the personality of a serious person and a laid-back person, generating a lot of emotion. So although two people might be equally hard working, maybe only one reflects this trait emotionally (or "radiates" it) when they interact. There are only a few basic factors that generate large amounts of emotion when any two people interact:
• How serious (or mature) someone is could clash with how lazy (or immature) someone else is, causing either tension or an interesting interaction
• How cool or not people are or are perceived to be could cause a status conflict
• How physically appealing someone is could generate sexual interest or, if not sexual interest, unconscious sexual interest that would be shown by how much someone likes someone else even though they might not be aware their interest is sexual in nature (that shows how this can function unconsciously)
• How old someone is could cause either identification and relation, or the opposite of that which might cause either tension or an interesting interaction
• How intelligent or dumb someone is could cause tension or relation (this also might vary depending on what the situation is, because in certain situations different types of intellect are more valued)
• What someone's profession is would matter when interacting with that person in the context of their job (that shows how the context of the interaction (or what the interaction is even) also matters)
• How friendly or shy someone is could generate openness or seclusion in interaction
It can be inferred from my pleasure and pain instincts that people are motivated to process positive things and discouraged from accepting negative ones. The idea that the mind processes positive things better than neutral and negative ones is not new. However, this idea is much more significant, and it applies in many more circumstances, than you might guess. For instance, this idea could mean that people are simply more open to positive, happier emotions than negative ones – that things which cause pleasure are better and more clearly understood than things that are painful. However, something painful may cause you to become more awake, and this in turn would lead you to process information better. This information itself might be pleasurable, even though the original stimulus was painful. Pain may also cause long-term pleasure in a different way: even if the stimulus is negative, you could still process it better because of the original negative stimulus which “woke” you up.
There are examples of negative things which cause people to pay attention: something like spanking, a loud noise (a fingernail scratching on a chalkboard, for one), or even a painful emotional experience could cause you to take life more seriously temporarily, and this might cause you to be more awake, active, or intellectual. However, those negative things just make someone better able to receive or understand positive stimuli more so than negative ones, because someone is still probably going to ignore negative information more than positive information, even though they are in a more alert state. Negative things are ignored because, simply, people tend to believe what they want to believe. [The statement "people believe what they want to believe" shows how people can be delusional at times. It shows that people want positive things more than negative things, and since they want positive things more than negative ones, they are going to be more accepting of them.]
It is almost as if for every emotion someone asks, “do I want that?” and if the answer is yes, they are much more responsive to it. [That sentence shows how, even with emotions, which are a natural process, complicated thought patterns and selection processes occur.] So someone might ignore someone they don’t like, and pay attention to someone they do (what determines if they like someone could be based on many factors). Or, if someone doesn’t like someone, then that person doesn’t cause as much pleasure, because the other person has decided to ignore them. When someone sees an opportunity to enhance emotion they grab onto it, and similarly, if they see that something or someone is causing displeasure, they instantly ignore it. It is preconceived notions and conceptions of the person, or even an understanding of who that person is (or an unconscious understanding determined by the emotional relationship), that determine what emotions that person causes. It is like real facts about that person are being stored unconsciously, and then those facts are brought up in the future to determine how much pleasure that person is going to cause.
That ties into the idea that positive things are processed better than negative ones, because if you “think” someone is not positive (which might mean having preconceived notions about them), then that person is going to generate less pleasure for you. [So thinking someone is bad can be a conscious and/or unconscious experience, but even if you are thinking they are bad unconsciously, this is still going to be reflected in your conscious mind, so these "preconceived notions and facts" might determine what someone is thinking about the person and how open they are to them. Or (better phrased), thinking that someone is bad is going to have a wider impact on your feelings about them than you might think, because you might be shutting off that person because you think they are bad. This "wider impact" might happen because of all those negative unconscious things you might think about the person. At any one time you could be thinking (unconsciously) a thousand negative things about them, and the effect of that might limit your emotional response.]
What then is the difference between thinking someone is positive and them actually being positive? The difference is that at some level (unconsciously) you are thinking that they are positive; you just might not be consciously aware that you are thinking those things. You probably also don’t have control over those thoughts. Conscious awareness of as much as possible of what is going on unconsciously with those thoughts will enable someone to understand what is going on, and possibly change what those thoughts are. [So you could be ignoring someone and not know it, because you haven't consciously recognized that you don't like them. How a person can start to consciously recognize when they are ignoring someone is clearly shown by the example "Ah, I was ignoring you, I'm sorry" – that also shows how powerful your unconscious mind can be, and how you can unconsciously close off negative things.]
This topic is about the difference between physical feelings and mental feelings (feelings of emotions, of thoughts). It could be viewed that most stimulation is physical. More importantly, that physical feeling is mixed in with mental feelings. Also, that physical feeling is with you all the time and can even serve as a baseline for your emotions. Whenever you experience an emotion, or even in your general state, you are feeling physical feelings. If you look at this by the definition of stimulation then it makes sense that most stimulation is physical since stimulation is usually something strong, and physical feelings feel much more real and alive than mental ones. You know you are alive if you are experiencing pain. What happens when someone concentrates on physical feelings? Doing intense physical activity (like playing a sport), feeling pain, going to the bathroom, eating, and having sex are the five strongest physical feelings I can think of. However you also have physical feelings all the time because you are aware of yourself not only in a mental way but in a physical way. You are aware of the physical feelings your body produces all the time and how these feelings are mostly the same as time changes. You are also aware of what it feels like to be you, which is going to be mostly your mental feelings but also your physical ones. So intellectually your mental feelings are stronger if you are doing serious thinking, but if you are doing physical activity then your physical feelings are stronger. Also, your physical feelings interact with your mental feelings in a certain way, one might cause the other to increase, there might be a chain of cause and effect interaction.
This is related to the difference between emotion and thought because one distracts from the other, and physical feelings are more like emotions than thoughts. This is why pain isn't as much of an emotion as the other emotions: the other emotions are more mental and therefore intellectual. In fact, exploring the feeling of pain helps one to understand what a physical feeling is like, because pain seems to be the strongest physical feeling. It is also a negative emotion similar to sadness, however, because it might make you feel sad very quickly or simultaneously. If it makes you feel sad simultaneously, then it is as if pain is an emotion, because it is related to the feeling sad. So pain is a physical feeling that overlaps with the emotion sad. If someone is in pain it makes them sad, but that is much different from being sad in the normal way someone gets sad. It is like a physical sadness. Similarly, if someone is having sex it might make them happy, but in a physical way much different from the normal emotion happy (that is pretty amusing). So saying pain is an emotion is like saying that sex is an emotion. Sex may provoke emotions, but is it an emotion itself? The answer is really that physical feelings are so similar to emotions that the two are tied together. You get a small amount of real emotion from something physical, whereby it seems like the emotion is part of the physical feeling because the physical feeling feels so much like a certain mental emotion.
People respond to emotions. They get a feeling or emotion, then they think about it. If a feeling is large enough to be felt consciously, then it is going to be thought about. “Thinking” is really processing in a larger context; thus all emotions are processed in the mind (even physical ones). In this way emotions become complicated – that is, life isn’t just continuous sensory stimulation. All the sensory stimulation adds up, and people have feelings about the total amount of sensory stimulation. Either that or there is a deeper feeling which people get simply from being alive, one that isn’t related to sensory stimulation. This feeling must come from something, however. The world (the physical world) is real and it exists; this is the only source of potential feeling (since it is the only thing to get feeling from). Pain feels extremely real. It might be that people are happy simply because happiness is an avoidance of pain, or that happy only exists relative to sad, so you understand that you are happy, and can be happy, because you know what happy is – you know it isn't extreme pain. It seems like pain is too large to be compared to regular sensory stimulation, like visual stimulation. This means that most emotions people have (if you consider pain to be an emotion – here I just mean that people are more distracted by the physical than by the mental emotions) are from just their immediate environment: feeling things and touching things, feeling their own body and the physical feelings they get from it. Vision doesn’t cause that much pleasure compared to the physical.
When someone gets happy from emotions (non-physical stimulus), however, they get very happy. This source of happiness must come at least partly from the physical; that is, they get happy because they feel better about their physical emotions (or, when they get mentally happy, they can feel their body more because they are more alive, and this experience is tied into being happy – the physical experience is also more real, so it seems like your mental emotions derive from the physical). If someone is nice to them, then they feel like that person is helping them, and this means helping them stay alive, which would prolong their life and the feelings they get from their body. In a similar way, all emotions are tied into the physical. Part of what makes people happy is reward which they associate with prolonging their life. They feel deeply about prolonging their life because they get deep physical feelings from their body and from its existence. So emotions actually come partly from physical sensations, just not directly. If they came directly from physical sensations they would just be physical sensations, but people feel deep emotions about things related to protecting their physical sensations. In this way people are very animal-like. Seeing things and hearing things makes people feel good, but this feeling is very mild. Most feeling comes just from a physical awareness of one’s own body. This makes sense considering that physical pain at its height is much worse than any emotional pain.
In review, emotional pain has its source in physical feelings and pain. This also means that emotions are really physical things. Emotions cause physical feelings. Any “feeling” is really a physical feeling, even if it is from vision or hearing. The sensory feeling triggers a deeper physical feeling because the sensory feeling reminds you that you are alive and have a physical body. In this way all sensations are tied into your physical body.
This all just really means that the physical is much more "real" than emotions are. You could say that emotions are feelings by themselves, but whenever you experience an emotion, you are also experiencing physical sensations. The physical is always there, and it is strong because it is real – it is who you are. It is like a baseline for your emotions, a reminder that you are alive. If there were no physical world, you couldn't experience emotions, because emotions are at root all physical, since everything comes from sensory stimulation initially. Thinking of it that way, all emotions are physical themselves, since they remind you of seeing and touching physical things, which brings up a sense of your physical presence in that environment. Also, if an emotion isn't physical, then how is it in any way real? How can someone feel something other than physically? Can you say, "I felt that intellectually"? How much sense does that make?
What is the difference between logic and emotion? When someone says that they are “emotional” which emotions do they mean? I guess they mean that they experience all emotions more. They could specify further, however, and say which emotions they experience more, which emotions they are more prone to.
If someone is emotional, does that mean that they enjoy life more? What if someone was emotional, but experienced positive emotions more than most people and didn’t experience negative emotions? Then that person would be happier, I guess. Unless they separated out the emotions joy and sadness and just talked about those. Can you be an emotional person and just have excess amounts of the emotion happy? So anyone just “happy” is therefore being emotional. You’d probably be a lot more emotional if you were happy and sad at the same time, however (the mix of the two would most likely drive someone mad).
Happy and sad seem to be the two strongest emotions. They are stronger than fear, anger, surprise, disgust, acceptance, and curiosity. That would make anyone bipolar (experiencing swings from happy to sad) very emotional. Does the swing mean that someone is more emotional than just experiencing one at a time? The emotional change is hard I think and that is more of an experience than just being very happy all the time, so the change from happy to sad is what adds the emotion in. Or because someone was so sad before, it is harder to be very happy because of the dramatic contrast, and this causes tension. That is, your body goes through changes as it experiences major emotional changes.
There are two degrees of change in emotion however; one is a major change from depression to mania (which is what bipolar is). Another is just your ordinary change from sad to happy, which can occur many times in a day. So if someone is manic or depressed are they being more emotional than someone who is just happy or just sad?
Symptoms of mania ("The highs"):
• Excessive happiness, hopefulness, and excitement
• Sudden changes from being joyful to being irritable, angry, and hostile
• Restlessness
• Rapid speech and poor concentration
• Increased energy and less need for sleep
• High sex drive
• Tendency to make grand and unattainable plans
• Tendency to show poor judgment, such as deciding to quit a job
• Drug and alcohol abuse
• Increased impulsivity
The symptoms of bipolar depression are the same as those of major depression and include:
• Sadness
• Loss of energy
• Feelings of hopelessness or worthlessness
• Loss of enjoyment from things that were once pleasurable
• Difficulty concentrating
• Uncontrollable crying
• Difficulty making decisions
• Irritability
• Increased need for sleep
• Insomnia or excessive sleep
• A change in appetite causing weight loss or gain
• Thoughts of death or suicide
• Attempting suicide
I don’t think that people with the two extremes of mania and depression are any more emotional than people who are just happy or sad. That is because being too happy or too sad shuts off the other emotions people would experience like anger, fear, disgust, surprise, acceptance, and curiosity. Why does it? Because with all the other symptoms of mania and depression, there isn’t really any room left for emotions other than happy and sad, a person’s system can only handle so much emotion. If you are crying all the time (like you would if you were severely depressed) there isn’t any more room for you to experience other emotions (this is obvious if you remember that emotion uses your attention and memory capacity). Or if you are as happy as you can be, you’re probably too out of it (in your happy land) to think about anything else.
A person could be happy or sad and be less emotional than someone with mania or depression, however. But a person (if they were experiencing the other emotions other than happy and sad) could be just as emotional as someone with mania or depression. Although those people may be crying or have expressions of extreme glee on their faces, happy and sad are not the only emotions someone can experience and therefore they may not be as emotional. The question is, what constitutes a deep emotional experience? Simply experiencing large amounts of emotion might be different from being happy because being happy might be a result of finding something meaningful instead of finding something more fun, which would result in more emotion instead of satisfaction.
Emotion means that you are feeling something; if you are feeling emotions other than happy and sad, then wouldn’t the other emotions (if they were positive) increase the happy emotion and you then have a happy emotion that is larger than the other positive emotions you are experiencing? I guess that would be happy, but it would probably lead to overload. That is why it makes sense that people who are emotional experience a range of emotions from happy to sad ones, so that if they just experienced happy ones it would lead to too much happiness causing overload (or too much excitement).
Why would emotions be balanced – why not just have only positive emotions? Because if you are curious, your curiosity is going to backfire when there is a failure (you’d be curious about the failure). Or if you are overly surprised, you would be just as surprised at a bad thing happening as you would at a good thing happening, leading to being happy and sad. Or if you got angry at something, you are then likely to become pleased by the opposite thing happening, so the emotions tend to balance out.
So is it really that the positive and negative emotions balance out? It is probably too hard for your mind to wait to become emotional at things that are only going to lead it to become happy. That is, you would have to consciously say to each thing, ah that is a positive emotion, I can have that emotion now. It seems more natural that when something bad happens, you get more upset, and when something good happens, you get happier. So you don’t have to calculate and spend time to assess if you should “feel” in those instances.
That is a good way to size people up, assess how happy they get from what things, and how sad they get from other things. Why is it that happy and sad are the two strongest emotions? It seems that way because all the other emotions follow suit with them. When someone is happier they are likely to be more curious, or more accepting. When someone is sad it also makes him or her less reactive to things (the surprise emotion).
The other emotions also don’t occur as much. You can easily be happy or sad all the time, no matter what you are doing, but the other emotions need to fit into what you are doing. The emotion curiosity needs something to be curious about, and the emotion disgust needs something to be disgusted by. When you are doing nothing, the emotion you are going to feel most of the time is just plain happy or sad; thus those two emotions are also our “idling” emotions (when we are idle we have them).
If the other emotions don’t occur as much, then why would someone be happy or sad in the first place? Are the emotions happy and sad simply the result of other emotions in your body? If that is the case, how is it possible for someone to become manic or depressed? Mania and depression are such extremes of happy and sad that other emotions can’t be experienced as well. What then is the source of that extreme happiness or sadness?
Either life has enough in it to justify being manic or depressed, or it doesn't. If it doesn't, then mania and depression would arise from people simply being unstable and fragile creatures, easily upset and disturbed. If it does, then by a process of logic one should be able to figure out what the cause of the mania or depression is and solve it. An episode of mania or depression could also be caused by severe stress, however.
How This Chapter shows how Intelligence is intertwined with Emotion:
• It could be viewed that emotion is entirely driven by intellect: everything you feel, you feel because you are who you are, and who you are is determined by your thoughts and your own intelligence. It could also be rephrased the opposite way, that intelligence is entirely driven by emotion, for the same reasons. Those viewpoints become obvious during emotional highs, where it seems like you are acting out of control, because then you realize why you are having those emotions: you are having them because of something you did (which was driven by your intellect) or something you were feeling (which is driven by your emotions). Your intellect determined how you felt the emotion, because you are your intellect, and that (you) would then determine how you feel about something that happens. Someone's emotional template (who they are, how they respond to the world) could be viewed as an intellectual template, because intellect is understanding real things, and your emotions determine what it is you process and how you process it.
This chapter theorizes that attention is not linear. That is almost obvious, however, because when you look at a room full of objects your attention must fluctuate many times as you go from object to object. The question is, what is your conscious awareness of what you are paying attention to? If you look at it from that perspective, your attention doesn't change that much, because you are not aware of many major changes. If you look at it from an unconscious perspective, your attention fluctuates greatly all the time, with minor variations in how you are processing everything. Are someone's unconscious fluctuations in attention important, however? When someone first starts paying attention to something, do they have to pay more attention at the start to bring the object into cognition? Is it necessary to pay sharp attention to things every so often because you need to be kept awake? This chapter tries to show that your unconscious attention must pay sharp attention to things in spikes in order to a) keep noticing things and b) stay sharp. However, if people are not aware that they need to refocus on objects, then how important and significant is this refocusing of attention in a spike pattern?
People need to pay attention to things in order to keep their minds alive and active. They need to pay attention to little things all the time. That is why spikes occur: when people refocus their attention on little things over and over, each refocusing occurs as a spike, because the new object needs to be processed as a whole and this processing takes energy in the form of a "spike". [The key thing there is that the object needs to be processed as a whole. You pay attention to lots of little things all the time, but you only pay attention to complete things infrequently, so infrequently that when you actually do pay attention, it occurs as a spike.]
Humans cannot pay attention to everything, and the things they do pay attention to they need to “spike” their attention initially to get that object into their attention and focus. It is possible to not use spikes of attention, but if you did that then life would be boring. In order for life to be interesting people naturally spike their attention on certain things every so often (once a minute or so) to make life more exciting. Life would be boring if you never paid sharp attention to anything. Spikes of attention keep life “crisp”. [You could rephrase that as, if you never pay attention to anything, you are never going to be interested in anything. And if you actually pay attention to something, you would need to direct your attention to it at some point, putting in maximum attention so you grab it into focus, that is the spike.]
A good example of a spike in attention is when you direct your attention to something that is going to be shown to you for only a short period of time. For instance, a study by Sperling (1960) found that when subjects were presented with visual arrays lasting 50 ms and containing twelve letters, they were able to report only about four or five items. However, subjects said they could "see" the whole display for a short time after the display was terminated. That shows how it is possible for people to direct intense attention at something for that short a period of time, but also that this attention dies down very quickly after it is given, as shown by the fact that they forgot what they saw soon afterward.
If life occurs in sharp spikes, why then doesn't it feel like life occurs in sharp spikes? It seems pretty smooth to me. If it seems this way, then you aren't noticing the complicated emotional and cognitive processes that are going on in your mind; life is not "all smooth", but has changes in attention going on all the time. Each little thing you pay attention to (actually pay attention to, that is, not just "absorb") occurs as a spike in attention. This is because most of the time your attention isn't extremely directed, but you need to make it extremely directed sometimes (once a minute or so) in order to properly stay awake. It is also because you don't absorb every little thing; you only absorb a few things once in a while, and these things that you do absorb are the spikes. They are spikes because they stand out relative to most of your activity, which isn't absorbing things intently or deeply. Every minute or so you need to absorb something. That thing is the spike. [People think all the time, and since thinking is an intense activity (we defined it as a period of high attention), it needs to be supported by intense activities, and a spike is one of these activities. So you could spike your attention on a thought, or use a thought to pay attention to a vision, etc.]
When you pay attention to your attention (or what you are paying attention to) how does life feel to you? Does it feel smooth or rough? Life seems rough if you pay attention to it like that, with occasional spikes of interest in things. It is rough because there are many little fluctuations of interest in various things, but intensity is needed somewhere. This intensity comes from the spikes, otherwise life would just be rough and there wouldn’t be anything smooth. The top of the spike is smooth, however because it is clear and it lasts a little while (a few seconds or a few dozen seconds). Paying sharp attention to things allows you to have a clear mind for the time you are giving that sharper attention. It separates out all the other things and you focus more on what it is you processed. This clears your mind because you just received a lot of stimulation. In this way spikes can make life be smooth. Without spikes life would always be rough because of all the little things. But if you use a spike then life is smooth afterwards because you are satisfied. [So emotional stimulation could come from the spikes, since they are significant since you are paying a lot of attention they might generate more emotion. This emotion helps you to focus on the thing you are trying to pay attention to because you're more interested in it. Since you're paying attention to something now, you don't need to pay attention to other things, so life is smooth because you are being occupied. (not rough by lots of little things you're not really paying attention to) - that applies to various strengths of spikes, whether it is just focusing on something small or a significant amount of emotion or focus generated from something large]
Life is many small variations in attention over time. There are periods of focused attention and periods of non-focused attention. The periods of focused attention are the spikes. This is very complicated if you try to follow your own spikes because there are so many things you are “spiking” and paying sharp attention to all the time. There are three groups of things, things you pay sharp attention to, things you pay attention to, and things you don’t pay attention to. You pay sharp attention to things much less often than the other two categories, and that is why the sharp attention is a spike, because it is uncommon and doesn’t last as long as the other things, so it looks more like a spike when compared with the other two categories than a leveled plain. [You might have little spikes then, things you pay a little attention to. Most of the time you're not paying attention to things, or paying attention in a steady form. But when that attention starts it is grabbed because it is something new, and if it is something new that you're not going to be paying attention to much after processing it, then it might just be a small spike because you stop paying attention to it before your interest can grow.]
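To make the three-level picture above concrete, here is a minimal toy sketch (mine, not the author's) that treats attention over time as a low baseline, an ordinary attending level, and brief, rare spikes of sharp attention. Everything in it (the level values, the spike length, and the roughly once-a-minute pacing) is an illustrative assumption drawn loosely from this chapter, not a measurement.

```python
import random

# Toy model of the three attention levels described above.
# All numbers are illustrative assumptions, not empirical values.
BASELINE = 0.1    # things you don't really pay attention to
ATTENDING = 0.3   # things you pay ordinary attention to
SPIKE = 1.0       # brief periods of sharp attention (the "spikes")

def simulate_attention(seconds=300, spike_interval=60, spike_length=3):
    """Return one attention value per second.

    Roughly once per `spike_interval` seconds the trace jumps to SPIKE for
    `spike_length` seconds, then falls back toward the lower levels,
    mimicking the "refocusing" pattern described in this chapter.
    """
    trace = []
    next_spike = random.randint(1, spike_interval)
    for t in range(seconds):
        if 0 <= t - next_spike < spike_length:
            trace.append(SPIKE)                  # the brief sharp spike
        elif t - next_spike == spike_length:
            trace.append(ATTENDING)              # lingering interest just after a spike
            next_spike = t + random.randint(30, spike_interval)
        else:
            # most of the time attention drifts between the two lower levels
            trace.append(random.choice([BASELINE, ATTENDING]))
    return trace

if __name__ == "__main__":
    trace = simulate_attention()
    print("seconds at sharp attention:", sum(v == SPIKE for v in trace), "of", len(trace))
```

Running the sketch shows sharp attention occupying only a small fraction of the total time, which is the sense in which the spikes stand out against the rest of the trace.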
Also, people’s emotions change all the time. The change probably occurs both gradually and like a series of steps. There are so many emotions in a person’s head that some of them are going to interact with each other suddenly, causing a sudden sharp change in emotion, and others are going to interact more slowly, causing gradual changes in emotion. [So some changes might even be spikes.]
It might be that the changes are just sharp, however. You could look at the mind as a system that only changes when it gets a trigger, and that would probably mean that it only has sharp changes of emotion. However those changes wouldn’t just be sharp changes. Large, sharp changes of emotion don’t just happen by themselves, but deep emotional experiences are often followed by similar emotions that are less intense. That is, if you experience emotion A, emotion A is going to linger in your system. [That is, you would need a spike to produce a major change, but a minor change might be induced by a smaller spike (but still a spike relative to the emotion after the change) so emotional spikes would work like attentional spikes, starting with a spike because there is an initial period of interest.]
That excludes the staircase model, but there still could be something like a staircase, only instead of steps at a 90-degree angle they would be at something like a 100-degree angle, with the extra ten degrees (10/360 of the whole) representing the emotions that hang around after an initiating event. Those would be just the emotion changes resulting from large events, however: either a large event within your own system (something like a thought or a feeling, or a mix of thoughts and feelings), or a large external event (something happening outside your body). [So an attentional spike might result in an emotional spike, or vice versa.] That's because your mind needs to understand, "ok, now I am sad". As intellectual, thinking beings, all major emotional events that occur in the mind need to be processed intellectually (unless you're sleeping). So in other words, if you just get sadder and sadder and are not aware of it, you are not going to get nearly as sad as when you realize that you are getting sadder. The points when you realize (at some level) that you are getting sadder are going to be when you start feeling a lot sadder (the steps on the downward staircase of sadness and depression).
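As a rough illustration of the tilted-staircase idea above, here is a small sketch (again mine, not the author's) in which each triggering event produces a spike of emotion and only a small lingering fraction of it hangs around afterward, fading slowly. The event sizes, the lingering fraction (loosely echoing the 10/360 figure), and the fade rate are all made-up parameters for illustration.

```python
def emotion_trace(events, linger_fraction=10 / 360, fade=0.99, steps=120):
    """Toy 'tilted staircase' model of emotion over time.

    `events` maps a time step to the size of a triggering event. At each
    event the trace shows a sharp spike; only `linger_fraction` of the
    event's size is kept afterward, and that residue fades slowly.
    All parameters are illustrative assumptions.
    """
    residue = 0.0
    trace = []
    for t in range(steps):
        residue *= fade                        # lingering emotion fades slowly
        if t in events:
            size = events[t]
            trace.append(residue + size)       # the spike at the event itself
            residue += size * linger_fraction  # the small slice that hangs around
        else:
            trace.append(residue)
    return trace

# Example: three upsetting events of growing size leave behind a slowly
# accumulating residue between the spikes, like steps on a tilted staircase.
trace = emotion_trace({10: 1.0, 50: 1.5, 90: 2.0})
print([round(v, 3) for v in trace[::10]])  # sample every tenth step
```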
There must be other stuff going on in the mind, however. A clash or mix of two feelings or emotions or thoughts could be figured out, and that would probably result in a noticeable emotional change (the staircase or spike model), but there are probably other things going on in your conscious or unconscious mind as well. That is, some things that happen to people take a long time to recover from. But the main point is that everything, whether it is a slow, gradual change or a sudden, quick change, resulted from some mix of emotions, feelings, thoughts, and external events.
Furthermore, any mix of those things, when they interact, is going to be a large change. That is because it is a large change relative to your normal state, which is most of the time feeling nothing, because nothing is going on most of the time. People experience events in life and things in life and they occur in individual units.
Thoughts, emotions, and feelings are the three main components of the brain. “Everything” isn’t stimulating enough to cause sharp spikes. There is vision, that is, you see things all the time, but your emotion doesn’t go up or down a lot when you close or open your eyes. Unless you are looking at something that is causing a feeling, of course. But even then that feeling is only going to last a few seconds before it dies off. Therefore vision clearly functions with the sharp spikes pattern.
The same with hearing, if you hear something interesting, there is a sharp spike of initial interest, and then it dies down to almost normal. That must mean that feelings and emotions are probably a combination of thoughts, feelings, and emotions. That you almost think about the event that is occurring, and that when you think about it there is a large spike upwards. That the combination of feeling and emotion with thought results in large spikes, which form our best and common regular life experiences. [So in other words, the spikes are like thoughts because they are a period of high attention (which we defined thought as) even though they might be mostly emotional, they can still be like thoughts as well.]
That is, you can’t really tell you are thinking about it because it isn’t verbal. But it feels like you are thinking about it during that brief time. That means that your attention is going to be focused on it, basically. Sometimes when someone is in a depression these spikes can be very large because that person is very upset. A large spike would result in emotional damage, furthering the depression, thereby causing the depression to go down like a staircase. It is easy to do emotional damage, but it can’t be repaired in a series of spikes, as it would go up gradually (still small compared to the spikes however). [So if you are going through a hard time, what makes that time hard might not just be a higher level of sadness, but there might be more complicated traumatic feelings which are intense mixed in (like spikes).]
Just think of it as fabric; damage needs to be mended, and mending takes time. It is easy to do damage to the fabric, you can only mend it slowly. No one just “snaps out” of a depression. Furthermore it is easy to stimulate the fabric, just poke it. That poke would be similar to a life experience, the poke has ripples, but the main event was the poking. [So spikes are significant beyond just the spike, they can help you to pay attention to what you spiked, or cause damage if it was a negative spike.]
The sharp spike occurrences show just how short an attention span humans have: for brief periods we are capable of almost perfect attention, and those periods are the heights of the spikes. These spikes actually look more like lumps, since they rise gradually and sustain attention for a few seconds, but they are so fast that they are best called spikes. Say looking at an attractive girl/guy causes a feeling. For the first few seconds you look at her/him, you are going to have perfect attention, but then it is going to die off. Everything else in life is somewhat like that, whether you are looking at your pencil, or your computer, or whatever. The item you are looking at needs to be initially processed, and your attention needs to be directed to it first.
Everything in life needs to be processed before it enters your system, and that process is going to be a sharp spike of emotion, feeling, and thought. After you process looking at the computer, you can move along to just wandering your eyes around the room. If you pause on any one of the things your eyes wander across, you will experience a sharp spike of emotion/thought/feeling. That is, looking at things causes emotion as well as the thought needed to direct your attention to them; if you are paying more attention to something that causes emotion, then logically you are going to feel more emotion from it.
This doesn’t mean that you aren’t thinking/feeling when you don’t pause or stop. You could say that people are thinking, feeling, and are having emotion all of the time just in amounts so small it is hard for them to detect. That these amounts only go up in sharp spikes when they actually pay attention to something either in their mind or outside it. This “paying attention” doesn’t have to be conscious or deliberate. If two feelings interact within your mind it could cause you to pay conscious or unconscious attention to them.
Something like your girlfriend meeting your ex-girlfriend would (possibly) cause a clash between feelings for your new girlfriend and feelings for your old girlfriend. But that clash of feelings wouldn't occur in a thought spike; it would occur in an emotional spike. It would also be a slight rise of tension in the feeling of which one you like more. Also, the rise in that feeling wouldn't be significant compared to if you thought about that feeling at the same time. When you think about the feeling it would result in a sharp spike, and that spike would last a few seconds, then die away. That is because that feeling was a potentially explosive one, one that exploded when you thought about it, resulting in a spike. Also, thought about anything else, a feeling, a vision, whatever, results in lesser spikes of thoughts/feelings/emotions. Anything and everything, when thought about, is interesting for the first few seconds, but then that interest dies off. It is the same principle when you pinch yourself. When you pinch yourself the first time, it hurts the most. That is because the first time you are thinking about it a lot more; after that your interest in it dies off. It is amazing how much our attention can fluctuate to cause life to occur in short, sharp spikes. [So some experiences and emotions can produce spikes more easily than others. An emotional spike like the one in that example isn't a "fast" one that just grabs attention, but one that stays around as high attention for a while. There are minor spikes when you grab your attention to something or have an emotional change, and there are longer ones where you stay "grabbed" for a while.] The girlfriend example is different from the spikes that occur more frequently all the time, when you pay attention to little things. The girlfriend example was an example of when a spike can happen, but that is a spike you are going to notice a lot more than something like just refocusing on what you are typing. It is spikes like that which happen all the time so you stay focused.
Although there are spikes of emotion and feeling, spikes of thought are needed to direct attention. Not thought in the verbal sense, but thought in the sense that it is under your control and feels more similar to thoughts. Thought occurs as basically a bunch of spikes, and since people think all the time and about everything, life occurs in those spikes. They don’t feel intense because it is just thought. But basically whenever something new comes into your vision or your attention there is an initial sharp spike of interest. And if you are going to be doing the same thing for a long period of time, then it is going to take additional sharp spikes every couple of seconds or every minute to keep your attention. It is easy to test that, try and read something with the same bland expression as when you start reading it (but after your initial interest at the beginning when you notice the piece) and you just can’t do it. To maintain attention your mind needs to snap back to what it is paying attention to. Feelings and emotions are going to follow the thought, however (that is emotions and feelings are imbedded in thoughts). That is why people need to think all the time, to maintain a healthy level of mental activity, it is a part of life. Emotions and feelings can also be described as thoughts, however, so those spikes continue even after you stop thinking, just in the form of emotion-feeling-thoughts (they are still more similar to thoughts however since they are short and spiky). [So a "thought" is required to direct attention. That is because to direct attention you need to pay sharp attention, and any thought is something which you are paying attention to. Something could grab your attention unconsciously, and it could be more like an unconscious thought that pulls you in. That shows that there is going to be degrees of consciousness to which your attention is grabbed, sometimes you do it deliberately, and sometimes you spike your attention without thinking about it at all.]
Basically, your attention needs to be initially "grabbed" by anything you are going to pay attention to. That grabbing is the initial period of paying attention to it. That first period of paying attention to something is where the spike is, because you are processing the item/object. You need a spike to grab your mind and attention; otherwise you wouldn't be paying attention to anything. You can still process most of life without the spikes, but that is only because spikes brought you back to reality in the first place in order for that attention to be grabbed. Furthermore, it is going to be easier to process new things based on what the spike was about; that is, it is going to be easier to process things related to the spike than other things in the area. If you focus on a school bus, then you are going to be more attentive to the other school buses you see for the next few seconds or minutes, because you were just paying attention to one school bus and your mind is primed to notice school buses. [That is because you are probably going to have a higher emotional interest in the school bus, making you more aware of it. Even if the thing you saw generated a negative emotion, you would still pay more attention to it (unconsciously) because you are alerted to it. It is in your mind, so when you see it you can process it better.] A classic study found that prior exposure to a semantically associated word such as "doctor" speeds up responses to a subsequent related probe word such as "nurse" (Meyer & Schvaneveldt, 1971).
Furthermore, there is a similar way in which your mind processes each spike. For spikes that are under your control, the spike would first be a period of thought about something, say a school bus or a coffee machine. Then what you just saw or thought about becomes an emotion, or an unconscious series of thoughts. That is, you are less consciously focused on what you are seeing, but your mind is still processing it. Next, after your mind processes the unconscious thoughts, it becomes a feeling; you then feel something about what you were focusing on. So it isn't that when you look at something you immediately get a feeling; that doesn't make any sense. First you think about it, then you feel it in a general way (an emotion), then after you understand what that feeling is, you feel it (but that basically happens instantaneously, so in a way you do feel it right away; also, that same process can happen over a longer period of time). That is because you know what it is, you know where it is, and you know what to focus your attention on. An example of unconsciously processing something you see is when you look at a match and then think about fire. After you think about the fire you can almost "feel" the fire, following the pattern of thought to emotion to feeling (you think about the match, then something happens unconsciously (this unconscious thought process is emotion, remember emotion is unconscious thought), which then causes you to feel the fire – a feeling). [Not everyone is going to feel fire when they look at a match, and for the people that do, that feeling is probably going to be unconscious. That was just an example of how things can be thought about as more than just what they are, and since they are going to be thought about, they are going to go from thought to emotion to feeling (emotion being more similar to thought than feeling is). Since it is a spike of thought that directs attention, and the spike dies off, it goes from thought to emotion; since the emotion is less intense than the thought, after the thought period (or the spike period) you don't need to think about it anymore, since you already processed it consciously, and you simply think about it further unconsciously. That unconscious thinking is the final part, where it is just a small feeling (so it goes from thought to emotion to feeling, with some overlap). Emotion is more like almost consciously thinking about something, compared to feeling. Feeling is the final part because feelings are shallow and small; when you touch something you get a feeling, and it is not a deep experience that involves thought, it is just like a trickle while emotion is like a stream.]
It could be that a few minutes passes before a conscious spike occurs (that is a spike that is under your control). A spike is basically just anything that you are going to start paying attention to. During those first few seconds of when you are going to pay attention to something there is a sharp spike upwards. Without these periods of attention humans/animals would never pay attention to anything. Basically once every few minutes or so you need to pay attention to something or your brain is going to be too inactive. After you pay attention to one thing, however, your general attention is grabbed and you don’t need to have another spike for at least a few minutes.
Everything that is processed, not just spikes, follows the sequence of thought to emotion to feeling. That is because thoughts are clearer than emotions and feelings, and emotions are more similar to thoughts than feelings are (discussed previously), so when you see something or hear something or whatnot for the first time, it is clearer in your mind. Then it becomes less clear and you think about it unconsciously. You think about it unconsciously because it takes further processing in order to isolate the feeling that that thing gives you. Some things are just too complicated to feel right away. Other things, however, can be felt right away; say, if you are touching something, the feeling arises right away. That is because the physical stimulus is more immediate than an emotional stimulus. [Emotion is very complicated, so you aren't going to understand well the emotions something gives you (for instance, a feeling of depression is very complicated, but the feeling of sadness that comes from it isn't; that emotion leads to a simpler feeling). That also shows how this thought-emotion-feeling pattern occurs quickly, and is just based on the thought period "dying off" and becoming less and less like a thought, and more and more like a feeling. You could think about it later and spike it again, and then the series would repeat. This doesn't mean that everything occurs as thought to emotion to feeling; only that when you have a thought, the thought is going to be brief, and when it goes away a feeling is left. Feelings and emotions last longer than thoughts (in fact, you're feeling all the time, but only thinking specific things some of the time).]
Emotional things, however, are simply too complicated to “feel” right away; they need to be processed first. That is logical: just take looking at anything, say a book. In order to feel the feelings that the book causes in you, you are going to have to at least unconsciously think about it first (that is, after you start paying attention to it, which you do by starting to think about it, or by just seeing it and noticing it more than you usually notice things in the area). Since you don't need to think about a physical stimulus (it is just a physical stimulus, not something like vision), you don't really unconsciously process it.
Spikes are dramatic rises in attention. They can be assisted by loud noises or something visually dramatic, but they don't need to be. In other words, they can be internal or external. You can pay sharp attention to something in the real world or to something in your own head. If there is a loud sound in the environment, it is most likely that your spike in attention is going to occur during that period. It doesn't have to; you could pay attention to something else in spike form, but the main point is that you have to have roughly one sharp spike in attention a minute, at least. That is, you have to pay attention to something in your environment or something in your head, sharp attention in the form of a spike (lasting a second or a few seconds), every minute or so.
Otherwise the world would just go by you and you'd be completely out of it. You don't just need to pay attention to things; you occasionally need to pay sharp attention to things. Furthermore, this attention in the form of a spike can't be dissipated and spread out; it is always going to occur as a spike. If, in between the spikes, you try to sustain the highest attention you can in an attempt to spread the spike out (that is, if you try to spread out your attention instead of having spikes), the normal spike would still be a spike relative even to the extra attention you gave the non-spike period, because that extra attention would still be far lower than the spike. Spikes of emotion and feeling also need to occur every few minutes or so. The human system needs to be "shocked" into reality because you need to pay attention to life.
Say it is time for another sharp increase in attention (that is, you waited too long without focusing on anything) and something occurs, like a dog barking. Then you are going to focus on that barking intently, in the form of a spike. So if the dog continues to bark for the next few seconds or minutes, your attention will be on it more, because you initially paid more attention to it than to other things in your environment. This is very important, because if someone doesn't use their spikes on, say, the person they are talking to, they could be talking to that person and not be paying attention at all. You could hear what they are saying but not really be interested in it nearly as much as you would in a normal conversation (if you choose not to think about the person talking to you – remember, if you do think about the person talking to you, then naturally you are going to have a thought spike, because that is how thought initiates when thinking about new objects: the new object needs to be grabbed and processed first).
If you direct your attention spikes away from the things you don't want to hear (say, if there is a loud noise in the background, just don't pay sharp attention to it), then most of your attention will follow suit. If attention were uniform, people wouldn't be able to direct their attention easily. In order to ignore the other things in your environment and focus on just one thing, the only way to get that one thing into your focus would be to use a spike in attention. After that spike, the thing you "spiked" would be in your attention at a low level, but the other things around you would be at an even lower level. The spike is necessary to differentiate what you are paying attention to, to set the new thing you are paying attention to apart from everything else. You can't just give one thing a slightly higher rise in attention (you can pay attention to something new, but you wouldn't be paying more attention to it than to the other things already in the environment; you'd just be isolating that thing, and it wouldn't be a rise in attention, or only an insignificant one), and for this reason people can only focus on one thing at a time. Because of the spikes in attention, people can isolate (focus intently on) one or a few things.
That limitation (of only being able to focus intently on a few things) happens because each spike eliminates the other things you were paying attention to previously. You can spread one spike out over different things, however (if you do it at the same time); that is how your attention can be spread. You can't do a series of smaller spikes, because that confuses your mind; it is like saying, pay attention to this, then pay attention to that, and then pay attention to that. It is too confusing. It is easier to say at once, pay attention to this, that, and that, and then you can do it.
That explanation also explains why spikes occur at all – because it is much easier to pay a lot of attention in a short period of time than to keep jolting yourself over and over at each thing you want to pay attention to. That way is too jarring and much less smooth. You don't notice the spike when it occurs because it feels more like a refocusing than a spike. People basically need to be focused on little things continuously, and this focus is directed by short periods of refocusing, labeled here as spikes. One way these spikes occur is that when something is first presented, it takes more energy and brain power to process it, because it is new. It is easier to try to comprehend the entire thing at once than to comprehend it in pieces, as the latter just doesn't make any sense. People comprehend things as wholes, not as parts added up over time. The other reason these spikes occur is to initially catch your attention and hold it at a high level on something. That is, in order to go from a state of inactivity to a state of activity, you cannot just step up to the level of activity; you need to motivate yourself to get there by having a spike (this spike is also the initial processing of the new object/event and occurs because of that as well).
In order to get someone's attention, they can't just lazily look at you the way they look at everything else; they need to pay sharp attention to you for the first instant (this is the initial "grabbing" talked about). Otherwise people would be paying attention to anything and everything at the same time. There has to be a way of separating out what is in someone's attention field, and that method of separating is the use of spikes.
Spikes work for emotional things and feelings as well as for thought. That is, things that are emotional occur in the same spike pattern, as do things you feel (feelings). Another way to put this is that your attention is only focused on things that change (the change usually occurring in spike form). It might be that something grabs your attention a little, and you only put in a spike after it has initially grabbed your attention, in order to then pay full attention to it. Often something happens, like a loud noise, that you only process after it occurred, or slightly after it occurred. So there might be a delay in when you process it, or spike it, or you might not spike it at all. You might also not need to spike something if a similar spike occurred with a similar thing previously.
How This Chapter shows how Intelligence is intertwined with Emotion:
• Someone’s attention determines what they see and figure out about the world, if someone is paying more attention then they are probably going to realize more things, or notice more things visually and intellectually. Since attention varies based on emotion, your intellect is going to vary based on your emotions. If you are emotionally interested in things then it might make you pay more attention to them and then you might realize more about those things. If something causes more of an emotional impact (or more of a spike) you might retain understanding it longer (memory is also a part of intellect) or it could increase your emotional intelligence about that thing.
• Everything that is processed follows the sequence of thought to emotion to feeling. That shows how everything in the world is real, and these real things all cause feelings: you recognize what something is (a thought) and then you feel that thought, so your emotional processing of your thoughts is part of your thoughts themselves. This is obvious with emotional spikes, because when you feel something strongly, that strong feeling clearly aids you in understanding things about what it is you are feeling.
• People also only comprehend things in their entirety, because if something isn't completely understood then you cannot verbalize it and make a thought process of it; therefore things that aren't completely understood or verbal are going to be emotional, and you are going to "feel" them, not think them.
References
Meyer, D. E., & Schvaneveldt, R. W. (1971). Facilitation in recognizing pairs of words: Evidence of a dependence between retrieval operations. Journal of Experimental Psychology, 90, 227-234.
Sperling, G. (1960). The information available in brief visual presentations. Psychological Monographs, 74 (Whole No. 498).
If someone is sad or depressed, it is natural that they are going to be upset that they are that way. Therefore it is probable that all depression or sadness has feelings of anger and agitation mixed in. In fact, it is easy to see a combination of those three feelings: when something bad happens to someone, their reaction is an intense feeling of sadness/anger/agitation. If you punch someone in the face, or shoot him or her, they aren't going to be just sad; they are going to be sad, angry, and upset.
After the event occurs (such as getting punched in the face), the sad/angry/upset feeling only lasts a few seconds on that person's face, visible to other people to varying degrees. What happens after that is more interesting, however. After the first few seconds of sad/upset/angry, their mind loses focus on what happened and it is no longer a single emotion. They are focused on the event, and that is why it shows up on their face; after they lose focus, however, the emotions become unconscious.
In their unconscious form the emotions are like a depression. A depression is something that affects someone’s mood, his or her entire system. When the angry/sad/upset emotions go into the unconscious, they start affecting the other emotions around them, and your entire system becomes sad, angry, and upset. This might not be visible on your face because it isn’t as intense, you didn’t just get punched, or something bad didn’t just happen to you, but it has left a mark.
It seems like the angry and upset emotions are more temporary, and the sad feeling is retained longer. That is because you forget why you are sad; you forget the event that caused the sadness, but your emotions remember the impact of the upset and anger, and that impact was to make you sadder. The emotion of sadness is simply easier to remember. It is marked in your mind for vengeance; you associate the sad emotion with being bad for you, but the anger and the agitation are more hormonal, temporary emotions.
That is, it is hard to be angry if you don't know why you should be angry. You need to be able to logically justify your own feelings. I have never seen anyone angry for a long period of time, but sadness often lasts a long time. There are still elements of anger and agitation mixed in, however, just less so than the sadness. So after an initiating event the three emotions are equally present for a few seconds; after that, mostly the sadness remains, still with elements of the other two emotions.
It is hard to be angry or upset when you don’t remember what it is you are angry at. It is easy to be sad because you don’t need to remember anything to be sad at something, the sad feeling simply stays in your system because you are used to sad feelings and you don’t need to justify them like you would an angry feeling. Or it could be that being angry and upset takes up more energy than being sad does, being sad lowers how energetic you are because it brings you “down”. When you are angry and upset you are much more energetic and agitated.
So it is like, "ok, that really pissed me off, but I am too tired to be pissed; I can be sad though." The sadness in your system isn't even an individual emotion after the first few seconds from the initiating event, however. It becomes mixed in with the other emotions and feelings in your body, because you no longer remember what caused the sadness. So it is like a depression, because it affects your entire system and mood the way a depression does.
So there is really a difference between being sad, and being upset. You might even call that period after the few seconds for that person “the person being upset” instead of them being sad. That is how much the upset and agitation emotions are mixed in, that after someone is punched you could say either they are upset, or they are sad, or they are agitated, it depends on the person and the circumstance. That is a lot of proof to show that all three are often mixed in together.
You might say that they are upset, but they are probably going to be more sad, because if you are upset and angry then you are going to be sad about that, just as you are going to be upset and angry that you are sad. But I think the sadness is going to dominate, because no one has enough energy to be upset and angry for very long. When you are upset and angry your tone is louder, you move faster and in a more agitated way, and you are more aggressive and looking for retribution. Anger and agitation almost need something to take vengeance on, while sadness is not something you attribute to someone else causing it. You do attribute anger and agitation to something external, however.
How This Chapter shows how Intelligence is intertwined with Emotion:
• If it is hard to have emotions about things you don't remember, then that shows how your emotions are based on your intellect as well. What your memory (which is a function of intellect) retains is going to bring up emotions, which in turn are going to determine (to some extent) your emotional intelligence.
Emotion is such a strong feeling that it must be the combination of thoughts and feelings. If you think about it, when you combine positive thoughts and positive feelings, you're going to have an overall greater experience (if the thoughts and feelings are about the same idea or the same thing, you are going to have a greater single positive emotion about that thing). Just take the strongest emotion you can experience: it would have to be a combination of all the positive things in your mind, and people can control their thoughts to a large extent.
By a combination of feeling and thought I mean a combination of what it feels like to have a thought, with the feeling of what it feels like to have a feeling – I don’t mean the combination of actual verbal thoughts with feelings, but non-verbal thoughts which are like verbal thoughts in that they are about something, you just can’t identify what it is all the time because it is non-verbal.
Since thoughts are conscious and unconscious, emotion could be redefined as the combination of feeling and thought - that you only have emotion when you are thinking about something, and feeling something at the same time, and the combination of the two results in individual emotions. There is evidence for this from the facts that you can only experience one strong emotion at a time, and you can also only think about one strong emotion at a time. That shows how emotions are pulled up by thoughts, or controlled and generated by them. It might be that this only applies to strong emotions, but it depends on each individuals definition of emotion (it might vary), but I don’t think anyone can experience two strong emotions simultaneously. You can feel it for yourself, try and feel any combination of the following emotions (strongly) at the same time - anger, fear, sadness, disgust, surprise, curiosity, acceptance, or joy. You just can’t do it. A slight feeling of curiosity is exactly that, a feeling and not an emotion. Emotions are stronger than feelings, and stronger than thoughts, but what are they made of? The only logical conclusion is that they are made up of thoughts and feelings.
The type of thought that makes up emotions isn’t just words or sentences or verbal ideas in your head, but basically any period of thinking. It doesn’t have to be intense thinking, in fact, if you are intensely thinking there probably isn’t enough room left to process a strong emotion, but rather emotion arises from periods of very low intense thinking, and less intense feelings (you still have to be trying to be thinking, that is why negative emotions don’t exist, because people just don’t try to think about them). During those periods of low intense thinking (from which part of emotion arises) you don’t have to even understand what you are thinking about, just understand that to some degree you are more thoughtful than usual. Feelings are generally considered to be shallower than emotions, and thought is considered a deep experience, so in order to have the strong, deep feeling of emotion, it must be made up of the part of your brain that experiences deep things, (the thought part) (remember feelings feel like feelings from sensory stimulation, which isn’t “deep” at all).
Furthermore, emotion isn’t just a strong feeling, a strong feeling can give rise to an emotion, just like a strong idea can give rise to an emotion, but an emotion is the combination of a lesser feeling and a lesser idea or thought process (this thought process might be unconscious, leading the person having it to just know that they are thoughtful during the experience). You can’t have a strong feeling and a strong emotion at the same time because there just isn’t enough room or processing power in your mind to do that (it’s easy to feel that in your mind just by testing it).
Is a thought sensory input? No it isn’t, you can think about sensory input, and that would give rise to a feeling of the sensation itself, but a thought is much faster in the brain. A thought is like a fast firing of neurons while a feeling or a sensation is an experience that actually takes some amount of time longer than it takes for a neuron to fire, which (it feels like anyway) is the length of a short thought. So basically, emotions must be the result of feelings and thoughts in your brain because there isn’t anything else left that they could be made up of. All that is in your brain is feelings and thoughts. It is obvious how you can turn off a thought automatically, but you can also do that to some feelings. This is so because feelings are in large part triggered by thoughts. That’s because feelings are experiences of sensory stimulation. If you are feeling something that you don’t want to feel, however, because that sensory stimulation is present in your environment, there is nothing you can do. But if it results from a memory or something in your mind, you are going to shut it off automatically. This way feelings and thoughts work together; you have your present experience of the sensation, and your mental direction of thinking about that sensation. The latter part you can turn on if you want to make that natural, environmental feeling a strong one. It is hard to experience a strong feeling just by bringing the feeling up in your head, to have a strong feeling you need to have some type of direct sensory input and be thinking about that sensory input at the same time.
So a strong feeling is just like a strong emotion, only you need direct sensory input and thoughts to feel it, while with emotions you just need a feeling (which can result from the memory of a sensation) and some thoughts. So, very simply, everything in the brain is either a feeling or a thought. And emotions are combinations of feelings and thoughts.
Thinking about things generates feeling because you are simulating the emotions of that thing in your head. Although you are not experiencing the stimulation in real life, you still understand what it feels like to be in that situation, and this memory of that stimulation you can feel almost like being in the real situation itself.
If you have emotion about something then you are feeling that thing. Thus you are directing thought about that object, and directing thought is what thought is. Thought is just directed to something specific, while feeling is more generalized, you have only a few feelings for many many things, and thought is only a way of categorizing those feelings. For example, you can simulate many feelings by thinking, “I am going to go to the store then I am going to come home”. Instead of feeling “store” which you feel in the store, you are adding the feeling of traveling to the store and being home. Those feelings are less intense than actually traveling to the store and actually being home, but they are still there and present in the thoughts. So when you have a thought about the store, you feel the store because you are simulating the idea of being in the store in your head.
Emotion always precedes thought; thought is always just going to be an explanation of emotion. Everything in the end turns out to be an emotion in your system, so therefore everything is really an emotion. When you say “I want to leave” the feeling of you wanting to leave is always going to precede the thought. Actually first you quickly understand what it is that you are feeling when you realize what it is you are feeling as an unconscious thought process, then you have a more regular feeling about it, and then you are able to verbalize that feeling into a thought. Unless something is said to you instead of you thinking it, in which case the process is reversed. First it is a thought because it is expressed that way, then it is a feeling, and then it is a quick unconscious thought process to think about what was said.
When the thing is said or thought of verbally it is most clear what the meaning is. In this way words assist understanding. This is probably because the combination of adding the stimulation of sound to the stimulation of the visual (or other sense) of the object/idea enhances understanding and forces you to think deeper about it because sound is an enhancing mechanism for thought.
Feelings are fast, you don’t pause and think about them. Emotion you could say, since it is deeper, that you almost “think” about it.
How This Chapter shows how Intelligence is intertwined with Emotion:
• Thoughts also contribute to what it is you are going to feel, and what you feel and how you feel it is then going to determine your emotional intelligence, and over the long run would help determine other aspects of your intelligence as well.
Before reading this chapter, it is important to understand something that seems obvious, but that people don't pay attention to because they do it so automatically: when people do things, they don't always think heavily about what it is they are going to do. When people do things like move their hand to open a door, or just move their hand in a certain fashion, they don't say to themselves "ok, now I am going to move my hand in this way" – they simply do it. They are conscious that they are moving their hand in that way, but they could be a lot more conscious of it if they thought about it a lot before they did it. It is almost as if you do these simple actions unconsciously, because you don't really think about them beforehand but are only dimly aware that you are doing them. There are degrees and types of examples of this; for instance, typing is much more automatic than opening a door.
When someone has an intention, or does anything such as thinking something or doing something without thought, what is the exact mental process that lies behind that action? What combination of emotions, feelings and thoughts makes that happen? Here is what is at the bottom of the "Emotion is a Combination of Feeling and Thought" chapter:
“Emotion always precedes thought; thought is always just going to be an explanation of emotion. Everything in the end turns out to be an emotion in your system, so therefore everything is really an emotion. When you say "I want to leave" the feeling of you wanting to leave is always going to precede the thought. Actually first you quickly understand what it is that you are feeling when you realize what it is you are feeling as an unconscious thought process, then you have a more regular feeling about it, and then you are able to verbalize that feeling into a thought. Unless something is said to you instead of you thinking it, in which case the process is reversed. First it is a thought because it is expressed that way, then it is a feeling, and then it is a quick unconscious thought process to think about what was said.”
So there is an unconscious thought process before everything you think or do; however, there are also patterns of feelings present. The feelings described are an important part of it: when you do something, there isn't an unconscious thought right before you do it. You first have the unconscious thought when you have the original feeling that caused you to want to do that thing - you first have a feeling that you want to do something, then you understand what that feeling means as an unconscious thought, and then that is translated back into a feeling which remains there until you do the action. So the unconscious thought is not right before you do the thing; the feeling is there before you do it, because feelings are faster than thoughts, so your mind has the feeling ready at hand to act on the unconscious thought process. That is because once you realize, as a thought process, what it is you are going to do, you don't need to spend the time to think the entire thing through again; it is stored in the instinctual part of your brain where your feelings are. Remember from the instinctual frog example that feelings are faster than thoughts, and feelings are also unconscious thoughts, so they can also store information about what to do. This is the frog example in the chapter "Thoughts":
“The definition of intellect and thoughts is almost understanding (those concrete things). Emotion is feeling, completely separate from facts or information. All facts and information are going to be about things that cause feeling, however, since all things that happen cause feelings and all facts and information are about things that happen. So facts and information are just feelings organized in a logical manner. Intellect and thought also generates feelings when those thoughts are processed in your mind. Since thought is really only about feelings, it is logical that thought actually has root in feelings. For example, all events are really feelings in the mind, so thoughts are actually just comparing feelings. You take two feelings and can arrive at one thought. Take the feeling of a frog moving and the feeling of a threat of danger. The two feelings combined equal the idea or thought that the frog needs to move when there is danger - the thought is actually just understanding how feelings interact. All thought is is the understanding of how feelings and real events interact with themselves. Feeling is what provides the motivation to arrive at the answer (the thought). If you just had the facts, there is a threat, and the frog can jump, you aren't going to arrive at the conclusion that the frog should jump away. You need to take the feeling that there is a threat and the feeling that the frog can jump and then combine the two sensory images in your head to arrive at the answer.
That shows how all intellect is powered and motivated by emotion. It also shows that frogs have thoughts; the frog has to have the thought to jump away when it sees a threat, as a thought is just the combination of two feelings resulting in the resulting feeling of wanting to move away. That process of feelings is like a thought process. Thoughts are a little different for humans, however, because humans have such a large memory that they are able to compare this experience to all the other experiences in their life while the frog only remembers the current situation and is programmed (brain wiring) to jump away. The frog doesn't have a large enough memory to learn from new information and change its behavior. That shows how humans are very similar to frogs in how they process data (in one way at least), and that one thing that separates a human from a frog is a larger memory which can store lots of useful information and potential behavioral patterns.”
It would be too slow for you to just do something based on an unconscious thought process alone; you would have to wait to have this unconscious thought right before you do the thing, instead of having the thought at one point in time, storing it, and then doing the thing later on. If it is just an instinctual reaction, however, it is just a feeling that you are responding to, because it is too fast for a full unconscious thought process. It is just a matter of how you define what an unconscious thought is - whether it is going to be more like a thought than a feeling - and a feeling is also an unconscious thought, so it depends how you view it.
If it is an instinctual, immediate reaction - say you slam a door on your hand and you say "ouch" - that is a thought that resulted from two feelings, the feeling of pain and the feeling that you need to express that pain. The thought is so fast you might consider it unconscious; that is also like the frog example.
It gets even more complicated than that - this is in the "Life Occurs in Sharp Spikes" chapter of the book:
“Everything that is processed, not just spikes, follows the sequence of thought to emotion to feeling. That is because thoughts are clearer than emotions and feelings, and emotions are more similar to thoughts than feelings are (discussed previously), so when you see something or hear something or whatnot for the first time, it is clearer in your mind. Then it becomes less clear and you think about it unconsciously. You think about it unconsciously because it takes further processing in order to isolate the feeling that that thing gives you. Some things are just too complicated to feel right away. Other things, however, can be felt right away; say if you are touching something, the feeling arises right away. That is because the physical stimulus is more immediate than emotional stimulus.
Emotional things, however, are simply too complicated to "feel" right away; they need to be processed first. That is logical - just take looking at anything, say a book. In order to feel the feelings that the book causes in you, you are going to have to at least unconsciously think about it first (that is, after you start paying attention to it, which you do by starting to think about it or just seeing it and noticing it more than you usually notice things in the area). Since you don't need to think about a physical stimulus, because it is just a physical stimulus (not something like vision), you don't really unconsciously process it.”
That shows that it is really all mixed in - thoughts, emotions and feelings - and that there isn't just an unconscious thought process; you could also just say that feelings or thoughts come first. This is because when you process something you might think about it first, and it certainly feels this way: when you are processing something it is a very intellectual experience, it is clear in your mind, and it feels like you are thinking about the thing so clearly that you must be using thoughts instead of emotions. I say that things are first clear in your mind when you first see them or whatnot - that would be the "thought" - but then it becomes an emotion, and you do that (make it into an emotion) to isolate the feeling the thing causes in you, so then you feel it (after you isolate the feeling) - thought to emotion to feeling.
So when you have an intention to do something, could it be that first it is an unconscious thought and then you just do it? First you are going to have an unconscious thought about it, then you are going to have a conscious thought about it (because it is an intention), and then you are going to do it. Your conscious thought about it may or may not be verbal; you don't have to think about everything verbally in order to do it. You do have a conscious thought about it because that is almost the definition of intention, your intent. If you don't have a conscious thought about it then it is more instinctual, or it could be a mix of the two. Everything someone does is going to fall on the spectrum somewhere between completely intentional and completely instinctual.
Intentions and instincts (or things you do) aren't just thoughts; feelings and emotions are often involved as well, so where do they fit in? First an emotion could start an intention, then there would be an unconscious thought process, and then it might become another emotion, because you can feel everything (you are going to feel the thought, or have a feeling about it). Feelings are very fast, so this feeling can fit into the time after you think about it and before you do the action, or after the initiating event and before the unconscious or conscious thought process. When you do think it is very fast; in fact your thinking might be slow, but there is one point in time where your thinking leads to a conclusion, and that culmination is considered to be when you had the "thought", because it is a conscious thought that your mind understands. Leading up to that conscious thought (which could be verbal or not verbal) were unconscious thoughts (or thinking), because it is hard to reach difficult conclusions instantly. This thought is then held in your mind until you do the action; it prepares your mind for the action, and during that time the thought might generate a certain feeling - maybe fear or a lack of confidence. This feeling is then used when you do the intention, because when you do something you do it so fast that you don't "think" about it right before you do it, but you use the feeling that is "storing" the thought.
You might not have feelings about it and your action might not be swayed by feeling, but if it is then your thoughts might be under the influence of your feelings. Your feelings might cause you to stop doing the thing if you are too afraid, for example.
So there is an unconscious thought before every intention; that is what thought is, figuring out what you are going to do, and you have to figure out what it is that you are going to do before you do it. Unless it is like the frog example where you just feel it at the same time that you do it - but in that case the feelings are mixed in with the thoughts, so then it is a matter of how you define "thought". A thought is really a conclusion (not a partial thought, which could be an emotion), so you take two feelings and arrive at a conclusion, which is the thought, and then you do the thing. That means that you do have an unconscious thought right before the intention; the feeling really is a thought, it is just so fast that it is a feeling and a thought at once. So right before you do something there can be a feeling - which is also a thought - that causes you to finally do it. So is it a thought or is it a feeling? The feeling is the drive behind the thought (or thinking), which builds up along with the feeling. The feeling is powering the thought (or thinking) because it is so instinctual. So things that are more instinctual are going to be faster and involve more feelings; feelings can speed up thoughts (this is obvious with the instinctual example, where instinct is really just powerful feelings causing you to think very fast).
So if you do anything there are going to be unconscious thoughts before you do it, because thoughts are just understanding real things. That includes intentions; intentions (since they are more conscious) are going to involve conscious thoughts as well as unconscious ones, unless it is an intention you intended to carry out unconsciously. The reason intentions involve unconscious thoughts as well is because you need to think to arrive at the conclusion, and most thinking isn't completely consciously understood. How many people can think without using words, yet understand what it is that they are thinking? You can understand that you are going to do a certain thing without using words, but you can't think for a long period of time without using words and still follow your thought process. Complicated non-verbal thought processes are unconscious. And almost all thoughts, and almost everything you do, are going to be complicated - and therefore they are going to involve long unconscious thinking (by long I just mean longer than instantaneous, which is what you would have if it was instinctual).
So right before you do something there is going to be something in your mind that understands what it is you are going to do; this is a thought because it is about something real (versus feelings, which are things you just feel). You might even "feel" the thought, really. That is what happens right before you do something. However, leading up to that final thought/feeling it is going to be as described before: first you might have a feeling. If humans were computers I would say that first they start with their programming and then they have the thought, but for humans feelings are their programming - so humans first have feelings and then we have thoughts. Feelings can originate from thoughts, however, so it then becomes a which-came-first, the-chicken-or-the-egg debate. But if the original feeling started because of a thought, the thought was further away in time from the feeling - by a few seconds at least - because conscious thoughts (verbal ones) have a space of time around them. If you think, "I am going to shoot", you don't shoot as quickly as you would if you just understood that you were going to shoot; the conscious verbal thought slows you down. So when you have an intention, or when you are going to think something (which is what thoughts are - they can be verbal because you can express almost anything verbally, including all intentions), then that follows the process of feeling to unconscious thought to feeling again to store it. I said before "a feeling, then an unconscious thought process, then a more general feeling".
I said that because the first feeling is just the real feeling of the intention you are going to have - which you could say is an unconscious thought because, as discussed previously, all feelings are unconscious thoughts - and it is clear they are when you realize it is an intention, which is going to be doing something real, and intellect is understanding things that are real. So the first feelings/thoughts come when you first feel that you want to do something; then you need to unconsciously think about it to realize what it is you want to do exactly (this is not a conscious non-verbal thought, but an unconscious one); and then you have a more specific or general feeling about it (by general there I really mean larger or more clear) to store that clear thought. The general feeling is then going to be more clear because you now unconsciously understand what it is that you are going to do, and then it becomes a real conscious thought, and then you could translate that conscious thought into a verbal thought or an action.
So to explain the statement, "first it is a feeling, then it is an unconscious thought process, and then it is a more general feeling, and then you are able to make that feeling into a conscious thought (or do an action which would stem from that clear thought)" - that was originally said in the book at the end of the "Emotion is a Combination of Feeling and Thought" chapter in this form - "actually first you quickly understand what it is that you are feeling when you realize what it is you are feeling as an unconscious thought process, then you have a more regular feeling about it, and then you are able to verbalize that feeling into a thought". Whether someone's state before they have that thought is one that started with an emotion or without an emotion, that state must have originated from a previous state, or from some other previous stimulus. In terms of someone's first feelings, those probably came from physical feelings in the womb, before the brain was developed. At first people would have just physical feelings, not deep emotional ones, because all there is in the beginning is sensory stimulation - mostly feeling your own body and your surroundings.
So the first thoughts/feelings originated from physical stimulus, like "ouch, that hurts" or "that looks cool". After humans develop they can have thoughts and feelings that originate from sensory stimulation, physical stimulation, or other thoughts and feelings. But that doesn't explain what happens right before someone thinks something or does something. It explains that originally there are those things which would cause the intention, but not how the intention is formed. Since humans have strong emotions, many intentions are going to be formed from emotion. Intentions are also going to be formed from conscious and unconscious thinking. Feelings are also going to have elements of thoughts, however (so it isn't either feeling or thought alone that originated the intention; it might be both at the same time). Say you want to flip a switch - it is going to be a progression of feeling/thought. That is, it is going to take time for you to realize what it is you want to do, so it could be feeling and thinking all along, and at some point in that feeling/thinking you are going to realize fully what you want to do, and then you could call it a thought because it is completely formed (this thought might be conscious or it might remain unconscious and only later become conscious). When you realize you want to flip the switch it isn't instantaneous; it takes time. But when you do flip the switch instantaneously, are you acting off of the thought or the feeling? You are probably acting off of the feeling; the thought was at a point in time a while ago, but that thought started the feeling of you wanting to do it, which led to you flipping the switch off of the feeling instead of the thought. Unless you happen to do the thing right after you finally figure out what it is you want to do - then you could say that the thought made you do it.
That reveals that you are always going to have some feeling about what it is you are going to do right before you do it, because then you "think" or "feel" what it is you are going to do. It isn't going to be as strong in terms of thought as when you first thought of what it was you were going to do, because you don't need to think as much to realize what it is you are going to do. You are probably going to be feeling more than thinking right before you do it, because you are going to be excited about doing something; you already realized what you were going to do, which was the thought part, and now it is time for the feeling part. The thought is still there of course, otherwise you wouldn't know what to do, but right before you do it feeling is probably going to dominate.
Right before you do something your mind needs to get ready to do it, and you need to remind yourself what it is you need to do and that you need to do it. So that means your mind probably feels something based on what it is you are going to do. This feeling can be simulated if you read a book and then later reflect on how you feel about the book. Reading the book in this instance would be the original thought process, and reflecting on it later would be simulating the feeling right before you do something. You don't need to think about everything in the book to understand the feeling that the book causes in you. You don't need to think as hard to understand the same things because they were already understood at one point. The second time it is easier. That is like when you first have an unconscious thought process to understand what you are going to do: when you are going to do it later you already understand what you are going to do, you simply then "feel" what it is you are going to do because it is more clearly understood. It is understood emotionally now (more instinctually), so you don't need to "think" as much as you did before. Emotion replaces thought because emotion is easier than thought. Someone isn't going to think unless they have to; you have basically already done the hard part, so the second time you bring it up the thought would be reduced and the emotion would remain. The further excitement of being about to do the thing would raise the emotion even more. So another thing learned here is that if you think about something once, the next times you bring it up (especially if you bring it up right after you figure it out) it is going to be much easier to understand, so thought is going to be reduced and feeling raised relatively.
So in other words, before the thought or your understanding of what it is you are going to do is complete, you may or may not be having emotions that are encouraging or affecting this thought process. Emotion and intelligence are intertwined. That is why first comes the emotion, then the complete thought, and then you might have an emotion about that thought itself as well - in other words, the state of the emotion you are feeling is probably going to evolve as the thought does. This reveals that while emotion is unconscious thought, not all unconscious thought is emotion.
Humans don't just say things without thinking about them first, so everything is going to be unconscious first. Speech is much, much slower than your thoughts are, and unless you start saying something and don't know the complete sentence before you say it, you are going to have the entire thing thought out first. So technically everything starts with an unconscious thought. However, this thought has levels of understanding - there are levels to which you understand the thought - and that is why you can't just say everything all at once; you usually have to think about it for a bit first. When people think, it takes time, and they don't think unconsciously in sentences. They think unconsciously with emotions, thoughts, visualizations, anything the mind can simulate. When you think unconsciously with emotions you could be taking large emotional experiences and trying to analyze them, or little ones; you could be combining different experiences, or combining emotion with thought or emotion with visualization (etc.). Your mind doesn't just use sentences to figure out what it wants to do; that would take too long. Sentences are actually just sounds that represent things, and you don't need to simulate a sound in your head in order to think. It might be that you simulate tiny sounds, or however it is your neurons fire to organize the thoughts; the point is the thoughts are not fully formed instantly. It isn't the firing of one neuron once that makes a complete sentence. There is a progression of thought. This is obvious because when you are doing a problem, say a math problem, you often can reach the answer without having to say anything. What is happening is that you are thinking about things unconsciously; maybe you are visualizing the number of things you need to visualize to find the answer (say, adding 1 to 1, you have to visualize the separate objects, and then visualize the two objects together).
When you go into a situation or an event, the attitude you have is going to impact your emotional experience. If you think something is going to be fun when in reality it isn't, and you continue to think that that thing was fun afterwards, it is going to make you feel worse than if you had the right understanding of how much fun the event was. This is because there is something in your mind which understands automatically how fun the event was and compares it to your assessment. There is also something in your mind which rates how intelligent you are and bases your self-confidence off of that. So in other words, your mind is going to know if you are being stupid or not, and feel bad if it made the wrong decision. Your mind basically has integrity. To prove that, just realize that your mind compares its thoughts to each other constantly: if you work hard all day and then relax when you get home, the fact that you worked hard increases your amount of relaxation. That is because your mind is comparing how relaxed you are now to how much you worked during the day, and then it feels more relief (since you did the work).
Also, an overly optimistic attitude causes you to consciously focus on things which you enjoy more, but your conscious mind can only recognize a tiny number of the things which you enjoy. So you are amplifying a disproportionate amount of emotion in your own mind. That throws things off balance in your head and you start to wonder (consciously and unconsciously) why you are enjoying some things more than others, and it throws off your responses to natural, ordinary events. In other words, your mind compares the positive things which you are amplifying to the things you aren't amplifying (like how it compared how you worked during the day to how you rested at night). Furthermore, ordinary events start to become duller because you are amplifying a few events you just think are fun, when in reality all of life is fun if you give it an equal chance.
What people with that attitude fail to realize is that basically everything can be viewed as fun; they don't need to grab onto a few things with their overly optimistic attitude. Emotions are fun, and life is so full of emotions that any scene or event in life can be broken down into its many emotional parts. Emotion just means how something makes you feel, and that in turn means what kind of reaction things make you have. In fact, each individual object in life gives rise to an emotion, and makes you react in a certain way.
If you have an optimistic attitude towards life, or an overly optimistic attitude, then most of the emotion that you get is going to be undercut (undermined, etc., because it is going to be outweighed by the few things which you are praising, or have an optimistic attitude about) and therefore overall lead to a dulling of emotion. That is because this overly optimistic attitude is a conscious thing that only enhances a few of the events in life and doesn't understand that everything in life can be viewed as being fun (if you take the same attitude and just twist it, that is).
Dismissing the verbal discourse whereby you rate some things in life higher than others doesn't mean you stop being optimistic. You are still being optimistic in a way, but now you understand that you shouldn't be over-inflating some things more than others. It is like saying, "wow, that duct tape is really, really cool." But then you are missing all the other things in the room which are also cool - maybe a lot less cool than the duct tape, but they can still be viewed as being cool. So instead you'd say, "hey, that duct tape is cool," to keep it more in line with how cool the other things are. This doesn't mean that you are less optimistic towards life; it just means you are more aware of the whole and take it into consideration.
Similarly, an overly negative attitude can bring down how cool an object is. You can basically manufacture false emotions about things. While you might feel a temporary sensation of elation (if you're being optimistic) or a temporary down feeling (if you're being pessimistic), afterwards you are going to feel bad, because you basically insulted all the other feelings in your mind as being weak compared to it. Either that or you feel bad because you inserted an emotion into your mind that was too hard to deal with because it was so strong, and that strong emotion lingers in your mind and takes up room that it shouldn't, in addition to throwing your system off balance.
That is what an overly optimistic attitude does: it takes all the things in your mind that you might verbally over-inflate, and inflates them. That creates a tension in your brain, because then most of the ordinary things, which you should also be enjoying, seem dull. The reverse is true with an overly negative attitude, which is also bad.
How This Chapter shows how Intelligence is intertwined with Emotion:
• Your attitude is determined by your thoughts, and your thoughts are going to be determined by your intellect because your intellect is who you are, and you decide what it is that you are going to think. Your attitude is going to lead you to have different emotions, and these emotions are then also going to change how it is you understand the world emotionally, or your emotional intelligence.
Extremely deep feelings and emotions, like sadness or anger, usually only last a few seconds. However, those deep feelings often trigger lesser feelings of sadness and anger for the period afterwards. This intense, brief period of emotion can trigger a long array of smaller, similar emotions afterwards. Say the deep emotion was you being sad; the emotions that follow are going to be lesser sad emotions. These emotions aren't just by themselves, but are often accompanied by thoughts, behaviors, or environmental stimuli.
If you have a brief period of being extremely happy it is more likely to be followed by extremely optimistic thinking, like thinking, I am great, I am amazing, and wow I really did a good job. A brief period of extreme sadness is likely to be followed by pessimistic thinking because that is how your brain is wired. Your brain is programmed to associate sad with failure, and success (or happy) with optimism.
Why do intense emotions only last a few seconds? They do because emotions work in accordance with thoughts. Thoughts only last a few seconds, and therefore it is logical that the most intense emotions you experience are going to be periods of intense thought and intense emotion at the same time. These periods are so intense that they are probably capable of being noticed by the person experiencing them.
Such an intense emotional experience is going to leave a mark, however. That is why those brief periods of intense emotion are going to be followed by lesser, similar emotions. Say if you were extremely happy for a few seconds, then you’d be slightly happy for a while afterwards.
Why does the brief period only last a few seconds? Can’t it be longer? If life were great, I guess the positive intense emotional experiences would last longer, and the short negative emotional experiences not even exist. But the attention span of the average human/animal is actually very short, and they can only handle so much intense emotion in a certain period of time.
That leads to another phenomenon called overload. A person or animal can only experience so many intense periods of emotion in a certain amount of time. Say you made someone laugh really hard and then told an equally funny joke right after; that person wouldn't laugh as hard, because the laugh brain circuitry is already exhausted. It is like being jaded, only in the short term. This theory is easy to test: just pinch yourself, then pinch yourself again, and you'll realize that it hurts a lot more the first time. That is because pain is an emotional experience as well, and that first pinch is just like the brief periods of intense emotion mentioned before. Furthermore, the pinch is followed by lesser amounts of pain. When all that residual pain is gone you can pinch yourself again and it will hurt just as much as the first time.
In other words, the brief emotion was so intense that it leaves an aftereffect of lesser amounts of that same emotion. I could also just swap the word emotion with thought: if you think something strongly, then similar thoughts are likely to follow, only less intense. The intensity of the emotion/thought goes downhill after the main event solely because your mind is exhausted by the intensity of the experience. Humans and animals simply don't have the capacity for a more intense experience than an intense emotional or intellectual experience.
People just don’t have very, very, very intense emotional or intellectual experiences. The mind just can’t handle it. People can have very, very, very intense physical experiences, however. That is only because evolutionarily humans and animals evolved going through very intense physical experiences, but there just isn’t any need or purpose to go through intense intellectual/emotional experiences. It would even be boring after the first few seconds. That’s because most emotion and intellect is originally from sensory stimulation, which is found in the real world and not in your head.
There are many examples of the intensity of intellectual and emotional experiences dying off. It is simply because something repeated over and over in your head becomes less and less interesting as its newness dies off. You could take any idea and repeat it to yourself over and over and you’ll notice how doing that becomes less and less interesting.
In fact, sometimes it is better not to initiate thinking about something that would lead you to continue to repeat it (or similar ideas or emotions), because it is unhealthy to repeat things (or experience emotions that last too long): the intensity of the experience dies off and you are stuck in a pattern of thinking about something, or feeling something, that you don't want to be thinking or feeling because it isn't providing enough stimulation. But you are still stuck feeling/thinking it because, for whatever reason, your mind doesn't let go of it easily.
It is healthier not to be so interested in the thing in the first place, so your mind doesn't over-inflate it and you don't wind up going through a period of over-excitement, which you don't really enjoy, followed by a period of under-excitement, which you don't really enjoy either. It is something like an addiction to emotion that leads to this behavior. Or an overly optimistic attitude towards life - someone overly aggressively approaching life, trying to grab onto whatever positive emotions or thoughts they can. Or someone overly upset about something who, just being persistent, doesn't realize that it becomes less and less interesting to be upset about that thing, but continues to persist in thinking about it. They just need to move on.
In fact, you could view this two different ways. One is to not experience the more intense thoughts/emotions and try to spread them out over time. The other way to view it is that the sharp emotional spike is a good thing. It is probably only a good thing if you like hurting yourself, however. It is a bad thing because it is so out of character with your everyday emotions/thoughts, which are much less intense. Such a drastic change from the ordinary would cause a violent mood swing. Your mind is going to be upset that things around it are changing so fast, and it would lead you to continuously try and figure out what is going on (consciously or unconsciously). Your mind has in it an automatic thing which tries to figure out what is happening to it, and that device is going to short-circuit if you put in short, brief periods of intensity. It is like the brief period of intensity jolts your entire system. Like a hot wire.
If you are going to go for the brief period of intensity, then that is a way of looking at life; it is a philosophy that you need to grab onto anything positive that comes your way. If you are looking for the brief period of negative intensity, then that philosophy would be looking to grab onto really anything, not just anything positive, that comes your way. Someone with those attitudes would think something like, "ok, there is a positive experience, let's do it, I mean let's really go and do it, that would be really, really fun". They are so upset about life that when they see a positive thing, they cling onto it desperately. What they don't realize is that clinging onto something positive (or negative), or any clinging, causes your mind to stop liking it due to repetition and overload.
How This Chapter shows how Intelligence is intertwined with Emotion:
• When you have a strong emotion it doesn't just disappear; it disappears gradually. This shows how your emotions are going to determine your thoughts and therefore your intellect. It shows that emotions cannot be completely controlled and therefore are going to change your thoughts and therefore possibly the reliability of your intelligence.
Things that are easier to picture are easier to understand. Take the difference between understanding "we are going to play with the Frisbee" and understanding "if you throw the Frisbee twice as fast, it will arrive at its destination in half the time." It is clearly easier to understand what playing with the Frisbee is than it is to calculate how soon it will get to the other person. That is because the emotional event of playing with the Frisbee is large and distinct, and involves many things.
One thing was an emotional event; the other thing was a precise calculation. You could also view that backwards: the calculation is actually an emotional event, and the emotional event is actually a calculation. The emotional event of playing Frisbee is in fact a calculation; you are calculating everything that is involved in playing Frisbee. When someone says, "let's play Frisbee," you imagine and picture in your head everything that playing Frisbee involves.
Thus for anything that is said you bring up a picture of it in your head. Even if it is a sound or a smell, you always try to picture what is causing it. That is because the vision enhances the experience and makes it more enjoyable to think about and therefore it is also going to be easier to remember. It is like vision is tied in with everything, and that if something can’t be visualized, it simply doesn’t exist.
Empty space is the absence of vision. But when you think hard about just an empty space, you'd like to imagine something there, because you know that you would enjoy looking at that space more that way - it just isn't right for something to be empty like that. Even blind people visualize things, because they can feel in three dimensions with their bodies and hands.
That is also why harder mathematical problems are harder to do: they are harder to visualize. You have to memorize what 12 times 12 equals, but you can easily visualize what 1 times 2 is. Just one group of 2 - that equals 2, and you can picture that object in your head easily - but when you picture adding up 12 groups of 12, the image gets too large.
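As a rough illustration of the difference in what has to be visualized (the arithmetic below is just an example added here, not something from the original text): $1 \times 2 = 2$ is one group of two objects, which is easy to hold as a single mental picture, while $12 \times 12 = \underbrace{12 + 12 + \cdots + 12}_{12 \text{ groups}} = 144$ asks you to picture twelve groups of twelve objects, which is too many to hold in one image, so most people fall back on the memorized fact instead.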
Even if you think about a smell that is an invisible gas, you are going to picture something in your head, like a gas outlet or a gas tank, or the air being filled with an invisible substance. Vision is in all of our thoughts and emotions; the other senses aren't. Only some things smell, only some objects make noise, but everything can be seen. Everything exists somewhere physically, that is, and if it exists somewhere physically, then even if it is invisible you are going to be trying to imagine the space it is in.
In that manner blind people can see. They have an image of the world similar to what we do (even if they have never seen), solely from feeling objects and imagining where everything is. If someone asked you what the properties of an invisible gas were, you'd be thinking about the empty space the gas was in. How is it that people can visualize empty space? If there isn't anything there, then there isn't anything to picture - just empty space. So when most people visualize empty space they probably think of something like an empty room, or the corner of an empty room, and just not focus on the walls, trying to look into the empty space with an unfocused look to their eye.
It also seems that the easier it is to picture something, the easier it is to understand and remember. That is because things that have a stronger visual presence cause more emotion to be invoked in a person, and have a larger presence in that person's mind, and therefore are easier to remember. So the easier the vision is to comprehend, the easier it is also going to be to remember.
Also, the more emotional the event, the easier it is to remember (and all events and such things in life are visual as well). That is why dogs remember the words they care the most about, like walk, Frisbee, food, and their name. It isn't just easier to remember these larger things, it is easier to understand them. The smaller and more complicated it gets, the harder it is to understand. So easier physics problems would be something like ball A hitting ball B, but harder ones would involve something like friction, which you can't see as well. For example, what is easier to understand: what is the force of friction on the ball, or what is the force of my hand on the ball? Mathematically they would seem to take just as much physical work to write down the solution, but emotionally it takes more work to do the friction part of the problem (because it is harder to visualize). That means, however, that it is going to be harder for you to do the mathematical problem, or the friction part of the mathematical problem.
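To make that comparison concrete (the formulas below are standard introductory physics added here for illustration, not equations from the original text): the push of a hand on the ball can be written as $F = ma$, and the kinetic friction on the ball as $f = \mu_k N$, where $\mu_k$ is the coefficient of friction and $N$ is the normal force. Both take about one line of the same length to write down, but $F = ma$ corresponds to something you can easily picture (a hand pushing an object and the object speeding up), while $\mu_k N$ describes microscopic surfaces dragging against each other, which is much harder to visualize.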
The easier something is to visualize, the less strain processing it is going to put on your mind. Things that are easier to picture are easier to understand as well.
There are also degrees to which you visualize something. Say you are doing a math problem that involves distances. You can focus on those distances when you think about them to varying degrees. That is, when you think of the word distance you have unconscious thoughts about something like, “oh was that a very long trip?” Or you think more or less clearly about how straight the line of the distance is because you are thinking about trips now. Or thinking about the force of friction on an object, you have to try and visualize the tiny particles rubbing against each other. There are degrees of effort you can put into thinking about each visualization. Fields like engineering and physics require a lot of visual intelligence. People who can focus more and visualize things better would probably do better in those fields. Since vision relates to everything, better visual ability could help in countless situations to varying degrees.
Is emotional intelligence visual? How does the statement, "boys are aggressive, so they would be more likely to buy a book about aggression to encourage their own aggressiveness than if they weren't aggressive," relate to visual intelligence? You have to be able to imagine boys being aggressive, and then you have to think about the response (which is visual) of boys when they are encouraged to be aggressive. Emotional intelligence is then just observing slight visual changes in affect. However, to notice these slight changes in affect it is important to point certain visual things out, or lead someone to notice them better, through more intellectual observations - which are actually just visual observations themselves.
They are visual observations themselves because almost everything is a visual observation; the only things that aren't are observations related to the other senses, and those other senses might play a lesser role than vision, since vision is the sense people are most in tune with because it occurs all the time.
Emotional intelligence, however, might also relate to understanding physical senses, because you need to understand how people physically feel in order to understand their emotional state, as the physical contributes to emotion. You feel your own body all the time, and the sensations from your skin and muscles change all the time as well. Those feelings play an important part in how you feel, and serve as a baseline for emotions. That is, you can close your eyes and stop thinking, but you are still going to feel something. That thing you are feeling must then be mostly physical, since you aren't getting any other inputs (other than unconscious emotional ones, but you can do things like focusing on your heartbeat or breathing to eliminate more of that focus and focus more on your body).
How This Chapter shows how Intelligence is intertwined with Emotion:
• Emotional intelligence is sensory (or comes originally from sensory data), and your senses are directed by your thoughts and emotions (or you – and you are your intellect). So it becomes clear then that someone is their intellect, and their intellect then must comprise their emotions and their thoughts (since someone is only emotions and thoughts just behaving in a certain pattern).
There are two significant ways of viewing how to cure depression: one view is hedonism, which says that well-being consists of pleasure and happiness, and the other view is eudaimonism, which holds that well-being consists of realizing or fulfilling one's true nature. I argue that depression can be cured by simply eliminating negative emotions, which would seem to be more in line with the hedonic view. However, I believe that pleasure can also be achieved by finding one's true nature, and that can help eliminate depression as well. This is because thoughts are tied into the emotional experience: if someone thinks that what they are doing is meaningful, it is going to be more enjoyable for them. If someone believes they have found their true nature, then that is going to be a self-fulfilling wish and they will become happier solely from that realization.
Depression arises from wanting things that you can't have (also, don't make self-comparisons). You basically need to be satisfied with your current state or condition. Even thinking that although things are bad now there is hope for them to get better means you're satisfied with your current condition. If someone wants something that they can't have, they get depressed. Therefore that is the logical cause of depression.
That works on the small scale as well as the large. If you are unhappy with yourself in general, that is probably going to result in a larger depression than if you can't go to the store right away. If you want to go to the store right now but can't, it might make you sad, but that isn't as large an issue as being dissatisfied with something like your personal life or who you are in general.
What if there is something that will make you happy but you don't know about it? That is ok, because thankfully there are only a few general causes of depression. The human condition can be studied, and similar things that people want arise in each instance. Just go through everything that you might want but can't have and say in each instance, "it's ok that I don't have that, I don't need everything."
Wouldn't ignoring something that you want but can't have be imposing blocks on yourself - that if you want something, you should let your emotions run free and let the desire go? Well, if you do that, you're going to be upset. You basically somehow need to justify that your current condition is the best thing.
The best way to do this is to realize that each person is an individual and unique, and that a difference should be viewed as an asset. That if you are different in some way, that that way is positive, not negative. That other people appreciate you for who you are. You need to have confidence in who you are and the state your life is in.
Is having too much confidence in yourself arrogant? Yes, it is slightly arrogant, but it also means that you have what you want. If someone has what they want, they are going to be confident. That won't be bad, however, because people like people who are confident in themselves, because they are easier to be around. Lower self-confidence would cause someone to act differently: they would be unsure that each thing they are going to do is going to be ok, so they are going to be hesitant, causing them to act differently and more uncertainly. Therefore confidence is the most important thing for someone to have in order to combat depression.
Confidence also eliminates fear. When you aren't confident, you are afraid that life is failing you; you are afraid that there is something out there that you want but can't have. It is very important to not be afraid of anything. What if there is something you're afraid of but you don't know what it is? You need to go through everything that you might be afraid of, and eliminate your fear of each one. Self-comparisons and wanting things you can't have are also components of fear.
What if you're afraid of fighting a lion? Something like that would be a test of how fearful you are in general. Once you pull up the fear emotion by doing something fearful, if you are more afraid than you should be, then something is wrong. That was just a test. You shouldn't have a lot of fear in life for anything. You should have a lot of self-confidence. So you shouldn't be too afraid to do something like fight a lion; you should, however, realize that it is probably going to cause you to die.
How is it possible to not be afraid of death? Surely everyone is afraid to die. Well, it is perfectly possible. Think about the situation if you were not afraid of death. What would you be, and how would you be acting, if you weren't afraid to die? If you can imagine that, then you know that it is possible. If you can't imagine that, then go up step by step. Take something you are just a little afraid of, and imagine doing that without fear. Then keep going up. Eventually you won't be too afraid of anything, including death.
Fear isn’t necessary. Part of logic is the understanding of facts. So if you logically understand that you are going to die, that is ok. If you get a weird feeling when you think about death (aka fear) then you should realize that you don’t really need that feeling. The feeling of fear is almost completely unnecessary. You don’t need strong feelings of fear to remind yourself that you are going to die if you fight a lion, or to motivate you to run away. Maybe the emotion fear can’t be eliminated completely, but the more that is eliminated, the more self-confidence you are going to have.
In fact, logically, eliminating any negative emotion is going to help eliminate depression. That is the definition of negative, after all: bad and likely to cause sadness and therefore depression. Just go through the negative emotions of anger, fear, sadness, disgust and surprise. Try to go through anything that might cause those feelings and eliminate them. You can also do a test like we did with the death test for fear: if you have a larger amount of an emotion than you should for an extreme example (like death), then that indicates there is too much of that emotion in your system - that you are too afraid in general and need to reduce how much fear is in your system.
Logically only positive emotions are good, and all negative emotions should be eliminated. They basically don’t do any good. The only reason to have minor amounts of them in your system would be to cause a small, healthy amount of anxiety to keep you on edge, but the key word there is still small.
Wanting things that you can't have counts as a negative emotion, which is called dissatisfaction. A lack of self-confidence is also a negative emotion, because it is more likely to cause fear. If you have 100% confidence when fighting a lion, you aren't going to be afraid.
Basically psychology doesn’t need to be complicated. If psychology is complicated, then things like depressions can arise easily because there are complicated factors going on. Psychology, however, is actually simpler than it seems. Just imagine a person standing anywhere. This person is not doing anything; there are no inputs in and no outputs. If there are no inputs in and therefore no outputs, then there is no possibility for error (or a depression). Life doesn’t get much more complicated than just standing around and doing nothing, so where could a depression arise from?
It is logical then that something like a slight confidence boost (say, imagining having enough confidence to fight a lion) should raise someone out of a depression and into feeling normal, like how they would feel in the situation where they were just standing around, getting no inputs in and therefore producing no outputs (outputs like a depression).
In fact, if you imagine yourself just standing around doing nothing, not only are there no outputs, but you probably feel good about yourself too. There is a simple pleasure in just absorbing the surroundings. That means that humans are like cars: when idling they are set to go at a minimum speed, and they don't stop when you put them in drive but keep the engine running at a slow pace. Our general state is one of mild happiness. From where can a depression arise if our natural state is a happy one?
If the term consciousness is defined as everything a human experiences, then the solution to what consciousness is can be found by identifying the major contributors to human experience. Something like how humans experience color is not a major contributor to human experience, because vision is only a shallow source of emotion. Seeing things may bring up large emotions, and because we see things we can understand the world; however, the objects themselves are not pleasing because of their colors and shapes. This is clear because there isn't much to process deeply about how something looks - it is simply a configuration of shapes and colors, nothing more. That configuration may bring up large amounts of emotion, but it is not the configuration itself that is causing this emotion; it is what the configuration makes you think of that generates the emotion and therefore the deep thought about the object. Objects themselves are insignificant next to the large amount of emotional data that a human can process in its mind - what is one object compared to all of someone's experiences? In essence, all of someone's experiences are who that person is: their experiences and how they understand those experiences as a whole and as individual parts. That is what makes a human conscious, understanding everything that happens to them and the role these things have in their lives. Seeing one color isn't going to play a large role, unless that color stands for something else. So subjective experience is very complicated and is, in essence, consciousness; however, it needs to be clear what subjective experience is. Color is not that subjective because it is just that, a color. Chalmers classified consciousness into two problems, the 'hard problem' and the 'easy problem'. The easy problem consists of aspects of consciousness that can be researched by empirical methods, and the hard problem consists of subjective experience. The way he illustrated what subjective experience is, however, is inaccurate, and didn't show what the deep aspects of subjective experience are, only shallow ones such as "the quality of deep blue, the sensation of middle C":
"It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does. If any problem qualifies as the problem of consciousness, it is this one." (Chalmers, 1995, p. 201)
The reason that physical processing gives rise to a rich inner life is because our physical experiences are extremely complicated. For instance, a human itself is very complicated, and seeing this human in the physical world might bring up large amounts of emotion because there is a lot to think about from that one person. Everyone has a unique personality, and simply by seeing one person you could associate with that person all the happy memories you have had with people in general. You see something in the world, and then you associate that with a happy memory. It may be that you shouldn’t get happy from a simple experience, like playing a sport, but you do get happy from that because people are animals and they enjoy simple things that are physical. Life is a combination of all these simple physical activities that in the end result in large amounts of emotion. What makes humans conscious is their ability to experience deep emotions that are also deep intellectual experiences because emotion is very complicated.
Information processing can occur in computers and in life forms less advanced than humans (other animals), so what makes humans conscious is advanced information processing (or simply deep thought). What constitutes advanced information processing is primarily the ability to reflect and, from this reflection, experience deep emotions. Dogs seem to experience deep emotions - they are known to be emotionally sensitive - and from that observation comes the conclusion that it takes more than emotion to be conscious. Simply experiencing deep emotions doesn't make someone conscious. If you understand the place each experience you have has relative to your life as a whole, then you enrich the emotional and cognitive processing of each experience. A dog will also be able to reflect on each experience and its place in its life as a whole, but it doesn't seem like the dog really understands how important it is. The dog will not be able to describe with words different aspects of its experience, how it made the dog feel, or why that experience was important to it. However, not all of experience can be defined by your ability to describe it with words; there can be very subtle levels of emotional learning involved that, even if you can't describe them with words, can change who you are. When you process an experience, learning is going to be involved. You reflect on the experience on many levels: there is the actual experience, and then there is what you think about it in your mind. You think about it in many ways, and about how it relates to many aspects of your life. This reflection is a representation of the actual event in your mind. The nature of the experience becomes changed based on how it relates to your life. For example, you may say, "that event wasn't that serious because I have done that before and don't care", or you could say, "that experience was serious because I learned something new".
Those examples show how you can reflect on an experience on many levels. All those levels are processed unconsciously. If you think about them with words and describe them, that only makes them conscious and might change how you process them a little, but you would still process them and be changed by the experience even if you don't reflect on it with words. The point is that high-level thinking occurs with any simple experience. This is what makes humans conscious, because it shows how we understand a situation and its place in our life. That type of higher-level thinking also suggests that you learn from every situation in life. If you can process a situation on so many levels, and ask so many questions about it, then part of consciousness is learning. Sometimes people note how they are unconsciously pondering or worrying about something. Higher-order thinking and conscious processing of events are similar. You unconsciously process events, and they have a certain level of clarity and distinctiveness in your mind, or lack thereof. A micro-level example of this would be that you might only process a certain event fully, and gain a high-quality understanding of it, after a certain amount of time has passed. After certain periods of time the experience might be subject to different levels of thinking about it. So it might take time before you realize something specific about an experience. The time spent processing it without words is part of a higher-order network of thinking and associations relating to each other in your mind that helps make us reflective and conscious.
After pointing out the importance of unconscious learning and knowledge, the next observation to make is how much unconscious knowledge influences our conscious understanding without our consciously understanding what it was that led to that understanding. For instance, real events are going to make you learn something, but you aren’t necessarily going to know what exactly caused that learning, or even be aware that you learned something. Also, how is it so certain that people always learn from experiences? Just because you have more experiences, does that necessarily mean that you are learning? Is it possible to have such a high order processing system without using words, one that is independent, functions by itself, and learns progressively?
In the previous chapter I showed how consciousness is the experience of deep emotion, deep thought, and your ability to process ordinary events. That type of consciousness, however, is simply what makes humans aware of who they are, which is different from being aware and conscious of their environment. Being conscious of your environment is another type of consciousness altogether, and would involve things like working memory (storing and manipulating information in the short term). That is because when you are in an environment the data around you goes into your mind and then leaves shortly after, like observing cars passing by. Baddeley (2001) associated consciousness with the central executive component of working memory. In his model, a central executive system is aided by two subsystems: one concerned with acoustic and verbal information (the articulatory, or phonological, loop), and the other with visual and spatial information (the visuospatial sketchpad). “Articulatory rehearsal” repeats words in your head so you can remember them for longer than a few seconds.
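As a rough illustration of this architecture (not part of Baddeley's work, just a minimal toy sketch; the class names, method names, capacity, and decay numbers are all hypothetical), working memory can be imagined as a small store whose verbal items fade unless they are rehearsed:

```python
# Toy sketch of a Baddeley-style working memory: a central executive that
# holds only a few items at a time, and a phonological loop whose verbal
# items decay unless rehearsed. Names and numbers are illustrative only.

class PhonologicalLoop:
    def __init__(self, lifespan=2):
        self.items = {}           # word -> remaining "moments" before it fades
        self.lifespan = lifespan

    def store(self, word):
        self.items[word] = self.lifespan

    def rehearse(self, word):
        # Articulatory rehearsal: repeating the word resets its decay clock.
        if word in self.items:
            self.items[word] = self.lifespan

    def tick(self):
        # One moment passes; unrehearsed items decay and are forgotten.
        self.items = {w: t - 1 for w, t in self.items.items() if t - 1 > 0}


class CentralExecutive:
    def __init__(self, capacity=4):
        self.capacity = capacity              # only a few items at once
        self.loop = PhonologicalLoop()

    def attend_to(self, words):
        for word in words[: self.capacity]:   # anything beyond capacity is lost
            self.loop.store(word)


executive = CentralExecutive()
executive.attend_to(["phone", "keys", "milk", "bread", "stamps"])
executive.loop.tick()              # a moment passes
executive.loop.rehearse("phone")   # only "phone" is actively rehearsed
executive.loop.tick()              # another moment passes
print(executive.loop.items)        # {'phone': 1}; the unrehearsed words have faded
```

The sketch only captures the bare idea in the paragraph above: a limited-capacity executive, and a verbal subsystem in which repetition in your head is what keeps an item available for longer than a few seconds.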
Working memory isn't the only functional aspect of consciousness. There would also have to be a central processing unit of sorts to process the information and use it effectively. That unit would be more core to who you are because it would be the part making decisions, which is more conscious than memory, only parts of which come into consciousness for short periods of time. For instance, it wouldn't be possible to leave your body and take over some mechanical machine, or to control technology or mechanical devices with your mind, because you simply couldn't think about all the things needed to control the device at once; your central processing unit just can't do that. Say you tried to control a car with your mind: that wouldn't be possible, even though it is possible to do it with your body, because you can't feel what it is like to press the pedal to a certain degree with just your mind. The physical experience makes it real. Even if you just understood what the machinery should do, it wouldn't be possible, because you'd have to understand exactly what it should do, which you can't really think through since you only have a loose idea of, say, where the car should be and at what speed.
So you can only make a few decisions per minute, and when it comes to doing technical things like placing a car in the right location, those decisions are general and not specific. This explains why, if someone were to use magic, their mind would have to be clear in order to properly visualize what should happen. Though even that wouldn't be possible with a clear mind, because the image you have of the result you want isn't going to be perfect. An example of that would be moving things using "the force". However, there are also emotional components to supporting consciousness. I do not believe a zombie or anything like that could be conscious, because it wouldn't have the proper emotional support. Zombies are so lifeless that they wouldn't be sharp enough to support the conscious functions humans perform.
Overall consciousness, however, occurs when feeling and understanding meet; this is because consciousness is shown in the ability to reflect on your feelings. In other words, when you understand what it is that you are feeling, you are the most conscious. That is because during that time you are most aware of what is going on. This awareness could be described as an understanding of life, not just general understanding. That is, you could be doing a math problem, but that math problem isn’t going to increase how conscious you are, because doing it isn’t going to increase your understanding of how it is that you are feeling. It could be that doing the problem makes you more awake, and as a side effect of that you understand better how it is that you are feeling, but that is just a side effect. Understanding how you are feeling makes you more aware of yourself because it increases how much you are thinking about yourself (or your feelings).
Since thoughts and emotions lead to feelings, the more you understand them the more conscious you are going to be. So if you are doing a math problem, the more you understand that you are doing a math problem, and the place the math problem has in your life, the more conscious you are. That is, it isn’t doing the math problem that is making you more conscious; it is understanding the place of what it is you are doing and feeling (in this case a math problem) and where that fits into your life that determines how conscious you are. It is your inner reflection on how the math problem makes you feel as a whole that separates conscious humans from other animals. Consciousness basically means awareness. This means that the math problem actually does lead to increased consciousness, because as you do it you are becoming more aware of the place of that math problem in your entire life.
So consciousness basically means how aware someone is of themselves (it means other things as well). The more aware of yourself you are, the more conscious you are. In order to be aware of yourself you need to understand where everything in your life fits in. It is this awareness, or commonsense, that matters most to understanding who you are. In order to be aware of yourself, or have a concept of self, you have to have a concept of how you interact in the world as a whole, not just as individual parts.
Even though you might be sleeping, you are conscious because you still understand who you are. Then again, during dreams you don’t act in as rational a manner as when awake, since dreams tend not to make as much sense as real life. Therefore you wouldn’t be as conscious during a dream as you would be when awake. You are still conscious to some degree, however, since you are functioning in a somewhat reasonable manner. But you still aren’t perfectly aware of yourself or your place in the world, since in dreams you sometimes do things and see things that don’t make sense, and you apparently don’t notice them. This indicates further that consciousness is more a matter of commonsense and how well you know yourself than of standard intellect, like the kind present when doing a math problem. Your ability to reflect on yourself might not be related to normal IQ; it is more likely related to emotional IQ.
In other words, commonsense can be measured just as standard intellect can be. But what leads to commonsense is emotional intelligence, not intelligence that is more related to memory or something built up over time, like skill. The more commonsense someone has, the more conscious they are, because they know what it is that they are doing. This is a different type of consciousness than the type that makes humans human; this is the practical type of consciousness that makes someone aware of their environment and their ability to function, versus a deeper human consciousness. In dreams people have very little commonsense: for example, in a dream you might try to do the same thing over and over again even though it is failing, and you just randomly appear in scenes or scenarios with no background knowledge of how you got there or where in the world you are. That suggests that during dreams you are solely emotional. So commonsense isn’t just emotional intelligence; it is a general awareness that results from understanding your emotions, thoughts, and feelings all at the same time (and their place in the world). A large assortment of knowledge alone isn’t going to increase your understanding of who you are. What will increase your understanding of who you are is understanding how your emotions, thoughts, and feelings fit into the general assortment of facts and information which makes up the world.
In review, commonsense and a general knowledge of where you are lead to consciousness. Both of those are clear facts separated out from a lot of haziness (the real world). Something like a bee might act like it understands its place in the world, but it doesn’t consciously understand it: if you put it in a glass cage it might just bat against the wall over and over trying to get out, not aware that it is never going to get anywhere. The bee has no commonsense or knowledge. Knowledge in that case would mean understanding that it is in a glass cage, and commonsense would mean understanding that it is never going to get out. So to have commonsense you do need knowledge, but you need to take knowledge and appropriately configure it in order to gain commonsense, or consciousness.
You need some knowledge and standard intellect (like memory) to attain commonsense (or consciousness). The more memory you have (a random assortment of facts and information), the more information you have to put together in an organized way. It could be that it is easier to put together small amounts of information, since there is less to process, leading to more commonsense than being confused by a lot of memory. However, if you have a lot of data (or memory) and are also capable of putting it together effectively (as you wouldn’t be doing in, say, a dream), then you would have more commonsense than if you had less data and put it together just as effectively, because overall you’d have more data that is properly processed. So commonsense (or consciousness) is your ability to organize the data in your head. This data is organized relative to yourself, therefore giving you a greater understanding of where you are relative to the data. Disorganized data doesn’t count at all. A greater memory might increase your commonsense, but only if you can put that extra data together effectively. The bee didn’t understand the data that it was in a glass cage, and it didn’t understand that it wasn’t getting anywhere by hitting against it over and over. If bees had some commonsense they would fly around a room trying to get out instead of trying to get out in the same place over and over. They just have no idea what they are doing. But that is probably because the bee doesn’t remember what it just did. It might remember to some extent, but that memory might not be clear. So it isn’t the bee’s fault that it has no commonsense, because it didn’t have a large enough memory to collect enough facts to potentially use commonsense. A person with no commonsense in that example would be someone constantly running into the door without using the handle. You know the person has a large enough memory to remember that they just did that and shouldn’t do it again, but they keep doing it over and over. That human is not conscious at all.
That human is showing no understanding of their actions. Understanding your actions leads to commonsense because it shows that you know your place in the world. That human apparently isn’t aware of their current place in the world, which is that they are never going to get out of the room with that strategy. So the more sense someone has, the more likely they are to understand their place in the world and what they are doing, and therefore the more conscious they are.
The better one understands the statement “I am happy”, the more that person understands how they feel now relative to their condition at previous times. That would lead to them understanding themselves better. The better someone understands themselves, the more aware of themselves they are, leading to increased consciousness. That is an example of how understanding feelings leads to increased consciousness. It is different, however, from what makes humans truly conscious. It is someone’s own deep understanding of who they are, how they are happy at that specific time relative to their life, and the meaning of that, which makes someone really aware.
So life is a bunch of data that needs to be sorted in some way in order for a sense of self to be identified. One way to sort the data would be to identify things similar to yourself. A data point in the center would be you, the points closest to that would be the points most similar to you, and the points further out would be more different. That type of sorting would lead to a long term understanding of sense of self. The other type of sorting, where the closest points are whatever is most relevant to you at the time, would be a temporary sense of self. Take the bee example: the bee doesn’t understand that hitting the wall over and over isn’t getting it anywhere, so the temporary data point it is missing, the one that would increase its self awareness, is that it isn’t getting anywhere by doing that.
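To make the two kinds of sorting concrete, here is a minimal sketch (my own illustration, not the author's; the data points and their similarity and relevance scores are entirely hypothetical) of ordering the same "data points" two ways, once by similarity to the self for the long term sense of self, and once by current relevance for the temporary one:

```python
# Hypothetical illustration: each "data point" carries a similarity-to-self
# score (how much it resembles you) and a relevance-now score (how much it
# matters to what you are doing at this moment). All values are made up.
data_points = [
    {"name": "childhood memory",            "similarity_to_self": 0.9, "relevance_now": 0.10},
    {"name": "stranger on the bus",         "similarity_to_self": 0.2, "relevance_now": 0.30},
    {"name": "locked door in front of you", "similarity_to_self": 0.4, "relevance_now": 0.95},
    {"name": "your favorite song",          "similarity_to_self": 0.8, "relevance_now": 0.20},
]

# Long term sense of self: the self sits at the center, and points are
# ordered by how similar they are to it.
long_term = sorted(data_points, key=lambda p: p["similarity_to_self"], reverse=True)

# Temporary sense of self: the same points ordered by what matters right now
# (the bee's missing point, "this isn't working", would rank high here).
temporary = sorted(data_points, key=lambda p: p["relevance_now"], reverse=True)

print([p["name"] for p in long_term])
print([p["name"] for p in temporary])
```

The point of the sketch is only that the same collection of facts supports two different orderings, one stable and self-centered, one that shifts with the current situation.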
The other type of sense of self is a more long term one. Things like what you like and dislike, and what emotions different things repeatedly cause in you, would help you identify “who you are”. So consciousness isn’t just awareness of your environment; it is an understanding of yourself and who you are relative to your environment. That means a deep psychological understanding of your emotions, thoughts, and feelings, an understanding of how you perform both in individual and general instances, and what your ability is to perform in those instances.
Putting together some data points doesn’t increase self consciousness as much as putting together data points that relate to yourself. It is when you relate data points to yourself that even more increased consciousness occurs, because you are relating yourself to more information, increasing your interaction with the world and therefore understanding yourself better relative to the world. So doing a math problem isn’t going to increase your understanding of yourself a lot, because those data points don’t really relate to you. It is going to increase your understanding of yourself a little, because you understand what it is that you are doing, but it doesn’t increase how much you are thinking about yourself, which would increase your awareness of yourself even more. If you are trying to leave a room (the bee example), however, linking your desire to leave the room with the fact that opening the door allows you to do that links a point about you and a point about the door together, strengthening your sense of self and how much you are thinking about yourself.
So basically any thought about oneself is going to increase one’s sense of self. You have a permanent understanding of who you are that doesn’t change, and that is your long term understanding of self, but when you think about yourself, or about yourself doing something (like trying to leave a room), your sense of self is temporarily increased because you are thinking about yourself more. So consciousness fluctuates greatly based on thought. It also increases greatly if you are having feelings or emotions about yourself. It increases when you are thinking, feeling, or being emotional about yourself, because during those times you are more aware of yourself.
Commonsense increases someone’s ability to put data points (facts) together, but the more those facts (and the resulting combinations of facts) relate to yourself, the more your consciousness is going to be increased. This leads to the conclusion that consciousness is just the awareness of the experience of oneself, and that experience includes one’s actions, thoughts, feelings, and emotions (both long term and short term). It could be rephrased that consciousness is awareness of someone’s life experience, both short term and long term. The more commonsense someone has, the more aware of their life they are going to be, because they are going to be able to organize their life and their actions in an efficient, clear manner (both short term and long term) by connecting facts to themselves (the more distant the fact, the less consciousness it leads to, because it is less related to yourself and causes you to think about yourself less). The more someone is thinking about themselves (or experiencing feelings and emotions about themselves), the more they are going to be aware of that life experience, because their life is going to be temporarily elevated in their mind.
It is impossible to have a perfect understanding of self, or perfect consciousness, because to do that you would have to be aware of the exact effect of each emotion, feeling, and thought you have. To do that you’d have to be aware of everything in your environment, and everything that you can remember, all at the same time. This means that your consciousness evolves based on your memory; that is, if your memory changes, who you are changes, because you can’t base yourself off the same things anymore. Who you are also changes based on your environment, and how aware you are of your environment.
You are going to be more aware of your environment if you are thinking more about your environment, or processing data about it (again, this type of consciousness is more a functional one versus a deeper one). Processing data about your immediate environment leads to a greater sense of self, because who you are is dependent on your immediate environment, since you automatically process what is going on in that environment. You get a lot of sensory stimulation from the environment you are in. That can be shown because when you think about your immediate environment your awareness of it increases much more than if you think about an environment you are not in. If you think about being in an environment you are not in, your sense of self is going to decrease more than it would if you weren’t thinking about anything, because your mind’s awareness is going to be divided between two places, so you’d have two senses of self. That links into the idea that processing data that is more relevant to yourself leads to greater consciousness: if the data is physically in your environment it is going to increase your self awareness, because that is where you are (so you’d be thinking more about yourself).
While thinking about yourself being in another environment leads to less consciousness than just thinking about nothing, thinking about another environment without yourself in it leads to even less self consciousness than either of the two. That is because you just aren’t thinking about yourself at all. If you are processing data in your environment it is like you are thinking about that environment, only less so, so processing data in your environment would increase your sense of self more than thinking about nothing in your environment, but less than thinking about your environment directly. By “your environment” I mean the area directly around you: the closer it is to you, the more related it is to you, so the more it is going to cause you to think about yourself. If you look at trees in the far distance you aren’t going to be as focused as if you were looking at someone right in front of you, because your attention is on something less related to yourself.
In summary, when you think about your environment, or about being in an environment, your sense of self changes. Listed from the greatest to the least positive amount of change: a) thinking about yourself being in your environment, b) processing ordinary data in your environment, c) just being in your environment, not thinking, d) thinking about yourself in another environment, and e) just thinking about another environment (because you are removing yourself from your own thoughts). This thinking about oneself leads to greater consciousness because that is what consciousness is: awareness of oneself, which is going to increase a lot when you think about yourself (or have feelings and emotions about yourself).
Those rules apply unless the environment contains data which is similar to yourself. Say there is a painting of yourself far away that you are looking at: it would cause you to think more about yourself than if you were just focusing on your immediate environment. So if the environment is just environmental, sensory stimulation, those rules apply; but if there is something in the environment that causes you to think deeply about something, then you are going to be either even more removed from yourself (if you are thinking deeply about something not related to yourself, like a math problem or a person who is different from you) or even more related to yourself (greater consciousness) if you are thinking deeply about something which is similar to yourself (say a person similar to yourself, or a personal experience of your own).
That shows that if you think about consciousness as a short term thing, your consciousness changes all the time, and drastically. For instance, one might have barely any consciousness at all if they are completely out of it (drunk, really unfocused, laughing really hard). During that time you simply have little or no short term consciousness. There are multiple time spans of awareness, however: one is of your life in the long term (many years), another is of your life in the short term (a few years), and another is of your life in its immediate, current phase (days or so), or any combination of time. People over 50 might have a consciousness for each ten year or so span of their life, and they would constantly remember all five. People are aware of themselves and their lives at different periods. The only thing that is very consistent is people’s understanding of who they are, how they interact in the world, and how their emotions, feelings, and thoughts respond in similar instances. Those are things which don’t change a lot based on the environment they are in, and that sense of self, or consciousness, is a more long term one. So long term consciousness is based on how well you understand the psychology of your emotions, feelings, and thoughts, and also how those three interact as a whole to produce your long term psychological state or condition.
So having a larger memory isn’t necessarily going to increase your consciousness a lot, because it isn’t going to lead to a greater understanding of yourself. What you remember of yourself changes your consciousness, but it doesn’t increase or decrease it a lot unless there is a dramatic difference in memory, like the difference in memory between a dog and a human. The exception would be if the greater your memory, the greater your emotional experience, and you needed to constantly remember all prior experiences in order to maintain the most advanced level of emotional experience you have. In that case a decrease in memory would decrease your emotional experience, and the more advanced one’s emotional experience, the more likely it is they are going to have a better understanding of themselves.
That leads to the idea that certain emotional experiences lead to a greater sense of self than other emotional experiences. If someone were in a war they would have the emotional experience of understanding how they respond in combat, and their sense of self would then forever (or as long as they can remember) be a more action-oriented one. So the deeper the emotional experience, the more it contributes to your self consciousness. The more individual the emotional experience, that is, the more related the experience is to yourself, the more the experience is going to increase your self consciousness. That means that there isn’t just self consciousness; people can be conscious about the world around them and other people, and there is an overlap between self consciousness and world consciousness.
That is, if you have an experience with another person, you then become more aware of that person as well as more aware of yourself. So you’d have more consciousness of that person, and more self consciousness. The same idea goes if you have an emotional experience with an object, or group of objects (in the case of a war it might be something like guns). Going to war might increase someone’s consciousness of weapons or danger. Consciousness therefore means awareness in general, not just self awareness. If you are aware of something, then you are conscious of it.
Most dictionary definitions of consciousness just list it as the things people are most aware of. But there are things to be aware of that aren’t major things, things which you aren’t “most” aware of. Awareness just happens to center around the self; that is a selfish view of the world. Someone could be most aware of wrongdoing, more aware of wrongdoing than they are of themselves; that is possible. If that were true for most people, then consciousness would be defined in terms of wrongdoing, not in terms of someone’s interest in, or awareness of, themselves.
The best definition of consciousness is therefore “everything that someone is aware of”. People are aware of things in both the short term and the long term. A fly is probably only aware of things in the short term, since it has almost no memory compared to a human. A human’s consciousness can change drastically, however (their consciousness, or what it is that they are aware of in total). Conscious just means “are you aware in general”, but consciousness means “what are you aware of exactly”.
The next question is, what are people usually most aware of? Most dictionaries define consciousness with things like awareness of one’s surroundings, one’s feelings, and one’s identity, the things that people are usually most aware of. Those definitions describe people’s long term sense of consciousness. Over the long run, most of the things you are going to be aware of are going to be related to yourself somehow; therefore most of consciousness is based on the self. However, you can think about things that aren’t related to yourself, and your thought changes drastically, so during periods of thought about things that aren’t related to oneself a person is almost completely not focused on themselves. It is impossible to be completely unfocused on oneself, because you are experiencing physical sensations from your body all the time (which are going to be about yourself), not just mental ones.
So someone can have consciousness about something; the question “what is consciousness” is like asking “what is awareness”. Awareness is when you focus on certain things and therefore think about them and/or have more feelings and emotions about them. In review, consciousness means “awareness”, “everything that someone is aware of”, “everything that someone is aware of currently”, or “everything that someone is aware of currently or during a certain period of time (say their life)”. So you could ask, “what was your consciousness over the last 5 years”. That would mean: over the last 5 years, what have you been aware of? The response could be “wrongdoing”, “myself”, or a large list of things. A more specific version of that would be to ask, “what are you aware of, and when are you aware of it”, or “over the last five years what were you aware of, and when were you aware of it”. If someone wants to know someone else’s lifetime consciousness they could ask, “what were you aware of throughout your life”. If someone wanted to know whether someone was conscious of something (or what their consciousness of it was) they could ask, “what is your awareness of that thing”, or “what is your consciousness of that” (for example, “what is your consciousness of war”). You could also ask, “what does it truly mean to be human”; that too could mean “what is consciousness”.
How This Chapter shows how Intelligence is intertwined with Emotion:
• Explaining the definition of consciousness shows how intelligence isn’t just random thoughts and emotions, but some parts of intelligence are directed thoughts and directed emotions, and that direction is what makes someone conscious.
References
Baddeley, A.D. (2001) ‘Is working memory still working?’ American Psychologist 56: 851–864.
Dreams are significant because they reveal how the unconscious mind functions. In order to control your emotions it helps to understand how wild and crazy the unconscious mind is. The unconscious mind acts on impulses, not rational thought. The experience of emotion is driven by the unconscious. When something happens you don’t think “I’m going to feel happy about that”; you simply are happy, for unconscious reasons. The unconscious mind works on impulses and learned responses. Since the “happy” response is learned, it can be changed in part, because your mind directs your emotions in how to feel. It is important to understand, however, that emotions need to be guided toward how they should feel, because they are illogical and function unconsciously, similar to how dreams function.
We need the escape of dreams from the logical, rational world in which we operate. There is a desire within humans to break everything down and tear everything apart. Why? Because breaking things is fun. No one wants to see everything continue as usual. Why? Because things continuing as usual represents nothing out of the ordinary. Things that are out of the ordinary are going to be more emotional, and more stimulating. That’s why humans intentionally engineer their dreams: to have something fun to escape into. Take this dream: “We’re in a hotel. We all have rooms, but we’re in Steve’s room. There are multiple beds that may be stacked. We are trying to make music. A boy starts playing guitar and it’s fantastic. Steve holds up my cell phone, it’s recording, he hands it to me. Steve asks me to play it back. There is a lot of music. One song my clarinet is so sharp. Steve says ‘if you can’t hear that…’ condescending. Steve leaves the room. We are competing for his attention, girls and boys. I am on a bed that is high. I know I’m the favorite and they’re asking me about it and I decide to leave. I slide off the bed, then reach up under the rail and grab a black candle (handmade) and a cigarette and something else.” It should be obvious that that is a fun event.
If you take all dreams and think about them, you will realize that they are fun; even nightmares are fun because they are emotional. It is fun for a person to have a deeply emotional experience because it is stimulating, and people will do anything for stimulation, even if that stimulation is a negative emotion. All dreams represent some sort of significant or large emotional event. The event doesn’t have to be real, it just has to provoke a large emotional reaction in the person. As long as this emotional reaction doesn’t incur damage, all emotional reactions are good. It is like the saying “what doesn’t kill you only makes you stronger”, only here it’s more like “what doesn’t hurt you only makes you stronger”. So if it’s emotion, and it doesn’t hurt you, then it makes you stronger and you even like it.
People enjoy all their dreams while they are sleeping, because during sleep they are solely emotional beings. As solely an emotional being you aren’t engaging the logical part of your brain. So even if you dream about something like the death of your parent, you are still going to enjoy the dream, because it is emotional and you’re not thinking about the consequences of that. That is why you dream: because dreaming is fun, even if it isn’t fun to think about when you wake up. If you were awake and thinking clearly you’d realize that you don’t want your parent to die, but during the dream you are solely an emotional being and just interested in the thrill of the death of a loved one.
That is, you are interested in the emotional intensity of the death of a loved one because in dreams you are solely emotional. You are not thinking of the logical consequences, and therefore in dreams people are just emotional. There might be a little logic, but the emotional experience tends to override it, resulting in dreams like the death of relatives. The reason you might "enjoy" the death of a loved one is that the death causes you to think more about that person, since you are emotionally involved in experiences such as deaths. While awake you are intellectually involved in experiences such as deaths, and this intellectual involvement would lead to a realization that they are bad; but in dreams it would lead to no realization, just feeling for the person who is dying, which you might enjoy (not the fact that they are dying).
Why again would the death of a loved one be thrilling? Because it would be a huge emotional experience, and your system is interested in the shock of that experience; that is why you are likely to dream about it. In fact, any nightmare is really a system shock that causes a healthy amount of anxiety. The person dreaming also “knows” that it is a dream when it is taking place. You know this because in dreams you don’t really worry about consequences, since dreams are just emotional to begin with. Logic means worrying and such; you can tell that if you had a dream of the death of a loved one, you wouldn’t worry about it in the dream, but you might worry about it while you are consciously awake. Let’s go back to playing music in the hotel: if you are playing music in a hotel room in a dream, you aren’t going to worry about whether there are other people near you whom you might wake up (and you can tell that dreams are like that). But you are certainly going to think about it in reality. That’s because in dreams the emotional content is emphasized, and the dreaming mind isn’t aware that the logical one is going to be upset that the dream doesn’t make any sense when it wakes up, or that the logical one is going to be upset you killed a relative for fun.
Just because something is emotional doesn’t mean you worry about it while you are awake. Dreams try to eliminate thinking: the less thinking, the more emotional the dream is going to be. So dreams might have a lot of sexual content in them as well. You dream about things you want to experience, but only things you want to experience in the dreaming state. The dreaming state is a state in which you don’t have control over your body, and you have a very childish control over your emotions. Your emotions run free in dreams; if you want it, it’s yours (in the dream). So dreams are a reflection of your worst desires and worst fears, because those two things are most emotional. However, in the dream you aren’t really afraid, because you aren’t thinking clearly. It’s like why people like scary movies: it is something scary that you aren’t directly involved in, so you can safely experience it. You aren’t directly involved with the dream because it is a dream, it is not reality, and your mind responds to that by making dreams that are entertaining to watch, not to experience. So it is very similar to watching a movie; you’re equally distanced from the event.
It would be more real to watch something like a murder in real life than to watch a murder taking place in a dream; in the dream situation the murder might even seem fun. That is also how people can like watching violence in cartoons like Tom and Jerry, where all the characters do is beat each other up; people even find it amusing. Watching something like that in real life of course wouldn’t be amusing (unless you’re sadistic). Dreams are just like cartoons: you’re not involved in them, they aren’t real, and if you are involved in the dream then it isn’t very physical, since you can’t feel your limbs. You can even feel this yourself: imagine a cartoon character in pain. Is that fun or sad? It is fun because it is just the right amount of stimulation (it might be sad intellectually, but emotionally, like how dreams are emotional, it is fun). It’s the right amount of stimulation because your mind recognizes it as not real; you recognize logically that it is just a cartoon, or just a movie, and you don’t feel as bad as you would if it were real. That’s why in dreams we need more to properly stimulate us, simply because the dream isn’t real. That’s why dreams need to be more emotional and entertaining. If you had that much entertainment in real life (if the dreams you had were actually real), you’d have way too much stimulation and you wouldn’t like it at all. Dreams just reflect the proper amount of stimulation you need to keep you stimulated. That’s probably why people dream at all, for the same reason people think all the time while they are awake: because boredom causes an incredible amount of anxiety. People simply need to think about something all of the time, even while they are asleep. But since it is a dream, they can think about things that aren’t realistic and don’t make sense, so they can have fun during those dreams. Doing something like moving some stuff around might be entertaining in real life because you are physically doing it, but in a dream it just wouldn’t suffice; you would need something spicy taking place, like death, sex, fear, desire, or strong emotion.
Dreams in general tend to be weird. This would suggest that whatever engine is engineering, or designing, the dreams is a weird and/or stupid one. Things in dreams often don’t make any sense in reality, but dreams are often incredibly sophisticated at the same time. This would suggest that dreams are emotional, not logical. Emotion is very complicated, but it often doesn’t make any logical sense. Dreams convey feelings very well; they amplify feelings, they don’t amplify logic.
For example, say you were thinking about a toothbrush that day, or had a lot of thoughts about brushing your teeth, or had some trouble with the dentist and it was bothering you. In your dream that night, you wouldn’t think about the events of the day, or logically think about how you could fix your tooth problem. In fact the logical thing would probably never occur in your dream; that would be out of character, since dreams are more emotional. You’d probably never dream thinking “ah, I should brush my teeth more thoroughly”. Instead you’d dream of a really big toothbrush or something immature, childish, and extremely emotional. Or maybe you’d get a large sensation of your teeth being brushed. See how one is more emotional than the other?
Dreams are so emotional that there is little room for anything logical; it’s as if all your brain power is being converted into its emotional essence. This is easy to prove: think of any dream you’ve ever had, or ever heard of, and whatever it was, it didn’t make complete sense. The fact that NO dream EVER makes complete sense must mean that the higher, logical part of your brain is shut off during sleep. That makes sense, since if you were actually thinking, you’d want to experience real emotions and move your body around to get that experience, not just think about them.
This might make dreams more sexual or Freudian, but more importantly dreams contain whatever is most strongly emotional to the person having the dream. Take this dream for example: “I was at a type of arena-ish thing but it had balconies like a theater would.” Notice first off that it doesn’t make sense; arenas don’t have balconies like a theater would. Clearly, if the person had been thinking clearly, she or he wouldn’t have been able to put theater balconies in an arena. Now, there sometimes are balconies in an arena, but this person must have been referring to balconies that were pretty, like they are in theaters, in strong contrast to the arena, say a stone arena with pretty pink wooden balconies. That description sounds like a typical dream because it doesn’t make sense, and due to the contrast and mix of the arena and the theater, it is very emotional.
The mix of the two things makes it more emotional because it is something which you wouldn’t find anywhere in reality. Things that stand out tend to be more emotional, and anything that doesn’t make sense, that doesn’t make ANY sense, is going to be emotional because it stands out from your everyday experience. Something like a giant gumball rolling over and over in your head doesn’t make any sense, and it’s emotional. But why is it emotional? It is because you never find giant gumballs (chewed, just standing around outside), so if you found one, you’d be in shock, and very emotional.
There are things that are emotional and can be found in real life, of course. Take this dream: “I was a warrior in a med-evil battle with Mel Gibson and we fought some kind of beasts with our golden swords lol Mel got his head chopped off and I awakened when I was being choked by a med-evil beast. ...” It would probably be more emotional for the dreamer to be doing something with Mel Gibson, since it’s not likely he’ll ever do something with Mel and therefore would find it rare when he did, so it’s an unrealistic, out of the ordinary, emotional experience. Furthermore they are using gold swords; how often are gold swords used? Gold is a more emotional color than steel as well. Color is emotional, so color, a dramatic color, or large color contrasts are often found in dreams to further amplify emotion.
Take this dream and see how emotional it is: emotional, not realistic, and amplified for dramatic content.
“I am the best student in a hard science class of some sort. Every day before class I hold study sessions. Everyone fails the first test but me. We are all milling about in the hall after class. The teacher and some other students express interest in the study sessions, but I say I don't really need them. They seem disappointed. Then I tell everyone "Hey, all those study sessions that I've been having... BY MYSELF... will still be there next week" inviting them. The professor asks anyone with a disease to hang around and see her in ten minutes, saying she has the shakes. She's very concerned with her health, which has been strange for some time. I think about staying, but I leave. I see Joe Horvath in the hall and hug him, but I see that he has a finger the looks like it was smashed and healed flattish and deformed. There are flecks of blue paint or nail polish or the nail is flecked blue. When I ask him about it he says he didn't even notice and doesn't know what happened, but it doesn't hurt.”
The dreamer thinks he is the best in the class, not just any class, but a hard science class. He is so much better than anyone else that he has “study sessions” by himself. Of course that doesn’t make any sense: the people were asking him about a study session, implying that a study session would involve more than one person, like they usually do. But in his dream he forgets logic, and all of a sudden he is the only person needed for a study session. In real life he wouldn’t have said that, because it just wouldn’t be a proper thing to say; he wouldn’t say something that silly in real life. To make the dream even more emotional, another out of the ordinary event is occurring: the teacher is feeling sick, and her health has been “strange for some time”, not bad for some time, but strange for some time. The word strange implies something really out of the ordinary going on, like an extraterrestrial disease or something weird, the weirdness and out-of-the-ordinariness being added for extra emotional content, of course. Does this mean that the dreamer is afraid of a strange disease? No, it just means he is trying to entertain himself in his sleep by adding extra dramatic content by using the word strange instead of bad (it’s extremely rare to use the word strange when describing that one is sick, so what I suggested about extraterrestrial implications makes more sense). When you say, “oh, I’ve been feeling strange lately”, you are implying that something really weird is going on with you (or in this case your health), which would bring a further rise in concern, or a further rise in emotional, dramatic content!
Take again the hotel dream quoted earlier, where the dreamer and the others are in Steve’s room trying to make music. That is also very out of the ordinary; in fact it would probably never actually happen in real life, because everyone in the hotel would hear the music. The dreamer obviously wasn’t thinking logically and clearly. If she or he had been, then the dream would have ended with the people next door complaining about the noise, or there would have been something in the dream about checking to see if the hall was clear, though even then someone might walk down it. The point is that it is very out of the ordinary, which, since it is rare, is probably more emotional, solely because it’s a new and exciting experience that you furthermore can’t have in real life; so it also has that “I want it since I can’t have it” emotional feel. This is the real kicker: you can sense that the dream wouldn’t have made any sense if they had actually checked to see if there were other people in the hall. It is only an ordinary, regular dream if it doesn’t make sense. And you can sense that that is true.
Let’s see how out of the ordinary this dream is. (All this so far proves that dreams are out of the ordinary, probably just to add emotional content because of the contrast with reality). “We are rehearsing. Instead of a lyrics sheet there is a flat piece of 3D art. It’s a series of concentric circles. One of the circles is made to look like a brick wall. That’s the verse I am supposed to sing. I get singled out and have to sing the verse alone. It’s about life going around and down forever. There’s an infinity symbol.”
For starters, there is no such thing as a flat piece of 3D art; 3D is 3D. But you can see how that would be fun for the dreamer to think about, entertaining to think about how it could be 3D yet not 3D at the same time. This emphasizes the emotional content, but it is low on the logical content. Why is the emotional content emphasized? Because dreams are for entertainment; you’re trying to have fun in your dream. So he or she mixes the lyrics sheet, 3D art, and flatness together. That’s a fun thing to do. Dreams in general are going to be more on the fun side, less on the logical, “ah, this makes sense” side. Take the line “one of the circles is made to look like a brick wall”. That just doesn’t make any sense. Exactly; that’s what is fun about it, trying to imagine something that doesn’t make any sense, trying to put together in reality things that just can’t be put together. It’s like you’re trying and trying to do something that just can’t be done. That’s behavior typical of an immature child that just won’t give up. It’s fun to try and break reality and put things together that don’t belong together. That way you create something new and different, something you’d want to dream about. People don’t want to think clearly in dreams; they want to relax, have fun, and do things that they never could in reality. See things they’ve never seen, and experience emotions that they aren’t going to be able to experience in other places.
People can concentrate in various ways, and one of these ways is embedded in how a person’s brain functions (their emotions, feelings, and thoughts all contribute to a certain “brain structure” which enables some people to concentrate more than others). All things which are harder to do and require a higher intelligence really require more concentration. Concentration is best understood when it is compared to a person’s emotional mind; that is, emotion and concentration are contrary to each other, because as emotional development and temporary emotion increase, concentration decreases. As adults age, their emotional development grows and how emotional they are increases as they learn to separate the things they enjoy from the things they don’t (this is a sign of good emotional development), but their intelligence decreases. This must mean that something (probably emotion and emotional development) replaces the decline in intelligence that occurs as adults age. Emotion replaces it because that is the natural thing to happen. As animals use less and less of their conscious mind, they become more and more unconscious. For an animal with as large a brain as a human’s, becoming more emotional means it can be very emotional indeed; the larger brain size increases emotional capacity. Since brain size doesn’t decrease with age, the emotional capacity becomes used more as intellect goes down. When people are less intelligent, they tend to be more emotional, because they have a more direct connection to their emotions (they don’t have to “go through” or “think through” their intellect).
A good example of how concentration can have a large impact on intelligence is seen in people who cannot read and comprehend complicated sentences, but are capable of hearing and comprehending those same sentences in real life (Durell, 1969). It may mean they just aren’t concentrating as hard when they read as when they are listening. Listening leads to them being more interested in what is being said, so they can focus on it more deeply. The sound and/or social factors “wake” them up and focus their attention naturally. That means that solely because they were motivated, their intelligence increased; that shows how emotion can influence intelligence.
Concentration is relative to emotion, which is unconscious thinking about something. Concentration is also another word for consciously or unconsciously thinking about something, usually when it is normally hard to think about that thing. That is, you need to concentrate more if you are being emotional or not focused in order to stay in focus, so concentration might then be better defined as thinking under pressure, or thinking in the absence of emotion. That is, someone very emotional would concentrate and that would be thinking under pressure, the pressure coming from the emotion, and someone non-emotional might just concentrate without having to battle wild emotions or distractions.
While concentration means thinking against the perils of disruptions and emotion, you can also concentrate when you’re not being disrupted. So any higher-level thinking can be viewed as concentration. This means that when you’re not concentrating, you’re doing simpler things, since those things wouldn’t involve higher-level intellect. People can’t think about several emotions at once, so emotional things are simpler than intellectual ones (so simple that you can’t easily think about them consciously; they are too simple). That is, as emotion increases, conscious thinking decreases, and therefore the number of things you recognize yourself as “doing” also decreases. This happens because people can only think of a few things at a time, and if one of the things you are thinking about is emotion (which you would do just by being emotional), then you wouldn’t be capable of thinking as much consciously (remember, emotion is unconscious thought), and this lower thought capacity would be reflected in a lower intelligence. That is, unconscious emotional processes can replace the higher level functioning used in intelligence; as your brain ages and physical factors in your mind decrease your intelligence, you might accommodate that change by putting into emotion the time and energy you’d otherwise spend remembering things and figuring things out. In the absence of thought you retreat into feelings, because they are all your mind can physically handle. As people age their minds physically change to accommodate emotion more than intellect, which decreases. It could be that you understand how your brain is changing, and your emotional mind understands that as well, so you develop emotionally to accommodate your changing mental wiring. That is, as you get dumber (in certain ways) you learn to relax more because you don’t have to think as much. You retreat to become more embedded in your feelings and more sensitive to them, because the intellect that was covering them up (partially blocking them) is gone. Younger adults might be wilder than older adults, but this does not make them more emotional, because emotional means being affected by your emotions; the younger adults might have a lot of emotion, but their intellect isn’t affected by it, and therefore they are less emotional.
That is, it could be that your emotional development happens to correspond with the physical changes in your brain. That is demonstrated by imagining an adult mind in a child’s brain (say around age 3): it simply wouldn’t work, because the mental wiring is so different. The child is simply too interested in the world, and this greater interest is mirrored by faster learning connections in the brain. That is fitting, because if you are interested in something, you want to learn about it. As you get older you want to learn less, and your ability to learn mirrors your desire to learn. This coincidence is likely a product of evolution. Learning uses higher level functioning because you need to draw conclusions from data for the first time, and it is going to be harder to come to conclusions the first time you learn something than when you implement that learning later on. Using what you learned requires much less brain functioning, because you aren’t getting used to new material which may require a different way of thinking about it (it would probably require a new way, since by definition you are learning).
Emotion is really any disturbance from concentration, which can be seen as higher-level intellect. So as emotion increases, your conscious concentration goes down, and therefore your conscious intellect goes down (that is, when emotion increases so much that your willpower cannot overcome it, say during any highly emotional time like crying). But what then is unconscious intellect? It seems that unconscious intellect would be things like emotional intelligence; that is, emotional intelligence would be processed unconsciously, since it is emotional. You can think about how “cool” something is, but you don’t have a conscious thought process about it; you have an unconscious emotional one, so it is emotional intelligence, and having more of that type of intellect might make you more emotional (because you are thinking and processing more things unconsciously, which means you are processing them with emotion). That means that emotional intellect is really just an understanding of things that make you feel, and therefore when you use this intellect you are having feelings large enough that you can usually identify that you are feeling something. In the example where you identify how “cool” something is, you probably are experiencing an emotion of enjoyment if the object is very cool. If the object is neutral (not cool or uncool), then you would still “feel” your emotions as your mind delves into the emotional part of your brain in order to figure out whether you like it or not. You can test that for yourself: just think of a neutral object and ask, “How cool is that?” You become slightly more emotional when you ask the question, because you have to think deeply in order to figure out the answer. If you ask “how cool is that” of something cool, then it makes you feel good because it is a cool object (this happens because it causes you to think deeply about how cool the object is, and thinking deeply means thinking more about how cool the object is, and since the object is cool you are going to enjoy thinking about it).
If you think about it emotion is really just things that distract you. Emotion and conscious concentration are completely contrary to each other; they are opposites. If something happens to you that is a disruption (like emotion) then you simply cannot concentrate as well, because you were disrupted. As in the cool example, when you think about how cool something is you start to have feelings about it, and this distracts you from other things that you might be thinking for that time period. That is, it feels like emotion “disrupts” you because it is unconscious, so it disrupts your consciousness because it causes you to feel which disrupts your conscious mind and you recognize your sense of self fundamentally as being a conscious being, not an unconscious one. In this way it is fitting that emotion would replace higher level intellect (as adults age), because it is clearly separated from it. That is, thinking about how cool the object is thought just like regular thinking is thought, you can feel that in your mind – this indicates that since emotion and thinking take up the same space they cannot exist concurrently. Thomas Aquinas asserted that emotions disturb thought and should be controlled. Baruch Spinoza broke with the view of emotions as bothersome intrusions and insisted that they be seen as natural and lawful phenomena.
Emotion feels like disruption and unconscious thought (that is, because it is not logical, it disrupts your sense of logic and the rational continuity of life). When I say "rational continuity of life" I mean that you need to be logical in order to function in a way that would continue your life. You need a basic understanding of who you are, where you are, and what you are doing (which having higher-order brain processes, as shown in a good learning ability, helps). That understanding is often absent in dreams, where you are mostly emotional and clearly don't know what you are doing, because if you did, you'd be aware that the dream you are in doesn't make sense (as most dreams make little sense). Emotion doesn't just disrupt people in that way (less logical continuity of life); it would also cause someone's mind to become more emotionally chaotic. In other words, emotion is unconscious because it cannot be understood. If emotion were understood, then it would be conscious and it wouldn't be emotion. That is why emotion disrupts consciousness and clear thinking: because it is by nature unclear and not understood. When something not understood, such as emotion, interacts with things that are understood (such as the contents of regular thinking and intellect), the clearer thinking becomes disrupted, because something that is unclear and not understood in nature is only going to add components that don't make sense, instead of adding logical information that does make sense. That means that when emotion is on, thinking is off. Thinking and emotion cannot exist in the same space, because thinking by definition is something you understand, and emotion is something you don't (you understand emotion to some degree; people can say, "I like that," which shows understanding of their emotions, but emotion is less understood than non-emotional thoughts such as math, which is much more exact). To deal with this, your mind must turn emotion off in order to think, and turn thinking off in order to feel; thus your brain separates periods of thinking from periods of emotion. The two components, intellect and emotion, never exist together; they are by nature separate (separate in terms of time and in terms of nature).
If you are disrupted, you think about what happened unconsciously, so emotions and disruptions are the same (that is because disruptions cause people to become more emotional since they get so upset that they got disrupted, which in turn causes them to think about the disruption unconsciously, which is why emotion is unconscious thought - or an unconscious control process of conscious thought that is the mechanism by which the disruption causes you to stop; but what drew your attention to the disruption in the first place, however, was something unconscious because it was so fast - this quick attention to the disruption is emotion, and that is why emotion is thinking unconsciously). That further shows how emotion is different from higher level, conscious intellect.
If you are more emotionally developed, does that mean that you think more unconsciously and therefore think less consciously? Emotion, or unconscious thinking, would replace your decreased intellect, and this is fitting because emotion also takes away from conscious thinking anyway, since you only have so much space in your mind (you can only think about so many things at once, and it is harder to think about more things than fewer). That is, it is fitting that emotion would replace intellect because you are still capable of thinking of the same number of things, so you'd need to replace the brain power used for intellect with something in order to maintain the same mental activity overall. That is, your brain still has the same power (which could be thought of as your number of neurons), but it is just used differently. Put another way, when you age the number of activities you do remains the same, so you still need to use just as much brain power. Viewed that way, humans can be compared closely with other animals; that is, most of life really consists of simple, animal-like actions. Someone could do something intellectual, but this isn't going to result in significantly more brain activity than non-human animals have. Just because non-human animals don't think in words doesn't mean that they don't feel emotions and feelings similar to humans'. If one animal likes another, it has a feeling about that. A human's ability to put that feeling into words doesn't necessarily add that much emotion or feeling. Most of the feelings people have come from external sensory stimulation, not internal stimulation (such as thinking), so most emotions humans have are going to be similar to those of other animals (dogs, cats, etc.). Therefore it becomes obvious that humans maintain a similar level of activity when they age as when they are younger. And a human's intellect can be seen as just a mental blocking of their emotions, especially when compared with other animals in the world. Most emotions come from real sensory stimulation, not just sensory stimulation that you think of in your head, say when reading a book. Doing the actions of the book in real life would generate more emotion than reading about them, for sure. So as people age they still get about the same stimulation, and this stimulation either needs to be felt or blocked out.
A good example of "blocking" emotional stimulation can be seen when certain behaviors of dogs are compared with those of humans. When a submissive (possibly younger) dog meets a more aggressive older dog (say, a meeting between an American bulldog and an ordinary dog), the younger dog can show its submission by nipping the dominant dog's snout. That is because the emotional interaction is so intense (due to the dominant dog's aggressiveness and potential to harm the younger dog, which it views as annoying) that the submissive dog would be viewed as ignoring the dominant dog if it didn't engage in a very friendly social interaction, such as nipping the mouth. The nipping relieves the enormous tension between the two dogs; it is a way of saying, "It is ok, we are friends." The need for such nipping comes from too much emotion between the two animals. If humans were in the dogs' skins, such an interaction wouldn't occur, because the emotional intensity wouldn't occur in the first place. The humans' intellect would block the emotional interaction; they simply wouldn't be aware of it because they aren't as aware of their emotions, whereas the dog is more impulsive and responds directly to its emotions. The human might be intellectually aware that one dog is dominant and that this might be a problem, but they ignore it. Ignoring it would cause anxiety for the human in the dog's body, and the human wouldn't know why. The human cannot give in to their emotions and accept that there is a problem and that it needs to be resolved.
This problem of dominance (the problem being that there is a dominant dog and a submissive dog; the submissive dog would be upset that there is a dog more dominant than it, the dominant dog would be preoccupied by how annoying the non-dominant dog is, because it is so inferior that it is annoying, and there is also a need to establish dominance) can be seen with other animals as well. If there are two roosters and too few hens, the roosters are going to fight. If a human were in the rooster's body (but had the rooster's emotions, such as a desire for the hens), then it would have to fight it out with the other rooster in order to relieve that anxiety of desire for dominance. The human is simply less in touch with its emotions than the rooster. That is, the rooster is capable of such desire for the hens that it is going to fight over the hens each time; humans, on the other hand, wouldn't "have" to have a fight over anything emotional, because they simply don't experience emotions as fully, having too much intellect. Even though the rooster's brain is much smaller than a human's, it is capable of much more emotion because of the lack of intellect. Emotional conflicts that aren't resolved then generate anxiety because they aren't resolved, so sometimes a lack of emotion leads to people being dumber instead of more intelligent. In fact, more emotion means that animals would spend more time dealing with emotional issues, thereby causing less anxiety. It doesn't appear that animals other than humans have the same level of anxiety or depression as a human. How often do you see a dog with depression or long-term anxiety? From those examples it is clear how intellect is a block of emotional stimulation, so if intellect (or memory, which is a part of intellect) is removed, the result would be that the animal (including humans) would become more emotional.
Instead of intellect blocking emotions, it could be that intellect is simply changing the emotions to make them go away. That is, as with the rooster example, a human might not be aware that there is a problem because he/she isn't as in touch with his/her emotions (the desire for the hens); or, with the dog example, he/she might not be consciously aware that one dog is different from it and that this causes a social issue, but unconsciously he/she would be aware. So the tension still exists, only unconsciously, and so the emotions related to the problem still exist. It is only that the human is blocking them out because of his/her conscious mind, which is capable of blocking the unconscious. He/she isn't aware of these unconscious emotions because he/she is thinking too much (and thinking is a conscious process, so humans are conscious because they think, but this leads to a blocking of emotion). That could be viewed as humans thinking in a way fundamental to their psychology and consciousness, so fundamental and important that it interferes with their emotions. That means that intellect is intricately tied in with emotions. If one thing is tied in with another, then as one portion shrinks, a person becomes proportionally more aware of the larger remaining portion (that is rather obvious). So as intellect decreases, the emotions that were always there from the large amounts of sensory stimulation and social factors become uncovered.
Just as emotion takes away from intellect, intellect also takes away from emotion. That is, if you are thinking about something you can’t be feeling as many things, because you can only think about so many things at the same time, and emotion is really just unconscious thought. If you have less conscious thinking then your memory is going to be less because you are thinking less about stuff. That is, emotion uses processes in the brain to think that relate to emotional things, like feelings, not intellectual, concrete things which you would be capable of remembering. Emotional things are complicated things which involve feelings and people have a very hard time thinking about them consciously (for this reason when people feel emotion it is almost all unconscious, that is, you do not associate emotion with a sense of self). Unconscious thinking isn’t as clear and defined as conscious thinking, so more unconscious thinking instead of conscious thinking would reflect less of an intellect (because it is less clear and defined, “cloudy”). What it might lead to is a greater emotional understanding, however. That is, it doesn’t help with concrete learning, like in school, since its nature is not concrete, but it might help with emotional learning, since its nature is emotional. That is, if you spend more time being emotional it might be that you have more insight into how it is that you are feeling, and have a more direct connection to your feelings.
The reason that less intellect would lead to greater emotion is that emotion is by definition feeling, and people don't "feel" their thoughts. That is, thought doesn't lead instantaneously to feelings. Thoughts can lead to feelings (that is, you can direct which feelings you are going to have by thinking about certain things), but the thoughts themselves are not feelings. The thoughts are instantaneous; the feelings take time and linger in your mind. That is why there is an almost endless source of feeling: you feel feelings, and this feeling is more profound than something you don't feel. It could almost be said that thoughts are just ideas, and feelings are real things. The ideas might generate feelings, but not directly. The reason that feelings are such a source of emotion is that they are similar to the direct feelings you get from touching, smelling, tasting, hearing and seeing things (the five senses). Stimulation of any of the five senses leads directly to feeling. It would seem that there would be an overabundance of such sensory stimulation if your intellect were taken away. That is why other animals' minds are smaller than humans': without the intellect, if they had such a large mind just to process sensory information, it would lead to an overload of sensory data. That is why most of the human mind is used for intellectual endeavors, and the feeling part of the brain is very small. In fact, how much people feel compared to how much they think is mirrored in the proportion of the size of the feeling part of the brain to the thinking part. That makes a lot of sense. People think much more than they feel. Animals other than humans tend to feel much more than they think. Just imagine you stopped thinking and just felt the world around you, as if you were a dog, and that when you encountered a situation where you needed to think you instead just responded to feelings directly. If you did that, then in the submissive/dominant dog example you would respond to the dominant dog (if you were the submissive dog) as the submissive dog does. You would feel "scared" when you encountered the dominant dog and feel that you would want to suck up; you'd do that by kindly nipping the dominant dog's jaw. Instead, people don't respond directly to their feelings; they think about things. When they see the dominant dog, they would think about the dog and not realize as well that they are scared. This would cause tension in the relationship between the dominant and submissive dog, because it would appear that the submissive dog isn't scared when it should be, and is therefore threatening the dominant dog's dominance. That would cause both dogs anxiety and probably lead to the dominant dog growling at the submissive dog and the submissive dog running away.
In review, intellect disrupts emotion just as much as emotion disrupts intellect. This is because too much feeling or emotion can disturb an intellect, since the intelligent mind is very powerful and can magnify the sensations and feelings it receives from the emotional/sensory part of the mind. Intellect also disrupts emotion because it blocks it out or minimizes it. It is capable of doing this because it is so much larger and more powerful than emotion. That is, emotion is weak, but it is capable of being large if allowed. It is like a river: emotion has a wide stream, but it is moving slowly and has a weak current. Intellect has just as wide a stream but is moving much faster. Thus when intellect meets emotion, as it does in the mind, more "water" comes in from the intellect. If the water from the intellect is reduced, however, there is plenty of water from the emotion to take its place. The lake the emotion's water comes from is almost infinitely large, because people can feel anything, anytime. The lake behind the intellect, however, is more limited, so when you have nothing to think about you resort to feelings. This may make some people feel stagnant (if they aren't thinking), because they otherwise wouldn't be moving around all the time. So for optimum enjoyment and health, people either move around all the time, or think all the time, or do both. Before modern civilization people were hunter-gatherers: they moved around all the time and probably thought less. In modern civilization it is more common for people to think all the time and move around a lot less. That is a significant change. People might have been more emotional and in touch with their feelings in pre-civilization times, when they were exposed to more sensory and physical stimulation. Physical stimulation is a feeling; you get direct feelings from physical stimulation just as you get direct feelings from external sensory stimulation.
That is, either you are interacting with the world or you are thinking, and if you are interacting with the world you are receiving direct sensory stimulation, which leads directly to feelings. Sometimes intellectual topics lead to feelings, but they rarely lead to deep feelings (things like extremely intense arguments might generate deep feelings, and no one can handle those arguments all the time). Intellect leads to fewer feelings than real sensory input because intellect only leads to thought. How many thoughts can you think of that are more intense than doing the actual thing in real life? I cannot think of any. Real feelings in the brain mostly come from sensory stimulation and emotion, or unconscious thought. If a male sees an attractive female he might feel things and therefore get emotional, but he doesn't have to think anything consciously to feel those things. So even though there are complicated thought processes (unconscious ones) going on about the female, it was still sensory stimulation that triggered the emotion. That is, the sensory stimulation led to no conscious thought that would be related to having a higher intellect. So that same person could feel all those things even if they had a lower intellect or a lesser conscious mind, because the thoughts generated from seeing the female in that instance were unconscious. You can only think a few conscious thoughts when the female is seen, because you can only think so fast consciously, but you can think much faster unconsciously, and if it occurs unconsciously it is going to lead to emotion, because that is what emotion is: unconscious thought. Emotion is unconscious thought because if it occurs unconsciously it is something you are going to "feel" instead of "think".
This emotional nature of emotion (separate from higher order thinking or learning ability) is best demonstrated during dreaming, where a person is entirely unconscious and therefore one can see how emotions (which are unconscious thoughts) function. Dreams are random, chaotic and rarely make sense – that is a reflection of the nature of emotion itself. During a dream you rarely know who you are and things occur which often reflect that you really don’t know where you are. There isn’t a strong sense of self in dreams because you can’t think clearly about yourself. “Thinking” is something which doesn’t really occur in dreams, because if you were thinking you’d realize that you were dreaming, and your mind would switch from its unconscious thinking which consists of making up an elaborate story for a dream to conscious thinking where you wouldn’t do that, or be capable of making up such a complex story and complex visual data that quickly. Emotion can really be defined then just as complicated confusion, such as exists in dreams, which are almost entirely emotional.
Dreams are so out of the ordinary in order to generate more feeling and emotion. That out-of-the-ordinary quality, however, also makes dreams less logical and makes them make less sense. This means that in order for something to be emotional, it needs to not make sense; if it made sense, then it would be conscious thought, not emotion. Emotion therefore could be defined simply as stuff that doesn't make sense that you think about, not just as unconscious thought. And "stuff that doesn't make sense" isn't going to be remembered, because it isn't stuff that you can think about consciously, because it doesn't make sense. Dreams still make sense to some degree, since there are events in them which are at least somewhat real. So while emotions make some sense, they still make less sense than conscious thought. That is, if you are feeling a lot then you are emotional, and if you are emotional then a lot of stuff is going on in your brain. It could be that emotional development causes people to focus more on things they enjoy as they get older and block out the things they don't like (this makes sense, as it would be good emotional development), and that therefore they get to be more emotional and experience emotions better. That is, maybe people can separate themselves from the things they don't enjoy and attach themselves to the things they do. Adults might even seem to be asking the question, "How does that relate to my emotions?" (Since they learn to separate out things they like from things they don't like better, they'd have to relate everything to their emotions more.) This might mean that adults are capable of being both more distant and more "close" than teens and younger adults because of their emotional development; they simply don't treat things as equal anymore and possibly as a result gain more feeling. The downside of getting older, on the other hand, is that the things you enjoyed before are now older and you potentially don't enjoy them as much because of that (they are less "fresh"). More unconscious thinking (emotion) probably also helps to maintain a more emotionally developed mind, as emotionally developed minds would need to think more about their emotions since they have more of them. This means that as people get older they would become more unconscious, but more intelligent emotionally.
Evidence for the idea that adults learn to separate out emotional events from ordinary ones and emphasize the emotional more comes from studies in autobiographical memory retrieval. In a study done by Dijkstra and Kaup (2005) younger and older adults were tested for autobiographical memory retrieval. Older adults were more likely to selectively retain memories with distinctive characteristics, such as being self-relevant and emotionally intense, particularly when remote memories were involved.
In another study, by Charles, Mather and Carstensen (2003), the forgettable nature of negative images for older adults was tested. Young, middle-aged and older adults were shown images on a computer screen and, after being given a distraction task, were asked first to recall as many as they could and then to identify previously shown images from a set of old and new ones. The relative number of negative images recalled, compared with positive and neutral images, decreased with each successively older age group. Since it is clear people don't want to remember negative images as much, the study shows how age and emotional development cause people to select what they like more. This would cause people to "relax" more. That is, as adults get older and their intellect decreases, this lack of intellect enables them to be more in touch with their emotions and more capable of selecting the more positive images.
Memory tests (Zacks, Radvansky, and Hasher, 1996) show that young adults perform better than older adults when told to remember and to forget data. The older adults remembered less than the younger adults when told to remember, and when told to forget data they remembered more than the younger adults.
The results show that younger adults have better control over their minds than older adults. A greater emotional makeup in the older adults is likely a consequence of this. Emotions would lead to less "mental willpower," and it is that willpower which enables younger adults to direct their thinking, forgetting when told to forget and remembering when told to remember.
A paper by Einstein and McDaniel (1990) investigated the ability of older versus younger people to remember to carry out some action at a future time (known as prospective memory, or PM). They suggested that different patterns might emerge between situations in which the PM target is triggered by some event (e.g., "when you meet John, please give him this message") and those that are time based (e.g., "remember to phone your friend in half an hour"). Their work showed age-related decrements in time-based but not event-based tasks (Einstein, McDaniel, Richardson, Guynn & Cunfer, 1995). In my view that would indicate that the event-based tasks were more emotional than the time-based ones. That is, older people are programmed to work off emotional events that occur in real life, not off something unemotional like time, which passes all the time and isn't associated with emotional events. Since they forgot more on the time-based tasks but not on the event-based ones, it suggests that older adults are cued into emotional events more than the younger adults are; otherwise there wouldn't be a discrepancy between the two. It is clear that the event-based task is more emotional than the non-event-based task because the non-event-based task doesn't occur along with an event. That is, the event is a trigger for the older adult to remember the task. Even if the older adult is more motivated to remember the task in the beginning, they still aren't going to remember it later on unless this motivation is "triggered" again. That is, it is something unconscious (motivation, emotion) which helps them to remember the event. The motivation can be triggered better by the event-based task because the motivation comes from the task itself, so they attribute a greater amount of emotion to the recipient(s) of the task. Events are simply more emotional than non-events.
You think of yourself as primarily conscious, therefore anything unconscious would take away from your consciousness because you can only think about so many things at the same time. If one of those things is unconscious that you are “thinking” about (and thinking about emotion is going to be difficult at best) then it would make you more confused because you would lose more of your conscious, clear, defined sense of self. That is, your sense of self is a clear and focused one (different from emotion, which is not clear). Your sense of self can’t be an emotional one, because emotion doesn’t really make any sense (already shown as in dreams) so you can’t really think about emotion consciously, because it defies conscious thinking or logic. So since your sense of self is what you think about consciously, you are not going to think of yourself as emotional, you are going to think of yourself as more logical than emotional and if you do call yourself emotional that just means emotional relative to other people. That shows that emotion is clearly different in nature from higher order logical processes. And that therefore as intellect goes down as people age as adults it is possible and easy for emotion to go up, because it is clearly separate from intellect. The idea you have of yourself is as a functional being, not an un-functional and chaotic emotional one (that is, if you were solely emotional, not logical, you wouldn’t be able to do anything, you’d just feel and not think – like a frog).
In review, as people age they learn to separate out what they like from what they don't like, and this ability causes them to gain more emotion; and emotion, being chaotic and unclear in nature, clearly works differently in the brain than intellect does. Emotions are chaotic; they permeate all your thoughts and have an effect on them, like a cloud. When someone is emotional it certainly seems like their entire mind is affected. Some emotions even have physical effects. More evidence that emotion doesn't use the same brain processes as memory and learning ability can be seen during very emotional times, like during sex or crying, when one's concentration is less. Concentration is needed to maintain intellect, and emotion is clearly different from concentration (as when you are very emotional during sex or crying you cannot concentrate). You can't memorize multiplication tables (which would require concentration) during sex or crying.
If an adult is intelligent at the same time that he/she is emotional then he/she is relatively less emotional because the intellect balances the emotion. So older adults would be considered to be more emotional because their intellect (or learning ability) is less (if older adults have more emotional intelligence then that wouldn’t make them less emotional because to use emotional intelligence you don’t “think” to figure out the answer but you feel. Emotional intelligence is therefore a sophisticated way of being emotional that animals other than humans might or might not have). That is, younger adults are wild and they are smart. They would still be considered to be less emotional though since a greater portion of their brain is intellect. Animals (other than humans) would be considered to be even more emotional than humans because they have almost no intellect. Emotional is acting instead of thinking, and all animals do is act, not think. Younger adults could then be viewed as acting and thinking at the same time with a higher proportion of intellect than older adults, if you don’t think that older adults have a greater emotional intelligence than younger, that is.
The statement "people and their intellect are based on emotions" is a complicated one. They are based off of their higher emotions and their lower emotions. There is really no such thing as "no emotion," because people are always thinking, consciously or unconsciously, and that is what emotion is. Sometimes it appears as if they have no emotion, but they are still thinking about things; they still have a memory and they are still using it, processing data and sensory inputs. Those things all cause thought and therefore emotion.
How then could someone be called non-emotional? It must be that they are feeling less, that is if they are concentrating deeply for a very long period of time then they might be a deep thinker that isn’t really wavering in their feelings, just simply thinking about things and not really doing anything interesting that would invoke a lot of emotion, or unconscious thought.
Many older adults complain about being too occupied, both emotionally and physically. That is better seen in very old people whose brains are decaying, for whom even tiny mental tasks can wear out their mind. It isn’t that their mind is being worn out; it is that they already lost most of their intellect but the pauses are filled with emotion. That is what animals are like, the experience you get from animals is an emotional one, not an intellectual one. Therefore animals spend more time being emotional. Emotional in that context means feeling, animals spend more time using unconscious thought and “feeling” the world around them. That is good evidence that as intellect, learning ability and memory decrease it is replaced with emotion. That is because emotion doesn’t need to increase, it simply needs the block of intellect to be removed. People were already thinking about enough things consciously and unconsciously. That is, someone’s unconscious mind is really being partly blocked at least as a younger adult, but when intellect is removed the unconscious becomes unveiled (like how animals are unconscious) and the person becomes more emotional as a result.
Evidence for the connection between higher amounts of emotion and a lower intellect can be found in studies of people with a depressed mood. In a meta-analysis by Van Vreeswijk and De Wilde (2004), the connection between overgenerality and depression was confirmed: the depressed patients were less specific in recalling their memories than the non-depressed.
Since being emotional is rated by how much proportionally larger the emotional part of your mind is than the intellectual part, older people do get more emotional since intelligence decreases over age. However they don’t necessarily get more emotion as they age, they simply get more of it relative to their intellect. The lowering of the intellect, however, would make them more in touch with their emotions and capable of greater emotional regulation (as evidenced by the study where successively older age groups remembered more and more of the positive images). They aren’t likely to get significantly more emotional, however because the amount of sensory stimulation they are receiving is going to be similar to what they received when they were younger. The only thing that would go down is internal stimulation or thinking which goes down from a lowering of intellect.
As adults age from 20 to 74, their IQ (as measured by the Wechsler Adult Intelligence Scale) declines steadily (Kaufman, Reynolds and McLean, 1989). Verbal IQ actually stays about the same; it is performance IQ that decreases. From the postulates in this paper, the conclusion would therefore be that verbal IQ is somehow related to emotions. Performance IQ is clearly not related to emotions because it tests mostly visual abilities. Verbal IQ isn't likely to go down because the things it tests have to do with emotion and emotional control of attention. You cannot use emotion to control how effectively you do visual tasks, however, because visualizing objects requires concentration, and there is less motivation to visualize than there is to just think. Thinking is easier than visualizing because people are used to thinking about anything, whereas they usually only visualize things they want to visualize, not things that are going to be tested on an IQ exam. That is, you can use emotion to control thought, but you cannot use emotion to control your basic intelligence as reflected in visual ability tests (performance IQ).
The "willpower" of adults won't decrease as adults age. Willpower can direct the mind for periods of under 20 seconds, and under 20 seconds is the time it takes to do most intellectual tasks, like a math problem. People can repeat the focus they put in every 20 seconds, "spiking" their mind every 20 seconds or so to maintain this intelligence. The things on the performance test don't require that much focus; either you know them or you don't. Note that three of the verbal subtests specifically mention attention or concentration (which relate to willpower, which relates to emotion, as already stated). The other parts of the verbal test measure things which are also going to relate to emotion, such as information acquired from culture (you are emotionally interested in your culture), the ability to deal with abstract social conventions, rules and expressions (you are emotionally interested in social events), and verbal reasoning (which tests things that occur in everyday life, which you are emotionally attached to). The performance test, on the other hand, doesn't test things that are likely to go down because of increased emotion. The performance test tests things that are more intellect-related than emotion-related; that is, visual tasks require a more intellectual, flexible mind to move objects around in your head, while the verbal subtests just require some motivation to perform (only one component of the verbal tests measures working memory, which isn't that emotional and wouldn't be subject to changes in concentration, and one component wouldn't have a significant impact on the result).
Wechsler Adult Intelligence Scale
Verbal Subtests
• Information: Degree of general information acquired from culture (e.g., Who is the premier of Victoria?)
• Comprehension: Ability to deal with abstract social conventions, rules and expressions (e.g., What does "kill two birds with one stone" mean metaphorically?)
• Arithmetic: Concentration while manipulating mental mathematical problems (e.g., How many 45c stamps can you buy for a dollar?)
• Similarities: Abstract verbal reasoning (e.g., In what way are an apple and a pear alike?)
• Vocabulary: The degree to which one has learned, been able to comprehend and verbally express vocabulary (e.g., What is a guitar?)
• Digit Span: Attention/concentration (e.g., Digits forward: 123; digits backward: 321)
• Letter-Number Sequencing: Attention and working memory (e.g., Given Q1B3J2, place the numbers in numerical order and then the letters in alphabetical order)
Performance Subtests
• Picture Completion: Ability to quickly perceive visual details
• Digit Symbol - Coding: Visual-motor coordination, motor and mental speed
• Block Design: Spatial perception, visual abstract processing and problem solving
• Matrix Reasoning: Nonverbal abstract problem solving, inductive reasoning, spatial reasoning
• Picture Arrangement: Logical/sequential reasoning, social insight
• Symbol Search: Visual perception, speed
• Object Assembly: Visual analysis, synthesis, and construction
Optional post-tests include Digit Symbol - Incidental Learning and Digit Symbol - Free Recall.
There is more evidence that emotion plays a role in intelligence. In a study done by Bartolic et al. (1999) the influence of negative and positive emotion on verbal working memory was tested. Their data showed significantly improved verbal working memory performance for positive emotions and a significant deterioration in verbal working memory during negative emotion. That shows how emotion can manipulate intelligence in the short term, as working memory is a short term ability. Therefore, however, long term intellect (like the rest of the verbal IQ test other than working memory) might be manipulated or under the control of long term emotions. It seems like your ability to learn all the rest of the verbal IQ tests would go up during the period of increased emotion as in this study, only it is hard to test for that. But that ability over the long run would be reflected in no decline in verbal IQ scores, and there isn’t. That is, it isn’t likely that just verbal working memory would increase due to increased emotion; that was just the only thing that they tested for. The subject probably became motivated overall and this motivation and good mood gave him/her greater mental powers, not just a better verbal working memory.
As adults age, their explicit memory goes down, but their implicit memory stays about the same (Howard, 1988). Howard describes implicit memory as the ability to successfully complete memory tasks that do not require conscious recollection. Since emotion is unconscious, that lack of decline provides further evidence that emotional processes don't decrease with age, but more intellectual ones do. That itself provides evidence that the emotional part of the brain is separated from the intellectual part. The emotional part of the brain and the intellectual part still interact, however.
Emotion can enhance or detract from intellect, and intellect can enhance or detract from emotions. In the long run intellect does not disrupt emotion, but in the short term intellect and emotions intermingle and disrupt each other. It was shown how emotions are separate from intellect, and how therefore concentration (which can be defined as thinking under the pressure of emotion, since to give undivided attention you couldn't be disturbed by emotional factors) is an important part of intelligence (such as memory). When people's intellect is removed they become more emotional, as this is what is left. The source of emotion (sensory stimulation) is so large that it can never be ignored. Intellect, however, can be ignored, and emotion would rise up in its place. In the case of adults aging, this "ignoring" of intellect happens as the mind physically gets older and some of the intellect is removed. This reveals the idea that humans have the ability to hold off emotion and pursue intellectual endeavors, or to indulge and bask in emotion if they want to, and they can switch between the two, sometimes as fast as a split second, or stay with one or the other for years.
BIBLIOGRAPHY
Bartolic, E. I., Basso, M. R., Schefft, B. K., Glauer, T., & Titanic-Schefft, M. (1999). Effects of experimentally-induced emotional states on frontal lobe cognitive task performance. Neuropsychologia, 37, 677-683.
Charles, S. T., Mather, M., & Carstensen, L. L. (2003). Aging and emotional memory: The forgettable nature of negative images for older adults. Journal of Experimental Psychology: General, 132(2), 310-324.
Dijkstra, K., & Kaup, B. (2005). Mechanisms of autobiographical memory retrieval in younger and older adults. Memory & Cognition, 33(5), 811-820.
Durrell, D. D. (1969). Listening comprehension versus reading comprehension. Journal of Reading, 12(6), 455-460.
Howard, D. V. (1988). Implicit and explicit assessment of cognitive aging. In M. L. Howe & C. J. Brainerd (Eds.), Cognitive Development in Adulthood. New York: Springer-Verlag.
Kaufman, A. S., Reynolds, C. R., & McLean, J. E. (1989). Age and WAIS-R intelligence in a national sample of adults in the 20-74 years age range: A cross-sectional analysis with education level controlled. Intelligence, 13, 235-254.
Zacks, R. T., Radvansky, G., & Hasher, L. (1996). Studies of directed forgetting in older adults. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 146-148 (Experiment 1b).
Van Vreeswijk, M. E., & De Wilde, E. J. (2004). Autobiographical memory specificity, psychopathology, depressed mood and the use of the Autobiographical Memory Test: A meta-analysis. Behavior Research and Therapy, 42(2), 731-743. | textbooks/socialsci/Psychology/Cognitive_Psychology/A_Cognitive_Perspective_on_Emotion_(Pettinelli)/1.22%3A_Concentration_and_Emotions_are_Important_Factors_in_Intelligence.txt |
Humans have emotions. Feelings are tangible, while emotions are, or could be considered to be, deep and complicated. The idea that feelings are tangible basically means that they could be more sensory, or less intellectual and deep. Emotions are more powerful than feelings; however, they can also trigger the human intellect.
What would it mean for emotion to be powerful? Would that involve physical feelings? Physical stimulation can also be deep or shallow, emotional or intellectual. If the feeling (physical, emotional) is intellectual then it could be emotional or it could also be tied in with sensory feelings (say when you touch something).
What would it mean for something to be intellectual? Would that mean that it is different from the person's emotions? Emotions can be tied in with feelings; however, that means that the emotion could be shallow and thought-provoking, or deep, or a strong emotion that is also deep.
It is important to distinguish deep feelings from sensory feelings. Deep feelings are probably intellectual - they are tied in with complicated cognitions which include memory processes, executive functioning (control of thoughts, ideas and images) and understanding concepts.
Concepts can also be emotional since they are intellectual or intelligent. A concept is like an idea only it is general or generic. An idea is something that occurs to someone while a concept could be the definition of an idea or the idea that the person refers to or already understood. Those deeper concepts can trigger emotions that are related to the idea or concept. A single concept could be powerful or significant to the person.
A human's emotions could influence their thoughts, and their physical feelings can also influence either their thoughts or their emotions (or both at the same time). Thoughts can be complicated; they are a mix of goals and motivations with the person's environment and experience. Furthermore, a motivation could involve complicated emotions, and the person's present situation could be causing complicated emotions.
The difference between feelings and thoughts is both simple and complex. A thought could be complex because it could involve the person's motivations mixed in with the objects in their environment and their experience. They could have a thought for each object, or for each objective reality in their situation.
The difference between their feelings and thoughts, then, is that their feelings cause feeling, or stimulation, and could be complex and intellectual, while their thoughts could be unconscious or complex.
Humans have feelings. Humans can also think about their feelings. Other factors in reality help the thinking process: what is in the person's environment and what they are paying attention to all assist the person's thinking process.
But what exactly is a thought process? Is it a sentence? Is it a single idea? Is it a few ideas that the person is trying to think about or understand?
The ideas someone is thinking about could be complicated and internal - or simple and related to their environment.
Humans have ideas, and multiple ideas can compose a thought process. The ideas can be about different things: stuff in the person's environment, or other ideas or memories that they want to think about. People can form thoughts or sentences about those ideas, and they can also think about their feelings (with ideas or sentences).
For instance, a feeling could be an idea, or an idea could become a feeling.
What, then, is an idea? An idea is something that occurs to someone; it is a concept or intention, or an understanding of some sort.
Ideas can relate to a person's feelings, and to the person's thought process. That is, ideas can complete a thought process.
Ideas are thoughts that occur to people; 'that is an excellent idea' would be the expression.
People have emotions. Their emotions are feelings that they feel. That means that they like to think about things.
If humans think, then however it is more fun; however, it is not fun; however, I now think that that makes sense. | textbooks/socialsci/Psychology/Cognitive_Psychology/A_Cognitive_Perspective_on_Emotion_(Pettinelli)/1.23%3A_Intellect_Cognition_and_Emotion.txt |
Human beings can think simple ideas or they can think complex ideas. How is a person supposed to know if an idea that they are thinking is something that needs further consideration?
There are lots of things that people can think about. Some things that people think about are simple topics that they regularly bring up in conversation. Other things that people think about are things that they emotionally ponder.
How is a person supposed to know if something that someone ponders is something that is important for them? Humans could think about many topics throughout the day.
What is thinking for that matter? When a person thinks they are pondering stuff - they form ideas about life or what they are doing, and they try to make sense out of what is happening.
If someone doesn't make sense out of what is happening, then they might not process what is going on in a situation. Their understanding could be emotional or it could be a practical understanding. A practical understanding could be emotional; that is, if they have a feeling for what is going on, then they might also be capable of interpreting that understanding in a practical fashion.
Thoughts and Concepts
It doesn't really matter if someone interprets a situation in a practical fashion, as long as they understand what the significant factors are or if they can respond in an effective manner.
Different ideas that the person has could be emotional ideas (ideas about the feelings that they are experiencing) or they could be ideas about what is going on or what their thoughts are. If their thoughts are on their feelings then they could interpret things differently from if their thoughts are coming from the situation.
Therefore, there can be different amounts of focus on ones thoughts or the situation - people can direct their thoughts at life in general or they could direct their thoughts at what they want from a situation.
Situations or life-scenarios impact a human's feelings in various ways; the person could try to interpret what is going on by analyzing either the external sources or what the impact on their feelings is.
1.25: My theory of subjective analysis
Emotion is subjective - that means that the feelings that humans experience are unique to each individual person - if each person has their own experience then they are going to form different opinions and different ideas than the people they interact with.
But what is emotion? Emotion means feeling: that someone is feeling something. If someone is feeling something, then it feels 'like' something.
If someone is feeling something then they are experiencing emotions. Emotion is an experience - it is something that you feel.
Humans can also think about things, and the things that they think about help to make their emotions more complex. However, this also applies to other animals such as dogs: if a dog thinks about its owner, then it makes its feelings more complex. It is focusing its feelings on its owner, and that generates those feelings. It could also look at its owner, and so on, in order to help trigger those complicated feelings.
That seems fairly simple - how is someone supposed to know when they are generating an emotion? If they are aware that they are experiencing an emotion then they might notice the emotion.
This applies to all things in life - different objects and experiences cause humans and animals to have emotional reactions. The reactions that they experience could be complicated reactions, or they could be simple reactions.
What would make a reaction simple? What would make an emotional reaction complex? Emotion can have subtlety - so even though there might be a single emotion - say the feeling 'joy' - then the feeling could still be complicated.
Analysis of Emotions
But I am just talking about experiencing emotions and feelings. There is more to life than experiencing emotions. Humans have to think about things, also.
There are lots of things in life that humans think about. People think about physical stuff like food, or other objects in their environment, or the different environments that they are in. All the things in someone's environment are physical things.
If an object that a person is thinking about is physical then they could imagine it in their mind - 'picture' it. That means that there are lots of things that people can think about - abstract concepts that cannot be imagined by a visual mental picture, and concrete objects that can be pictured. Some concepts can be pictured, however there is a large range of stuff to think about and much of it involves complicated objects or a lot of objects (making it harder for the person to make a picture in their mind of the concept or environment).
What would make an object a complicated object? Someone can picture a human in their mind - however that doesn't mean that they are going to make it emotionally complicated.
What is the difference between just picturing something or someone and thinking deeply about it, then? When I say 'picture,' what does that mean other than literally making a mental picture of something? Perhaps my use of the understanding of 'picture or mental image' just makes humans think more about whatever it is that they are thinking about. | textbooks/socialsci/Psychology/Cognitive_Psychology/A_Cognitive_Perspective_on_Emotion_(Pettinelli)/1.24%3A_Advanced_Ideas_are_Important_Objects.txt |
Imagine all of your thoughts as if they were physical entities, swirling rapidly inside your mind. How is it possible that the brain is able to move from one thought to the next in an organized, orderly fashion? The brain is endlessly perceiving, processing, planning, organizing, and remembering—it is always active. Yet, you don’t notice most of your brain’s activity as you move throughout your daily routine. This is only one facet of the complex processes involved in cognition. Simply put, cognition is thinking, and it encompasses the processes associated with perception, knowledge, problem solving, judgment, language, and memory. Scientists who study cognition are searching for ways to understand how we integrate, organize, and utilize our conscious cognitive experiences without being aware of all of the unconscious work that our brains are doing (for example, Kahneman, 2011).
01: History of Cognitive Psychology
Upon waking each morning, you begin thinking—contemplating the tasks that you must complete that day. In what order should you run your errands? Should you go to the bank, the cleaners, or the grocery store first? Can you get these things done before you head to class or will they need to wait until school is done? These thoughts are one example of cognition at work. Exceptionally complex, cognition is an essential feature of human consciousness, yet not all aspects of cognition are consciously experienced. Cognitive psychology is the field of psychology dedicated to examining how people think. It attempts to explain how and why we think the way we do by studying the interactions among human thinking, emotion, creativity, language, and problem solving, in addition to other cognitive processes. Cognitive psychologists strive to determine and measure different types of intelligence, why some people are better at problem solving than others, and how emotional intelligence affects success in the workplace, among countless other topics. They also sometimes focus on how we organize thoughts and information gathered from our environments into meaningful categories of thought, which will be discussed later.
1.02: Historical Roots- History of Cognition
Learning Objectives
• Name major figures in the history of cognition
Cogito Ergo Sum
Maybe you’ve heard the phrase I think, therefore I am , or perhaps even the Latin version: Cogito ergo sum. This simple expression is one of enormous philosophical importance, because it is about the act of thinking. Thought has been of fascination to humans for many centuries, with questions like What is thinking? and How do people think? and Why do people think? troubling and intriguing many philosophers, psychologists, scientists, and others.
The word “cognition” is the closest scientific synonym for thinking. It comes from the same root as the Latin word cogito , which is one of the forms of the verb “to know.” Cognition is the set of all mental abilities and processes related to knowledge, including attention, memory, judgment, reasoning, problem solving, decision making, and a host of other vital processes.
Human cognition takes place at both conscious and unconscious levels. It can be concrete or abstract. It is intuitive, meaning that nobody has to learn or be taught how to think. It just happens as part of being human. Cognitive processes use existing knowledge but are capable of generating new knowledge through logic and inference.
History of Cognition
People have been studying knowledge in various ways for centuries. Some of the most important figures in the study of cognition are:
Aristotle (384–322 BCE)
The study of human cognition began over two thousand years ago. The Greek philosopher Aristotle was interested in many fields, including the inner workings of the mind and how they affect the human experience. He also placed great importance on ensuring that his studies and ideas were based on empirical evidence (scientific information that is gathered through observation and careful experimentation).
Descartes (1596–1650)
René Descartes was a seventeenth-century philosopher who coined the famous phrase I think, therefore I am (albeit in French). The simple meaning of this phrase is that the act of thinking proves that a thinker exists. Descartes came up with this idea when trying to prove whether anyone could truly know anything despite the fact that our senses sometimes deceive us. As he explains, “We cannot doubt of our existence while we doubt.”
Wilhelm Wundt (1832–1920)
Wilhelm Wundt is considered one of the founding figures of modern psychology; in fact, he was the first person to call himself a psychologist. Wundt believed that scientific psychology should focus on introspection, or analysis of the contents of one’s own mind and experience. Though today Wundt’s methods are recognized as being subjective and unreliable, he is one of the important figures in the study of cognition because of his examination of human thought processes.
Cognition, Psychology, and Cognitive Science
The term “cognition” covers a wide swath of processes, everything from memory to attention. These processes can be analyzed through the lenses of many different fields: linguistics, anesthesia, neuroscience, education, philosophy, biology, computer science, and of course, psychology, to name a few. Because of the number of disciplines that study cognition to some degree, the term can have different meanings in different contexts. For example, in psychology, “cognition” usually refers to an information-processing view of an individual’s mental functions; in social psychology, “social cognition” refers to attitudes, attribution, and group dynamics. These numerous approaches to the analysis of cognition are synthesized in the relatively new field of cognitive science, the interdisciplinary study of mental processes and functions.
KEY TAKEAWAYS
Key Points
• Cognition is the set of all mental abilities and processes related to knowledge, including attention, memory, judgment, reasoning, problem solving, decision making, and a host of other vital processes.
• Aristotle, Descartes, and Wundt are among the earliest thinkers who dealt specifically with the act of cognition.
• Cognitive processes can be analyzed through the lenses of many different fields, including linguistics, anesthesia, neuroscience, education, philosophy, biology, computer science, and psychology.
Key Terms
• cognition: The set of all mental abilities and processes related to knowledge.
• cognitive science: An interdisciplinary field that analyses mental functions and processes.
What is the nature of thought and how is it organized?
Concepts and Prototypes
The human nervous system is capable of handling endless streams of information. The senses serve as the interface between the mind and the external environment, receiving stimuli and translating them into nervous impulses that are transmitted to the brain. The brain then processes this information and uses the relevant pieces to create thoughts, which can then be expressed through language or stored in memory for future use. To make this process more complex, the brain does not gather information from external environments only. When thoughts are formed, the brain also pulls information from emotions and memories. Emotion and memory are powerful influences on both our thoughts and behaviors.
In order to organize this staggering amount of information, the brain has developed a file cabinet of sorts in the mind. The different files stored in the file cabinet are called concepts. Concepts are categories or groupings of linguistic information, images, ideas, or memories, such as life experiences. Concepts are, in many ways, big ideas that are generated by observing details, and categorizing and combining these details into cognitive structures. You use concepts to see the relationships among the different elements of your experiences and to keep the information in your mind organized and accessible.
Concepts are informed by our semantic memory (you learned about this concept when you studied memory) and are present in every aspect of our lives; however, one of the easiest places to notice concepts is inside a classroom, where they are discussed explicitly. When you study United States history, for example, you learn about more than just individual events that have happened in America’s past. You absorb a large quantity of information by listening to and participating in discussions, examining maps, and reading first-hand accounts of people’s lives. Your brain analyzes these details and develops an overall understanding of American history. In the process, your brain gathers details that inform and refine your understanding of related concepts like democracy, power, and freedom.
Concepts can be complex and abstract, like justice, or more concrete, like types of birds. In psychology, for example, Piaget’s stages of development are abstract concepts. Some concepts, like tolerance, are agreed upon by many people, because they have been used in various ways over many years. Other concepts, like the characteristics of your ideal friend or your family’s birthday traditions, are personal and individualized. In this way, concepts touch every aspect of our lives, from our many daily routines to the guiding principles behind the way governments function.
Another technique used by your brain to organize information is the identification of prototypes for the concepts you have developed. A prototype is the best example or representation of a concept. For example, for the category of civil disobedience, your prototype could be Rosa Parks. Her peaceful resistance to segregation on a city bus in Montgomery, Alabama, is a recognizable example of civil disobedience. Or your prototype could be Mohandas Gandhi, sometimes called Mahatma Gandhi (“Mahatma” is an honorific title).
Figure 2. In 1930, Mohandas Gandhi led a group in peaceful protest against a British tax on salt in India.
Mohandas Gandhi served as a nonviolent force for independence for India while simultaneously demanding that Buddhist, Hindu, Muslim, and Christian leaders—both Indian and British— collaborate peacefully. Although he was not always successful in preventing violence around him, his life provides a steadfast example of the civil disobedience prototype (Constitutional Rights Foundation, 2013). Just as concepts can be abstract or concrete, we can make a distinction between concepts that are functions of our direct experience with the world and those that are more artificial in nature.
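Researchers who build computational models of categorization often make the prototype idea concrete by treating a prototype as a kind of average of the examples a person has encountered, and by judging new items against that average. The short sketch below is only an illustration added here, not part of the original text: the features (size, flies, sings), the ratings, and the example “birds” are invented assumptions.

    # Illustrative sketch: a prototype represented as the average of stored examples.
    # The features (size, flies, sings) and all ratings are invented for demonstration.

    def prototype(examples):
        """Average each feature across the stored examples."""
        n = len(examples)
        return [sum(feature) / n for feature in zip(*examples)]

    def distance(a, b):
        """Euclidean distance between two feature vectors."""
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    # Feature order: [size, flies, sings], each rated from 0 to 1.
    bird_examples = [
        [0.2, 1.0, 1.0],  # robin
        [0.3, 1.0, 0.8],  # sparrow
        [0.9, 0.0, 0.0],  # ostrich
    ]
    bird_prototype = prototype(bird_examples)

    # A new creature is judged by how close it sits to the prototype.
    robin_like = [0.25, 1.0, 0.9]
    penguin_like = [0.6, 0.0, 0.0]
    print(distance(robin_like, bird_prototype) < distance(penguin_like, bird_prototype))  # True

In this toy model the robin-like creature falls closer to the bird prototype than the penguin-like one, which mirrors the intuition that a robin feels like a “better” example of a bird than a penguin does.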
Natural and Artificial Concepts
In psychology, concepts can be divided into two categories, natural and artificial. Natural concepts are created “naturally” through your experiences and can be developed from either direct or indirect experiences. For example, if you live in Essex Junction, Vermont, you have probably had a lot of direct experience with snow. You’ve watched it fall from the sky, you’ve seen lightly falling snow that barely covers the windshield of your car, and you’ve shoveled out 18 inches of fluffy white snow as you’ve thought, “This is perfect for skiing.” You’ve thrown snowballs at your best friend and gone sledding down the steepest hill in town. In short, you know snow. You know what it looks like, smells like, tastes like, and feels like. If, however, you’ve lived your whole life on the island of Saint Vincent in the Caribbean, you may never have actually seen snow, much less tasted, smelled, or touched it. You know snow from the indirect experience of seeing pictures of falling snow—or from watching films that feature snow as part
of the setting. Either way, snow is a natural concept because you can construct an understanding of it through direct observations or experiences of snow.
An artificial concept, on the other hand, is a concept that is defined by a specific set of characteristics. Various properties of geometric shapes, like squares and triangles, serve as useful examples of artificial concepts. A triangle always has three angles and three sides. A square always has four equal sides and four right angles. Mathematical formulas, like the equation for area (length × width) are artificial concepts defined by specific sets of characteristics that are always the same. Artificial concepts can enhance the understanding of a topic by building on one another. For example, before learning the concept of “area of a square” (and the formula to find it), you must understand what a square is. Once the concept of “area of a square” is understood, an understanding of area for other geometric shapes can be built upon the original understanding of area. The use of artificial concepts to define an idea is crucial to communicating with others and engaging in complex thought. According to Goldstone and Kersten (2003), concepts act as building blocks and can be connected in countless combinations to create complex thoughts.
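Because artificial concepts are defined by explicit sets of characteristics, they are easy to express as simple rules. The sketch below is an added illustration (the shapes and checks are assumed examples, not drawn from the original text); it shows how the concept of a square can be written as a rule and how the concept “area of a square” builds on it, in the spirit of the building-block idea attributed to Goldstone and Kersten.

    # Illustrative sketch: artificial concepts as explicit, rule-based definitions
    # that can build on one another. The shapes and checks are invented examples.

    def is_square(sides, angles):
        """A square is defined by four equal sides and four right angles."""
        return len(sides) == 4 and len(set(sides)) == 1 and all(a == 90 for a in angles)

    def square_area(side_length):
        """'Area of a square' builds on the concept of a square: side x side."""
        return side_length * side_length

    print(is_square([3, 3, 3, 3], [90, 90, 90, 90]))  # True
    print(square_area(3))                             # 9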
Schemata
A schema is a mental construct consisting of a cluster or collection of related concepts (Bartlett, 1932). There are many different types of schemata, and they all have one thing in common: schemata are a method of organizing information that allows the brain to work more efficiently. When a schema is activated, the brain makes immediate assumptions about the person or object being observed.
There are several types of schemata. A role schema makes assumptions about how individuals in certain roles will behave (Callero, 1994). For example, imagine you meet someone who introduces himself as a firefighter. When this happens, your brain automatically activates the “firefighter schema” and begins making assumptions that this person is brave, selfless, and community-oriented. Despite not knowing this person, already you have unknowingly made judgments about him. Schemata also help you fill in gaps in the information you receive from
the world around you. While schemata allow for more efficient information processing, there can be problems with schemata, regardless of whether they are accurate: Perhaps this particular firefighter is not brave, he just works as a firefighter to pay the bills while studying to become a children’s librarian.
An event schema, also known as a cognitive script, is a set of behaviors that can feel like a routine. Think about what you do when you walk into an elevator. First, the doors open and you wait to let exiting passengers leave the elevator car. Then, you step into the elevator and turn around to face the doors, looking for the correct button to push. You never face the back of the elevator, do you? And when you’re riding in a crowded elevator and you can’t face the front, it feels uncomfortable, doesn’t it? Interestingly, event schemata can vary widely among different cultures and countries. For example, while it is quite common for people to greet one another with a handshake in the United States, in Tibet, you greet someone by sticking your tongue out at them, and in Belize, you bump fists (Cairns Regional Council, n.d.)
Because event schemata are automatic, they can be difficult to change. Imagine that you are driving home from work or school. This event schema involves getting in the car, shutting the door, and buckling your seatbelt before putting the key in the ignition. You might perform this script two or three times each day. As you drive home, you hear your phone’s ring tone.
Typically, the event schema that occurs when you hear your phone ringing involves locating the phone and answering it or responding to your latest text message. So without thinking, you reach for your phone, which could be in your pocket, in your bag, or on the passenger seat of the car. This powerful event schema is informed by your pattern of behavior and the pleasurable stimulation that a phone call or text message gives your brain. Because it is a schema, it is extremely challenging for us to stop reaching for the phone, even though we know that we endanger our own lives and the lives of others while we do it (Neyfakh, 2013).
Remember the elevator? It feels almost impossible to walk in and not face the door. Our powerful event schema dictates our behavior in the elevator, and it is no different with our phones. Current research suggests that it is the habit, or event schema, of checking our phones in many different situations that makes refraining from checking them while driving especially difficult (Bayer & Campbell, 2012). Because texting and driving has become a dangerous epidemic in recent years, psychologists are looking at ways to help people interrupt the “phone schema” while driving. Event schemata like these are the reason why many habits are difficult to break once they have been acquired. As we continue to examine thinking, keep in mind how powerful the forces of concepts and schemata are to our understanding of the world.
Summary
In this section, you were introduced to cognitive psychology, which is the study of cognition, or the brain’s ability to think, perceive, plan, analyze, and remember. Concepts and their corresponding prototypes help us quickly organize our thinking by creating categories into which we can sort new information. We also develop schemata, which are clusters of related concepts. Some schemata involve routines of thought and behavior, and these help us function properly in various situations without having to “think twice” about them. Schemata show up in social situations and routines of daily behavior.
Self Check Questions
Critical Thinking Questions
1. Describe a social schema that you would notice at a sporting event.
2. Explain why event schemata have so much power over human behavior.
Personal Application Question
1. Describe a natural concept that you know fully but that would be difficult for someone else to understand and explain why it would be difficult.
Answers
1. Answers will vary. When attending a basketball game, it is typical to support your team by wearing the team colors and sitting behind their bench.
2. Event schemata are rooted in the social fabric of our communities. We expect people to behave in certain ways in certain types of situations, and we hold ourselves to the same social standards. It is uncomfortable to go against an event schema—it feels almost like we are breaking the rules.
Glossary
• Artificial Concept: concept that is defined by a very specific set of characteristics
• Cognition: thinking, including perception, learning, problem solving, judgment, and memory
• Cognitive Psychology: field of psychology dedicated to studying every aspect of how people think
• Concept: category or grouping of linguistic information, objects, ideas, or life experiences
• Cognitive Script: set of behaviors that are performed the same way each time; also referred to as an event schema
• Event Schema: set of behaviors that are performed the same way each time; also referred to as a cognitive script
• Natural Concept: mental groupings that are created “naturally” through your experiences
• Prototype: best representation of a concept
• Role Schema: set of expectations that define the behaviors of a person occupying a particular role
• Schema: (plural = schemata) mental construct consisting of a cluster or collection of related concepts
Early Psychology—Structuralism and Functionalism
Learning Objectives
• Define structuralism and functionalism and the contributions of Wundt and James to the development of psychology
Psychology is a relatively young science with its experimental roots in the 19th century, compared, for example, to human physiology, which dates much earlier. As mentioned, anyone interested in exploring issues related to the mind generally did so in a philosophical context prior to the 19th century. Two men, working in the 19th century, are generally credited as being the founders of psychology as a science and academic discipline that was distinct from philosophy. Their names were Wilhelm Wundt and William James.
Table 1. The Most Important Approaches (Schools) of Psychology

School of psychology | Description | Important contributors
Structuralism | Uses the method of introspection to identify the basic elements or “structures” of psychological experience | Wilhelm Wundt, Edward B. Titchener
Functionalism | Attempts to understand why animals and humans have developed the particular psychological aspects that they currently possess | William James
Psychodynamic | Focuses on the role of our unconscious thoughts, feelings, and memories and our early childhood experiences in determining behavior | Sigmund Freud, Carl Jung, Alfred Adler, Erik Erikson
Behaviorism | Based on the premise that it is not possible to objectively study the mind, and therefore that psychologists should limit their attention to the study of behavior itself | John B. Watson, B. F. Skinner
Cognitive | The study of mental processes, including perception, thinking, memory, and judgments | Hermann Ebbinghaus, Sir Frederic Bartlett, Jean Piaget
Social-cultural | The study of how the social situations and the cultures in which people find themselves influence thinking and behavior | Fritz Heider, Leon Festinger, Stanley Schachter
Wundt and Structuralism
Wilhelm Wundt (1832–1920) was a German scientist who was the first person to be referred to as a psychologist. His famous book entitled Principles of Physiological Psychology was published in 1873. Wundt viewed psychology as a scientific study of conscious experience, and he believed that the goal of psychology was to identify components of consciousness and how those components combined to result in our conscious experience. Wundt used introspection (he called it “internal perception”), a process by which someone examines their own conscious experience as objectively as possible, making the human mind like any other aspect of nature that a scientist observed. Wundt’s version of introspection used only very specific experimental conditions in which an external stimulus was designed to produce a scientifically observable (repeatable) experience of the mind (Danziger, 1980). The first stringent requirement was the use of “trained” or practiced observers, who could immediately observe and report a reaction. The second requirement was the use of repeatable stimuli that always produced the same
experience in the subject and allowed the subject to expect and thus be fully attentive to the inner reaction. These experimental requirements were put in place to eliminate “interpretation” in the reporting of internal experiences and to counter the argument that there is no way to know that an individual is observing their mind or consciousness accurately, since it cannot be seen by any other person. This attempt to understand the structure or characteristics of the mind was known as structuralism. Wundt established his psychology laboratory at the University of Leipzig in 1879. In this laboratory, Wundt and his students conducted experiments on, for example, reaction times. A subject, sometimes in a room isolated from the scientist, would receive a stimulus such as a light, image, or sound. The subject’s reaction to the stimulus would be to push a button, and an apparatus would record the time to reaction. Wundt could measure reaction time to one-thousandth of a second (Nicolas & Ferrand, 1999).
Interactive Element
Follow the link for a deeper look at Structuralism & Functionalism
James and Functionalism
William James (1842–1910) was the first American psychologist who espoused a different perspective on how psychology should operate. James was introduced to Darwin’s theory of evolution by natural selection and accepted it as an explanation of an organism’s characteristics. Key to that theory is the idea that natural selection leads to organisms that are adapted to their environment, including their behavior. Adaptation means that a trait of an organism has a function for the survival and reproduction of the individual, because it has been naturally selected. As James saw it, psychology’s purpose was to study the function of behavior in the world, and as such, his perspective was known as functionalism . Functionalism focused on how mental activities helped an organism fit into its environment. Functionalism has a second, more subtle meaning in that functionalists were more interested in the operation of
the whole mind rather than of its individual parts, which were the focus of structuralism. Like Wundt, James believed that introspection could serve as one means by which someone might study mental activities, but James also relied on more objective measures, including the use of various recording devices, and examinations of concrete products of mental activities and of anatomy and physiology (Gordon, 1995).
GLOSSARY
• Functionalism: focused on how mental activities helped an organism adapt to its environment
• Structuralism: understanding the conscious experience through introspection
Ecological Validity
One important challenge researchers face when designing a study is to find the right balance between ensuring internal validity, or the degree to which a study allows unambiguous causal inferences, and external validity, or the degree to which a study ensures that potential findings apply to settings and samples other than the ones being studied. Unfortunately, these two kinds of validity tend to be difficult to achieve at the same time, in one study. This is because creating a controlled setting, in which all potentially influential factors (other than the experimentally-manipulated variable) are controlled, is bound to create an environment that is quite different from what people naturally encounter (e.g., using a happy movie clip to promote helpful behavior). However, it is the degree to which an experimental situation is comparable to the corresponding real-world situation of interest that determines how generalizable potential findings will be. In other words, if an experiment is very far-off from what a person might normally experience in everyday life, you might reasonably question just how useful its findings are.
Because of the incompatibility of the two types of validity, one is often—by design—prioritized over the other. Due to the importance of identifying true causal relationships, psychology has traditionally emphasized internal over external validity. However, in order to make claims about human behavior that apply across populations and environments, researchers complement traditional laboratory research, where participants are brought into the lab, with field research where, in essence, the psychological laboratory is brought to participants. Field studies allow for the important test of how psychological variables and processes of interest “behave” under real-world circumstances (i.e., what actually does happen rather than what can happen ). They can also facilitate “downstream” operationalizations of constructs that measure life outcomes of interest directly rather than indirectly.
Take, for example, the fascinating field of psychoneuroimmunology, where the goal is to understand the interplay of psychological factors (such as personality traits or one’s stress level) and the immune system. Highly sophisticated and carefully controlled experiments offer ways to isolate the variety of neural, hormonal, and cellular mechanisms that link psychological variables such as chronic stress to biological outcomes such as immunosuppression (a state of impaired immune functioning). Although these studies demonstrate impressively how psychological factors can affect health-relevant biological processes, they—because of their research design—remain mute about the degree to which these factors actually do undermine people’s everyday health in real life. It is certainly important to show that laboratory stress can alter the number of natural killer cells in the blood. But it is equally important to test to what extent the levels of stress that people experience on a day-to-day basis result in them catching a cold more often or taking longer to recover from one. The goal for researchers, therefore, must be to complement traditional laboratory experiments with less controlled studies under real-world circumstances. The term ecological validity is used to refer to the degree to which an effect has been obtained under conditions that are typical for what happens in everyday life (Brewer, 2000). In this example, then, people might keep a careful daily log of how much stress they are under as well as noting physical symptoms such as headaches or nausea. Although many factors beyond stress level may be responsible for these symptoms, this more correlational approach can shed light on how the relationship between stress and health plays out outside of the laboratory.
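To give a rough sense of how such diary data might be summarized, the sketch below computes a simple correlation between one hypothetical participant’s daily stress ratings and daily symptom counts. The data, the two-week window, and the choice of a Pearson correlation are assumptions made only for demonstration; a real study would involve many participants and more appropriate longitudinal analyses.

    # Illustrative sketch: correlating daily stress ratings with daily symptom counts.
    # The diary data and the use of a simple Pearson correlation are assumptions
    # for demonstration, not a description of any actual study.

    from statistics import mean

    def pearson_r(xs, ys):
        """Pearson correlation coefficient between two equal-length lists."""
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var_x = sum((x - mx) ** 2 for x in xs)
        var_y = sum((y - my) ** 2 for y in ys)
        return cov / (var_x ** 0.5 * var_y ** 0.5)

    # One hypothetical participant's two-week diary:
    daily_stress = [2, 3, 5, 4, 6, 7, 3, 2, 5, 6, 8, 7, 4, 3]      # 0-10 stress ratings
    daily_symptoms = [0, 1, 1, 1, 2, 3, 0, 0, 1, 2, 3, 3, 1, 1]    # headaches, nausea, etc.

    print(f"stress-symptom correlation: r = {pearson_r(daily_stress, daily_symptoms):.2f}")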
Behaviorism
How do we act?
Learning theories focus on how we respond to events or stimuli rather than emphasizing what motivates our actions. These theories provide an explanation of how experience can change what we are capable of doing or feeling.
Classical Conditioning and Emotional Responses
Classical Conditioning theory helps us to understand how our responses to one situation become attached to new situations. For example, a smell might remind us of a time when we
were a kid (elementary school cafeterias smell like milk and mildew!). If you went to a new cafeteria with the same smell, it might evoke feelings you had when you were in school. Or a song on the radio might remind you of a memorable evening you spent with your first true love. Or, if you hear your entire name (John Wilmington Brewer, for instance) called as you walk across the stage to get your diploma and it makes you tense because it reminds you of how your father used to use your full name when he was mad at you, you’ve been classically conditioned!
Classical conditioning explains how we develop many of our emotional responses to people or events or our “gut level” reactions to situations. New situations may bring about an old response because the two have become connected. Attachments form in this way. Addictions are affected by classical conditioning, as anyone who’s tried to quit smoking can tell you. When you try to quit, everything that was associated with smoking makes you crave a cigarette.
Pavlov
Ivan Pavlov (1849-1936) was a Russian physiologist interested in studying digestion. As he recorded the amount of salivation his laboratory dogs produced as they ate, he noticed that they actually began to salivate before the food arrived as the researcher walked down the hall and toward the cage. “This,” he thought, “is not natural!” One would expect a dog to automatically salivate when food hit their palate, but BEFORE the food comes? Of course, what had happened was . . . you tell me. That’s right! The dogs knew that the food was coming because they had learned to associate the footsteps with the food. The key word here is “learned”. A learned response is called a “conditioned” response. Pavlov began to experiment with this “psychic” reflex. He began to ring a bell, for instance, prior to introducing the food.
Sure enough, after making this connection several times, the dogs could be made to salivate to the sound of a bell. Once the bell had become an event to which the dogs had learned to salivate, it was called a conditioned stimulus. The act of salivating to a bell was a response that had also been learned, now termed in Pavlov’s jargon, a conditioned response. Notice that the response, salivation, is the same whether it is conditioned or unconditioned (unlearned or natural). What changed is the stimulus to which the dog salivates. One is natural (unconditioned) and one is learned (conditioned). Well, enough of Pavlov’s dogs. Who cares? Let’s think about how classical conditioning is used on us. One of the most widespread applications of classical conditioning principles was brought to us by the psychologist, John B. Watson.
Watson and Behaviorism
Watson believed that most of our fears and other emotional responses are classically conditioned. He had gained a good deal of popularity in the 1920s with his expert advice on parenting offered to the public. He believed that parents could be taught to help shape their children’s behavior and tried to demonstrate the power of classical conditioning with his famous experiment with a boy of about nine months named “Little Albert”. Watson sat Albert down and introduced a variety of seemingly scary objects to him: a burning piece of newspaper, a white rat, etc. But Albert remained curious and reached for all of these things. Watson knew that one of our only inborn fears is the fear of loud noises so he proceeded to make a loud noise each time he introduced one of Albert’s favorites, a white rat. After hearing the loud noise several times paired with the rat, Albert soon came to fear the rat and began to cry when it was introduced. Watson filmed this experiment for posterity and used it to demonstrate that he could help parents achieve any outcomes they desired, if they would only follow his advice. Watson wrote columns in newspapers and in magazines and gained a lot of popularity among parents eager to apply science to household order. Parenting advice was not the legacy Watson left us, however. Where he really made his impact was in advertising. After Watson left academia, he went into the world of business and showed companies how to tie something that brings about a natural positive feeling to their products to enhance sales. Thus the union of sex and advertising! So, let’s use a much more interesting example than Pavlov’s dogs to check and see if you understand the difference between conditioned and unconditioned stimuli and responses. In the experiment with Little Albert, identify the unconditioned stimulus, the unconditioned response, and, after conditioning, the conditioned stimulus and the conditioned response.
Operant Conditioning and Repeating Actions
Operant Conditioning is another learning theory that emphasizes a more conscious type of learning than that of classical conditioning. A person (or animal) does something (operates something) to see what effect it might bring. Simply said, operant conditioning describes how we repeat behaviors because they pay off for us. It is based on the law of effect, a principle authored by the psychologist Edward Thorndike (1874-1949). The law of effect suggests that we will repeat an action if it is followed by a good effect.
Skinner and Reinforcement
Interactive Element
Watch a pigeon learn through the concept reinforcement.
B.F. Skinner (1904-1990) expanded on Thorndike’s principle and outlined the principles of operant conditioning. Skinner believed that we learn best when our actions are reinforced. For example, a child who cleans his room and is reinforced (rewarded) with a big hug and words of praise is more likely to clean it again than a child whose deed goes unnoticed. Skinner believed that almost anything could be reinforcing. A reinforcer is anything following a behavior that makes it more likely to occur again. It can be something intrinsically rewarding (called intrinsic or primary reinforcers), such as food or praise, or it can be rewarding because it can be exchanged for what one really wants (such as using money to buy a cookie). Such reinforcers are referred to as secondary reinforcers or extrinsic reinforcers.
Positive and negative reinforcement
Sometimes, adding something to the situation is reinforcing as in the cases we described above with cookies, praise and money. Positive reinforcement involves adding something to the situation to encourage a behavior. Other times, taking something away from a situation can be reinforcing. For example, the loud, annoying buzzer on your alarm clock encourages you to get up so that you can turn it off and get rid of the noise. Children whine in order to get their parents to do something and often, parents give in just to stop the whining. In these instances, negative reinforcement has been used.
Operant conditioning tends to work best if you focus on trying to encourage a behavior or move a person into the direction you want them to go rather than telling them what not to do.
Reinforcers are used to encourage a behavior; punishers are used to stop behavior. A punisher is anything that follows an act and decreases the chance it will reoccur. But often a punished behavior doesn’t really go away. It is just suppressed and may reoccur whenever the threat of punishment is removed. For example, a child may not cuss around you because you’ve washed his mouth out with soap, but he may cuss around his friends. Or a motorist may only slow down when the trooper is on the side of the freeway. Another problem with punishment is that when a person focuses on punishment, they may find it hard to see what the other does right or well. And punishment is stigmatizing; when punished, some start to see themselves as bad and give up trying to change.
Reinforcement can occur in a predictable way, such as after every desired action is performed, or intermittently, after the behavior is performed a number of times or the first time it is performed after a certain amount of time. The schedule of reinforcement has an impact on how long a behavior continues after reinforcement is discontinued. So a parent who has rewarded a child’s actions each time may find that the child gives up very quickly if a reward is not immediately forthcoming. A lover who is warmly regarded now and then may continue to seek out his or her partner’s attention long after the partner has tried to break up. Think about the kinds of behaviors you may have learned through classical and operant conditioning. You may have learned many things in this way. But sometimes we learn very complex behaviors quickly and without direct reinforcement. Bandura explains how.
Behaviorism’s emphasis on objectivity and focus on external behavior had pulled psychologists’ attention away from the mind for a prolonged period of time. The early work of the humanistic psychologists redirected attention to the individual human as a whole, and as a conscious and self-aware being. By the 1950s, new disciplinary perspectives in linguistics, neuroscience, and computer science were emerging, and these areas revived interest in the mind as a focus of scientific inquiry. This particular perspective has come to be known as the cognitive revolution (Miller, 2003). By 1967, Ulric Neisser published the first textbook entitled Cognitive Psychology , which served as a core text in cognitive psychology courses around the country (Thorne & Henley, 2005).
Although no one person is entirely responsible for starting the cognitive revolution, Noam Chomsky was very influential in the early days of this movement. Chomsky (1928–), an American linguist, was dissatisfied with the influence that behaviorism had had on psychology. He believed that psychology’s focus on behavior was short-sighted and that the field had to re- incorporate mental functioning into its purview if it were to offer any meaningful contributions to understanding behavior (Miller, 2003).
European psychology had never really been as influenced by behaviorism as had American psychology; and thus, the cognitive revolution helped reestablish lines of communication between European psychologists and their American counterparts. Furthermore, psychologists began to cooperate with scientists in other fields, like anthropology, linguistics, computer science, and neuroscience, among others. This interdisciplinary approach often was referred to as the cognitive sciences, and the influence and prominence of this particular perspective resonates in modern-day psychology (Miller, 2003).
Noam Chomsky
In the middle of the 20th century, American linguist Noam Chomsky explained how some aspects of language could be innate. Prior to this time, people tended to believe that children learn language solely by imitating the adults around them. Chomsky agreed that individual words must be learned by experience, but he argued that genes could code into the brain categories and organization that form the basis of grammatical structure. We come into the world ready to distinguish different grammatical classes, like nouns and verbs and adjectives, and sensitive to the order in which words are spoken. Then, using this innate sensitivity, we quickly learn from listening to our parents about how to organize our own language. For instance, if we grow up hearing Spanish, we learn that adjectives come after nouns (el gato amarillo, where gato means “cat” and amarillo is “yellow”), but if we grow up hearing English, we learn that adjectives come first (“the yellow cat”). Chomsky termed this innate sensitivity that allows infants and young children to organize the abstract categories of language the language acquisition device (LAD).
According to Chomsky’s approach, each of the many languages spoken around the world (there are between 6,000 and 8,000) is an individual example of the same underlying set of procedures that are hardwired into human brains. Each language, while unique, is just a set of variations on a small set of possible rule systems that the brain permits language to use.
Chomsky’s account proposes that children are born with a knowledge of general rules of grammar (including phoneme, morpheme, and syntactical rules) that determine how sentences are constructed.
Although there is general agreement among psychologists that babies are genetically programmed to learn language, there is still debate about Chomsky’s idea that a universal grammar can account for all language learning. Evans and Levinson surveyed the world’s languages and found that none of the presumed underlying features of the language acquisition device were entirely universal. In their search they found languages that did not have noun or verb phrases, that did not have tenses (e.g., past, present, future), and some that did not have nouns or verbs at all, even though a basic assumption of a universal grammar is that all languages should share these features. Other psychologists believe that early experience can fully explain language acquisition, and Chomsky’s language acquisition device is unnecessary.
Nevertheless, Chomsky’s work clearly laid out the many problems that had to be solved in order to adequately explain how children acquire language and why languages have the structures that they do.
Connectionism – Parallel Distributed Processing
Connectionism was based on principles of associationism, mostly claiming that elements or ideas become associated with one another through experience and that complex ideas can be explained through a set of simple rules. But connectionism further expanded these assumptions and introduced ideas like distributed representations and supervised learning, and so should not be confused with associationism.
Connectionism and Network Models
Network models of memory storage emphasize the role of connections between stored memories in the brain. The basis of these theories is that neural networks connect and interact to store memories by modifying the strength of the connections between neural units. In network theory, each connection is characterized by a weight value that indicates the strength of that particular connection. The stronger the connection, the easier a memory is to retrieve. Network models are based on the concept of connectionism. Connectionism is an approach in cognitive science that models mental or behavioral phenomena as the emergent processes of interconnected networks that consist of simple units. Connectionism was introduced in the 1940s by Donald Hebb, who said the famous phrase, “Cells that fire together wire together.” This is the key to understanding network models: neural units that are activated together strengthen the connections between themselves.
There are several types of network models in memory research. Some define the fundamental network unit as a piece of information. Others define the unit as a neuron. However, network models generally agree that memory is stored in neural networks and is strengthened or weakened based on the connections between neurons. Network models are not the only models of memory storage, but they do have a great deal of power when it comes to explaining how learning and memory work in the brain, so they are extremely important to understand.
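A very small numerical sketch can make the “fire together, wire together” idea concrete. In the code below, which is an added illustration rather than part of the original text, connections between units that are active at the same time are strengthened, and connections that go unused slowly decay; the learning rate, decay rate, and activity patterns are arbitrary assumptions.

    # Minimal Hebbian-style weight update: connections between units that are
    # active together are strengthened; connections that are not used decay.
    # Learning rate, decay rate, and the activity patterns are illustrative only.

    import itertools

    def hebbian_update(weights, activities, lr=0.1, decay=0.01):
        """Strengthen weights between co-active units; weakly decay the rest."""
        n = len(activities)
        for i, j in itertools.combinations(range(n), 2):
            if activities[i] and activities[j]:
                weights[(i, j)] = weights.get((i, j), 0.0) + lr   # fire together, wire together
            else:
                weights[(i, j)] = max(0.0, weights.get((i, j), 0.0) - decay)
        return weights

    weights = {}
    patterns = [[1, 1, 0, 0], [1, 1, 0, 1], [0, 0, 1, 1]]  # units 0 and 1 are often co-active
    for _ in range(20):
        for pattern in patterns:
            hebbian_update(weights, pattern)

    strongest = max(weights, key=weights.get)
    print(strongest, round(weights[strongest], 2))  # (0, 1) is the most strengthened connection

After repeated exposure to these patterns, the strongest weight is the one between units 0 and 1, the pair most often active together, just as the network models described above would lead us to expect.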
Parallel Distributed Processing Model
The parallel distributed processing (PDP) model is an example of a network model of memory, and it is the prevailing connectionist approach today. PDP posits that memory is made up of neural networks that interact to store information. It is more of a metaphor than an actual biological theory, but it is very useful for understanding how neurons fire and wire with each other.
Taking its metaphors from the field of computer science, this model stresses the parallel nature of neural processing. “Parallel processing” is a computing term; unlike serial processing (performing one operation at a time), parallel processing allows hundreds of operations to be completed at once—in parallel. Under PDP, neural networks are thought to work in parallel to change neural connections to store memories. This theory also states that memory is stored by modifying the strength of connections between neural units. Neurons that fire together frequently (which occurs when a particular behavior or mental process is engaged many times) have stronger connections between them. If these neurons stop interacting, the memory’s strength weakens. This model emphasizes learning and other cognitive phenomena in the creation and storage of memory.
Learning Objectives
By the end of this section, you will be able to:
• Explain the figure-ground relationship
• Define Gestalt principles of grouping
• Describe how perceptual set is influenced by an individual’s characteristics and mental state
In the early part of the 20th century, Max Wertheimer published a paper demonstrating that individuals perceived motion in rapidly flickering static images—an insight that came to him as he used a child’s toy tachistoscope. Wertheimer, and his assistants Wolfgang Köhler and Kurt Koffka, who later became his partners, believed that perception involved more than simply combining sensory stimuli. This belief led to a new movement within the field of psychology known as Gestalt psychology. The word gestalt literally means form or pattern, but its use reflects the idea that the whole is different from the sum of its parts. In other words, the brain creates a perception that is more than simply the sum of available sensory inputs, and it does so in predictable ways. Gestalt psychologists translated these predictable ways into principles by which we organize sensory information. As a result, Gestalt psychology has been extremely influential in the area of sensation and perception (Rock & Palmer, 1990).
One Gestalt principle is the figure-ground relationship. According to this principle, we tend to segment our visual world into figure and ground. Figure is the object or person that is the focus of the visual field, while the ground is the background. Our perception can vary tremendously, depending on what is perceived as figure and what is perceived as ground.
Presumably, our ability to interpret sensory information depends on what we label as figure and what we label as ground in any particular case, although this assumption has been called into question (Peterson & Gibson, 1994; Vecera & O’Reilly, 1998).
Another Gestalt principle for organizing sensory stimuli into meaningful perception is proximity: things that are close to one another tend to be grouped together. How we read something provides an illustration of the proximity concept. For example, we read this sentence like this, notl iket hiso rt hat. We group the letters of a given word together because there are no spaces between the letters, and we perceive words because there are spaces between each word. Here are some more examples: Cany oum akes enseo ft hiss entence? What doth es e wor dsmea n?
We might also use the principle of similarity to group things in our visual fields. According to this principle, things that are alike tend to be grouped together. For example, when watching a football game, we tend to group individuals based on the colors of their uniforms. When watching an offensive drive, we can get a sense of the two teams simply by grouping along this dimension.
Two additional Gestalt principles are the law of continuity (or good continuation) and closure. The law of continuity suggests that we are more likely to perceive continuous, smooth flowing lines rather than jagged, broken lines. The principle of closure states that we organize our perceptions into complete objects rather than as a series of parts.
Figure: Good continuation suggests that we are more likely to perceive this as two overlapping lines, rather than four lines meeting in the center.
Link to Learning
Watch this video showing real world illustrations of Gestalt principles.
According to Gestalt theorists, pattern perception, or our ability to discriminate among different figures and shapes, occurs by following the principles described above. You probably feel fairly certain that your perception accurately matches the real world, but this is not always the case. Our perceptions are based on perceptual hypotheses: educated guesses that we make while interpreting sensory information. These hypotheses are informed by a number of factors, including our personalities, experiences, and expectations. We use these hypotheses to generate our perceptual set. For instance, research has demonstrated that those who are given verbal priming produce a biased interpretation of complex ambiguous figures (Goolkasian & Woodbury, 2010).
Summary
Gestalt theorists have been incredibly influential in the areas of sensation and perception. Gestalt principles such as figure-ground relationship, grouping by proximity or similarity, the law of good continuation, and closure are all used to help explain how we organize sensory information. Our perceptions are not infallible, and they can be influenced by bias, prejudice, and other factors.
Self Check Questions
Critical Thinking Question
1. The central tenet of Gestalt psychology is that the whole is different from the sum of its parts. What does this mean in the context of perception?
2. Take a look at the following figure. How might you influence whether people see a duck or a rabbit?
Personal Application Question
1. Have you ever listened to a song on the radio and sung along only to find out later that you have been singing the wrong lyrics? Once you found the correct lyrics, did your perception of the song change?
Answers
1. This means that perception cannot be understood completely simply by combining the parts. Rather, the relationship that exists among those parts (which would be established according to the principles described in this chapter) is important in organizing and interpreting sensory information into a perceptual set.
2. Playing on their expectations could be used to influence what they were most likely to see. For instance, telling a story about Peter Rabbit and then presenting this image would bias perception along rabbit lines.
Glossary
• Closure: organizing our perceptions into complete objects rather than as a series of parts
• Figure-ground Relationship: segmenting our visual world into figure and ground
• Gestalt Psychology: field of psychology based on the idea that the whole is different from the sum of its parts
• Good Continuation: (also, continuity) we are more likely to perceive continuous, smooth flowing lines rather than jagged, broken lines
• Pattern Perception: ability to discriminate among different figures and shapes
• Perceptual Hypothesis: educated guess used to interpret sensory information
• Principle of Closure: organize perceptions into complete objects rather than as a series of parts
• Proximity: things that are close to one another tend to be grouped together
• Similarity: things that are alike tend to be grouped together
The picture you have in your mind of the nervous system probably includes the brain , the nervous tissue contained within the cranium, and the spinal cord , the extension of nervous tissue within the vertebral column. That suggests it is made of two organs—and you may not even think of the spinal cord as an organ—but the nervous system is a very complex structure. Within the brain, many different and separate regions are responsible for many different and separate functions. It is as if the nervous system is composed of many organs that all look similar and can only be differentiated using tools such as the microscope or electrophysiology. In comparison, it is easy to see that the stomach is different than the esophagus or the liver, so you can imagine the digestive system as a collection of specific organs.
02: The Brain
The nervous system can be divided into two major regions: the central and peripheral nervous systems. The central nervous system (CNS) is the brain and spinal cord, and the peripheral nervous system (PNS) is everything else (Figure 1). The brain is contained within the cranial cavity of the skull, and the spinal cord is contained within the vertebral cavity of the vertebral column. It is a bit of an oversimplification to say that the CNS is what is inside these two cavities and the peripheral nervous system is outside of them, but that is one way to start to think about it. In actuality, there are some elements of the peripheral nervous system that are within the cranial or vertebral cavities. The peripheral nervous system is so named because it is on the periphery—meaning beyond the brain and spinal cord. Depending on different aspects of the nervous system, the dividing line between central and peripheral is not necessarily universal.
Nervous tissue, present in both the CNS and PNS, contains two basic types of cells: neurons and glial cells. A glial cell is one of a variety of cells that provide a framework of tissue that supports the neurons and their activities. The neuron is the more functionally important of the two, in terms of the communicative function of the nervous system. To describe the functional divisions of the nervous system, it is important to understand the structure of a neuron.
Neurons are cells and therefore have a soma, or cell body, but they also have extensions of the cell; each extension is generally referred to as a process. There is one important process that every neuron has called an axon, which is the fiber that connects a neuron with its target.
Another type of process that branches off from the soma is the dendrite. Dendrites are responsible for receiving most of the input from other neurons.
Looking at nervous tissue, there are regions that predominantly contain cell bodies and regions that are largely composed of just axons. These two regions within nervous system structures are often referred to as gray matter (the regions with many cell bodies and dendrites) or white matter (the regions with many axons). Figure 2 demonstrates the appearance of these regions in the brain and spinal cord. The colors ascribed to these regions are what would be seen in “fresh,” or unstained, nervous tissue. Gray matter is not necessarily gray. It can be pinkish because of blood content, or even slightly tan, depending on how long the tissue has been preserved. But white matter is white because axons are insulated by a lipid-rich substance called myelin . Lipids can appear as white (“fatty”) material, much like the fat on a raw piece of chicken or beef. Actually, gray matter may have that color ascribed to it because next to the white matter, it is just darker—hence, gray.
The distinction between gray matter and white matter is most often applied to central nervous tissue, which has large regions that can be seen with the unaided eye. When looking at peripheral structures, often a microscope is used and the tissue is stained with artificial colors. That is not to say that central nervous tissue cannot be stained and viewed under a microscope, but unstained tissue is most likely from the CNS—for example, a frontal section of the brain or cross section of the spinal cord.
Regardless of the appearance of stained or unstained tissue, the cell bodies of neurons or axons can be located in discrete anatomical structures that need to be named. Those names are specific to whether the structure is central or peripheral. A localized collection of neuron cell bodies in the CNS is referred to as a nucleus . In the PNS, a cluster of neuron cell bodies is referred to as a ganglion . Figure 3 indicates how the term nucleus has a few different meanings within anatomy and physiology. It is the center of an atom, where protons and neutrons are found; it is the center of a cell, where the DNA is found; and it is a center of some function in the CNS. There is also a potentially confusing use of the word ganglion (plural = ganglia) that has a historical explanation. In the central nervous system, there is a group of nuclei that are connected together and were once called the basal ganglia before “ganglion” became accepted as a description for a peripheral structure. Some sources refer to this group of nuclei as the “basal nuclei” to avoid confusion.
Terminology applied to bundles of axons also differs depending on location. A bundle of axons, or fibers, found in the CNS is called a tract whereas the same thing in the PNS would be called a nerve . There is an important point to make about these terms, which is that they can both be used to refer to the same bundle of axons. When those axons are in the PNS, the term is nerve, but if they are CNS, the term is tract. The most obvious example of this is the axons that project from the retina into the brain. Those axons are called the optic nerve as they leave the eye, but when they are inside the cranium, they are referred to as the optic tract. There is a specific place where the name changes, which is the optic chiasm, but they are still the same axons (Figure 4). A similar situation outside of science can be described for some roads. Imagine a road called “Broad Street” in a town called “Anyville.” The road leaves Anyville and goes to the next town over, called “Hometown.” When the road crosses the line between the two towns and is in Hometown, its name changes to “Main Street.” That is the idea behind the naming of the retinal axons. In the PNS, they are called the optic nerve, and in the CNS, they are the optic tract. Table 1 helps to clarify which of these terms apply to the central or peripheral nervous systems.
How Much of Your Brain Do You Use?
Have you ever heard the claim that humans only use 10 percent of their brains? Maybe you have seen an advertisement on a website saying that there is a secret to unlocking the full potential of your mind—as if there were 90 percent of your brain sitting idle, just waiting for you to use it. If you see an ad like that, don’t click. It isn’t true.
An easy way to see how much of the brain a person uses is to take measurements of brain activity while performing a task. An example of this kind of measurement is functional magnetic resonance imaging (fMRI), which generates a map of the most active areas and can be generated and presented in three dimensions (Figure 6). This procedure is different from the standard MRI technique because it is measuring changes in the tissue in time with an experimental condition or event.
The underlying assumption is that active nervous tissue will have greater blood flow. By having the subject perform a visual task, activity all over the brain can be measured. Consider this possible experiment: the subject is told to look at a screen with a black dot in the middle (a fixation point). A photograph of a face is projected on the screen away from the center. The subject has to look at the photograph and decipher what it is. The subject has been instructed
to push a button if the photograph is of someone they recognize. The photograph might be of a celebrity, so the subject would press the button, or it might be of a random person unknown to the subject, so the subject would not press the button.
In this task, visual sensory areas would be active, integrating areas would be active, motor areas responsible for moving the eyes would be active, and motor areas for pressing the button with a finger would be active. Those areas are distributed all around the brain and the fMRI images would show activity in more than just 10 percent of the brain (some evidence suggests that about 80 percent of the brain is using energy—based on blood flow to the tissue—during well- defined tasks similar to the one suggested above). This task does not even include all of the functions the brain performs. There is no language response, the body is mostly lying still in the MRI machine, and it does not consider the autonomic functions that would be ongoing in the background.
Table 1: Structures of the CNS and PNS

                                                  CNS        PNS
Group of Neuron Cell Bodies (i.e., gray matter)   Nucleus    Ganglion
Bundle of Axons (i.e., white matter)              Tract      Nerve
In 2003, the Nobel Prize in Physiology or Medicine was awarded to Paul C. Lauterbur and Sir Peter Mansfield for discoveries related to magnetic resonance imaging (MRI). This is a tool to see the structures of the body (not just the nervous system) that depends on magnetic fields associated with certain atomic nuclei. The utility of this technique in the nervous system is that fat tissue and water appear as different shades between black and white. Because white matter is fatty (from myelin) and gray matter is not, they can be easily distinguished in MRI images.
Visit the Nobel Prize to play an interactive game that demonstrates the use of this technology and compares it with other types of imaging technologies. Also, the results from an MRI session are compared with images obtained from X-ray or computed tomography. How do the imaging techniques shown in this game indicate the separation of white and gray matter compared with the freshly dissected tissue shown earlier? | textbooks/socialsci/Psychology/Cognitive_Psychology/Cognitive_Psychology_(Andrade_and_Walker)/02%3A_The_Brain/2.01%3A_The_Central_and_Peripheral_Nervous_Systems.txt |
The brain’s lower-level structures consist of the brain stem, the spinal cord, and the cerebellum.
Learning Objectives
• Outline the location and functions of the lower-level structures of the brain
Key Points
• The brain’s lower-level structures are the oldest in the brain, and are more geared towards basic bodily processes than the higher-level structures.
• Except for the spinal cord, the brain’s lower-level structures are largely located within the hindbrain, diencephalon (or interbrain), and midbrain.
• The hindbrain consists of the medulla oblongata, the pons, and the cerebellum, which control respiration and movement among other functions.
• The midbrain is interposed between the hindbrain and the forebrain. Its ventral areas are dedicated to motor function while the dorsal regions are involved in sensory information circuits.
• The thalamus and hypothalamus are located within the diencephalon (or “interbrain”), and are part of the limbic system. They regulate emotions and motivated behaviors like sexuality and hunger.
• The spinal cord is a tail-like structure embedded in the vertebral canal of the spine, and is involved in transporting sensorimotor information and controlling nearby organs.
TERMS
• Proprioception: The sense of the position of parts of the body relative to neighbouring parts of the body.
• Ventral: On the front side of the human body, or the corresponding surface of an animal, usually the lower surface.
• Dorsal: With respect to, or concerning the side in which the backbone is located, or the analogous side of an invertebrate.
The brain’s lower-level structures consist of the brain stem and spinal cord, along with the cerebellum. With the exception of the spinal cord, these structures are largely located within the hindbrain, diencephalon (or interbrain), and midbrain. These lower dorsal structures are the oldest parts of the brain, having existed for much of its evolutionary history. As such they are geared more toward basic bodily processes necessary to survival. It is the more recent layers of the brain (the forebrain) which are responsible for the higher-level cognitive functioning (language, reasoning) not strictly necessary to keep a body alive.
The Hindbrain
The hindbrain, which includes the medulla oblongata, the pons, and the cerebellum, is responsible for some of the oldest and most primitive body functions. Each of these structures is described below.
Medulla Oblongata
The medulla oblongata sits at the transition zone between the brain and the spinal cord. It is the first region that formally belongs to the brain (rather than the spinal cord). It is the control center for respiratory, cardiovascular, and digestive functions.
Pons
The pons connects the medulla oblongata with the midbrain region, and also relays signals from the forebrain to the cerebellum. It houses the control centers for respiration and inhibitory functions. The cerebellum is attached to the dorsal side of the pons.
Cerebellum
The cerebellum is a separate region of the brain located behind the medulla oblongata and pons. It is attached to the rest of the brain by three stalks (called pedunculi ), and coordinates skeletal muscles to produce smooth, graceful motions. The cerebellum receives information from our eyes, ears, muscles, and joints about the body’s current positioning (referred to as proprioception). It also receives output from the cerebral cortex about where these body parts should be. After processing this information, the cerebellum sends motor impulses from the brain stem to the skeletal muscles so that they can move. The main function of the cerebellum is this muscle coordination. However, it is also responsible for balance and posture , and it assists us when we are learning a new motor skill, such as playing a sport or musical instrument.
Recent research shows that apart from motor functions the cerebellum also has some role in emotional sensitivity.
Figure: Human and shark brains. The shark brain diverged on the evolutionary tree from the human brain, but both still have the “old” structures of the hindbrain and midbrain dedicated to autonomic bodily processes.
The Midbrain
The midbrain is located between the hindbrain and forebrain, but it is actually part of the brain stem. It displays the same basic functional composition found in the spinal cord and the hindbrain. Ventral areas control motor function and convey motor information from the cerebral cortex. Dorsal regions of the midbrain are involved in sensory information circuits. The substantia nigra, a part of the brain that plays a role in reward, addiction, and movement (due to its high levels of dopaminergic neurons), is located in the midbrain. In Parkinson’s disease, which is characterized by a deficit of dopamine, degeneration of neurons in the substantia nigra is evident.
The Diencephalon (“Interbrain”)
The diencephalon is the region of the embryonic vertebrate neural tube that gives rise to posterior forebrain structures. In adults, the diencephalon appears at the upper end of the brain stem, situated between the cerebrum and the brain stem. It is home to the limbic system, which is considered the seat of emotion in the human brain. The diencephalon is made up of four distinct components: the thalamus, the subthalamus, the hypothalamus, and the epithalamus.
Thalamus
The thalamus is part of the limbic system. It consists of two lobes of grey matter along the bottom of the cerebral cortex. Because nearly all sensory information passes through the thalamus, it is considered the sensory “way station” of the brain, passing information on to the cerebral cortex (which is in the forebrain). Lesions of, or stimulation to, the thalamus are associated with changes in emotional reactivity. However, the importance of this structure in the regulation of emotional behavior is not due to the activity of the thalamus itself, but to the connections between the thalamus and other limbic-system structures.
Hypothalamus
The hypothalamus is a small part of the brain located just below the thalamus. Lesions of the hypothalamus interfere with motivated behaviors like sexuality, combativeness, and hunger. The hypothalamus also plays a role in emotion: parts of the hypothalamus seem to be involved in pleasure and rage, while the central part is linked to aversion, displeasure, and a tendency towards uncontrollable and loud laughing. When external stimuli are presented (for example, a dangerous stimulus), the hypothalamus sends signals to other limbic areas to trigger feeling states in response to the stimulus (in this case, fear).
The Spinal Cord
The spinal cord is a tail-like structure embedded in the vertebral canal of the spine. The adult spinal cord is about 40 cm long and weighs approximately 30 g. The spinal cord is attached to the underside of the medulla oblongata, and is organized to serve four distinct tasks:
1. to convey (mainly sensory) information to the brain;
2. to carry information generated in the brain to peripheral targets like skeletal muscles;
3. to control nearby organs via the autonomic nervous system;
4. to enable sensorimotor functions to control posture and other fundamental movements. | textbooks/socialsci/Psychology/Cognitive_Psychology/Cognitive_Psychology_(Andrade_and_Walker)/02%3A_The_Brain/2.02%3A_Lower-Level_Structures_of_the_Brain.txt |
The brain is divided into two hemispheres and four lobes, each of which specializes in a different function.
Learning Objectives
• Outline the structure and function of the lobes and hemispheres of the brain
TERMS
• corpus callosum: A wide, flat bundle of neural fibers beneath the cortex that connects the left and right cerebral hemispheres and facilitates interhemispheric communication.
• lateralization: Localization of a function, such as speech, to the right or left side of the brain.
• visuospatial: Of or pertaining to the visual perception of spatial relationships.
Brain Lateralization
The brain is divided into two halves, called hemispheres. There is evidence that each brain hemisphere has its own distinct functions, a phenomenon referred to as lateralization. The left hemisphere appears to dominate the functions of speech, language processing and comprehension, and logical reasoning, while the right is more dominant in spatial tasks like vision-independent object recognition (such as identifying an object by touch or another nonvisual sense). However, it is easy to exaggerate the differences between the functions of the left and right hemispheres; both hemispheres are involved with most processes.
Additionally, neuroplasticity (the ability of a brain to adapt to experience) enables the brain to compensate for damage to one hemisphere by taking on extra functions in the other half, especially in young brains.
Corpus Callosum
The two hemispheres communicate with one another through the corpus callosum. The corpus callosum is a wide, flat bundle of neural fibers beneath the cortex that connects the left and right cerebral hemispheres and facilitates interhemispheric communication. The corpus callosum is sometimes implicated in the spread of seizures; patients with severe epilepsy sometimes undergo a corpus callosotomy, the surgical severing of the corpus callosum.
The Lobes of The Brain
The brain is separated into four lobes: the frontal, temporal, occipital, and parietal lobes.
Figure: Lobes of the brain. The brain is divided into four lobes, each of which is associated with different types of mental processes. Clockwise from left: the frontal lobe is in blue, the parietal lobe in yellow, the occipital lobe in red, and the temporal lobe in green.
The Frontal Lobe
The frontal lobe is associated with executive functions and motor performance. Executive functions are some of the highest-order cognitive processes that humans have. Examples include:
• planning and engaging in goal-directed behavior;
• recognizing future consequences of current actions;
• choosing between good and bad actions;
• overriding and suppressing socially unacceptable responses;
• determining similarities and differences between objects or situations.
The frontal lobe is considered to be the moral center of the brain because it is responsible for advanced decision-making processes. It also plays an important role in retaining emotional memories derived from the limbic system and in modifying those emotions to fit socially accepted norms.
The Temporal Lobe
The temporal lobe is associated with the retention of short- and long-term memories. It processes sensory input including auditory information, language comprehension, and naming.
It also creates emotional responses and controls biological drives such as aggression and sexuality.
The temporal lobe contains the hippocampus, which is the memory center of the brain. The hippocampus plays a key role in the formation of emotion-laden, long-term memories based on emotional input from the amygdala. The left temporal lobe holds the primary auditory cortex, which is important for processing the semantics of speech.
One specific portion of the temporal lobe, Wernicke’s area, plays a key role in speech comprehension. A related region, Broca’s area, located in the adjacent frontal lobe, underlies the ability to produce (rather than understand) speech. Patients with damage to Wernicke’s area can speak clearly but the words make no sense, while patients with damage to Broca’s area fail to form words properly and their speech is halting and effortful. These disorders are known as Wernicke’s and Broca’s aphasia, respectively; an aphasia is an impairment of the ability to produce or comprehend language.
Figure: Broca’s and Wernicke’s areas.
The Occipital Lobe
The occipital lobe contains most of the visual cortex and is the visual processing center of the brain. Cells on the posterior side of the occipital lobe are arranged as a spatial map of the retinal field. The visual cortex receives raw sensory information through sensors in the retina of the eyes, which is then conveyed through the optic tracts to the visual cortex. Other areas of the occipital lobe are specialized for different visual tasks, such as visuospatial processing, color discrimination, and motion perception . Damage to the primary visual cortex (located on the surface of the posterior occipital lobe) can cause blindness, due to the holes in the visual map on the surface of the cortex caused by the lesions .
The Parietal Lobe
The parietal lobe is associated with sensory skills. It integrates different types of sensory information and is particularly useful in spatial processing and navigation. The parietal lobe plays an important role in integrating sensory information from various parts of the body, understanding numbers and their relations, and manipulating objects. It also processes information related to the sense of touch.
The parietal lobe is composed of the somatosensory cortex and part of the visual system. The somatosensory cortex consists of a “map” of the body that processes sensory information from specific areas of the body. Several portions of the parietal lobe are important to language and visuospatial processing; the left parietal lobe is involved in symbolic functions in language and mathematics, while the right parietal lobe is specialized for processing images and interpreting maps (i.e., spatial relationships). | textbooks/socialsci/Psychology/Cognitive_Psychology/Cognitive_Psychology_(Andrade_and_Walker)/02%3A_The_Brain/2.03%3A_Lobes_-_Cerebral_Hemispheres_and_Lobes_of_the_Brain.txt
Learning Objectives
• Identify and describe the role of the parts of the limbic system, the midbrain, and hindbrain
Areas of the Forebrain
Other areas of the forebrain (which includes the lobes that you learned about previously) are the parts located beneath the cerebral cortex, including the thalamus and the limbic system. The thalamus is a sensory relay for the brain. All of our senses, with the exception of smell, are routed through the thalamus before being directed to other areas of the brain for processing (Figure 1).
The limbic system is involved in processing both emotion and memory. Interestingly, the sense of smell projects directly to the limbic system; therefore, not surprisingly, smell can evoke emotional responses in ways that other sensory modalities cannot. The limbic system is made up of a number of different structures, but three of the most important are the hippocampus, the amygdala, and the hypothalamus (Figure 2). The hippocampus is an essential structure for learning and memory. The amygdala is involved in our experience of emotion and in tying emotional meaning to our memories. The hypothalamus regulates a number of homeostatic processes, including the regulation of body temperature, appetite, and blood pressure. The hypothalamus also serves as an interface between the nervous system and the endocrine system and is involved in the regulation of sexual motivation and behavior.
Link to Learning
Clive Wearing, an accomplished musician, lost the ability to form new memories when his hippocampus was damaged through illness. Check out the first few minutes of this documentary video for an introduction to this man and his condition.
Midbrain and Hindbrain Structures
The midbrain is comprised of structures located deep within the brain, between the forebrain and the hindbrain. The reticular formation is centered in the midbrain, but it actually extends up into the forebrain and down into the hindbrain. The reticular formation is important in regulating the sleep/wake cycle, arousal, alertness, and motor activity.
The hindbrain is located at the back of the head and looks like an extension of the spinal cord. It contains the medulla, pons, and cerebellum (Figure 4). The medulla controls the automatic processes of the autonomic nervous system, such as breathing, blood pressure, and heart rate. The word pons literally means “bridge,” and as the name suggests, the pons serves to connect the brain and spinal cord. It also is involved in regulating brain activity during sleep. The medulla, pons, and midbrain together are known as the brainstem.
The cerebellum (Latin for “little brain”) receives messages from muscles, tendons, joints, and structures in our ear to control balance, coordination, movement, and motor skills. The cerebellum is also thought to be an important area for processing some types of memories. In particular, procedural memory, or memory involved in learning and remembering how to perform tasks, is thought to be associated with the cerebellum. Recall that H. M. was unable to form new explicit memories, but he could learn new tasks. This is likely due to the fact that H. M.’s cerebellum remained intact.
Link to Learning
Click on the link below to review each part of the brain and its purpose through the PsychSim Tutorial. The tutorial is only intended for practice. Please disregard the final screen that requests you submit answers to your instructor.
Brain and Behavior
For a fun recap of the parts of the brain, watch the following short clip from the old cartoon, Pinky and the Brain:
WHAT DO YOU THINK?: BRAIN DEAD AND ON LIFE SUPPORT
What would you do if your spouse or loved one was declared brain dead but his or her body was being kept alive by medical equipment? Whose decision should it be to remove a feeding tube? Should medical care costs be a factor?
On February 25, 1990, a Florida woman named Terri Schiavo went into cardiac arrest, apparently triggered by a bulimic episode. She was eventually revived, but her brain had been deprived of oxygen for a long time. Brain scans indicated that there was no activity in her cerebral cortex, and she suffered from severe and permanent cerebral atrophy. Basically, Schiavo was in a vegetative state. Medical professionals determined that she would never again be able to move, talk, or respond in any way. To remain alive, she required a feeding tube, and there was no chance that her situation would ever improve.
On occasion, Schiavo’s eyes would move, and sometimes she would groan. Despite the doctors’ insistence to the contrary, her parents believed that these were signs that she was trying to communicate with them.
After 12 years, Schiavo’s husband argued that his wife would not have wanted to be kept alive with no feelings, sensations, or brain activity. Her parents, however, were very much against removing her feeding tube. Eventually, the case made its way to the courts, both in the state of Florida and at the federal level. By 2005, the courts found in favor of Schiavo’s husband, and the feeding tube was removed on March 18, 2005. Schiavo died 13 days later.
Why did Schiavo’s eyes sometimes move, and why did she groan? Although the parts of her brain that control thought, voluntary movement, and feeling were completely damaged, her brainstem was still intact. Her medulla and pons maintained her breathing and caused involuntary movements of her eyes and the occasional groans. Over the 15-year period that she was on a feeding tube, Schiavo’s medical costs may have topped $7 million (Arnst, 2003).
These questions were brought to popular conscience 25 years ago in the case of Terri Schiavo, and they persist today. In 2013, a 13-year-old girl who suffered complications after tonsil surgery was declared brain dead. There was a battle between her family, who wanted her to remain on life support, and the hospital’s policies regarding persons declared brain dead. In another complicated 2013–14 case in Texas, a pregnant EMT professional declared brain dead was kept alive for weeks, despite her spouse’s directives, which were based on her wishes should this situation arise. In this case, state laws designed to protect an unborn fetus came into consideration until doctors determined the fetus unviable.
Decisions surrounding the medical response to patients declared brain dead are complex. What do you think about these issues?
THINK IT OVER
You read about H. M.’s memory deficits following the bilateral removal of his hippocampus and amygdala. Have you encountered a character in a book, television program, or movie that suffered memory deficits? How was that character similar to and different from H. M.?
GLOSSARY
• Amygdala: structure in the limbic system involved in our experience of emotion and tying emotional meaning to our memories
• Cerebellum: hindbrain structure that controls our balance, coordination, movement, and motor skills, and it is thought to be important in processing some types of memory
• Cerebral cortex: surface of the brain that is associated with our highest mental capabilities
• Forebrain: largest part of the brain, containing the cerebral cortex, the thalamus, and the limbic system, among other structures
• Hindbrain: division of the brain containing the medulla, pons, and cerebellum
• Hippocampus: structure in the temporal lobe associated with learning and memory
• Hypothalamus: forebrain structure that regulates sexual motivation and behavior and a number of homeostatic processes; serves as an interface between the nervous system and the endocrine system
• Limbic system: collection of structures involved in processing emotion and memory
• Medulla: hindbrain structure that controls automated processes like breathing, blood pressure, and heart rate
• Midbrain: division of the brain located between the forebrain and the hindbrain; contains the reticular formation
• Pons: hindbrain structure that connects the brain and spinal cord; involved in regulating brain activity during sleep
• Reticular formation: midbrain structure important in regulating the sleep/wake cycle, arousal, alertness, and motor activity
• Thalamus: sensory relay for the brain
• Ventral tegmental area (VTA): midbrain structure where dopamine is produced; associated with mood, reward, and addiction
Somatosensory and Motor Cortex
Cortical Processing
As described earlier, many of the sensory axons are positioned in the same way as their corresponding receptor cells in the body. This allows identification of the position of a stimulus on the basis of which receptor cells are sending information. The cerebral cortex also maintains this sensory topography in the particular areas of the cortex that correspond to the position of the receptor cells. The somatosensory cortex provides an example in which, in essence, the locations of the somatosensory receptors in the body are mapped onto the somatosensory cortex. This mapping is often depicted using a sensory homunculus (Figure 13).
The term homunculus comes from the Latin word for “little man” and refers to a map of the human body that is laid across a portion of the cerebral cortex. In the somatosensory cortex, the external genitals, feet, and lower legs are represented on the medial face of the gyrus within the longitudinal fissure. As the gyrus curves out of the fissure and along the surface of the parietal lobe, the body map continues through the thighs, hips, trunk, shoulders, arms, and hands. The head and face are just lateral to the fingers as the gyrus approaches the lateral sulcus. The representation of the body in this topographical map is medial to lateral from the lower to upper body. It is a continuation of the topographical arrangement seen in the dorsal column system, where axons from the lower body are carried in the fasciculus gracilis, whereas axons from the upper body are carried in the fasciculus cuneatus. As the dorsal column system continues into the medial lemniscus, these relationships are maintained. Also, the head and neck axons running from the trigeminal nuclei to the thalamus run adjacent to the upper body
fibers. The connections through the thalamus maintain topography such that the anatomic information is preserved. Note that this correspondence does not result in a perfectly miniature scale version of the body, but rather exaggerates the more sensitive areas of the body, such as the fingers and lower face. Less sensitive areas of the body, such as the shoulders and back, are mapped to smaller areas on the cortex.
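To make the idea of “exaggeration” concrete, the sketch below compares a rough share of body surface with a rough share of the cortical map for a few body parts. All of the percentages are invented placeholders chosen only to illustrate the distortion; they are not measured values.

```python
# Toy comparison of body-surface share vs. cortical-map share for a few
# body regions. All percentages are illustrative placeholders, not data.

body_surface_share = {"hand": 0.05, "face": 0.04, "trunk": 0.35, "leg": 0.30}
cortical_map_share = {"hand": 0.25, "face": 0.30, "trunk": 0.10, "leg": 0.10}

for part in body_surface_share:
    magnification = cortical_map_share[part] / body_surface_share[part]
    print(f"{part:>6}: cortical share is {magnification:.1f}x its surface share")

# Sensitive regions (hand, face) come out "magnified" in the map, while the
# trunk and legs are compressed -- the distortion the homunculus depicts.
```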
The cortex has been described as having specific regions that are responsible for processing specific information; there is the visual cortex, somatosensory cortex, gustatory cortex, etc. However, our experience of these senses is not divided. Instead, we experience what can be referred to as a seamless percept. Our perceptions of the various sensory modalities—though distinct in their content—are integrated by the brain so that we experience the world as a continuous whole.
In the cerebral cortex, sensory processing begins at the primary sensory cortex, then proceeds to an association area , and finally, into a multimodal integration area. For example, somatosensory information inputs directly into the primary somatosensory cortex in the post- central gyrus of the parietal lobe where general awareness of sensation (location and type of sensation) begins. In the somatosensory association cortex details are integrated into a whole. In the highest level of association cortex details are integrated from entirely different modalities to form complete representations as we experience them.
Motor Responses
The defining characteristic of the somatic nervous system is that it controls skeletal muscles. Somatic senses inform the nervous system about the external environment, but the response to that is through voluntary muscle movement. The term “voluntary” suggests that there is a conscious decision to make a movement. However, some aspects of the somatic system use voluntary muscles without conscious control. One example is the ability of our breathing to switch to unconscious control while we are focused on another task. However, the muscles that are responsible for the basic process of breathing are also utilized for speech, which is entirely voluntary. | textbooks/socialsci/Psychology/Cognitive_Psychology/Cognitive_Psychology_(Andrade_and_Walker)/02%3A_The_Brain/2.04%3A_Limbic_System_and_Other_Brain_Areas.txt |
Cortical Processing
As described earlier, many of the sensory axons are positioned in the same way as their corresponding receptor cells in the body. This allows identification of the position of a stimulus on the basis of which receptor cells are sending information. The cerebral cortex also maintains this sensory topography in the particular areas of the cortex that correspond to the position of the receptor cells. The somatosensory cortex provides an example in which, in essence, the locations of the somatosensory receptors in the body are mapped onto the somatosensory cortex. This mapping is often depicted using a sensory homunculus (Figure 13).
The term homunculus comes from the Latin word for “little man” and refers to a map of the human body that is laid across a portion of the cerebral cortex. In the somatosensory cortex, the external genitals, feet, and lower legs are represented on the medial face of the gyrus within the longitudinal fissure. As the gyrus curves out of the fissure and along the surface of the parietal lobe, the body map continues through the thighs, hips, trunk, shoulders, arms, and hands. The head and face are just lateral to the fingers as the gyrus approaches the lateral sulcus. The representation of the body in this topographical map is medial to lateral from the lower to upper body. It is a continuation of the topographical arrangement seen in the dorsal column system, where axons from the lower body are carried in the fasciculus gracilis, whereas axons from the upper body are carried in the fasciculus cuneatus. As the dorsal column system continues into the medial lemniscus, these relationships are maintained. Also, the head and neck axons running from the trigeminal nuclei to the thalamus run adjacent to the upper body
fibers. The connections through the thalamus maintain topography such that the anatomic information is preserved. Note that this correspondence does not result in a perfectly miniature scale version of the body, but rather exaggerates the more sensitive areas of the body, such as the fingers and lower face. Less sensitive areas of the body, such as the shoulders and back, are mapped to smaller areas on the cortex.
Figure 13. The Sensory Homunculus. A cartoon representation of the sensory homunculus arranged adjacent to the cortical region in which the processing takes place.
The cortex has been described as having specific regions that are responsible for processing specific information; there is the visual cortex, somatosensory cortex, gustatory cortex, etc. However, our experience of these senses is not divided. Instead, we experience what can be referred to as a seamless percept. Our perceptions of the various sensory modalities—though distinct in their content—are integrated by the brain so that we experience the world as a continuous whole.
In the cerebral cortex, sensory processing begins at the primary sensory cortex, then proceeds to an association area, and finally, into a multimodal integration area. For example, somatosensory information inputs directly into the primary somatosensory cortex in the post- central gyrus of the parietal lobe where general awareness of sensation (location and type of sensation) begins. In the somatosensory association cortex details are integrated into a whole. In the highest level of association cortex details are integrated from entirely different modalities to form complete representations as we experience them.
Motor Responses
The defining characteristic of the somatic nervous system is that it controls skeletal muscles. Somatic senses inform the nervous system about the external environment, but the response to that is through voluntary muscle movement. The term “voluntary” suggests that there is a conscious decision to make a movement. However, some aspects of the somatic system use voluntary muscles without conscious control. One example is the ability of our breathing to switch to unconscious control while we are focused on another task. However, the muscles that are responsible for the basic process of breathing are also utilized for speech, which is entirely voluntary. | textbooks/socialsci/Psychology/Cognitive_Psychology/Cognitive_Psychology_(Andrade_and_Walker)/02%3A_The_Brain/2.05%3A_Somatosensory_and_Motor_Cortex.txt |
Learning Objectives
• Explain the two hemispheres of the brain, lateralization and plasticity
The central nervous system (CNS), consists of the brain and the spinal cord.
The Brain
The brain is a remarkably complex organ comprised of billions of interconnected neurons and glia. It is a bilateral, or two-sided, structure that can be separated into distinct lobes. Each lobe is associated with certain types of functions, but, ultimately, all of the areas of the brain interact with one another to provide the foundation for our thoughts and behaviors.
The Spinal Cord
It can be said that the spinal cord is what connects the brain to the outside world. Because of it, the brain can act. The spinal cord is like a relay station, but a very smart one. It not only routes messages to and from the brain, but it also has its own system of automatic processes, called reflexes.
The top of the spinal cord merges with the brain stem, where the basic processes of life are controlled, such as breathing and digestion. In the opposite direction, the spinal cord ends just below the ribs—contrary to what we might expect, it does not extend all the way to the base of the spine.
The spinal cord is functionally organized in 30 segments, corresponding with the vertebrae. Each segment is connected to a specific part of the body through the peripheral nervous system. Nerves branch out from the spine at each vertebra. Sensory nerves bring messages in; motor nerves send messages out to the muscles and organs. Messages travel to and from the brain through every segment.
Some sensory messages are immediately acted on by the spinal cord, without any input from the brain. Withdrawal from heat and the knee jerk are two examples. When a sensory message meets certain parameters, the spinal cord initiates an automatic reflex. The signal passes from the sensory nerve to a simple processing center, which initiates a motor command. Precious fractions of a second are saved, because messages don’t have to travel to the brain, be processed, and be sent back. In matters of survival, the spinal reflexes allow the body to react extraordinarily fast.
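A rough back-of-the-envelope calculation shows why the short spinal loop wins. The conduction speed, path lengths, synaptic delays, and central processing time below are ballpark assumptions chosen only for illustration, not measurements.

```python
# Back-of-the-envelope comparison of a spinal reflex vs. a route through the
# brain. Speeds, distances, and delays are rough illustrative assumptions.

conduction_speed = 60.0   # m/s, assumed for large myelinated fibers
synaptic_delay = 0.001    # s per synapse, assumed
brain_processing = 0.15   # s, assumed time to perceive and decide

def travel_time(distance_m, n_synapses):
    """Total time: conduction along the path plus synaptic delays."""
    return distance_m / conduction_speed + n_synapses * synaptic_delay

# Spinal reflex: hand -> spinal cord -> hand (short loop, few synapses).
reflex = travel_time(distance_m=1.5, n_synapses=2)

# Voluntary route: hand -> brain -> hand, plus central processing time.
voluntary = travel_time(distance_m=2.0, n_synapses=4) + brain_processing

print(f"reflex loop   : ~{reflex * 1000:.0f} ms")
print(f"via the brain : ~{voluntary * 1000:.0f} ms")
# With these assumptions the reflex responds well over 100 ms sooner --
# a meaningful head start when pulling a hand away from a hot stove.
```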
The spinal cord is protected by bony vertebrae and cushioned in cerebrospinal fluid, but injuries still occur. When the spinal cord is damaged in a particular segment, all lower segments are cut off from the brain, causing paralysis. Therefore, the lower on the spine damage is, the fewer functions an injured individual loses.
The Two Hemispheres
The surface of the brain, known as the cerebral cortex, is very uneven, characterized by a distinctive pattern of folds or bumps, known as gyri (singular: gyrus), and grooves, known as sulci (singular: sulcus), shown in Figure 1. These gyri and sulci form important landmarks that allow us to separate the brain into functional centers. The most prominent sulcus, known as the longitudinal fissure, is the deep groove that separates the brain into two halves or hemispheres: the left hemisphere and the right hemisphere.
There is evidence of some specialization of function—referred to as lateralization—in each hemisphere, mainly regarding differences in language ability. Beyond that, however, the differences that have been found have been minor. What we do know is that the left hemisphere controls the right half of the body, and the right hemisphere controls the left half of the body.
The two hemispheres are connected by a thick band of neural fibers known as the corpus callosum, consisting of about 200 million axons. The corpus callosum allows the two hemispheres to communicate with each other and allows information processed on one side of the brain to be shared with the other side.
Normally, we are not aware of the different roles that our two hemispheres play in day-to-day functions, but there are people who come to know the capabilities and functions of their two hemispheres quite well. In some cases of severe epilepsy, doctors elect to sever the corpus callosum as a means of controlling the spread of seizures (Figure 2). While this is an effective treatment option, it results in individuals who have split brains. After surgery, these split-brain patients show a variety of interesting behaviors. For instance, a split-brain patient is unable to name a picture that is shown in the patient’s left visual field because the information is only available in the largely nonverbal right hemisphere. However, they are able to recreate the picture with their left hand, which is also controlled by the right hemisphere. When the more verbal left hemisphere sees the picture that the hand drew, the patient is able to name it (assuming the left hemisphere can interpret what was drawn by the left hand).
Much of what we know about the functions of different areas of the brain comes from studying changes in the behavior and ability of individuals who have suffered damage to the brain. For example, researchers study the behavioral changes caused by strokes to learn about the functions of specific brain areas. A stroke, caused by an interruption of blood flow to a region in the brain, causes a loss of brain function in the affected region. The damage can be in a small area, and, if it is, this gives researchers the opportunity to link any resulting behavioral changes to a specific area. The types of deficits displayed after a stroke will be largely dependent on where in the brain the damage occurred.
Consider Theona, an intelligent, self-sufficient woman, who is 62 years old. Recently, she suffered a stroke in the front portion of her right hemisphere. As a result, she has great difficulty moving her left leg. (As you learned earlier, the right hemisphere controls the left side of the body; also, the brain’s main motor centers are located at the front of the head, in the frontal lobe.) Theona has also experienced behavioral changes. For example, while in the produce section of the grocery store, she sometimes eats grapes, strawberries, and apples directly from their bins before paying for them. This behavior—which would have been very embarrassing to her before the stroke—is consistent with damage in another region in the frontal lobe—the prefrontal cortex, which is associated with judgment, reasoning, and impulse control.
Link to Learning
Watch this video to see an incredible example of the challenges facing a split-brain patient shortly following the surgery to sever her corpus callosum.
Watch this second video about another patient who underwent a dramatic surgery to prevent her seizures. You’ll learn more about the brain’s ability to change, adapt, and reorganize itself, also known as brain plasticity.
GLOSSARY
• Corpus callosum: thick band of neural fibers connecting the brain’s two hemispheres
• Gyrus (plural: gyri): bump or ridge on the cerebral cortex
• Hemisphere: left or right half of the brain
• Lateralization: concept that each hemisphere of the brain is associated with specialized functions
• Longitudinal fissure: deep groove in the brain’s cortex
• Sulcus (plural: sulci): depressions or grooves in the cerebral cortex | textbooks/socialsci/Psychology/Cognitive_Psychology/Cognitive_Psychology_(Andrade_and_Walker)/02%3A_The_Brain/2.06%3A_Hemispheres.txt
Neuroplasticity, Neurogenesis, and Brain Lateralization
Learning Objectives
• Explain and define the concepts of brain neuroplasticity, neurogenesis, and brain lateralization.
The control of some bodily functions, such as movement, vision, and hearing, is performed in specific areas of the cortex, and if an area is damaged, the individual will likely lose the ability to perform the corresponding function. For instance, if an infant suffers damage to facial recognition areas in the temporal lobe, it is likely that he or she will never be able to recognize faces. However, the brain is not divided in an entirely rigid way. The brain’s neurons have a remarkable capacity to reorganize and extend themselves to carry out particular functions in response to the needs of the organism and to repair damage. As a result, the brain constantly creates new neural communication routes and rewires existing ones. Neuroplasticity is the brain’s ability to change its structure and function in response to experience or damage .
Neuroplasticity enables us to learn and remember new things and adjust to new experiences.
Our brains are the most “plastic” when we are young children, as it is during this time that we learn the most about our environment. And neuroplasticity continues to be observed even in adults. The principles of neuroplasticity help us understand how our brains develop to reflect our experiences. For instance, accomplished musicians have a larger auditory cortex compared with the general population and also require less neural activity to play their instruments than do novices. These observations reflect the changes in the brain that follow our experiences.
Plasticity is also observed when damage occurs to the brain or to parts of the body that are represented in the motor and sensory cortexes. When a tumor in the left hemisphere of the brain impairs language, the right hemisphere begins to compensate to help the person recover the ability to speak. And if a person loses a finger, the area of the sensory cortex that previously received information from the missing finger begins to receive input from adjacent fingers, causing the remaining digits to become more sensitive to touch.
Although neurons cannot repair or regenerate themselves as skin and blood vessels can, new evidence suggests that the brain can engage in neurogenesis, the forming of new neurons. These new neurons originate deep in the brain and may then migrate to other brain areas where they form new connections with other neurons. This leaves open the possibility that someday scientists might be able to “rebuild” damaged brains by creating drugs that help grow neurons.
Unique Functions of the Left and Right Hemispheres Using Split-Brain Patients
We learned that the left hemisphere of the brain primarily senses and controls the motor movements on the right side of the body, and vice versa. This fact provides an interesting way to study brain lateralization—the idea that the left and the right hemispheres of the brain are specialized to perform different functions. Gazzaniga, Bogen, and Sperry studied a patient, known as W. J., who had undergone an operation to relieve severe seizures. In this surgery, the region that normally connects the two halves of the brain and supports communication between the hemispheres, known as the corpus callosum, is severed. As a result, the patient essentially becomes a person with two separate brains. Because the left and right hemispheres are separated, each hemisphere develops a mind of its own, with its own sensations, concepts, and motivations.
In their research, Gazzaniga and his colleagues tested the ability of W. J. to recognize and respond to objects and written passages that were presented to only the left or to only the right brain hemispheres. The researchers had W. J. look straight ahead and then flashed, for a fraction of a second, a picture of a geometric shape to the left of where he was looking. By doing so, they assured that—because the two hemispheres had been separated—the image of the shape was experienced only in the right brain hemisphere (remember that sensory input from the left side of the body is sent to the right side of the brain). Gazzaniga and his colleagues found that W. J. was able to identify what he had been shown when he was asked to pick the object from a series of shapes, using his left hand, but that he could not do so when the object was shown in the right visual field. Conversely, W. J. could easily read written material presented in the right visual field (and thus experienced in the left hemisphere) but not when it was presented in the left visual field.
Visual and Verbal Processing in the Split-Brain Patient
The information presented on the left side of our field of vision is transmitted to the right brain hemisphere, and vice versa. In split-brain patients, the severed corpus callosum does not permit information to be transferred between hemispheres, which allows researchers to learn about the functions of each hemisphere.
This research, and many other studies following it, demonstrated that the two brain hemispheres specialize in different abilities. In most people, the ability to speak, write, and understand language is located in the left hemisphere. This is why W. J. could read passages that were presented on the right side and thus transmitted to the left hemisphere, but could not read passages that were only experienced in the right brain hemisphere. The left hemisphere is also better at math and at judging time and rhythm. It is also superior in coordinating the order of complex movements—for example, lip movements needed for speech. The right hemisphere has only limited verbal abilities, and yet it excels in perceptual skills. The right hemisphere is able to recognize objects, including faces, patterns, and melodies, and it can put a puzzle together or draw a picture. This is why W. J. could pick out the image when he saw it on the left, but not the right, visual field.
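The crossed wiring behind these findings can be summarized as a simple lookup: which hemisphere receives the stimulus, and how that hemisphere can respond once the corpus callosum is severed. The sketch below is only a mnemonic for the logic of the experiment, not a model of neural processing, and the response labels are deliberate simplifications.

```python
# A mnemonic for the logic of the split-brain findings (not a neural model).
# With the corpus callosum severed, information shown to one visual field
# stays in the opposite hemisphere, which can respond only in certain ways.

HEMISPHERE_FOR_FIELD = {
    "left visual field": "right hemisphere",
    "right visual field": "left hemisphere",
}

# Simplified response capacities of each isolated hemisphere in a typical
# right-handed patient; real patients are more variable than this.
RESPONSES = {
    "left hemisphere": "name the object aloud",
    "right hemisphere": "select the object with the left hand, but not name it",
}

def predict(visual_field):
    hemisphere = HEMISPHERE_FOR_FIELD[visual_field]
    return f"{visual_field} -> {hemisphere}: can {RESPONSES[hemisphere]}"

print(predict("right visual field"))  # left hemisphere: can name it aloud
print(predict("left visual field"))   # right hemisphere: can pick it out, not name it
```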
Although Gazzaniga’s research demonstrated that the brain is in fact lateralized, such that the two hemispheres specialize in different activities, this does not mean that when people behave in a certain way or perform a certain activity they are using only one hemisphere of their brains at a time. That would be drastically oversimplifying the concept of brain differences. We normally use both hemispheres at the same time, and the difference between the abilities of the two hemispheres is not absolute. | textbooks/socialsci/Psychology/Cognitive_Psychology/Cognitive_Psychology_(Andrade_and_Walker)/02%3A_The_Brain/2.07%3A_Split-Brain_Measures-severing_the_corpus_callosum.txt |
Cortical Responses
Let’s start with sensory stimuli that have been registered through receptor cells and the information relayed to the CNS along ascending pathways. In the cerebral cortex, the initial processing of sensory perception progresses to associative processing and then integration in multimodal areas of cortex. These levels of processing can lead to the incorporation of sensory perceptions into memory, but more importantly, they lead to a response. The completion of cortical processing through the primary, associative, and integrative sensory areas initiates a similar progression of motor processing, usually in different cortical areas.
Whereas the sensory cortical areas are located in the occipital, temporal, and parietal lobes, motor functions are largely controlled by the frontal lobe. The most anterior regions of the frontal lobe—the prefrontal areas—are important for executive functions, which are those cognitive functions that lead to goal-directed behaviors. These higher cognitive processes include working memory, which has been called a “mental scratch pad” that can help organize and represent information that is not in the immediate environment. The prefrontal cortex is responsible for aspects of attention, such as inhibiting distracting thoughts and actions so that a person can focus on a goal and direct behavior toward achieving that goal.
The functions of the prefrontal cortex are integral to the personality of an individual, because it is largely responsible for what a person intends to do and how they accomplish those plans. A famous case of damage to the prefrontal cortex is that of Phineas Gage, dating back to 1848. He was a railroad worker who had a metal spike impale his prefrontal cortex (Figure 1). He survived the accident, but according to second-hand accounts, his personality changed drastically.
Friends described him as no longer acting like himself. Whereas he was a hardworking, amiable man before the accident, he turned into an irritable, temperamental, and lazy man after the accident. Many of the accounts of his change may have been inflated in the retelling, and some behavior was likely attributable to alcohol used as a pain medication. However, the accounts suggest that some aspects of his personality did change. Also, there is new evidence that though his life changed dramatically, he was able to become a functioning stagecoach driver, suggesting that the brain has the ability to recover even from major trauma such as this.
Figure 18. Phineas Gage. The victim of an accident while working on a railroad in 1848, Phineas Gage had a large iron rod impaled through the prefrontal cortex of his frontal lobe. After the accident, his personality appeared to change, but he eventually learned to cope with the trauma and lived as a coach driver even after such a traumatic event. (credit b: John M. Harlow, MD) | textbooks/socialsci/Psychology/Cognitive_Psychology/Cognitive_Psychology_(Andrade_and_Walker)/02%3A_The_Brain/2.08%3A_Trauma.txt |
Learning Objectives
1. Describe a general model of scientific research in psychology and give specific examples that fit the model.
2. Explain who conducts scientific research in psychology and why they do it.
3. Distinguish between basic research and applied research.
03: Methods of Research
Figure 1 presents a more specific model of scientific research in psychology. The researcher (who more often than not is really a small group of researchers) formulates a research question, conducts a study designed to answer the question, analyzes the resulting data, draws conclusions about the answer to the question, and publishes the results so that they become part of the research literature. Because the research literature is one of the primary sources of new research questions, this process can be thought of as a cycle. New research leads to new questions, which lead to new research, and so on. Figure 1 also indicates that research questions can originate outside of this cycle either with informal observations or with practical problems that need to be solved. But even in these cases, the researcher would start by checking the research literature to see if the question had already been answered and to refine it based on what previous research had already found.
The research by Mehl and his colleagues is described nicely by this model. Their question— whether women are more talkative than men—was suggested to them both by people’s stereotypes and by published claims about the relative talkativeness of women and men. When they checked the research literature, however, they found that this question had not been adequately addressed in scientific studies. They conducted a careful empirical study, analyzed the results (finding very little difference between women and men), and published their work so that it became part of the research literature. The publication of their article is not the end of the story, however, because their work suggests many new questions (about the reliability of the result, about potential cultural differences, etc.) that will likely be taken up by them and by other researchers inspired by their work.
As another example, consider that as cell phones became more widespread during the 1990s, people began to wonder whether, and to what extent, cell phone use had a negative effect on driving. Many psychologists decided to tackle this question scientifically (Collet, Guillot, & Petit, 2010). It was clear from previously published research that engaging in a simple verbal task impairs performance on a perceptual or motor task carried out at the same time, but no one had studied the effect specifically of cell phone use on driving. Under carefully controlled conditions, these researchers compared people’s driving performance while using a cell phone with their performance while not using a cell phone, both in the lab and on the road. They found that people’s ability to detect road hazards, reaction time, and control of the vehicle were all impaired by cell phone use. Each new study was published and became part of the growing research literature on this topic.
Who Conducts Scientific Research in Psychology?
Scientific research in psychology is generally conducted by people with doctoral degrees
(usually the doctor of philosophy [PhD]) and master’s degrees in psychology and related fields, often supported by research assistants with bachelor’s degrees or other relevant training. Some of them work for government agencies (e.g., the National Institute of Mental Health), for nonprofit organizations (e.g., the American Cancer Society), or in the private sector (e.g., in product development). However, the majority of them are college and university faculty, who often collaborate with their graduate and undergraduate students. Although some researchers are trained and licensed as clinicians—especially those who conduct research in clinical psychology—the majority are not. Instead, they have expertise in one or more of the many other subfields of psychology: behavioral neuroscience, cognitive psychology, developmental psychology, personality psychology, social psychology, and so on. Doctoral-level researchers might be employed to conduct research full-time or, like many college and university faculty members, to conduct research in addition to teaching classes and serving their institution and community in other ways.
Of course, people also conduct research in psychology because they enjoy the intellectual and technical challenges involved and the satisfaction of contributing to scientific knowledge of human behavior. You might find that you enjoy the process too. If so, your college or university might offer opportunities to get involved in ongoing research as either a research assistant or a participant. Of course, you might find that you do not enjoy the process of conducting scientific research in psychology. But at least you will have a better understanding of where scientific knowledge in psychology comes from, an appreciation of its strengths and limitations, and an awareness of how it can be applied to solve practical problems in psychology and everyday life.
Scientific Psychology Blogs
A fun and easy way to follow current scientific research in psychology is to read any of the many excellent blogs devoted to summarizing and commenting on new findings.
Among them are the following:
You can also browse to http://www.researchblogging.org, select psychology as your topic, and read entries from a wide variety of blogs.
The Broader Purposes of Scientific Research in Psychology
People have always been curious about the natural world, including themselves and their behavior. (In fact, this is probably why you are studying psychology in the first place.) Science grew out of this natural curiosity and has become the best way to achieve detailed and accurate knowledge. Keep in mind that most of the phenomena and theories that fill psychology textbooks are the products of scientific research. In a typical introductory psychology textbook, for example, one can learn about specific cortical areas for language and perception, principles of classical and operant conditioning, biases in reasoning and judgment, and people’s surprising tendency to obey authority. And scientific research continues because what we know right now only scratches the surface of what we can know.
Scientific research is often classified as being either basic or applied. Basic research in psychology is conducted primarily for the sake of achieving a more detailed and accurate understanding of human behavior, without necessarily trying to address any particular practical problem. The research of Mehl and his colleagues falls into this category. Applied research is conducted primarily to address some practical problem. Research on the effects of cell phone use on driving, for example, was prompted by safety concerns and has led to the enactment of laws to limit this practice. Although the distinction between basic and applied research is convenient, it is not always clear-cut. For example, basic research on sex differences in talkativeness could eventually have an effect on how marriage therapy is practiced, and applied research on the effect of cell phone use on driving could produce new insights into basic processes of perception, attention, and action.
Key Takeaways
• Research in psychology can be described by a simple cyclical model. A research question based on the research literature leads to an empirical study, the results of which are published and become part of the research literature.
• Scientific research in psychology is conducted mainly by people with doctoral degrees in psychology and related fields, most of whom are college and university faculty members. They do so for professional and for personal reasons, as well as to contribute to scientific knowledge about human behavior.
• Basic research is conducted to learn about human behavior for its own sake, and applied research is conducted to solve some practical problem. Both are valuable, and the distinction between the two is not always clear-cut.
Exercises
1. Practice: Find a description of an empirical study in a professional journal or in one of the scientific psychology blogs. Then write a brief description of the research in terms of the cyclical model presented here. One or two sentences for each part of the cycle should suffice.
2. Practice: Based on your own experience or on things you have already learned about psychology, list three basic research questions and three applied research questions of interest to you.
4.01: Memory and the Brain
Neural Correlates of Memory Consolidation
The hippocampus, amygdala, and cerebellum play important roles in the consolidation and manipulation of memory.
Learning Objectives
• Analyze the role each brain structure plays in memory formation and consolidation
Key Points
• Memory consolidation is a category of processes that stabilize a memory trace after its initial acquisition.
• The hippocampus is essential for the consolidation of both short-term and long-term memories. Damage to this area of the brain can render a person incapable of making new memories and may even affect older memories that have not been fully consolidated.
• The amygdala has been associated with enhanced retention of memory. Because of this, it is thought to modulate memory consolidation. The effect is most pronounced in emotionally charged events.
• The cerebellum is associated with procedural memory and motor learning. It is theorized that all processes of working memory are adaptively modeled by the cerebellum.
Key Terms
• declarative memory : The type of long-term memory that stores facts and events; also known as conscious or explicit memory.
• encoding : The process of converting information into a construct that can be stored within the brain.
• consolidation : The act or process of turning short-term memories into more permanent, long-term memories.
Memory consolidation is a category of processes that stabilize a memory trace after its initial acquisition. Like encoding, consolidation affects how well a memory will be remembered after it is stored: if it is encoded and consolidated well, the memory will be easily retrieved in full detail, but if encoding or consolidation is neglected, the memory will not be retrieved or may not be accurate.
Consolidation occurs through communication between several parts of the brain, including the hippocampus, the amygdala, and the cerebellum.
The Hippocampus
While psychologists and neuroscientists debate the exact role of the hippocampus, they generally agree that it plays an essential role in both the formation of new memories about experienced events and declarative memory (which handles facts and knowledge rather than motor skills). The hippocampus is critical to the formation of memories of events and facts.
Information regarding an event is not instantaneously stored in long-term memory. Instead, sensory details from the event are slowly assimilated into long-term storage over time through the process of consolidation. Some evidence supports the idea that, although these forms of memory often last a lifetime, the hippocampus ceases to play a crucial role in the retention of memory after the period of consolidation.
Damage to the hippocampus usually results in difficulties forming new memories, or anterograde amnesia, and normally also brings about problems accessing memories that were created prior to the damage, or retrograde amnesia. A famous case study that made this theory plausible is the story of a patient known as HM: After his hippocampus was removed in an effort to cure his epilepsy, he lost the ability to form new memories. People with damage to the
hippocampus may still be able to learn new skills, however, because those types of memory are non-declarative. Damage may not affect much older memories. All this contributes to the idea that the hippocampus may not be crucial in memory retention in the post-consolidation stages.
The Amygdala
The amygdala is involved in memory consolidation—specifically, in how consolidation is modulated. “Modulation” refers to the strength with which a memory is consolidated. In particular, it appears that emotional arousal following an event influences the strength of the subsequent memory. Greater emotional arousal following learning enhances a person’s retention of that stimulus.
The amygdala is involved in mediating the effects of emotional arousal on the strength of the memory of an event. Even if the amygdala is damaged, memories can still be encoded. The amygdala is most helpful in enhancing the memories of emotionally charged events, such as recalling all of the details on a day when you experienced a traumatic accident.
The Cerebellum
The cerebellum plays a role in the learning of procedural memory (i.e., routine, “practiced” skills), and motor learning, such as skills requiring coordination and fine motor control. Playing a musical instrument, driving a car, and riding a bike are examples of skills requiring procedural memory. The cerebellum is more generally involved in motor learning, and damage to it can result in problems with movement; specifically, it is thought to coordinate the timing and accuracy of movements, and to make long-term changes (learning) to improve these skills. A person with hippocampal damage might still be able to remember how to play the piano but not remember facts about their life. But a person with damage to their cerebellum would have the opposite problem: they would remember their declarative memories, but would have trouble with procedural memories like playing the piano.
4.02: Memory Processes
Learning Objectives
• Discuss the physical characteristics of memory storage
Although the physical location of memory remains relatively unknown, it is thought to be distributed in neural networks throughout the brain.
Key Takeaways
Key Points
• It is theorized that memories are stored in neural networks in various parts of the brain associated with different types of memory, including short-term memory, sensory memory, and long-term memory.
• Memory traces, or engrams, are physical neural changes associated with memories. Scientists have gained knowledge about these neuronal codes from studies on neuroplasticity.
• Encoding of episodic memory involves lasting changes in molecular structures, which alter communication between neurons. Recent functional-imaging studies have detected working-memory signals in the medial temporal lobe and the prefrontal cortex.
• Both the frontal lobe and prefrontal cortex are associated with long- and short-term memory, suggesting a strong link between these two types of memory.
• The hippocampus is integral in consolidating memories but does not seem to store memories itself.
Key Terms
• engram: A postulated physical or biochemical change in neural tissue that represents a memory; a memory trace.
• neuroplasticity: The state or quality of the brain that allows it to adapt to experience through physical changes in connections.
Many areas of the brain have been associated with the processes of memory storage. Lesion studies and case studies of individuals with brain injuries have allowed scientists to determine which areas of the brain are most associated with which kinds of memory. However, the actual physical location of memories remains relatively unknown. It is theorized that memories are stored in neural networks in various parts of the brain associated with different types of memory, including short-term memory, sensory memory, and long-term memory. Keep in mind, however, that it is not sufficient to describe memory as solely dependent on specific brain regions, although there are areas and pathways that have been shown to be related to certain functions.
Memory Traces
Memory traces, or engrams, are the physical neural changes associated with memory storage. The big question of how information and mental experiences are coded and represented in the brain remains unanswered. However, scientists have gained much knowledge about neuronal codes from studies on neuroplasticity, the brain’s capacity to change its neural connections.
Most of this research has been focused on simple learning and does not clearly describe changes involved in more complex examples of memory.
Encoding of working memory involves the activation of individual neurons induced by sensory input. These electric spikes continue even after the sensation stops. Encoding of episodic memory (i.e., memories of experiences) involves lasting changes in molecular structures that alter communication between neurons. Recent functional-magnetic-resonance-imaging (fMRI) studies detected working memory signals in the medial temporal lobe and the prefrontal cortex. These areas are also associated with long-term memory, suggesting a strong relationship between working memory and long-term memory.
Brain Areas Associated with Memory
Imaging research and lesion studies have led scientists to conclude that certain areas of the brain may be more specialized for collecting, processing, and encoding specific types of memories. Activity in different lobes of the cerebral cortex has been linked to the formation of memories.
Sensory Memory
The temporal and occipital lobes are associated with sensation and are thus involved in sensory memory. Sensory memory is the briefest form of memory, with no storage capability. Instead, it is a temporary “holding cell” for sensory information, capable of holding information for a few seconds at most before either passing it to short-term memory or letting it disappear.
Short-Term Memory
Short-term memory is supported by brief patterns of neural communication that are dependent on regions of the prefrontal cortex, frontal lobe, and parietal lobe. The hippocampus is essential for the consolidation of information from short-term to long-term memory; however, it does not seem to store information itself, adding mystery to the question of where memories are stored. The hippocampus receives input from different parts of the cortex and sends output to various areas of the brain. The hippocampus may be involved in changing neural connections for at least three months after information is initially processed. This area is believed to be important for spatial and declarative (i.e., fact-based) memory as well.
Long-Term Memory
Long-term memory is maintained by stable and permanent changes in neural connections spread throughout the brain. The processes of consolidating and storing long-term memories have been particularly associated with the prefrontal cortex, cerebrum, frontal lobe, and medial temporal lobe. However, the permanent storage of long-term memories after consolidation and encoding appears to depend upon the connections between neurons, with more deeply processed memories having stronger connections.
Three Stages of the Learning/Memory Process
Psychologists distinguish between three necessary stages in the learning and memory process: encoding, storage, and retrieval. Encoding is defined as the initial learning of information; storage refers to maintaining information over time; retrieval is the ability to access information when you need it. If you meet someone for the first time at a party, you need to encode her name (Lyn Goff) while you associate her name with her face. Then you need to maintain the information over time. If you see her a week later, you need to recognize her face and have it serve as a cue to retrieve her name. Any successful act of remembering requires that all three stages be intact. However, two types of errors can also
occur. Forgetting is one type: you see the person you met at the party and you cannot recall her name. The other error is misremembering (false recall or false recognition): you see someone who looks like Lyn Goff and call the person by that name (false recognition of the face). Or, you might see the real Lyn Goff, recognize her face, but then call her by the name of another woman you met at the party (misrecall of her name).
Whenever forgetting or misremembering occurs, we can ask, at which stage in the learning/memory process was there a failure?—though it is often difficult to answer this question with precision. One reason for this inaccuracy is that the three stages are not as discrete as our description implies. Rather, all three stages depend on one another. How we encode information determines how it will be stored and what cues will be effective when we try to retrieve it. And too, the act of retrieval itself also changes the way information is subsequently remembered, usually aiding later recall of the retrieved information. The central point for now is that the three stages—encoding, storage, and retrieval—affect one another, and are inextricably bound together.
4.03: Encoding
Memory encoding allows an item of interest to be converted into a construct that is stored in the brain, which can later be recalled.
Memory encoding allows information to be converted into a construct that is stored in the brain indefinitely. Once it is encoded, it can be recalled from either short- or long-term memory. At a very basic level, memory encoding is like hitting “Save” on a computer file. Once a file is saved, it can be retrieved as long as the hard drive is undamaged. “Recall” refers to retrieving previously encoded information.
The process of encoding begins with perception, which is the identification, organization, and interpretation of any sensory information in order to understand it within the context of a particular environment. Stimuli are perceived by the senses, and related signals travel to the thalamus of the human brain, where they are synthesized into one experience. The hippocampus then analyzes this experience and decides if it is worth committing to long-term memory.
Encoding is achieved using chemicals and electric impulses within the brain. Neural pathways, or connections between neurons (brain cells), are actually formed or strengthened through a process called long-term potentiation, which alters the flow of information within the brain. In other words, as a person experiences novel events or sensations, the brain “rewires” itself in order to store those new experiences in memory.
Encoding refers to the initial experience of perceiving and learning information. Psychologists often study recall by having participants study a list of pictures or words. Encoding in these situations is fairly straightforward. However, “real life” encoding is much more challenging.
When you walk across campus, for example, you encounter countless sights and sounds— friends passing by, people playing Frisbee, music in the air. The physical and mental environments are much too rich for you to encode all the happenings around you or the internal thoughts you have in response to them. So, an important first principle of encoding is that it is selective: we attend to some events in our environment and we ignore others. A second point about encoding is that it is prolific; we are always encoding the events of our lives—attending to the world, trying to understand it. Normally this presents no problem, as our days are filled with routine occurrences, so we don’t need to pay attention to everything. But if something does happen that seems strange—during your daily walk across campus, you see a giraffe—then we pay close attention and try to understand why we are seeing what we are seeing.
Right after your typical walk across campus (one without the appearance of a giraffe), you would be able to remember the events reasonably well if you were asked. You could say whom you bumped into, what song was playing from a radio, and so on. However, suppose someone asked you to recall the same walk a month later. You wouldn’t stand a chance. You would likely be able to recount the basics of a typical walk across campus, but not the precise details of that particular walk. Yet, if you had seen a giraffe during that walk, the event would have been fixed in your mind for a long time, probably for the rest of your life. You would tell your friends about it, and, on later occasions when you saw a giraffe, you might be reminded of the day you saw one on campus. Psychologists have long pinpointed distinctiveness—having an event stand out as quite different from a background of similar events—as a key to remembering events.
In addition, when vivid memories are tinged with strong emotional content, they often seem to leave a permanent mark on us. Public tragedies, such as terrorist attacks, often create vivid memories in those who witnessed them. But even those of us not directly involved in such events may have vivid memories of them, including memories of first hearing about them. For example, many people are able to recall their exact physical location when they first learned about the assassination or accidental death of a national figure. The term flashbulb memory was originally coined by Brown and Kulik (1977) to describe this sort of vivid memory of finding out an important piece of news. The name refers to how some memories seem to be captured in the mind like a flash photograph; because of the distinctiveness and emotionality of the news, they seem to become permanently etched in the mind with exceptional clarity compared to other memories.
Take a moment and think back on your own life. Is there a particular memory that seems sharper than others? A memory where you can recall unusual details, like the colors of mundane things around you, or the exact positions of surrounding objects? Although people have great confidence in flashbulb memories like these, the truth is, our objective accuracy with them is far from perfect. That is, even though people may have great confidence in what they recall, their memories are not as accurate (e.g., what the actual colors were; where objects were truly placed) as they tend to imagine. Nonetheless, all other things being equal, distinctive and emotional events are well-remembered.
Details do not leap perfectly from the world into a person’s mind. We might say that we went to a party and remember it, but what we remember is (at best) what we encoded. As noted above, the process of encoding is selective, and in complex situations, relatively few of many possible details are noticed and encoded. The process of encoding always involves recoding—that is, taking the information from the form it is delivered to us and then converting it in a way that we can make sense of it. For example, you might try to remember the colors of a rainbow by using the acronym ROY G BIV (red, orange, yellow, green, blue, indigo, violet). The process of recoding the colors into a name can help us to remember. However, recoding can also introduce errors—when we accidentally add information during encoding, then remember that new material as if it had been part of the actual experience (as discussed below).
Psychologists have studied many recoding strategies that can be used during study to improve retention. First, research advises that, as we study, we should think of the meaning of the events, and we should try to relate new events to information we already know. This helps us form associations that we can use to retrieve information later. Second, imagining events also makes them more memorable; creating vivid images out of information (even verbal information) can greatly improve later recall. Creating imagery is part of the technique Simon Reinhard uses to remember huge numbers of digits, but we can all use images to encode information more effectively. The basic concept behind good encoding strategies is to form distinctive memories (ones that stand out), and to form links or associations among memories to help later retrieval. Using study strategies such as the ones described here is challenging, but the effort is well worth the benefits of enhanced learning and retention.
We emphasized earlier that encoding is selective: people cannot encode all information they are exposed to. However, recoding can add information that was not even seen or heard during the initial encoding phase. Several of the recoding processes, like forming associations between memories, can happen without our awareness. This is one reason people can sometimes remember events that did not actually happen—because during the process of recoding, details got added. One common way of inducing false memories in the laboratory employs a word-list technique. Participants hear lists of 15 words, like door, glass, pane, shade, ledge, sill, house, open, curtain, frame, view, breeze, sash, screen, and shutter. Later, participants are given a test in which they are shown a list of words and asked to pick out the ones they’d heard earlier. This second list contains some words from the first list (e.g., door, pane, frame) and some words not from the list (e.g., arm, phone, bottle). In this example, one of the words on the test is window, which—importantly—does not appear in the first list, but which is related to other words in that list. When subjects were tested, they were reasonably accurate with the studied words (door, etc.), recognizing them 72% of the time. However, when window was on the test, they falsely recognized it as having been on the list 84% of the time. The same thing happened with many other lists the authors used. This phenomenon is referred to as the DRM (for Deese-Roediger-McDermott) effect. One explanation for such results is that, while students listened to items in the list, the words triggered the students to think about window, even though window was never presented. In this way, people seem to encode events that are not actually part of their experience.
Because humans are creative, we are always going beyond the information we are given: we automatically make associations and infer from them what is happening. But, as with the word association mix-up above, sometimes we make false memories from our inferences—remembering the inferences themselves as if they were actual experiences. To illustrate this, Brewer gave people sentences to remember that were designed to elicit pragmatic inferences. Inferences, in general, refer to instances when something is not explicitly stated, but we are still able to guess the undisclosed intention. For example, if your friend told you that she didn’t want to go out to eat, you may infer that she doesn’t have the money to go out, or that she’s too tired. With pragmatic inferences, there is usually one particular inference you’re likely to make. Consider the statement Brewer gave her participants: “The karate champion hit the cinder block.” After hearing or seeing this sentence, participants who were given a memory test tended to remember the statement as having been, “The karate champion broke the cinder block.” This remembered statement is not necessarily a logical inference (i.e., it is perfectly reasonable that a karate champion could hit a cinder block without breaking it). Nevertheless, the pragmatic conclusion from hearing such a sentence is that the block was likely broken. The participants remembered this inference they made while hearing the sentence in place of the actual words that were in the sentence.
Encoding—the initial registration of information—is essential in the learning and memory process. Unless an event is encoded in some fashion, it will not be successfully remembered later. However, just because an event is encoded (even if it is encoded well), there’s no guarantee that it will be remembered later.
4.04: Storage
Memory storage allows us to hold onto information for a very long duration of time—even a lifetime.
Memory Storage
Memories are not stored as exact replicas of experiences; instead, they are modified and reconstructed during retrieval and recall. Memory storage is achieved through the process of encoding, through either short- or long-term memory. During the process of memory encoding, information is filtered and modified for storage in short-term memory. Information in short- term memory deteriorates constantly; however, if the information is deemed important or useful, it is transferred to long-term memory for extended storage. Because long-term memories must be held for indefinite periods of time, they are stored, or consolidated, in a way that optimizes space for other memories. As a result, long-term memory can hold much more information than short-term memory, but it may not be immediately accessible.
The way long-term memories are stored is similar to digital compression. This means that information is filed in a way that takes up the least amount of space, but in the process, details of the memory may be lost and not easily recovered. Because of this consolidation process, memories are more accurate the sooner they are retrieved after being stored. As the retention interval between encoding and retrieval of the memory lengthens, the accuracy of the memory decreases.
Short-Term Memory Storage
Short-term memory is the ability to hold information for a short duration of time (on the order of seconds). In the process of encoding, information enters the brain and can be quickly forgotten if it is not stored further in the short-term memory. George A. Miller suggested that the capacity of short-term memory storage is approximately seven items plus or minus two, but modern researchers are showing that this can vary depending on variables like the stored items’ phonological properties. When several elements (such as digits, words, or pictures) are held in short-term memory simultaneously, their representations compete with each other for recall, or degrade each other. Thereby, new content gradually pushes out older content, unless the older content is actively protected against interference by rehearsal or by directing attention to it.
Information in the short-term memory is readily accessible, but for only a short time. It continuously decays, so in the absence of rehearsal (keeping information in short-term memory by mentally repeating it) it can be forgotten.
Long-Term Memory Storage
In contrast to short-term memory, long-term memory is the ability to hold semantic information for a prolonged period of time. Items stored in short-term memory move to long- term memory through rehearsal, processing, and use. The capacity of long-term memory storage is much greater than that of short-term memory, and perhaps unlimited. However, the duration of long-term memories is not permanent; unless a memory is occasionally recalled, it may fail to be recalled on later occasions. This is known as forgetting.
Long-term memory storage can be affected by traumatic brain injury or lesions. Amnesia, a deficit in memory, can be caused by brain damage. Anterograde amnesia is the inability to store new memories; retrograde amnesia is the inability to retrieve old memories. These types of amnesia indicate that memory does have a storage process.
Every experience we have changes our brains. That may seem like a bold, even strange, claim at first, but it’s true. We encode each of our experiences within the structures of the nervous system, making new impressions in the process—and each of those impressions involves changes in the brain. Psychologists (and neurobiologists) say that experiences leave memory traces, or engrams (the two terms are synonyms). Memories have to be stored somewhere in the brain, so in order to do so, the brain biochemically alters itself and its neural tissue. Just like you might write yourself a note to remind you of something, the brain “writes” a memory trace, changing its own physical composition to do so. The basic idea is that events (occurrences in our environment) create engrams through a process of consolidation: the neural changes that occur after learning to create the memory trace of an experience. Although neurobiologists are concerned with exactly what neural processes change when memories are created, for psychologists, the term memory trace simply refers to the physical change in the nervous system (whatever that may be, exactly) that represents our experience.
Although the concept of engram or memory trace is extremely useful, we shouldn’t take the term too literally. It is important to understand that memory traces are not perfect little packets of information that lie dormant in the brain, waiting to be called forward to give an accurate report of past experience. Memory traces are not like video or audio recordings, capturing experience with great accuracy; as discussed earlier, we often have errors in our memory, which would not exist if memory traces were perfect packets of information. Thus, it is wrong to think that remembering involves simply “reading out” a faithful record of past experience. Rather, when we remember past events, we reconstruct them with the aid of our memory traces—but also with our current belief of what happened. For example, if you were trying to recall for the police who started a fight at a bar, you may not have a memory trace of who pushed whom first. However, let’s say you remember that one of the guys held the door open for you. When thinking back to the start of the fight, this knowledge (of how one guy was friendly to you) may unconsciously influence your memory of what happened in favor of the nice guy. Thus, memory is a construction of what you actually recall and what you believe happened. In a phrase, remembering is reconstructive (we reconstruct our past with the aid of memory traces) not reproductive (a perfect reproduction or recreation of the past).
4.05: Retrieval
Learning Objectives
• Outline the ways in which recall can be cued or fail
Key Points
• Retrieval cues can facilitate recall. Cues are thought to be most effective when they have a strong, complex link with the information to be recalled.
• Memories of events or items tend to be recalled in the same order in which they were experienced, so by thinking through a list or series of events, you can boost your recall of successive items.
• The primacy and recency effects show that items near the beginning and end of a list or series tend to be remembered most frequently.
• Retroactive interference is when new information interferes with remembering old information; proactive interference is when old information interferes with remembering new information.
• The tip-of-the-tongue phenomenon occurs when an individual can almost recall a word but cannot directly identify it. This is a type of retrieval failure; the memory cannot be accessed, but certain aspects of it, such as the first letter or similar words, can.
Key Terms
• working memory: The system that actively holds multiple pieces of information in the mind for execution of verbal and nonverbal tasks and makes them available for further information processing.
• tip-of-the-tongue phenomenon: The failure to retrieve a word from memory combined with partial recall and the feeling that retrieval is imminent.
• retrieval: The cognitive process of bringing stored information into consciousness.
Memory retrieval is the process of remembering information stored in long-term memory. Some theorists suggest that there are three stores of memory: sensory memory, long-term memory (LTM), and short-term memory (STM). Only data that is processed through STM and encoded into LTM can later be retrieved. Overall, the mechanisms of memory are not completely understood. However, there are many theories concerning memory retrieval.
There are two main types of memory retrieval: recall and recognition. In recall, the information must be retrieved from memories. In recognition, the presentation of a familiar outside stimulus provides a cue that the information has been seen before. A cue might be an object or a scene—any stimulus that reminds a person of something related. Recall may be assisted when retrieval cues are presented that enable the subject to quickly access the information in memory.
Patterns of Memory Retrieval
Memory retrieval can occur in several different ways, and there are many things that can affect it, such as how long it has been since the last time you retrieved the memory, what other information you have learned in the meantime, and many other variables. For example, the spacing effect allows a person to remember something they have studied many times spaced over a longer period of time rather than all at once. The testing effect shows that practicing retrieval of a concept can increase the chance of remembering it.
Retrieval
Memory retrieval, including recall and recognition, is the process of remembering information stored in long-term memory.
Some effects relate specifically to certain types of recall. There are three main types of recall studied in psychology: serial recall, free recall, and cued recall.
Serial Recall
People tend to recall items or events in the order in which they occurred. This is called serial recall and can be used to help cue memories. By thinking about a string of events or even words, it is possible to use a previous memory to cue the next item in the series. Serial recall helps a person to remember the order of events in his or her life. These memories appear to exist on a continuum on which more recent events are more easily recalled.
When recalling serial items presented as a list (a common occurrence in memory studies), two effects tend to surface: the primacy effect and the recency effect. The primacy effect occurs when a participant remembers words from the beginning of a list better than the words from the middle or end. The theory behind this is that the participant has had more time to rehearse these words in working memory. The recency effect occurs when a participant remembers words from the end of a list more easily, possibly since they are still available in short-term memory.
Free Recall
Free recall occurs when a person must recall many items but can recall them in any order. It is another commonly studied paradigm in memory research. Like serial recall, free recall is subject to the primacy and recency effects.
Cued Recall
Cues can facilitate recovery of memories that have been “lost.” In research, a process called cued recall is used to study these effects. Cued recall occurs when a person is given a list to remember and is then given cues during the testing phase to aid in the retrieval of memories. The stronger the link between the cue and the testing word, the better the participant will recall the words.
Interference with Memory Retrieval
Interference occurs in memory when there is an interaction between the new material being learned and previously learned material. There are two main kinds of interference: proactive and retroactive.
Proactive Interference
Proactive interference is the forgetting of information due to interference from previous knowledge in LTM. Past memories can inhibit the encoding of new memories. This is particularly true if they are learned in similar contexts and the new information is similar to previous information. This is what is happening when you have trouble remembering your new phone number because your old one is stuck in your head.
Retroactive Interference
Retroactive interference occurs when newly learned information interferes with the encoding or recall of previously learned information. If a participant was asked to recall a list of words, and was then immediately presented with new information, it could interfere with remembering the initial list. If you learn to use a new kind of computer and then later have to use the old model again, you might find you have forgotten how to use it. This is due to retroactive interference.
Retrieval Failure
Sometimes a person is not able to retrieve a memory that they have previously encoded. This can be due to decay, a natural process that occurs when neural connections decline, like an unused muscle.
Occasionally, a person will experience a specific type of retrieval failure called the tip-of-the-tongue phenomenon. This is the failure to retrieve a word from memory, combined with partial recall and the feeling that retrieval is imminent. People who experience this can often recall one or more features of the target word such as the first letter, words that sound similar, or words that have a similar meaning. While this process is not completely understood, there are two theories as to why it occurs. The first is the direct-access perspective, which states that the memory is not strong enough to retrieve but strong enough to trigger the state. The inferential perspective posits that the state occurs when the subject infers knowledge of the target word, but tries to piece together different clues about the word that are not accessible in memory.
4.06: Modal Model of Memory
The three major classifications of memory that the scientific community deals with today are as follows: sensory memory, short-term memory, and long-term memory. Information from the world around us begins to be stored by sensory memory, making it possible for this information to be accessible in the future. Short-term memory refers to the information processed by the individual in a short period of time. Working memory performs this processing. Long-term memory allows us to store information for long periods of time. This information may be retrieved consciously (explicit memory) or unconsciously (implicit memory).
Sensory Memory
“Sensory memory is the capacity for briefly retaining the large amounts of information that people encounter daily” (Siegler and Alibali, 2005). There are three types of sensory memory: echoic memory, iconic memory, and haptic memory. Iconic memory retains information that is gathered through sight, echoic memory retains information gathered through auditory stimuli and haptic memory retains data acquired through touch.
Scientific research has focused mainly on iconic memory; information on echoic and haptic memory is comparatively scarce. Iconic memory retains information from the sense of sight with an approximate duration of 1 second. This reservoir of information then passes to short-term visual memory (which is analogous, as we shall see shortly, to the visuospatial sketchpad with which working memory operates).
Di Lollo’s model (Di Lollo, 1980) is the most widely accepted model of iconic memory. In this model, he considered iconic memory a store constituted by two components: the persistence of vision and the persistence of information.
1. Persistence of vision. Iconic memory corresponds to a pre-categorical visual representation of the image. It is sensitive to physical parameters, depending on the retinal photoreceptors (rods and cones), on various cells in the visual system, and on the M (transient) and P (sustained) types of retinal ganglion cells. The occipital lobe is responsible for processing visual information.
2. Persistence of information. Iconic memory is a store of information that lasts about 800 milliseconds and that represents a coded and already categorized version of the visual image. It plays the role of a post-categorical store, which provides visual short-term memory with information to be consolidated.
Subsequent research on visual persistence from Coltheart (Coltheart, 1983) and Sperling’s studies (Sperling, 1960) on the persistence of information led to the definition of three characteristics pertaining to iconic memory: a large capacity, a short duration, and a pre-categorical nature.
Regarding the short duration, Sperling interpreted the results of the partial-report procedure as reflecting the rapid decay of the visual trace, and he confirmed this short duration by showing that the number of letters reported decreased as the auditory signal indicating which row to report was delayed. Averbach and Coriell’s experiments (Averbach and Coriell, 1961) corroborated Sperling’s conclusion; they briefly presented an array of letters to the subject. After each presentation, a visual marker appeared in the position of one of the letters, and the participant’s task was to name the letter that had occupied that position. When the marker appeared immediately after the letters, participants could correctly name the letter; however, as the presentation of the marker became more delayed, participant performance worsened. These results also show the rapid decay of visual information.
In the Atkinson-Shiffrin model, stimuli from the environment are processed first in sensory memory: storage of brief sensory events, such as sights, sounds, and tastes. It is very brief storage—up to a couple of seconds. We are constantly bombarded with sensory information. We cannot absorb all of it, or even most of it. And most of it has no impact on our lives. For example, what was your professor wearing the last class period? As long as the professor was dressed appropriately, it does not really matter what she was wearing. Sensory information about sights, sounds, smells, and even textures that we do not view as valuable is discarded. If we view something as valuable, the information will move into our short-term memory system.
One study of sensory memory researched the significance of valuable information on short-term memory storage. J. R. Stroop discovered a memory phenomenon in the 1930s: you will name a color more easily if it appears printed in that color, which is called the Stroop effect. In other words, the word “red” will be named more quickly, regardless of the color the word appears in, than any word that is colored red. Try an experiment: for a list of color words printed in mismatched ink colors, say the color each word is printed in rather than reading the word itself. For example, upon seeing the word “yellow” in green print, you should say “green,” not “yellow.” This experiment is fun, but it’s not as easy as it seems.
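The picture of color words referred to in the original exercise is not reproduced here, so a small script can stand in for it. The following sketch is illustrative only: it assumes a terminal that understands ANSI color codes, and the word list, color codes, and function names are assumptions for this sketch rather than Stroop's original materials.

```python
# A minimal do-it-yourself Stroop list, assuming a terminal that supports ANSI colors.
import random

ANSI = {"red": "31", "green": "32", "yellow": "33", "blue": "34"}

def stroop_items(n=8, seed=None):
    """Return (word, ink) pairs in which the ink color never matches the word."""
    rng = random.Random(seed)
    items = []
    for _ in range(n):
        word = rng.choice(list(ANSI))
        ink = rng.choice([c for c in ANSI if c != word])  # force a mismatch
        items.append((word, ink))
    return items

if __name__ == "__main__":
    for word, ink in stroop_items(seed=1):
        # The task: say the ink color aloud, not the printed word.
        print(f"\033[{ANSI[ink]}m{word.upper()}\033[0m")
```

Naming the ink colors aloud for a mismatched list like this should feel noticeably slower than simply reading the words, which is the Stroop effect described above.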
Methods of Study: Whole-Report and Partial-Report Techniques, and Miller's Magic Number
A factoid is a snippet of information (usually taken out of context) that's assumed to be factual
because it's repeated often. A favorite pop-psychology factoid, repeated in textbooks and popular media, is that human short-term memory is limited to 7, plus or minus 2, items (called "chunks"). While there is some truth to it, this factoid offers little as a pedagogical tool beyond stressing the need to break problems into manageable chunks for novices. The full story behind the "magic" number seven, however, provides a fascinating look into Psychology's quest to understand the differences between experts and novices.
The number seven, called "Miller's Magic Number," comes from a classic 1956 paper by the psychologist George A. Miller titled "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information." In this celebrated and highly readable article,
Miller considers two kinds of situations:
• A person must correctly distinguish between very similar items (e.g., high/low-pitched tones, shades of green), and
• A person must recall items presented in a sequence.
In the first kind of situation, called absolute judgement, subjects are exposed to a stimulus that varies along a single dimension, such as the pitch of a tone, the green-ness of a color, or the concentration of salt in a cup of water. Across many different kinds of stimuli, people can consistently distinguish about six distinct stimulus levels without making mistakes.
The second kind of situation is used to measure the span of immediate memory. Here, the subject must retain a select number of chunks in their short-term memory, and recall as many items as possible at the end of a trial. Across a handful of simple domains, such as decimal digits, letters of the alphabet, and monosyllabic words, people are able to hold anywhere from five to nine chunks in short-term memory without making mistakes. While it is tempting to assume that the limits of absolute judgement and immediate memory are related, Miller did not believe this to be the case.
The Game of Simon
A helpful way to understand the difference between absolute judgement and immediate memory span is with the electronic game Simon by Milton Bradley. This simple game device has four colored buttons, each associated with a distinct tone. In each round, Simon plays a sequence of tones, and the player must repeat the sequence back by pressing the appropriate buttons. The game gets progressively harder as the length of the sequence grows.
A player is exercising absolute judgement when distinguishing between Simon's tones. The number of distinct tones is fixed at four, well within a safe "no mistakes" range for most people. Simon's increasing sequence length, however, is meant to strain immediate memory capacity. Based on Miller's article, one would expect it to be quite difficult for players to repeat
back a sequence of nine or more tones, yet expert players have managed nearly ten times that. How are they able to do this?
Shortcomings of the Magic Number
The span of short-term memory as reported by Miller in 1956 (7 ± 2 chunks) is where the pop- psychology factoid usually stops. Since that time, however, researchers have cast doubt on the magic number itself as well as its cross-domain applicability. Research with chess experts, for example, has suggested a span limit of 3 to 5 chunks; nearly half the magic number! In the domain of language, it has been found that phonological similarity and spoken word length are much better predictors of how many words a person can hold in short-term memory (less- similar and longer words are harder to retain). Things have even changed for absolute judgement: subjects in one experiment were only able to distinguish about 7 colors until they were given a broader vocabulary (i.e., "pale blueish green"). With little training, they were then able to discriminate around 36 colors.
What does it mean for short-term memory to have a limited capacity in the first place? A key insight is that traditional means of measuring absolute judgement and short-term memory span require blocking recoding -- the process of grouping or relating chunks. To block recoding, experimenters must use non-sensical or unrelated stimuli, such as made-up words or random decimal digits. Under these artificial conditions, we see something resembling a strict capacity limit, but this limit increases or even seems to disappear when subjects are able to find some higher-order meaning in the stimuli. For example, one famous subject in a random decimal digit memorization experiment found he could remember more digits at a time by mentally recoding groups of digits as running times (he was an avid runner).
Recoding is Magical
In the real world, people are constantly recoding stimuli. Because of this, it is difficult to define precisely what a "chunk" is. Cross-domain research with experts suggests that they retain the same short-term capacity limits as novices, but the content of their chunks is far greater. In addition to denser chunks, experts have invested in building intricate networks of chunks in their long-term memories, ensuring that relevant chunks are always readily available. As Miller and many psychologists since have shown, recoding is truly where the action lies.
If stimuli can be recoded relative to one's background knowledge, then using Miller's magic number alone to judge the cognitive burden of something may or may not be helpful. When the chunk sizes are known, it becomes possible to use short-term capacity limits as a predictor of cognitive burden and complexity. In one experiment with chess players, subjects were asked to copy the positions of all pieces from one chessboard to another. By placing the boards far apart, subjects were forced to turn their heads to focus on either board.
The experimenters were thus able to use the number of piece positions copied on every turn as an estimate of the subjects' chunk size, and were able to show that the performance difference between grand masters and novices was slim when random board positions were used.
A similar study was done with programmers copying code by hand, and the same kind of results were found (experts were no better than novices at remembering code with shuffled lines).
The Big Picture
Miller's magic number is a fun factoid, but it is only the beginning. In the search for a fixed short-term memory limit, we have found something much more interesting: an understanding of domain expertise. Experts do not exceed the limitations of the average human mind; they have "simply" built a vast, complex network of domain-specific chunks that allows them to rarely end up in unfamiliar territory. We can still see experts' capacity limits in the lab, but they are much more difficult to spot in the wild.
Chunking
Chunking refers to a phenomenon whereby individuals group items together when performing a memory task to improve the performance of sequential memory.
The term “chunking,” referring to a phenomenon whereby individuals group items together when performing a memory task, was introduced by Miller (1956). Lindley (1966) showed that the groups produced by chunking carry conceptual meaning for the participant. Therefore, this strategy makes it easier for an individual to maintain and recall information in memory. For example, when recalling the number sequence 01122014, if we group the numbers as 01, 12, and 2014, mnemonic meanings are created for each group as a day, a month, and a year. Furthermore, studies have found evidence that the firing of a single cell can be associated with a particular concept, such as the personal name of Bill Clinton or Jennifer Aniston (Kreiman et al., 2000, 2001).
Psychologists believe that chunking plays an essential role in joining the elements of a memory trace together through a particular hierarchical memory structure (Tan and Soon, 1996; Edin et al., 2009). At a time when information theory started to be applied in psychology, Miller claimed that short-term memory is not rigid but open to strategies (Miller, 1956) such as chunking that can expand the memory capacity (Gobet et al., 2001). According to this information, it is possible to increase short-term memory capacity by effectively recoding a large amount of low-information-content items into a smaller number of high-information-content items (Cowan, 2001; Chen and Cowan, 2005). Therefore, when chunking is evident in recall tasks, one can expect a higher number of correct recalls. Patients with Alzheimer's disease typically experience working memory deficits; chunking is also an effective method to improve patients' verbal working memory performance (Huntley et al., 2011).
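To make the recoding idea concrete, here is a minimal sketch of chunking using the 01122014 example from above. The function name, chunk sizes, and printed labels are illustrative assumptions; the point is simply that the same eight digits can be held as three meaningful chunks instead of eight arbitrary items.

```python
# A minimal sketch of chunking as recoding: the same digits, held either as
# eight single-digit items or as three meaningful chunks (day, month, year).

def chunk_digits(digits, sizes):
    """Split a digit string into consecutive chunks of the given sizes."""
    assert sum(sizes) == len(digits), "chunk sizes must cover the whole string"
    chunks, start = [], 0
    for size in sizes:
        chunks.append(digits[start:start + size])
        start += size
    return chunks

digits = "01122014"
unchunked = list(digits)                   # eight separate items to hold
chunked = chunk_digits(digits, [2, 2, 4])  # ['01', '12', '2014'] -> day, month, year

print(len(unchunked), "items without recoding:", unchunked)
print(len(chunked), "items after recoding:", chunked)
```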
4.07: Ebbinghaus
Hermann Ebbinghaus (1850–1909) was a pioneer of the study of memory. In this section we consider three of his most important findings, each of which can help you improve your memory. In his research, in which he was the only research participant, Ebbinghaus practiced memorizing lists of nonsense syllables, such as the following:
DIF, LAJ, LEQ, MUV, WYC, DAL, SEN, KEP, NUD
You can imagine that because the material that he was trying to learn was not at all meaningful, it was not easy to do. Ebbinghaus plotted how many of the syllables he could remember against the time that had elapsed since he had studied them. He discovered an important principle of memory: Memory decays rapidly at first, but the amount of decay levels off with time.
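Ebbinghaus's actual data are not reproduced here, but the shape of the curve can be sketched with a simple decay model. The exponential form, the "stability" constant, and the retention floor in the sketch below are assumptions chosen only to illustrate rapid decay that levels off; they are not fitted to Ebbinghaus's measurements.

```python
# An illustrative forgetting-curve model (not Ebbinghaus's data): retention falls
# quickly at first and then levels off near an assumed floor.
import math

def retention(hours, stability=10.0, floor=0.25):
    """Approximate fraction retained after a delay, under exponential decay toward a floor."""
    return floor + (1.0 - floor) * math.exp(-hours / stability)

for t in (0, 1, 9, 24, 48, 144):  # hours since studying
    print(f"after {t:3d} hours: about {retention(t):.0%} retained")
```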
Although Ebbinghaus looked at forgetting after days had elapsed, the same effect occurs on longer and shorter time scales. Bahrick (1984) found that students who took a Spanish language course forgot about one half of the vocabulary that they had learned within three years, but
that after that time their memory remained pretty much constant. Forgetting also drops off quickly on a shorter time frame. This suggests that you should try to review the material that you have already studied right before you take an exam; that way, you will be more likely to remember the material during the exam.
Ebbinghaus also discovered another important principle of learning, known as the spacing effect. The spacing effect refers to the fact that learning is better when the same amount of study is spread out over periods of time than it is when it occurs closer together or at the same time. This means that even if you have only a limited amount of time to study, you’ll learn more if you study continually throughout the semester (a little bit every day is best) than if you wait to cram at the last minute before your exam. Another good strategy is to study and then wait as long as you can before you forget the material. Then review the information and again wait as long as you can before you forget it. (This probably will be a longer period of time than the first time.) Repeat and repeat again. The spacing effect is usually considered in terms of the difference between distributed practice (practice that is spread out over time) and massed practice (practice that comes in one block), with the former approach producing better memory.
Ebbinghaus also considered the role of overlearning—that is, continuing to practice and study even when we think that we have mastered the material. Ebbinghaus and other researchers have found that overlearning helps encoding (Driskell, Willis, & Copper, 1992). Students frequently think that they have already mastered the material but then discover when they get to the exam that they have not. The point is clear: Try to keep studying and reviewing, even if you think you already know all the material.
4.08: William James - Isolating Short-Term and Long-Term Memory
An issue discussed during the 1960s was whether the human memory system has one or two components. Some authors, like Arthur Melton, argued that both short-term memory (STM) and long-term memory (LTM) are just two subcomponents dependent on the same system. He justified his view with evidence of LTM activation in STM experiments. His work was very influential, yet over the years more and more evidence of at least two separate memory systems has accumulated.
The first influential two-component memory model was introduced in 1968 by Richard Atkinson and Richard Shiffrin. Their model, called the multi-store model, consisted of long-term memory and working (short-term) memory, and was later improved by an additional component, sensory memory. Sensory memory contains one register for each sense and serves as a short-lasting buffer zone before information can enter short-term memory. Short-term memory is a temporary store for new information before it enters long-term memory, but it is also used for cognitive tasks, understanding, and learning.
The thesis of two separate memory systems, long-term memory and short-term memory, is today considered to be true. This thesis is supported by differences in:
• capacity (small for STM and large or unlimited for LTM),
• duration limits (items in STM decay as a function of time, which is not a characteristic of LTM),
• retention speed (very high for STM and possibly lower for LTM),
• time to acquire information (short for STM and longer for LTM),
• information encoding (semantic for LTM and acoustic or visual for STM), and
• type of memory affected by physical injuries in patients.
Another term should be clarified here: working memory, which is often mistaken for short-term memory. The main difference between the two is that working memory usually includes the structures and processes that control and manipulate the contents of short-term memory.
4.09: Serial Position Curve
Variations in the ability to retrieve information are also seen in the serial position curve. When we give people a list of words one at a time (e.g., on flashcards) and then ask them to recall them, the results look something like those in Figure 14 “The Serial Position Curve”. People are able to retrieve more words that were presented to them at the beginning and the end of the list than they are words that were presented in the middle of the list. This pattern, known as the serial position curve, is caused by two retrieval phenomena: The primacy effect refers to a tendency to better remember stimuli that are presented early in a list. The recency effect refers to the tendency to better remember stimuli that are presented later in a list.
There are a number of explanations for primacy and recency effects, but one of them is in terms of the effects of rehearsal on short-term and long-term memory (Baddeley, Eysenck, & Anderson, 2009). Because we can keep the last words that we learned in the presented list in short-term memory by rehearsing them before the memory test begins, they are relatively easily remembered. So the recency effect can be explained in terms of maintenance rehearsal in short-term memory. And the primacy effect may also be due to rehearsal—when we hear the first word in the list we start to rehearse it, making it more likely that it will be moved from short-term to long-term memory. And the same is true for the other words that come early in the list. But for the words in the middle of the list, this rehearsal becomes much harder, making them less likely to be moved to LTM.
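One way to see how this rehearsal-based account can produce both effects is with a toy simulation of a two-store (buffer) model. The sketch below is only an illustration, not a model from the text or a fitted account: the buffer size, transfer probability, and displacement rule are all assumptions chosen to make the curve visible.

```python
# Toy dual-store simulation of the serial position curve.
# Assumptions (illustrative only): a rehearsal buffer holds 4 items; on every
# presentation step each buffered item has a 10% chance of being copied to LTM;
# at test, recall = items in LTM plus items still sitting in the buffer.
import random

def recall_probabilities(list_length=15, buffer_size=4, p_transfer=0.10, trials=5000):
    counts = [0] * list_length
    for _ in range(trials):
        buffer, ltm = [], set()
        for pos in range(list_length):
            buffer.append(pos)
            if len(buffer) > buffer_size:
                buffer.pop(random.randrange(len(buffer) - 1))  # displace an older item
            for item in buffer:  # each rehearsal step may copy an item to LTM
                if random.random() < p_transfer:
                    ltm.add(item)
        for pos in ltm | set(buffer):  # LTM items plus the still-rehearsed items
            counts[pos] += 1
    return [c / trials for c in counts]

if __name__ == "__main__":
    for pos, p in enumerate(recall_probabilities(), start=1):
        print(f"position {pos:2d}: {p:.2f}")
```

In this caricature, early items are recalled well because they enjoy several rehearsal steps before the buffer fills (primacy), the last few items are recalled well because they are still in the buffer at test (recency), and the middle items fare worst.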
4.10: Recency Effects and Primacy Effects
People tend to recall items or events in the order in which they occurred. This is called serial recall and can be used to help cue memories. By thinking about a string of events or even words, it is possible to use a previous memory to cue the next item in the series. Serial recall helps a person to remember the order of events in his or her life. These memories appear to exist on a continuum on which more recent events are more easily recalled.
When recalling serial items presented as a list (a common occurrence in memory studies), two effects tend to surface: the primacy effect and the recency effect. The primacy effect occurs when a participant remembers words from the beginning of a list better than the words from the middle or end. The theory behind this is that the participant has had more time to rehearse these words in working memory. The recency effect occurs when a participant remembers words from the end of a list more easily, possibly since they are still available in short-term memory.
4.11: Short Term Memory
Short-term memory (STM) is a temporary storage system that processes incoming sensory memory; sometimes it is called working memory. Short-term memory takes information from sensory memory and sometimes connects that memory to something already in long-term memory. Short-term memory storage lasts about 20 seconds. George Miller (1956), in his research on the capacity of memory, found that most people can retain about 7 items in STM. Some remember 5, some 9, so he called the capacity of STM 7 plus or minus 2.
Think of short-term memory as the information you have displayed on your computer screen, such as a document, a spreadsheet, or a web page. Then, information in short-term memory either goes to long-term memory (you save it to your hard drive) or is discarded (you delete the document or close the web browser). Moving information from STM into long-term memory involves rehearsal, the conscious repetition of the information to be remembered, and the process by which rehearsed information becomes stable in long-term memory is called memory consolidation.
You may find yourself asking, “How much information can our memory handle at once?” To explore the capacity and duration of your short-term memory, have a partner read the strings of random numbers out loud to you, beginning each string by saying, “Ready?” and ending each by saying, “Recall,” at which point you should try to write down the string of numbers from memory.
Work through a series of random digit strings of increasing length, using the recall exercise explained above, to determine the longest string of digits that you can store. (If you need strings to use, the short sketch below can generate them.)
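The following Python sketch is one simple way to produce such strings; the four-to-ten-digit range is an illustrative assumption, chosen to bracket Miller’s seven plus or minus two.

```python
# Generate random digit strings of increasing length for a digit-span exercise.
# The 4-to-10 range is an illustrative assumption, chosen to bracket 7 +/- 2.
import random

def digit_span_strings(shortest=4, longest=10):
    return ["".join(random.choice("0123456789") for _ in range(n))
            for n in range(shortest, longest + 1)]

if __name__ == "__main__":
    for s in digit_span_strings():
        print(f"Ready?  {' '.join(s)}  Recall.")
```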
Note the longest string at which you got the series correct. For most people, this will be close to 7, Miller’s famous 7 plus or minus 2. Recall is somewhat better for random numbers than for random letters (Jacobs, 1887), and also often slightly better for information we hear (acoustic encoding) rather than see (visual encoding) (Anderson, 1969).
Today, many theorists use the concept of working memory (WM) in place of short-term memory. This newer model “shifted the focus from memory structure to memory processes and functions”. To put it another way, WM refers to both the structures and the processes used for storing and manipulating information.
To sum up, STM refers to the ability to hold information in mind over a brief period of time. As the concept of STM has expanded to include more than just the temporary storage of information, psychologists have adopted new terminology, working memory. The term WM is now commonly used to refer to a broader system that both stores and manipulates information. However, STM and WM are sometimes used interchangeably.
5.02: Components - Central Executive, Phonological Loop, Visuospatial Sketchpad
Baddeley's model of working memory
Based on experiments demonstrating connections between LTM and STM, as well as experiments indicating that STM consists of more than one component, Baddeley and Hitch proposed a multi-component working memory model in 1974. The new term working memory was meant to emphasize the importance of this system in cognitive processing. Baddeley and Hitch suggested that working memory is composed of three parts: the central executive, a control system that coordinates the other two components; the phonological loop, a subsystem for holding phonological information, such as language, by constantly refreshing it through repetition in the loop; and the visuospatial sketchpad, a subsystem for storing visual and spatial information.
This model was later revised and refined by Baddeley, with contributions from other authors as well. This work resulted in the addition of a fourth component, the episodic buffer, in 2000, and in more detailed accounts of the functions of the other components, as described in the table below.
Table 1.
Central executive
It is still unclear whether the central executive is a single system or several systems working together. Its functions include attention and focusing, active inhibition of stimuli, planning and decision-making, sequencing, updating, and the maintenance and integration of information from the phonological loop and the visuospatial sketchpad. These functions also include communication with long-term memory and connections to centers for language comprehension and production.
Episodic buffer
The episodic buffer has the role of integrating information from the phonological loop and the visuospatial sketchpad, as well as from long-term memory. It serves as the storage component of the central executive; without it, this integration of information would not be possible.
Phonological loop
According to Baddeley, the phonological loop consists of two components: a sound store, which holds information for just a few seconds, and an articulatory processor, which maintains sound information in the store through vocal or subvocal repetition. Verbal information seems to be processed automatically by the phonological loop, which also plays an important, perhaps even key, role in language learning and speech production. It can also help in memorizing information from the visuospatial sketchpad (for example, by repeating “A red car is on the lawn.”).
Visuospatial sketchpad
According to Baddeley, this component enables the temporary storage, maintenance, and manipulation of visuospatial information. It is important for spatial orientation and for solving visuospatial problems. Studies have indicated that the visuospatial sketchpad may actually contain two different systems: one for spatial information and processes, and the other for visual information and processes.
If information makes it past STM it may enter long-term memory (LTM), memory storage that can hold information for days, months, and years. The capacity of long-term memory is large, and there is no known limit to what we can remember. Although we may forget at least some information after we learn it, other things will stay with us forever.
Long-term memory (LTM) is the continuous storage of information. Unlike short-term memory, the storage capacity of LTM has no known limits. It encompasses everything you can remember, from things that happened just a few minutes ago to things that happened days, weeks, and years ago. In keeping with the computer analogy, the information in your LTM would be like the information you have saved on the hard drive. It isn’t there on your desktop (your short-term memory), but you can pull up this information when you want it, at least most of the time. Not all long-term memories are strong memories. Some memories can only be recalled through prompts. For example, you might easily recall a fact—“What is the capital of the United States?”—or a procedure—“How do you ride a bike?”—but you might struggle to recall the name of the restaurant where you had dinner while on vacation in France last summer. A prompt, such as that the restaurant was named after its owner, who spoke to you about your shared interest in soccer, may help you recall the name of the restaurant.
Long-term memory is divided into two types: explicit and implicit. Understanding the different types is important because a person’s age or particular types of brain trauma or disorders can leave certain types of LTM intact while having disastrous consequences for other types. Explicit memories are those we consciously try to remember and recall. For example, if you are studying for your chemistry exam, the material you are learning will be part of your explicit memory. (Note: Sometimes, but not always, the terms explicit memory and declarative memory are used interchangeably.)
Implicit memories are memories that are not part of our consciousness. They are memories formed from behaviors. Implicit memory is also called non-declarative memory.
Procedural memory is a type of implicit memory: it stores information about how to do things. It is the memory for skilled actions, such as how to brush your teeth, how to drive a car, how to swim the crawl (freestyle) stroke. If you are learning how to swim freestyle, you practice the stroke: how to move your arms, how to turn your head to alternate breathing from side to side, and how to kick your legs. You would practice this many times until you become good at it.
Once you learn how to swim freestyle and your body knows how to move through the water, you will never forget how to swim freestyle, even if you do not swim for a couple of decades. Similarly, if you present an accomplished guitarist with a guitar, even if he has not played in a long time, he will still be able to play quite well.
Declarative memory has to do with the storage of facts and events we personally experienced. Explicit (declarative) memory has two parts: semantic memory and episodic memory. Semantic means having to do with language and knowledge about language. An example would be the question “what does argumentative mean?” Stored in our semantic memory is knowledge about words, concepts, and language-based knowledge and facts. For example, answers to the following questions are stored in your semantic memory:
• Who was the first President of the United States?
• What is democracy?
• What is the longest river in the world?
Episodic memory is information about events we have personally experienced. The concept of episodic memory was first proposed about 40 years ago (Tulving, 1972). Since then, Tulving and others have looked at scientific evidence and reformulated the theory. Currently, scientists believe that episodic memory is memory about happenings in particular places at particular times, the what, where, and when of an event (Tulving, 2002). It involves recollection of visual imagery as well as the feeling of familiarity (Hassabis & Maguire, 2007).
Explicit Memory
When we assess memory by asking a person to consciously remember things, we are measuring explicit memory. Explicit memory refers to knowledge or experiences that can be consciously remembered. As you can see in Figure 3, “Types of Memory,” there are two types of explicit memory: episodic and semantic. Episodic memory refers to the firsthand experiences that we have had (e.g., recollections of our high school graduation day or of the fantastic dinner we had in New York last year). Semantic memory refers to our knowledge of facts and concepts about the world (e.g., that the absolute value of −90 is greater than the absolute value of 9 and that one definition of the word “affect” is “the experience of feeling or emotion”).
Explicit memory is assessed using measures in which the individual being tested must consciously attempt to remember the information. A recall memory test is a measure of explicit memory that involves bringing from memory information that has previously been remembered. We rely on our recall memory when we take an essay test, because the test requires us to generate previously remembered information. A multiple-choice test is an example of a recognition memory test, a measure of explicit memory that involves determining whether information has been seen or learned before.
Your own experiences taking tests will probably lead you to agree with the scientific research finding that recall is more difficult than recognition. Recall, such as is required on essay tests, involves two steps: first generating an answer and then determining whether it seems to be the correct one. Recognition, as on a multiple-choice test, only involves determining which item from a list seems most correct (Haist, Shimamura, & Squire, 1992). Although they involve different processes, recall and recognition memory measures tend to be correlated. Students who do better on a multiple-choice exam will also, by and large, do better on an essay exam (Bridgeman & Morgan, 1996).
A third way of measuring memory is known as relearning (Nelson, 1985). Measures of relearning (or savings) assess how much more quickly information is processed or learned when it is studied again after it has already been learned but then forgotten. If you have taken some French courses in the past, for instance, you might have forgotten most of the vocabulary you learned. But if you were to work on your French again, you’d learn the vocabulary much faster the second time around. Relearning can be a more sensitive measure of memory than either recall or recognition because it allows assessing memory in terms of “how much” or “how fast” rather than simply “correct” versus “incorrect” responses. Relearning also allows us to measure memory for procedures like driving a car or playing a piano piece, as well as memory for facts and figures.
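To put a number on relearning, researchers commonly compute a savings score. The formula below is the classic savings measure, expressed here in terms of learning time, though trials to criterion can be used instead:

\[
\text{savings} = \frac{\text{original learning time} - \text{relearning time}}{\text{original learning time}} \times 100\%
\]

For example, if memorizing a French vocabulary list originally took 20 minutes and relearning it a year later took only 5 minutes, the savings score would be (20 − 5)/20 × 100% = 75%, even if you could not recall a single word before relearning began.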
Implicit Memory
While explicit memory consists of the things that we can consciously report that we know, implicit memory refers to knowledge that we cannot consciously access. However, implicit memory is nevertheless exceedingly important to us because it has a direct effect on our behavior. Implicit memory refers to the influence of experience on behavior, even if the individual is not aware of those influences. There are three general types of implicit memory: procedural memory, classical conditioning effects, and priming.
Procedural memory refers to our often unexplainable knowledge of how to do things. When we walk from one place to another, speak to another person in English, dial a cell phone, or play a video game, we are using procedural memory. Procedural memory allows us to perform complex tasks, even though we may not be able to explain to others how we do them. There is no way to tell someone how to ride a bicycle; a person has to learn by doing it. The idea of implicit memory helps explain how infants are able to learn. The ability to crawl, walk, and talk are procedures, and these skills are easily and efficiently developed while we are children despite the fact that as adults we have no conscious memory of having learned them.
A second type of implicit memory is classical conditioning effects, in which we learn, often without effort or awareness, to associate neutral stimuli (such as a sound or a light) with another stimulus (such as food), which creates a naturally occurring response, such as enjoyment or salivation. The memory for the association is demonstrated when the conditioned stimulus (the sound) begins to create the same response as the unconditioned stimulus (the food) did before the learning.
The final type of implicit memory is known as priming, or changes in behavior as a result of experiences that have happened frequently or recently. Priming refers both to the activation of knowledge (e.g., we can prime the concept of “kindness” by presenting people with words related to kindness) and to the influence of that activation on behavior (people who are primed with the concept of kindness may act more kindly).
One measure of the influence of priming on implicit memory is the word fragment test, in which a person is asked to fill in missing letters to make words. You can try this yourself: First, try to complete the following word fragments, but work on each one for only three or four seconds. Do any words pop into mind quickly?
_ i b _ a _ y
_ h _ s _ _ i _ n
_ o _ k
_ h _ i s _
Now read the following sentence carefully:
“He got his materials from the shelves, checked them out, and then left the building.”
Then try again to make words out of the word fragments.
I think you might find that it is easier to complete fragments 1 and 3 as “library” and “book,” respectively, after you read the sentence than it was before you read it. However, reading the sentence didn’t really help you to complete fragments 2 and 4 as “physician” and “chaise.” This difference in implicit memory probably occurred because as you read the sentence, the concept of “library” (and perhaps “book”) was primed, even though they were never mentioned explicitly. Once a concept is primed it influences our behaviors, for instance, on word fragment tests.
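If you would like to build or check fragments like these yourself, here is a small Python sketch (not part of the original study materials) that tests whether a candidate word completes a fragment, with underscores standing for the missing letters:

```python
# Check whether a candidate word completes a word fragment such as "_ib_a_y".
# An underscore matches any letter; every other position must match exactly.
def fits(fragment: str, word: str) -> bool:
    word = word.lower()
    return len(fragment) == len(word) and all(
        f == "_" or f == w for f, w in zip(fragment, word)
    )

if __name__ == "__main__":
    print(fits("_ib_a_y", "library"))   # True
    print(fits("_o_k", "book"))         # True
    print(fits("_o_k", "bicycle"))      # False (wrong length)
```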
Our everyday behaviors are influenced by priming in a wide variety of situations. Seeing an advertisement for cigarettes may make us start smoking, seeing the flag of our home country may arouse our patriotism, and seeing a student from a rival school may arouse our competitive spirit. And these influences on our behaviors may occur without our being aware of them.
Research Focus: Priming Outside Awareness Influences Behavior
One of the most important characteristics of implicit memories is that they are frequently formed and used automatically, without much effort or awareness on our part. In one demonstration of the automaticity and influence of priming effects, John Bargh and his colleagues (Bargh, Chen, & Burrows, 1996) conducted a study in which they showed college students lists of five scrambled words, each of which they were to make into a sentence.
Furthermore, for half of the research participants, the words were related to stereotypes of the elderly.
These participants saw words such as the following:
in Florida retired live people bingo man the forgetful plays
The other half of the research participants also made sentences, but from words that had nothing to do with elderly stereotypes. The purpose of this task was to prime stereotypes of elderly people in memory for some of the participants but not for others.
The experimenters then assessed whether the priming of elderly stereotypes would have any effect on the students’ behavior—and indeed it did. When the research participant had gathered all of his or her belongings, thinking that the experiment was over, the experimenter thanked him or her for participating and gave directions to the closest elevator. Then, without the participants knowing it, the experimenters recorded the amount of time that the participant spent walking from the doorway of the experimental room toward the elevator.
Bargh, Chen, and Burrows (1996) found that priming words associated with the elderly made people walk more slowly.
To determine if these priming effects occurred out of the awareness of the participants, Bargh and his colleagues asked still another group of students to complete the priming task and then to indicate whether they thought the words they had used to make the sentences had any relationship to each other, or could possibly have influenced their behavior in any way. These students had no awareness of the possibility that the words might have been related to the elderly or could have influenced their behavior.
Psychologists refer to the time between learning and testing as the retention interval. Memories can consolidate during that time, aiding retention. However, experiences can also occur that undermine the memory. For example, think of what you had for lunch yesterday—a pretty easy task. However, if you had to recall what you had for lunch 17 days ago, you may well fail (assuming you don’t eat the same thing every day). The 16 lunches you’ve had since that one have created retroactive interference. Retroactive interference refers to new activities (i.e., the subsequent lunches) during the retention interval (i.e., the time between the lunch 17 days ago and now) that interfere with retrieving the specific, older memory (i.e., the lunch details from 17 days ago). But just as newer things can interfere with remembering older things, so can the opposite happen. Proactive interference is when past memories interfere with the encoding of new ones. For example, if you have ever studied a second language, oftentimes the grammar and vocabulary of your native language will pop into your head, impairing your fluency in the foreign language.
Retroactive interference is one of the main causes of forgetting (McGeoch, 1932). In the module Eyewitness Testimony and Memory Biases (http://noba.to/uy49tm37), Elizabeth Loftus describes her fascinating work on eyewitness memory, in which she shows how memory for an event can be changed via misinformation supplied during the retention interval. For example, if you witnessed a car crash but subsequently heard people describing it from their own perspective, this new information may interfere with or disrupt your own personal recollection of the crash. In fact, you may even come to remember the event happening exactly as the others described it! This misinformation effect in eyewitness memory represents a type of retroactive interference that can occur during the retention interval. Of course, if correct information is given during the retention interval, the witness’s memory will usually be improved.
Although interference may arise between the occurrence of an event and the attempt to recall it, the effect itself is always expressed when we retrieve memories, the topic to which we turn next.
In some cases our existing memories influence our new learning. This may occur either in a backward way or a forward way. Retroactive interference occurs when learning something new impairs our ability to retrieve information that was learned earlier. For example, if you have learned to program in one computer language, and then you learn to program in another similar one, you may start to make mistakes programming the first language that you never would have made before you learned the new one. In this case the new memories work backward (retroactively) to influence retrieval from memory that is already in place.
In contrast to retroactive interference, proactive interference works in a forward direction. Proactive interference occurs when earlier learning impairs our ability to encode information that we try to learn later. For example, if we have learned French as a second language, this knowledge may make it more difficult, at least in some respects, to learn a third language (say Spanish), which involves similar but not identical vocabulary.
Interference vs. Decay
Chances are that you have experienced memory lapses and been frustrated by them. You may have had trouble remembering the definition of a key term on an exam or found yourself unable to recall the name of an actor from one of your favorite TV shows. Maybe you forgot to call your aunt on her birthday or you routinely forget where you put your cell phone.
Oftentimes, the bit of information we are searching for comes back to us, but sometimes it does not. Clearly, forgetting seems to be a natural part of life. Why do we forget? And is forgetting always a bad thing?
Causes of Forgetting
One very common and obvious reason why you cannot remember a piece of information is that you did not learn it in the first place. If you fail to encode information into memory, you are not going to remember it later on. Usually, these encoding failures occur because we are distracted or are not paying attention to specific details. For example, people have a lot of trouble recognizing an actual penny out of a set of drawings of very similar pennies, or lures, even though most of us have had a lifetime of experience handling pennies. However, few of us have studied the features of a penny in great detail, and since we have not attended to those details, we fail to recognize them later. Similarly, it has been well documented that distraction during learning impairs later memory. Most of the time this is not problematic, but in certain situations, such as when you are studying for an exam, failures to encode due to distraction can have serious repercussions.
Another proposed reason why we forget is that memories fade, or decay, over time. It has been known since the pioneering work of Hermann Ebbinghaus that as time passes, memories get harder to recall. Ebbinghaus created more than 2,000 nonsense syllables, such as dax, bap, and rif, and studied his own memory for them, learning as many as 420 lists of 16 nonsense syllables for one experiment. He found that his memories diminished as time passed, with the most forgetting happening early on after learning. His observations and subsequent research suggested that if we do not rehearse a memory, and the neural representation of that memory is not reactivated over a long period of time, the memory representation may disappear entirely or fade to the point where it can no longer be accessed. As you might imagine, it is hard to definitively prove that a memory has decayed as opposed to being inaccessible for another reason. Critics argued that forgetting must be due to processes other than simply the passage of time, since disuse of a memory does not always guarantee forgetting. More recently, some memory theorists have proposed that recent memory traces may be degraded or disrupted by new experiences. Memory traces need to be consolidated, or transferred from the hippocampus to more durable representations in the cortex, in order for them to last. When the consolidation process is interrupted by the encoding of other experiences, the memory trace for the original experience does not get fully developed and thus is forgotten.
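Ebbinghaus’s data are often summarized with a simple exponential idealization of the forgetting curve. The expression below is a common textbook approximation rather than a formula Ebbinghaus stated in this form, and the stability parameter is a descriptive convenience, not a quantity his data measured directly:

\[
R = e^{-t/S},
\]

where \(R\) is the proportion of material retained, \(t\) is the time elapsed since learning, and \(S\) is the relative stability of the memory; the larger \(S\) is (for example, after repeated, spaced review), the more slowly retention falls off.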
Both encoding failures and decay account for more permanent forms of forgetting, in which the memory trace does not exist, but forgetting may also occur when a memory exists yet we temporarily cannot access it. This type of forgetting may occur when we lack the appropriate retrieval cues for bringing the memory to mind. You have probably had the frustrating experience of forgetting your password for an online site. Usually, the password has not been permanently forgotten; instead, you just need the right reminder to remember what it is. For example, if your password was “pizza0525,” and you received the password hints “favorite food” and “Mom’s birthday,” you would easily be able to retrieve it. Retrieval hints can bring back to mind seemingly forgotten memories (Tulving & Pearlstone, 1966). One real-life illustration of the importance of retrieval cues comes from a study showing that whereas people have difficulty recalling the names of high school classmates years after graduation, they are easily able to recognize the names and match them to the appropriate faces (Bahrick, Bahrick, & Wittinger, 1975). The names are powerful enough retrieval cues that they bring back the memories of the faces that went with them. The fact that the presence of the right retrieval cues is critical for remembering adds to the difficulty in proving that a memory is permanently forgotten as opposed to temporarily unavailable.
Retrieval failures can also occur because other memories are blocking or getting in the way of recalling the desired memory. This blocking is referred to as interference . For example, you may fail to remember the name of a town you visited with your family on summer vacation because the names of other towns you visited on that trip or on other trips come to mind instead. Those memories then prevent the desired memory from being retrieved. Interference is also relevant to the example of forgetting a password: passwords that we have used for other websites may come to mind and interfere with our ability to retrieve the desired password. Interference can be either proactive, in which old memories block the learning of new related memories, or retroactive, in which new memories block the retrieval of old related memories. For both types of interference, competition between memories seems to be key (Mensink & Raaijmakers, 1988). Your memory for a town you visited on vacation is unlikely to interfere with your ability to remember an Internet password, but it is likely to interfere with your ability to remember a different town’s name. Competition between memories can also lead to forgetting in a different way. Recalling a desired memory in the face of competition may result in the inhibition of related, competing memories (Levy & Anderson, 2002). You may have difficulty recalling the name of Kennebunkport, Maine, because other Maine towns, such as Bar Harbor, Winterport, and Camden, come to mind instead. However, if you are able to recall Kennebunkport despite strong competition from the other towns, this may actually change the competitive landscape, weakening memory for those other towns’ names, leading to forgetting of them instead.
Finally, some memories may be forgotten because we deliberately attempt to keep them out of mind. Over time, by actively trying not to remember an event, we can sometimes successfully keep the undesirable memory from being retrieved either by inhibiting the undesirable memory or generating diversionary thoughts (Anderson & Green, 2001). Imagine that you slipped and fell in your high school cafeteria during lunch time, and everyone at the surrounding tables laughed at you. You would likely wish to avoid thinking about that event and might try to prevent it from coming to mind. One way that you could accomplish this is by thinking of other, more positive, events that are associated with the cafeteria. Eventually, this memory may be suppressed to the point that it would only be retrieved with great difficulty (Hertel & Calcaterra, 2005).
Adaptive Forgetting
We have explored five different causes of forgetting. Together they can account for the day-to-day episodes of forgetting that each of us experiences. Typically, we think of these episodes in a negative light and view forgetting as a memory failure. Is forgetting ever good? Most people would reason that forgetting that occurs in response to a deliberate attempt to keep an event out of mind is a good thing. No one wants to be constantly reminded of falling on their face in front of all of their friends. However, beyond that, it can be argued that forgetting is adaptive, allowing us to be efficient and hold onto only the most relevant memories (Bjork, 1989; Anderson & Milson, 1989). Shereshevsky, or “S,” the mnemonist studied by Alexander Luria (1968), was a man who almost never forgot. His memory appeared to be virtually limitless. He could memorize a table of 50 numbers in under 3 minutes and recall the numbers in rows, columns, or diagonals with ease. He could recall lists of words and passages that he had memorized over a decade before. Yet Shereshevsky found it difficult to function in his everyday life because he was constantly distracted by a flood of details and associations that sprang to mind. His case history suggests that remembering everything is not always a good thing. You may occasionally have trouble remembering where you parked your car, but imagine if every time you had to find your car, every single former parking space came to mind. It would be impossibly difficult to sort through all of those irrelevant memories. Thus, forgetting is adaptive in that it makes us more efficient. The price of that efficiency is those moments when our memories seem to fail us (Schacter, 1999).
The Fallibility of Memory
Memories can be encoded poorly or fade with time; the storage and recovery process is not flawless.
Learning Objectives
• Distinguish among the factors that make some memories unrecoverable
Key Takeaways
Key Points
• Memories are affected by how a person internalizes events through perceptions, interpretations, and emotions.
• Transience refers to the general deterioration of a specific memory over time.
• Transience is caused by proactive and retroactive interference.
• Encoding is the process of converting sensory input into a form that memory is capable of processing and storing.
• Memories that are encoded poorly or shallowly may not be recoverable.
Key Terms
• transience: The deterioration of a specific memory over time.
Memory is not perfect. Storing a memory and retrieving it later involves both biological and psychological processes, and the relationship between the two is not fully understood. Memories are affected by how a person internalizes events through perceptions, interpretations, and emotions. This can cause a divergence between what is internalized as a memory and what actually happened in reality; it can also cause events to encode incorrectly, or not at all.
Transience
It is easier to remember recent events than those further in the past, and the more we repeat or use information, the more likely it is to enter into long-term memory. However, without use, or with the addition of new memories, old memories can decay. “Transience” refers to the general deterioration of a specific memory over time. Transience is caused by proactive and retroactive interference. Proactive interference is when old information inhibits the ability to remember new information, such as when outdated scientific facts interfere with the ability to remember updated facts. Retroactive interference is when new information inhibits the ability to remember old information, such as when hearing recent news figures, then trying to remember earlier facts and figures.
Encoding Failure
Encoding is the process of converting sensory input into a form able to be processed and stored in the memory. However, this process can be impacted by a number of factors, and how well information is encoded affects how well it is able to be recalled later. Memory is associative by nature; commonalities between points of information not only reinforce old memories, but serve to ease the establishment of new ones. The way memories are encoded is personal; it depends on what information an individual considers to be relevant and useful, and how it relates to the individual’s vision of reality. All of these factors impact how memories are prioritized and how accessible they will be when they are stored in long-term memory. Information that is considered less relevant or less useful will be harder to recall than memories that are deemed valuable and important. Memories that are encoded poorly or shallowly may not be recoverable at all.
Types of Forgetting
There are many ways in which a memory might fail to be retrieved, or be forgotten.
Learning Objectives
• Differentiate among the different processes involved in forgetting
Key Takeaways
Key Points
• The trace decay theory of forgetting states that all memories fade automatically as a function of time; under this theory, you need to follow a certain path, or trace, to recall a memory.
• Under interference theory, all memories interfere with the ability to recall other memories.
• Proactive interference occurs when memories from someone’s past influence new memories; retroactive interference occurs when old memories are changed by new ones, sometimes so much that the original memory is forgotten.
• Cue-dependent forgetting, also known as retrieval failure, is the failure to recall information in the absence of memory cues.
• The tip-of-the-tongue phenomenon is the failure to retrieve a word from memory, combined with partial recall and the feeling that retrieval is imminent.
Key Terms
• Trace decay theory: The theory that if memories are not reviewed or recalled consistently, they will begin to decay and will ultimately be forgotten.
• Retroactive interference: When newly learned information interferes with and impedes the recall of previously learned information.
• Proactive interference: When past memories inhibit an individual’s full potential to retain new memories.
• Trace: A pathway to recall a memory.
Memory is not static. How you remember an event depends on a large number of variables, including everything from how much sleep you got the night before to how happy you were during the event. Memory is not always perfectly reliable, because it is influenced not only by the actual events it records, but also by other knowledge, experiences, expectations, interpretations, perceptions, and emotions. And memories are not necessarily permanent: they can disappear over time. This process is called forgetting. But why do we forget? The answer is currently unknown. There are several theories that address why we forget memories and information over time, including trace decay theory, interference theory, and cue-dependent forgetting.
Trace Decay Theory
The trace decay theory of forgetting states that all memories fade automatically as a function of time. Under this theory, you need to follow a certain pathway, or trace, to recall a memory. If this pathway goes unused for some amount of time, the memory decays, which leads to difficulty recalling, or the inability to recall, the memory. Rehearsal, or mentally going over a memory, can slow this process. But disuse of a trace will lead to memory decay, which will ultimately cause retrieval failure. This process begins almost immediately if the information is not used: for example, sometimes we forget a person’s name even though we have just met them.
Interference Theory
It is easier to remember recent events than those further in the past. “Transience” refers to the general deterioration of a specific memory over time. Under interference theory, transience occurs because all memories interfere with the ability to recall other memories. Proactive and retroactive interference can impact how well we are able to recall a memory, and sometimes cause us to forget things permanently.
Proactive Interference
Proactive interference occurs when old memories hinder the ability to make new memories. In this type of interference, old information inhibits the ability to remember new information, such as when outdated scientific facts interfere with the ability to remember updated facts. This often occurs when memories are learned in similar contexts, or regarding similar things. It’s when we have preconceived notions about situations and events, and apply them to current situations and events. An example would be growing up being taught that Pluto is a planet in our solar system, then being told as an adult that Pluto is no longer considered a planet. Having such a strong memory would negatively impact the recall of the new information, and when asked how many planets there are, someone who grew up thinking of Pluto as a planet might say nine instead of eight.
Retroactive Interference
Retroactive interference occurs when old memories are changed by new ones, sometimes so much that the original memory is forgotten. This is when newly learned information interferes with and impedes the recall of previously learned information. The ability to recall previously learned information is greatly reduced if that information is not utilized, and there is substantial new information being presented. This often occurs when hearing recent news figures, then trying to remember earlier facts and figures. An example of this would be learning a new way to make a paper airplane, and then being unable to remember the way you used to make them.
Cue-Dependent Forgetting
When we store a memory, we not only record all sensory data, we also store our mood and emotional state. Our current mood thus will affect the memories that are most effortlessly available to us, such that when we are in a good mood, we recollect good memories, and when we are in a bad mood, we recollect bad ones. This suggests that we are sometimes cued to remember certain things by, for example, our emotional state or our environment. Cue-dependent forgetting, also known as retrieval failure, is the failure to recall information in the absence of memory cues. There are three types of cues that can stop this type of forgetting:
• Semantic cues are used when a memory is retrieved because of its association with another memory. For example, someone forgets everything about his trip to Ohio until he is reminded that he visited a certain friend there, and that cue causes him to recollect many more events of the trip.
• State-dependent cues are governed by the state of mind at the time of encoding. The emotional or mental state of the person (such as being inebriated, drugged, upset, anxious, or happy) is key to establishing cues. Under cue-dependent forgetting theory, a memory might be forgotten until a person is in the same state.
• Context-dependent cues depend on the environment and situation. Memory retrieval can be facilitated or triggered by replication of the context in which the memory was encoded. Such conditions can include weather, company, location, the smell of a particular odor, hearing a certain song, or even tasting a specific flavor.
Other Types of Forgetting
Trace decay, interference, and lack of cues are not the only ways that memories can fail to be retrieved. Memory’s complex interactions with sensation, perception, and attention sometimes render certain memories irretrievable.
Absentmindedness
If you’ve ever put down your keys when you entered your house and then couldn’t find them later, you have experienced absentmindedness. Attention and memory are closely related, and absentmindedness involves problems at the point where attention and memory interface. Common errors of this type include misplacing objects or forgetting appointments. Absentmindedness occurs because at the time of encoding, sufficient attention was not paid to what would later need to be recalled.
Blocking
Occasionally, a person will experience a specific type of retrieval failure called blocking. Blocking is when the brain tries to retrieve or encode information, but another memory interferes with it. Blocking is a primary cause of the tip-of-the-tongue phenomenon. This is the failure to retrieve a word from memory, combined with partial recall and the feeling that retrieval is imminent. People who experience this can often recall one or more features of the target word, such as the first letter, words that sound similar, or words that have a similar meaning. Sometimes a hint can help them remember: another example of cued memory.
Amnesia
Amnesia, the inability to recall certain memories, often results from damage to any of a number of regions in the temporal lobe and hippocampus.
Learning Objectives
• Differentiate among the different types of amnesia and memory loss
Key Takeaways
Key Points
• Anterograde amnesia is the inability to create new memories; long-term memories from before the event typically remain intact. However, memories that were not fully consolidated from before the event may also be lost.
• Retrograde amnesia is the inability to recall memories from before the onset of amnesia. A person may be able to encode new memories after the event, and they are more likely to remember general knowledge rather than specifics.
• Childhood amnesia is the inability to remember events from very early in childhood, due to the fact that the parts of the brain involved in long-term memory storage are still undeveloped for the first couple of years of life.
• “Dementia” is a collective term for many neurocognitive disorders affecting memory that can arise in old age, including Alzheimer’s disease.
Key Terms
• Retrograde amnesia: The loss of memories from the period before the amnesic episode.
• Anterograde amnesia: The inability to remember new information since the amnesic episode.
“Amnesia” is a general term for the inability to recall certain memories, or in some cases, the inability to form new memories. Some types of amnesia are due to neurological trauma; but in other cases, the term “amnesia” is just used to describe normal memory loss, such as not remembering childhood memories.
Amnesia from Brain Damage
Amnesia typically occurs when there is damage to a variety of regions of the temporal lobe or the hippocampus, causing the inability to recall memories before or after an (often traumatic) event. There are two main forms of amnesia: retrograde and anterograde.
Retrograde Amnesia
Retrograde amnesia is the inability to recall memories made before the onset of amnesia. Retrograde amnesia is usually caused by head trauma or brain damage to parts of the brain other than the hippocampus (which is involved with the encoding process of new memories). Brain damage causing retrograde amnesia can be as varied as a cerebrovascular accident, stroke, tumor, hypoxia, encephalitis, or chronic alcoholism. Retrograde amnesia is usually temporary, and can often be treated by exposing the sufferer to cues for memories of the period of time that has been forgotten.
Anterograde Amnesia
Anterograde amnesia is the inability to create new memories after the onset of amnesia, while memories from before the event remain intact. Brain regions related to this condition include the medial temporal lobe, medial diencephalon, and hippocampus. Anterograde amnesia can be caused by the effects of long-term alcoholism, severe malnutrition, stroke, head trauma, surgery, Wernicke-Korsakoff syndrome, cerebrovascular events, anoxia, or other trauma. Anterograde amnesia cannot be treated with pharmaceuticals because of the damage to brain tissue. However, sufferers can be treated through education to define their daily routines: typically, procedural memories (motor skills and routines like tying shoes or playing an instrument) suffer less than declarative memories (facts and events). Additionally, social and emotional support is important to improve the quality of life of those suffering from anterograde amnesia.
The man with no short-term memory
In 1985, Clive Wearing, then a well-known musicologist, contracted a herpes simplex virus that attacked his central nervous system. The virus damaged his hippocampus, the area of the brain required in the transfer of memories from short-term to long-term storage. As a result, Wearing developed a profound case of total amnesia, both retrograde and anterograde. He is completely unable to form lasting new memories—his memory only lasts for between 7 and 30 seconds—and also cannot recall aspects of his past memories, frequently believing that he has only recently awoken from a coma.
Other Types of Amnesia
Some types of forgetting are not due to traumatic brain injury, but instead are the result of the changes the human brain goes through over the course of a lifetime.
Childhood Amnesia
Do you remember anything from when you were six months old? How about two years old? There’s a reason that nobody does. Childhood amnesia, also called infantile amnesia, is the inability of adults to retrieve memories before the age of 2–4. This is because for the first year or two of life, brain structures such as the limbic system (which holds the hippocampus and the amygdala and is vital to memory storage) are not yet fully developed. Research has shown that children have the capacity to remember events that happened to them from age 1 and before while they are still relatively young, but as they get older they tend to be unable to recall memories from their youngest years.
Neurocognitive Disorders
Neurocognitive disorders are a broad category of brain diseases typical to old age that cause a long-term and often gradual decrease in the ability to think and recall memories, such that a person’s daily functioning is affected. “Neurocognitive disorder” is synonymous with “dementia” and “senility,” but these terms are no longer used in the DSM-5. For the diagnosis to be made there must be a change from a person’s usual mental functioning and a greater decline than one would expect due to aging. These diseases also have a significant effect on a person’s caregivers. The most common type of dementia is Alzheimer’s disease, which makes up 50% to 70% of cases. Its most common symptoms are short-term memory loss and word-finding difficulties. People with Alzheimer’s also have trouble with visual-spatial areas (for example, they may get lost often), reasoning, judgement, and insight into whether they are experiencing memory loss at all. | textbooks/socialsci/Psychology/Cognitive_Psychology/Cognitive_Psychology_(Andrade_and_Walker)/05%3A_Working_Memory/5.05%3A_Forgetting.txt |
What factors determine what information can be retrieved from memory? One critical factor is the type of hints, or cues, in the environment. You may hear a song on the radio that suddenly evokes memories of an earlier time in your life, even if you were not trying to remember it when the song came on. Nevertheless, the song is closely associated with that time, so it brings the experience to mind.
The general principle that underlies the effectiveness of retrieval cues is the encoding specificity principle (Tulving & Thomson, 1973): when people encode information, they do so in specific ways. For example, take the song on the radio: perhaps you heard it while you were at a terrific party, having a great, philosophical conversation with a friend. Thus, the song became part of that whole complex experience. Years later, even though you haven’t thought about that party in ages, when you hear the song on the radio, the whole experience rushes back to you. In general, the encoding specificity principle states that, to the extent a retrieval cue (the song) matches or overlaps the memory trace of an experience (the party, the conversation), it will be effective in evoking the memory. A classic experiment on the encoding specificity principle had participants memorize a set of words in a unique setting. Later, the participants were tested on the word sets, either in the same location they learned the words or a different one. As a result of encoding specificity, the students who took the test in the same place they learned the words were actually able to recall more words (Godden & Baddeley, 1975) than the students who took the test in a new setting. In this instance, the physical context itself provided cues for retrieval. This is why it’s good to study for midterms and finals in the same room you’ll be taking them in.
One caution with this principle, though, is that, for the cue to work, it can’t match too many other experiences (Nairne, 2002; Watkins, 1975). Consider a lab experiment. Suppose you study 100 items; 99 are words, and one is a picture—of a penguin, item 50 in the list. Afterwards, the cue “recall the picture” would evoke “penguin” perfectly. No one would miss it. However, if the word “penguin” were placed in the same spot among the other 99 words, its memorability would be exceptionally worse. This outcome shows the power of distinctiveness that we discussed in the section on encoding: one picture is perfectly recalled from among 99 words because it stands out. Now consider what would happen if the experiment were repeated, but there were 25 pictures distributed within the 100-item list. Although the picture of the penguin would still be there, the probability that the cue “recall the picture” (at item 50) would be useful for the penguin would drop correspondingly. Watkins (1975) referred to this outcome as demonstrating the cue overload principle. That is, to be effective, a retrieval cue cannot be overloaded with too many memories. For the cue “recall the picture” to be effective, it should only match one item in the target set (as in the one-picture, 99-word case).
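A rough way to express the cue overload principle (a simplifying sketch, not an exact law from the studies cited) is that a cue’s usefulness is divided among everything it matches:

\[
P(\text{target recalled} \mid \text{cue}) \approx \frac{1}{k},
\]

where \(k\) is the number of stored items associated with the cue. With one picture among 99 words, the cue “recall the picture” has \(k = 1\) and works almost perfectly; with 25 pictures in the list, \(k = 25\), and the same cue is far less useful for retrieving the penguin in particular.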
We are more likely to be able to retrieve items from memory when conditions at retrieval are similar to the conditions under which we encoded them. Context-dependent learning refers to an increase in retrieval when the external situation in which information is learned matches the situation in which it is remembered. Godden and Baddeley (1975) conducted a study to test this idea using scuba divers. They asked the divers to learn a list of words either when they were on land or when they were underwater. Then they tested the divers on their memory, either in the same or the opposite situation. As you can see in Figure 15, the divers’ memory was better when they were tested in the same context in which they had learned the words than when they were tested in the other context.
You can see that context-dependent learning might also be important in improving your memory. For instance, you might want to try to study for an exam in a situation that is similar to the one in which you are going to take the exam. Whereas context-dependent learning refers to a match in the external situation between learning and remembering, state-dependent learning refers to superior retrieval of memories when the individual is in the same physiological or psychological state as during encoding. Research has found, for instance, that animals that learn a maze while under the influence of one drug tend to remember their learning better when they are tested under the influence of the same drug than when they are tested without the drug (Jackson, Koek, & Colpaert, 1992). And research with humans finds that bilinguals remember better when tested in the same language in which they learned the material (Marian & Kaushanskaya, 2007). Mood states may also produce state-dependent learning. People who learn information when they are in a bad (rather than a good) mood find it easier to recall these memories when they are tested while they are in a bad mood, and vice versa. It is easier to recall unpleasant memories than pleasant ones when we’re sad, and easier to recall pleasant memories than unpleasant ones when we’re happy (Bower, 1981; Eich, 2008). | textbooks/socialsci/Psychology/Cognitive_Psychology/Cognitive_Psychology_(Andrade_and_Walker)/05%3A_Working_Memory/5.06%3A_Encoding_Specificity_Principle.txt |
Memories are not stored as exact replicas of reality; rather, they are modified and reconstructed during recall.
Learning Objectives
• Evaluate how mood, suggestion, and imagination can lead to memory errors or bias
Key Takeaways
Key Points
• Because memories are reconstructed, they are susceptible to being manipulated with false information.
• Much research has shown that the phrasing of questions can alter memories. Children are particularly suggestible to such leading questions.
• People tend to place past events into existing representations of the world (schemas) to make memories more coherent.
• Intrusion errors occur when information that is related to the theme of a certain memory, but was not actually a part of the original episode, becomes associated with the event.
• There are many types of bias that influence recall, including fading-affect bias, hindsight bias, illusory correlation, self-serving bias, self-reference effect, source amnesia, source confusion, mood-dependent memory retrieval, and the mood congruence effect.
Key Terms
• Consolidation: The act or process of turning short-term memories into more permanent, long-term memories.
• Schema: A worldview or representation.
• Leading question: A query that suggests the answer or contains the information the examiner is looking for.
Memory Errors
Memories are fallible. They are reconstructions of reality filtered through people’s minds, not perfect snapshots of events. Because memories are reconstructed, they are susceptible to being manipulated with false information. Memory errors occur when memories are recalled incorrectly; a memory gap is the complete loss of a memory.
Schemas
In a 1932 study, Frederic Bartlett demonstrated how telling and retelling a story distorted information recall. He told participants a complicated Native American story and had them repeat it over a series of intervals. With each repetition, the stories were altered. Even when participants recalled accurate information, they filled in gaps with false information. Bartlett attributed this tendency to the use of schemas . A schema is a generalization formed in the mind based on experience. People tend to place past events into existing representations of the world to make memories more coherent. Instead of remembering precise details about commonplace occurrences, people use schemas to create frameworks for typical experiences, which shape their expectations and memories. The common use of schemas suggests that memories are not identical reproductions of experience, but a combination of actual events and already-existing schemas. Likewise, the brain has the tendency to fill in blanks and inconsistencies in a memory by making use of the imagination and similarities with other memories.
Leading Questions
Much research has shown that the phrasing of questions can also alter memories. A leading question is a question that suggests the answer or contains the information the examiner is looking for. For instance, one study showed that simply changing one word in a question could alter participants’ answers: After viewing video footage of a car accident, participants who were asked how “slow” the car was going gave lower speed estimates than those who were asked how “fast” it was going. Children are particularly susceptible to such leading questions.
Intrusion Errors
Intrusion errors occur when information that is related to the theme of a certain memory, but was not actually a part of the original episode, becomes associated with the event. This makes it difficult to distinguish which elements are in fact part of the original memory. Intrusion errors are frequently studied through word-list recall tests.
Intrusion errors can be divided into two categories. The first, extra-list errors, occur when items that were never on the study list are recalled. These errors often follow the pattern seen in the Deese–Roediger–McDermott (DRM) paradigm, in which the incorrectly recalled items are thematically related to the study list one is attempting to recall. Extra-list intrusions can also follow an acoustic-similarity pattern, in which a target word is replaced in recall by a non-studied word that sounds like it. The second category, intra-list errors, consists of items from the study list itself that are recalled where they are not relevant. Although these two categories of intrusion errors come from word-list studies in laboratories, the concepts can be extrapolated to real-life situations. Also, the same three factors that play a critical role in correct recall (i.e., recency, temporal association, and semantic relatedness) play a role in intrusions as well.
Types of Memory Bias
A person’s motivations, intentions, mood, and biases can impact what they remember about an event. There are many identified types of bias that influence people’s memories.
Fading-Affect Bias
In this type of bias, the emotion associated with unpleasant memories “fades” (i.e., is recalled less easily or is even forgotten) more quickly than emotion associated with positive memories.
Hindsight Bias
Hindsight bias is the “I knew it all along!” effect. In this type of bias, remembered events will seem predictable, even if at the time of encoding they were a complete surprise.
Illusory Correlation
When you experience illusory correlation, you inaccurately assume a relationship between two events related purely by coincidence. This type of bias comes from the human tendency to see cause-and-effect relationships when there are none; remember, correlation does not imply causation.
Mood Congruence Effect
The mood congruence effect is the tendency of individuals to retrieve information more easily when it has the same emotional content as their current emotional state. For instance, being in a depressed mood increases the tendency to remember negative events.
Mood-State Dependent Retrieval
Another documented phenomenon is mood-state dependent retrieval, which is a type of context-dependent memory. The retrieval of information is more effective when the emotional state at the time of retrieval is similar to the emotional state at the time of encoding. Thus, the probability of remembering an event can be enhanced by evoking the emotional state experienced during its initial processing.
Salience Effect
This effect, also known as the Von Restorff effect, is when an item that sticks out more (i.e., is noticeably different from its surroundings) is more likely to be remembered than other items.
Self-Reference Effect
In the self-reference effect, memories that are encoded with relation to the self are better recalled than similar memories encoded otherwise.
Self-Serving Bias
When remembering an event, individuals will often perceive themselves as being responsible for desirable outcomes, but not responsible for undesirable ones. This is known as the self-serving bias.
Source Amnesia
Source amnesia is the inability to remember where, when, or how previously learned information was acquired, while retaining the factual knowledge. Source amnesia is part of ordinary forgetting, but can also be a memory disorder. People suffering from source amnesia can also get confused about the exact content of what is remembered.
Source Confusion
Source confusion, in contrast, is not remembering the source of a memory correctly, such as personally witnessing an event versus actually only having been told about it. An example of this would be remembering the details of having been through an event, while in reality, you had seen the event depicted on television.
Considerations for Eyewitness Testimony
Increasing evidence shows that memories and individual perceptions are unreliable, biased, and manipulable.
Learning Objectives
• Analyze ways that the fallibility of memory can influence eyewitness testimonies
Key Takeaways
Key Points
• The other-race effect is a well-documented effect in which eyewitnesses are less accurate at identifying the faces of individuals from races other than their own.
• The weapon-focus effect is the tendency of an individual to hyper-focus on a weapon during a violent or potentially violent crime; this leads to encoding issues with other aspects of the event.
• The time between the perception and recollection of an event can also affect recollection. The accuracy of eyewitness memory degrades rapidly after initial encoding; the longer the delay between encoding and recall, the worse the recall will be.
• Research has consistently shown that even very subtle changes in the wording of a question can influence memory. Questions whose wording might bias the responder toward one answer over another are referred to as leading questions.
• Age has been shown to impact the accuracy of memory; younger witnesses are more suggestible and are more easily swayed by leading questions and misinformation.
• Other factors, such as personal biases, poor visibility, and the emotional tone of the event can influence eyewitness testimony.
Key Terms
• Leading question: A question that suggests the answer or contains the information the examiner is looking for.
• Eyewitness: Someone who sees an event and can report or testify about it.
Eyewitness testimony has been considered a credible source in the past, but its reliability has recently come into question. Research has shown that memories and individual perceptions are unreliable, often biased, and open to manipulation.
Encoding Issues
Nobody plans to witness a crime; it is not a controlled situation. There are many types of biases and attentional limitations that make it difficult to encode memories during a stressful event.
Time
When a person witnesses an incident, information about the event is entered into memory. However, the accuracy of this initial information acquisition can be influenced by a number of factors. One factor is the duration of the event being witnessed. In an experiment conducted by Clifford and Richards (1977), participants were instructed to approach police officers and engage in conversation for either 15 or 30 seconds. The experimenter then asked each officer to recall details of the person with whom they had been speaking (e.g., height, hair color, facial hair). The results showed that the officers’ recall was significantly more accurate for the 30-second conversations than for the 15-second conversations. This suggests that recall is better for longer events.
Other-Race Effect
The other-race effect (also called the own-race bias, cross-race effect, other-ethnicity effect, or same-race advantage) is one factor thought to affect the accuracy of facial recognition. Studies of this effect show that people are better able to recognize faces of their own race and are less reliable at identifying faces of other races, which impairs encoding. Prejudice can also shape immediate encoding by influencing the speed with which racially ambiguous faces are processed and classified. The resulting ambiguity in eyewitness memory for faces can be attributed to the divergent encoding strategies that are used when racial bias is at work.
Weapon-Focus Effect
The weapon-focus effect suggests that the presence of a weapon narrows a person’s attention, thus affecting eyewitness memory. A person focuses on a central detail (e.g., a knife) and loses focus on the peripheral details (e.g. the perpetrator’s characteristics). While the weapon is remembered clearly, the memories of the other details of the scene suffer. This effect occurs because remembering additional items would require visual attention, which is occupied by the weapon. Therefore, these additional stimuli are frequently not processed.
Retrieval Issues
Trials may take many weeks and require an eyewitness to recall and describe an event many times. These conditions are not ideal for perfect recall; memories can be affected by a number of variables.
More Time Issues
The accuracy of eyewitness memory degrades swiftly after initial encoding. The “forgetting curve” of eyewitness memory shows that memory begins to drop off sharply within 20 minutes following initial encoding, and begins to level off around the second day at a dramatically reduced level of accuracy. Unsurprisingly, research has consistently found that the longer the gap between witnessing and recalling the incident, the less accurately that memory will be recalled. There have been numerous experiments that support this claim. Malpass and Devine (1981) compared the accuracy of witness identifications after 3 days (short retention period) and 5 months (long retention period). The study found no false identifications after the 3-day period, but after 5 months, 35% of identifications were false.
The forgetting curve of memory: The red line shows that eyewitness memory declines rapidly following initial encoding and flattens out after around 2 days at a dramatically reduced level of accuracy.
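The general shape of such a forgetting curve can be sketched with a simple exponential decay model. This is only an illustration: the functional form and the stability parameter below are assumptions chosen for the sketch, not values fitted to eyewitness data.

```python
import math

def retention(t_hours, stability=12.0):
    """Toy model: proportion of encoded detail still retrievable after t_hours."""
    return math.exp(-t_hours / stability)

# Retention drops steeply at first and then levels off at a low value.
for t in (0, 0.5, 2, 24, 48, 120):
    print(f"{t:6.1f} h after encoding: {retention(t):.0%} retained")
```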
Leading Questions
In a legal context, the retrieval of information is usually elicited through different types of questioning. A great deal of research has investigated the impact of types of questioning on eyewitness memory, and studies have consistently shown that even very subtle changes in the wording of a question can have an influence. One classic study was conducted in 1974 by Elizabeth Loftus, a notable researcher on the accuracy of memory. In this experiment, participants watched a film of a car accident and were asked to estimate the speed the cars were going when they “contacted” or “smashed” each other. Results showed that just changing this one word influenced the speeds participants estimated: The group that was asked the speed when the cars “contacted” each other gave an average estimate of 31.8 miles per hour, whereas the average speed in the “smashed” condition was 40.8 miles per hour. Age has been shown to impact the accuracy of memory as well. Younger witnesses, especially children, are more susceptible to leading questions and misinformation.
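The wording effect can be summarized with a short sketch. The group means reported above come from the text; the per-participant estimates below are hypothetical values invented so the example runs, chosen only to land near those reported means.

```python
from statistics import mean

# Hypothetical speed estimates (mph) by question wording; not Loftus's raw data.
estimates_mph = {
    "contacted": [28, 30, 33, 34, 31, 35],
    "smashed": [38, 42, 39, 44, 40, 42],
}

for verb, speeds in estimates_mph.items():
    print(f'"{verb}": mean estimate {mean(speeds):.1f} mph (n={len(speeds)})')

shift = mean(estimates_mph["smashed"]) - mean(estimates_mph["contacted"])
print(f"Changing one verb shifted the average estimate by about {shift:.1f} mph")
```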
Bias
There are also a number of biases that can alter the accuracy of memory. For instance, racial and gender biases may play into what and how people remember. Likewise, factors that interfere with a witness’s ability to get a clear view of the event, such as time of day, weather, and poor eyesight, can all lead to false recollections. Finally, the emotional tone of the event can have an impact: if the event was traumatic, exciting, or otherwise physiologically arousing, the resulting release of adrenaline and other neurochemicals can impair the accuracy of memory recall.
Memory Conformity
“Memory conformity,” also known as social contagion of memory, refers to a situation in which one person’s report of a memory influences another person’s report of that same experience. This interference often occurs when individuals discuss what they saw or experienced, and can result in the memories of those involved being influenced by the report of another person. Some factors that contribute to memory conformity are age (the elderly and children are more likely to have memory distortions due to memory conformity) and confidence (individuals are more likely to conform their memories to others if they are not certain about what they remember).
Repressed Memories
Some research indicates that traumatic memories can be forgotten and later spontaneously recovered.
Learning Objectives
• Discuss the issues surrounding theories about repressed memories
Key Takeaways
Key Points
• Some theorize that survivors of childhood sexual abuse may use repression to cope with the traumatic experience.
• Detractors of the theory of repressed memories claim that for most people, the difficulty with traumatic memories is their intrusiveness—that people are unable to forget them despite often wanting to.
• Given how unreliable memory is, some argue that attempting to recover a repressed memory runs the risk of implanting “pseudomemories.”
• At this point it is impossible, without other corroborative evidence, to distinguish a true memory from a false one.
Key Terms
• Pseudomemory: A false or otherwise inaccurate memory that has usually been implanted by some form of suggestion. This term is generally used by people who do not believe that memories can be repressed and later recalled.
• Encode: To convert sensory input into a form able to be processed and deposited in the memory.
• Repressed memory: A hypothetical concept used to describe a significant memory, usually of a traumatic nature, that has become unavailable for recall.
The issue of whether memories can be repressed is controversial, to say the least. Some research indicates that memories of traumatic events, most commonly childhood sexual abuse, may be forgotten and later spontaneously recovered. However, whether these memories are actively repressed or forgotten due to natural processes is unclear.
Support for the Existence of Repressed Memories
In one study where victims of documented child abuse were re-interviewed many years later as adults, a high proportion of the women denied any memory of the abuse. Some speculate that survivors of childhood sexual abuse may repress the memories to cope with the traumatic experience. In cases where the perpetrator of the abuse is the child’s caretaker, the child may push the memories out of awareness so that he or she can maintain an attachment to the person on whom they are dependent for survival.
Traumatic memories are encoded differently from memories of ordinary experiences. In traumatic memories, there is a narrowed attentional focus on certain aspects of the memory, usually those that involved the most heightened emotional arousal. For instance, when remembering a traumatic event, individuals are most likely to remember how scared they felt, the image of having a gun held to their head, or other details that are highly emotionally charged. The limbic system is the part of the brain that is in charge of giving emotional significance to sensory inputs; however, the limbic system (particularly one of its components, the hippocampus) is also important to the storage and retrieval of long-term memories. Supporters of the existence of repressed memories hypothesize that because the hippocampus is sensitive to stress hormones and because the limbic system is heavily occupied with the emotions of the event, the memory-encoding functionality may be limited during traumatic events. The end result is that the memory is encoded as an affective (i.e., relating to or influenced by the emotions) and sensory imprint, rather than a memory that includes a full account of what happened. In this way, traumatic experiences appear to be qualitatively different from those of non-traumatic events, and, as a result, they are more difficult to remember accurately.
Psychological disorders exist that could cause the repression of memories. Psychogenic amnesia, or dissociative amnesia, is a memory disorder characterized by sudden autobiographical memory loss, said to occur for a period of time ranging from hours to years. More recently, dissociative amnesia has been defined as a dissociative disorder characterized by gaps in memory of personal information, especially of traumatic events. These gaps involve an inability to recall personal information, usually of a traumatic or stressful nature. In a change from the DSM-IV to the DSM-5, dissociative fugue is now classified as a type of dissociative amnesia. Psychogenic amnesia is distinguished from organic amnesia in that it is supposed to result from a nonorganic cause; no structural brain damage or brain lesion should be evident, but some form of psychological stress should precipitate the amnesia. However, psychogenic amnesia as a memory disorder is controversial.
Opposition to the Existence of Repressed Memories
Memories of events are always a mix of factual traces of sensory information overlaid with emotions, mingled with interpretation and filled in with imaginings. Thus, there is always skepticism about the factual validity of memories. There is considerable evidence that, rather than being pushed out of consciousness, traumatic memories are, for many people, intrusive and unforgettable. Given research showing how unreliable memory is, it is possible that any attempt to “recover” a repressed memory runs the risk of implanting false memories. Researchers who are skeptical of the idea of recovered memories note how susceptible memory is to various manipulations that can be used to implant false memories (sometimes called “pseudomemories”). A classic study in memory research conducted by Elizabeth Loftus became widely known as the “lost in the mall” experiment. In this study, subjects were given a booklet containing three accounts of real childhood events written by family members and a fourth account of a fictitious event of being lost in a shopping mall. A quarter of the subjects reported remembering the fictitious event, and elaborated on it with extensive circumstantial details.
While this experiment does show that false memories can be implanted in some subjects, it cannot be generalized to say that all recovered memories are false memories. Nevertheless, these studies prompted public and professional concern about recovered-memory therapy for sexual abuse. According to the American Psychiatric Association, “most leaders in the field agree that although it is a rare occurrence, a memory of early childhood abuse that has been forgotten can be remembered later. However, these leaders also agree that it is possible to construct convincing pseudomemories for events that never occurred. The mechanism(s) by which both of these phenomena happen are not well understood and, at this point it is impossible, without other corroborative evidence, to distinguish a true memory from a false one.”
Sir Frederic Bartlett’s Studies
The English psychologist Sir Frederic Bartlett (1886–1969) played a major role in the psychological study of memory, particularly the cognitive and social processes of remembering. Bartlett created short stories that were in some ways logical but also contained some very unusual and unexpected events. Bartlett discovered that people found it very difficult to recall the stories exactly, even after being allowed to study them repeatedly, and he hypothesized that the stories were difficult to remember because they did not fit the participants’ expectations about how stories should go.
Bartlett (1995 [1932]) is perhaps most famous for his method of repeated reproduction. He used many different written texts with this method, but in Remembering he confined himself to an analysis of participants' reproductions of the Native American folk tale The War of the Ghosts (see below), while drawing on corroborative detail from his work with other material. The story is particularly apt for the task because it involves numerous narrative disjunctures, a seeming lack of logic, and strange, vivid imagery, among other puzzling elements. The French anthropologist Lévy-Bruhl would have interpreted the story as a good example of "primitive mentality" (Wagoner, 2012). For Bartlett, the striking difference from British ways of thinking provided a powerful illustration of the process of conventionalization. He says, "I wished particularly to see how educated and rather sophisticated subjects would deal with this lack of obvious rational order" (1995 [1932], p. 64).
The War of the Ghosts
The War of the Ghosts was a story used by Sir Frederic Bartlett to test the influence of prior expectations on memory. Bartlett found that even when his British research participants were allowed to read the story many times they still could not remember it well, and he believed this was because it did not fit with their prior knowledge.
One night two young men from Egulac went down to the river to hunt seals and while they were there it became foggy and calm. Then they heard war-cries, and they thought: “Maybe this is a war-party.” They escaped to the shore, and hid behind a log. Now canoes came up, and they heard the noise of paddles, and saw one canoe coming up to them. There were five men in the canoe, and they said:
“What do you think? We wish to take you along. We are going up the river to make war on the people.”
One of the young men said, “I have no arrows.” “Arrows are in the canoe,” they said.
“I will not go along. I might be killed. My relatives do not know where I have gone. But you,” he said, turning to the other, “may go with them.”
So one of the young men went, but the other returned home.
And the warriors went on up the river to a town on the other side of Kalama. The people came down to the water and they began to fight, and many were killed. But presently the young man heard one of the warriors say, “Quick, let us go home: that Indian has been hit.” Now he thought: “Oh, they are ghosts.” He did not feel sick, but they said he had been shot.
So the canoes went back to Egulac and the young man went ashore to his house and made a fire. And he told everybody and said: “Behold I accompanied the ghosts, and we went to fight. Many of our fellows were killed, and many of those who attacked us were killed. They said I was hit, and I did not feel sick.”
He told it all, and then he became quiet. When the sun rose he fell down. Something black came out of his mouth. His face became contorted. The people jumped up and cried.
He was dead. (Bartlett, 1932)
Bartlett, F. C. (1932). Remembering. Cambridge: Cambridge University Press.
Bartlett had Cambridge students, colleagues, and other residents of Cambridge read the story twice at regular reading speed. After a period of approximately 15 minutes, participants wrote out the story by hand on a sheet of paper as best they could remember it. This was repeated several times at increasing time intervals, in one case ten years later. The reproductions produced by each participant were analyzed as a series or chain, exploring what was added, deleted, and transformed from the original to the first reproduction and from one reproduction to the next. In his analysis, Bartlett provides readers with a full series of reproductions for particularly illustrative cases and a detailed analysis of the changes introduced, and then elaborates on the general trends found across his sample. As mentioned above, his analysis incorporates participants' introspective reports in order to understand the interpretive and affective processes that lead to the transformations introduced into their reproductions. One participant provided the following detailed account (abbreviated here) at the first reproduction:
"When I read the story ... I thought the main point was the reference to the ghosts who were went off to fight the people further on ... I wrote out the story mainly by following my own images. I had a vague feeling of the style. There was a sort of rhythm about it I tried to imitate. I can't understand the contradiction about somebody being killed, and the man's being wounded, but feeling nothing. At first I thought there was something supernatural about the story. Then I saw that Ghosts must be a class, or clan name. That made the whole thing more comprehensible" (p.68). [39]
Bartlett notes that strict accuracy of reproduction is the exception rather than the rule. The most significant changes to the story were made on the first reproduction, which set the form, scheme, order, and arrangement of material for subsequent reproductions. However, as more time went by there was a progressive omission of details, simplification of events, and transformation of items into the familiar. Some of the most common and persistent changes were "hunting seals" into "fishing," "canoes" into "boats," the omission of the excuse that "we have no arrows," transformations of the proper names (i.e., Egulac and Kalama) before they disappeared completely, and the precise meaning of the "ghosts." Whenever something seemed strange or incomprehensible it was either omitted completely or rationalized. For example, the "something black" that comes out of the Indian's mouth was frequently understood as the materialization of his breath and in at least one case as "his soul" leaving his body. The second meaning given to an item often appeared only in participants' introspective reports on the first reproduction, but in subsequent reproductions it took the place of the original. In other cases, rationalization happened without the person's awareness, as when "hunting seals" became "fishing."
Clearly, remembering everything would be maladaptive, but what would it be like to remember nothing? We will now consider a profound form of forgetting called amnesia that is distinct from more ordinary forms of forgetting. Most of us have had exposure to the concept of amnesia through popular movies and television. Typically, in these fictionalized portrayals of amnesia, a character suffers some type of blow to the head and suddenly has no idea who they are and can no longer recognize their family or remember any events from their past. After some period of time (or another blow to the head), their memories come flooding back to them. Unfortunately, this portrayal of amnesia is not very accurate. What does amnesia typically look like?
The most widely studied amnesic patient was known by his initials H. M. (Scoville & Milner, 1957). As a teenager, H. M. suffered from severe epilepsy, and in 1953, he underwent surgery to have both of his medial temporal lobes removed to relieve his epileptic seizures. The medial temporal lobes encompass the hippocampus and surrounding cortical tissue. Although the surgery was successful in reducing H. M.’s seizures and his general intelligence was preserved, the surgery left H. M. with a profound and permanent memory deficit. From the time of his surgery until his death in 2008, H. M. was unable to learn new information, a memory impairment called anterograde amnesia. H. M. could not remember any event that occurred since his surgery, including highly significant ones, such as the death of his father. He could not remember a conversation he had a few minutes prior or recognize the face of someone who had visited him that same day. He could keep information in his short-term, or working, memory, but when his attention turned to something else, that information was lost for good. It is important to note that H. M.’s memory impairment was restricted to declarative memory, or conscious memory for facts and events. H. M. could learn new motor skills and showed improvement on motor tasks even in the absence of any memory for having performed the task before (Corkin, 2002).
In addition to anterograde amnesia, H. M. also suffered from temporally graded retrograde amnesia. Retrograde amnesia refers to an inability to retrieve old memories that occurred before the onset of amnesia. Extensive retrograde amnesia in the absence of anterograde amnesia is very rare (Kopelman, 2000). More commonly, retrograde amnesia co-occurs with anterograde amnesia and shows a temporal gradient, in which memories closest in time to the onset of amnesia are lost, but more remote memories are retained (Hodges, 1994). In the case of H. M., he could remember events from his childhood, but he could not remember events that occurred a few years before the surgery.
Amnesiac patients with damage to the hippocampus and surrounding medial temporal lobes typically manifest a similar clinical profile as H. M. The degree of anterograde amnesia and retrograde amnesia depend on the extent of the medial temporal lobe damage, with greater damage associated with a more extensive impairment (Reed & Squire, 1998). Anterograde amnesia provides evidence for the role of the hippocampus in the formation of long-lasting declarative memories, as damage to the hippocampus results in an inability to create this type of new memory. Similarly, temporally graded retrograde amnesia can be seen as providing further evidence for the importance of memory consolidation (Squire & Alvarez, 1995). A memory depends on the hippocampus until it is consolidated and transferred into a more durable form that is stored in the cortex. According to this theory, an amnesiac patient like H. M. could remember events from his remote past because those memories were fully consolidated and no longer depended on the hippocampus.
The classic amnesiac syndrome we have considered here is sometimes referred to as organic amnesia, and it is distinct from functional, or dissociative, amnesia. Functional amnesia involves a loss of memory that cannot be attributed to brain injury or any obvious brain disease and is typically classified as a mental disorder rather than a neurological disorder (Kihlstrom, 2005). The clinical profile of dissociative amnesia is very different from that of patients who suffer from amnesia due to brain damage or deterioration. Individuals who experience dissociative amnesia often have a history of trauma. Their amnesia is retrograde, encompassing autobiographical memories from a portion of their past. In an extreme version of this disorder, people enter a dissociative fugue state, in which they lose most or all of their autobiographical memories and their sense of personal identity. They may be found wandering in a new location, unaware of who they are and how they got there. Dissociative amnesia is controversial, as both the causes and existence of it have been called into question. The memory loss associated with dissociative amnesia is much less likely to be permanent than it is in organic amnesia.
Just as the case study of the mnemonist Shereshevsky illustrates what a life with a near perfect memory would be like, amnesiac patients show us what a life without memory would be like. Each of the mechanisms we discussed that explain everyday forgetting—encoding failures, decay, insufficient retrieval cues, interference, and intentional attempts to forget—help to keep us highly efficient, retaining the important information and for the most part, forgetting the unimportant. Amnesiac patients allow us a glimpse into what life would be like if we suffered from profound forgetting and perhaps show us that our everyday lapses in memory are not so bad after all.
We now understand that amnesia is the loss of long-term memory that occurs as the result of disease, physical trauma, or psychological trauma. Psychologist Endel Tulving (2002) and his colleagues at the University of Toronto studied K.C. for years. K.C. suffered a traumatic head injury in a motorcycle accident and was left with severe amnesia. Tulving writes,
the outstanding fact about K.C.’s mental make-up is his utter inability to remember any events, circumstances, or situations from his own life. His episodic amnesia covers his whole life, from birth to the present. The only exception is the experiences that, at any time, he has had in the last minute or two. (Tulving, 2002, p. 14)
Anterograde Amnesia
There are two common types of amnesia: anterograde amnesia and retrograde amnesia (Figure 1). Anterograde amnesia is commonly caused by brain trauma, such as a blow to the head.
With anterograde amnesia, you cannot remember new information, although you can remember information and events that happened prior to your injury. The hippocampus is usually affected (McLeod, 2011). This suggests that damage to the brain has resulted in the inability to transfer information from short-term to long-term memory; that is, the inability to consolidate memories.
Many people with this form of amnesia are unable to form new episodic or semantic memories, but are still able to form new procedural memories (Bayley & Squire, 2002). This was true of H. M., who was discussed earlier. The brain damage caused by his surgery resulted in anterograde amnesia. H. M. would read the same magazine over and over, having no memory of ever reading it; it was always new to him. He also could not remember people he had met after his surgery. If you were introduced to H. M. and then you left the room for a few minutes, he would not know you upon your return and would introduce himself to you again. However, when presented with the same puzzle several days in a row, although he did not remember having seen the puzzle before, his speed at solving it became faster each day (because of relearning) (Corkin, 1965, 1968).
Retrograde Amnesia
Retrograde amnesia is loss of memory for events that occurred prior to the trauma. People with retrograde amnesia cannot remember some or even all of their past. They have difficulty remembering episodic memories. What if you woke up in the hospital one day and there were people surrounding your bed claiming to be your spouse, your children, and your parents? The trouble is you don’t recognize any of them. You were in a car accident, suffered a head injury, and now have retrograde amnesia. You don’t remember anything about your life prior to waking up in the hospital. This may sound like the stuff of Hollywood movies, and Hollywood has been fascinated with the amnesia plot for nearly a century, going all the way back to the film Garden of Lies from 1915 to more recent movies such as the Jason Bourne trilogy starring Matt Damon. However, for real-life sufferers of retrograde amnesia, like former NFL football player Scott Bolzan, the story is not a Hollywood movie. Bolzan fell, hit his head, and deleted 46 years of his life in an instant. He is now living with one of the most extreme cases of retrograde amnesia on record.
Link to Learning
View the video story profiling Scott Bolzan’s amnesia and his attempts to get his life back.
GLOSSARY
• Amnesia: loss of long-term memory that occurs as the result of disease, physical trauma, or psychological trauma
• Anterograde amnesia: loss of memory for events that occur after the brain trauma
• Retrograde amnesia: loss of memory for events that occurred prior to brain trauma
Eyewitnesses can provide very compelling legal testimony, but rather than recording experiences flawlessly, their memories are susceptible to a variety of errors and biases. They (like the rest of us) can make errors in remembering specific details and can even remember whole events that did not actually happen. In this module, we discuss several of the common types of errors, and what they can tell us about human memory and its interactions with the legal system.
Learning Objectives
• Describe the kinds of mistakes that eyewitnesses commonly make and some of the ways that this can impede justice.
• Explain some of the errors that are common in human memory.
• Describe some of the important research that has demonstrated human memory errors and their consequences.
What Is Eyewitness Testimony?
Eyewitness testimony is what happens when a person witnesses a crime (or accident, or other legally important event) and later gets up on the stand and recalls for the court all the details of the witnessed event. It involves a more complicated process than might initially be presumed. It includes what happens during the actual crime to facilitate or hamper witnessing, as well as everything that happens from the time the event is over to the later courtroom appearance.
The eyewitness may be interviewed by the police and numerous lawyers, describe the perpetrator to several different people, and make an identification of the perpetrator, among other things.
Why Is Eyewitness Testimony an Important Area of Psychological Research?
When an eyewitness stands up in front of the court and describes what happened from her own perspective, this testimony can be extremely compelling; it is hard for those hearing this testimony to take it “with a grain of salt,” or otherwise adjust its power. But to what extent is this necessary?
There is now a wealth of evidence, from research conducted over several decades, suggesting that eyewitness testimony is probably the most persuasive form of evidence presented in court, but in many cases, its accuracy is dubious. There is also evidence that mistaken eyewitness evidence can lead to wrongful conviction—sending people to prison for years or decades, even to death row, for crimes they did not commit. Faulty eyewitness testimony has been implicated in at least 75% of DNA exoneration cases—more than any other cause (Garrett, 2011). In a particularly famous case, a man named Ronald Cotton was identified by a rape victim, Jennifer Thompson, as her rapist, and was found guilty and sentenced to life in prison. After more than 10 years, he was exonerated (and the real rapist identified) based on DNA evidence. For details on this case and other (relatively) lucky individuals whose false convictions were subsequently overturned with DNA evidence, see the Innocence Project website.
There is also hope, though, that many of the errors may be avoidable if proper precautions are taken during the investigative and judicial processes. Psychological science has taught us what some of those precautions might involve, and we discuss some of that science now.
Misinformation
In an early study of eyewitness memory, undergraduate subjects first watched a slideshow depicting a small red car driving and then hitting a pedestrian (Loftus, Miller, & Burns, 1978). Some subjects were then asked leading questions about what had happened in the slides. For example, subjects were asked, “How fast was the car traveling when it passed the yield sign?” But this question was actually designed to be misleading, because the original slide included a stop sign rather than a yield sign.
Later, subjects were shown pairs of slides. One of the pair was the original slide containing the stop sign; the other was a replacement slide containing a yield sign. Subjects were asked which of the pair they had previously seen. Subjects who had been asked about the yield sign were likely to pick the slide showing the yield sign, even though they had originally seen the slide with the stop sign. In other words, the misinformation in the leading question led to inaccurate memory.
This phenomenon is called the misinformation effect, because the misinformation that subjects were exposed to after the event (here in the form of a misleading question) apparently contaminates subjects’ memories of what they witnessed. Hundreds of subsequent studies have demonstrated that memory can be contaminated by erroneous information that people are exposed to after they witness an event (see Frenda, Nichols, & Loftus, 2011; Loftus, 2005). The misinformation in these studies has led people to incorrectly remember everything from small but crucial details of a perpetrator’s appearance to objects as large as a barn that wasn’t there at all.
These studies have demonstrated that young adults (the typical research subjects in psychology) are often susceptible to misinformation, but that children and older adults can be even more susceptible (Bartlett & Memon, 2007; Ceci & Bruck, 1995). In addition, misinformation effects can occur easily, and without any intention to deceive (Allan & Gabbert, 2008). Even slight differences in the wording of a question can lead to misinformation effects. Subjects in one study were more likely to say yes when asked “Did you see the broken headlight?” than when asked “Did you see a broken headlight?” (Loftus, 1975).
Other studies have shown that misinformation can corrupt memory even more easily when it is encountered in social situations (Gabbert, Memon, Allan, & Wright, 2004). This is a problem particularly in cases where more than one person witnesses a crime. In these cases, witnesses tend to talk to one another in the immediate aftermath of the crime, including as they wait for police to arrive. But because different witnesses are different people with different perspectives, they are likely to see or notice different things, and thus remember different things, even when they witness the same event. So when they communicate about the crime later, they not only reinforce common memories for the event, they also contaminate each other’s memories for the event (Gabbert, Memon, & Allan, 2003; Paterson & Kemp, 2006; Takarangi, Parker, & Garry, 2006).
The misinformation effect has been modeled in the laboratory. Researchers had subjects watch a video in pairs. Both subjects sat in front of the same screen, but because they wore differently polarized glasses, they saw two different versions of a video, projected onto a screen. So, although they were both watching the same screen, and believed (quite reasonably) that they were watching the same video, they were actually watching two different versions of the video (Garry, French, Kinzett, & Mori, 2008).
In the video, Eric the electrician is seen wandering through an unoccupied house and helping himself to the contents thereof. A total of eight details were different between the two videos. After watching the videos, the “co-witnesses” worked together on 12 memory test questions. Four of these questions dealt with details that were different in the two versions of the video, so subjects had the chance to influence one another. Then subjects worked individually on 20 additional memory test questions. Eight of these were for details that were different in the two videos. Subjects’ accuracy was highly dependent on whether they had discussed the details previously. Their accuracy for items they had not previously discussed with their co-witness was 79%. But for items that they had discussed, their accuracy dropped markedly, to 34%. That is, subjects allowed their co-witnesses to corrupt their memories for what they had seen.
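A minimal sketch of how this co-witness contamination can be tallied is shown below. The 79% and 34% accuracy figures come from the text; the counts are expressed per 100 answered items purely for convenience and are not the study's raw numbers.

```python
# Accuracy on memory-test items, split by whether the (different) detail had
# been discussed with the co-witness. Counts per 100 responses are assumed
# for illustration; only the percentages come from the study described above.
results = {
    "details not discussed with co-witness": (79, 100),
    "details discussed with co-witness": (34, 100),
}

for condition, (correct, total) in results.items():
    print(f"{condition}: {correct}/{total} = {correct / total:.0%} accurate")
```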
Identifying Perpetrators
In addition to correctly remembering many details of the crimes they witness, eyewitnesses often need to remember the faces and other identifying features of the perpetrators of those crimes. Eyewitnesses are often asked to describe that perpetrator to law enforcement and later to make identifications from books of mug shots or lineups. Here, too, there is a substantial body of research demonstrating that eyewitnesses can make serious, but often understandable and even predictable, errors (Caputo & Dunning, 2007; Cutler & Penrod, 1995).
In most jurisdictions in the United States, lineups are typically conducted with pictures, called photo spreads, rather than with actual people standing behind one-way glass (Wells, Memon, & Penrod, 2006). The eyewitness is given a set of small pictures of perhaps six or eight individuals who are dressed similarly and photographed in similar circumstances. One of these individuals is the police suspect, and the remainder are “foils” or “fillers” (people known to be innocent of the particular crime under investigation). If the eyewitness identifies the suspect, then the investigation of that suspect is likely to progress. If a witness identifies a foil or no one, then the police may choose to move their investigation in another direction.
This process is modeled in laboratory studies of eyewitness identifications. In these studies, research subjects witness a mock crime (often as a short video) and then are asked to make an identification from a photo or a live lineup. Sometimes the lineups are target present, meaning that the perpetrator from the mock crime is actually in the lineup, and sometimes they are target absent, meaning that the lineup is made up entirely of foils. The subjects, or mock witnesses, are given some instructions and asked to pick the perpetrator out of the lineup. The particular details of the witnessing experience, the instructions, and the lineup members can all influence the extent to which the mock witness is likely to pick the perpetrator out of the lineup, or indeed to make any selection at all. Mock witnesses (and indeed real witnesses) can make errors in two different ways. They can fail to pick the perpetrator out of a target present lineup (by picking a foil or by neglecting to make a selection), or they can pick a foil in a target absent lineup (wherein the only correct choice is to not make a selection).
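The possible outcomes described above can be made explicit with a small sketch. The category labels are added here for illustration; only the logic (which choices count as errors in target-present versus target-absent lineups) follows the text.

```python
def classify_lineup_outcome(target_present: bool, choice: str) -> str:
    """choice is 'suspect', 'foil', or 'none' (no selection)."""
    if target_present:
        if choice == "suspect":
            return "correct identification"
        return "error: missed the perpetrator (picked a foil or made no selection)"
    if choice == "none":
        return "correct rejection"
    return "error: false identification from a target-absent lineup"

print(classify_lineup_outcome(True, "suspect"))  # correct identification
print(classify_lineup_outcome(False, "foil"))    # error: false identification
```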
Some factors have been shown to make eyewitness identification errors particularly likely. These include poor vision or viewing conditions during the crime, particularly stressful witnessing experiences, too little time to view the perpetrator or perpetrators, too much delay between witnessing and identifying, and being asked to identify a perpetrator from a race other than one’s own (Bornstein, Deffenbacher, Penrod, & McGorty, 2012; Brigham, Bennett, Meissner, & Mitchell, 2007; Burton, Wilson, Cowan, & Bruce, 1999; Deffenbacher, Bornstein, Penrod, & McGorty, 2004).
It is hard for the legal system to do much about most of these problems. But there are some things that the justice system can do to help lineup identifications “go right.” For example, investigators can put together high-quality, fair lineups. A fair lineup is one in which the suspect and each of the foils is equally likely to be chosen by someone who has read an eyewitness description of the perpetrator but who did not actually witness the crime (Brigham, Ready, & Spier, 1990). This means that no one in the lineup should “stick out,” and that everyone should match the description given by the eyewitness. Other important recommendations that have come out of this research include better ways to conduct lineups, “double blind” lineups, unbiased instructions for witnesses, and conducting lineups in a sequential fashion (see Technical Working Group for Eyewitness Evidence, 1999; Wells et al., 1998; Wells & Olson, 2003).
Kinds of Memory Biases
Memory is also susceptible to a wide variety of other biases and errors. People can forget events that happened to them and people they once knew. They can mix up details across time and place. They can even remember whole complex events that never happened at all.
Importantly, these errors, once made, can be very hard to unmake. A memory is no less “memorable” just because it is wrong.
Some small memory errors are commonplace, and you have no doubt experienced many of them. You set down your keys without paying attention, and then cannot find them later when you go to look for them. You try to come up with a person’s name but cannot find it, even though you have the sense that it is right at the tip of your tongue (psychologists actually call this the tip-of-the-tongue effect, or TOT) (Brown, 1991).
Other sorts of memory biases are more complicated and longer lasting. For example, it turns out that our expectations and beliefs about how the world works can have huge influences on our memories. Because many aspects of our everyday lives are full of redundancies, our memory systems take advantage of the recurring patterns by forming and using schemata, or memory templates (Alba & Hasher, 1983; Brewer & Treyens, 1981). Thus, we know to expect that a library will have shelves and tables and librarians, and so we don’t have to spend energy noticing these at the time. The result of this lack of attention, however, is that one is likely to remember schema-consistent information (such as tables), and to remember them in a rather generic way, whether or not they were actually present.
False Memory
Some memory errors are so “large” that they almost belong in a class of their own: false memories. Back in the early 1990s a pattern emerged whereby people would go into therapy for depression and other everyday problems, but over the course of the therapy develop memories for violent and horrible victimhood (Loftus & Ketcham, 1994). These patients’ therapists claimed that the patients were recovering genuine memories of real childhood abuse, buried deep in their minds for years or even decades. But some experimental psychologists believed that the memories were instead likely to be false—created in therapy. These researchers then set out to see whether it would indeed be possible for wholly false memories to be created by procedures similar to those used in these patients’ therapy.
In early false memory studies, undergraduate subjects’ family members were recruited to provide events from the students’ lives. The student subjects were told that the researchers had talked to their family members and learned about four different events from their childhoods. The researchers asked if the now undergraduate students remembered each of these four events—introduced via short hints. The subjects were asked to write about each of the four events in a booklet and then were interviewed two separate times. The trick was that one of the events came from the researchers rather than the family (and the family had actually assured the researchers that this event had not happened to the subject). In the first such study, this researcher-introduced event was a story about being lost in a shopping mall and rescued by an older adult. In this study, after just being asked whether they remembered these events occurring on three separate occasions, a quarter of subjects came to believe that they had indeed been lost in the mall (Loftus & Pickrell, 1995). In subsequent studies, similar procedures were used to get subjects to believe that they nearly drowned and had been rescued by a lifeguard, or that they had spilled punch on the bride’s parents at a family wedding, or that they had been attacked by a vicious animal as a child, among other events (Heaps & Nash, 1999; Hyman, Husband, & Billings, 1995; Porter, Yuille, & Lehman, 1999).
More recent false memory studies have used a variety of different manipulations to produce false memories in substantial minorities and even occasional majorities of manipulated subjects (Braun, Ellis, & Loftus, 2002; Lindsay, Hagen, Read, Wade, & Garry, 2004; Mazzoni, Loftus, Seitz, & Lynn, 1999; Seamon, Philbin, & Harrison, 2006; Wade, Garry, Read, & Lindsay, 2002). For example, one group of researchers used a mock-advertising study, wherein subjects were asked to review (fake) advertisements for Disney vacations, to convince subjects that they had once met the character Bugs Bunny at Disneyland, an impossible false memory because Bugs is a Warner Brothers character (Braun et al., 2002). Another group of researchers photoshopped childhood photographs of their subjects into a hot air balloon picture and then asked the subjects to try to remember and describe their hot air balloon experience (Wade et al., 2002).
Other researchers gave subjects unmanipulated class photographs from their childhoods along with a fake story about a class prank, and thus enhanced the likelihood that subjects would falsely remember the prank (Lindsay et al., 2004).
Using a false feedback manipulation, we have been able to persuade subjects to falsely remember having a variety of childhood experiences. In these studies, subjects are told (falsely) that a powerful computer system has analyzed questionnaires that they completed previously and has concluded that they had a particular experience years earlier. Subjects apparently believe what the computer says about them and adjust their memories to match this new information. A variety of different false memories have been implanted in this way. In some studies, subjects are told they once got sick on a particular food (Bernstein, Laney, Morris, & Loftus, 2005). These memories can then spill out into other aspects of subjects’ lives, such that they often become less interested in eating that food in the future (Bernstein & Loftus, 2009b). Other false memories implanted with this methodology include having an unpleasant experience with the character Pluto at Disneyland and witnessing physical violence between one’s parents (Berkowitz, Laney, Morris, Garry, & Loftus, 2008; Laney & Loftus, 2008).
Importantly, once these false memories are implanted—whether through complex methods or simple ones—it is extremely difficult to tell them apart from true memories (Bernstein & Loftus, 2009a; Laney & Loftus, 2008).
Conclusion
To conclude, eyewitness testimony is very powerful and convincing to jurors, even though it is not particularly reliable. Identification errors occur, and these errors can lead to people being falsely accused and even convicted. Likewise, eyewitness memory can be corrupted by leading questions, misinterpretations of events, conversations with co-witnesses, and their own expectations for what should have happened. People can even come to remember whole events that never occurred.
The problems with memory in the legal system are real. But what can we do to start to fix them? A number of specific recommendations have already been made, and many of these are in the process of being implemented (e.g., Steblay & Loftus, 2012; Technical Working Group for Eyewitness Evidence, 1999; Wells et al., 1998). Some of these recommendations are aimed at specific legal procedures, including when and how witnesses should be interviewed, and how lineups should be constructed and conducted. Other recommendations call for appropriate education (often in the form of expert witness testimony) to be provided to jury members and others tasked with assessing eyewitness memory. Eyewitness testimony can be of great value to the legal system, but decades of research now argues that this testimony is often given far more weight than its accuracy justifies.
Outside Resources
Video 1: Eureka Foong's - The Misinformation Effect. This is a student-made video illustrating this phenomenon of altered memory. It was one of the winning entries in the 2014 Noba Student Video Award.
Video 2: Ang Rui Xia & Ong Jun Hao's - The Misinformation Effect. Another student-made video exploring the misinformation effect. Also an award winner from 2014.
In a groundbreaking series of studies in the 1970s and early 1980s, psychologist Neisser and his colleagues devised a visual analogue of the dichotic listening task (Neisser & Becklen, 1975). Their subjects viewed a video of two distinct, but partially transparent and overlapping, events. For example, one event might involve two people playing a hand-clapping game and the other might show people passing a ball. Because the two events were partially transparent and overlapping, both produced sensory signals on the retina regardless of which event received the participant’s attention. When participants were asked to monitor one of the events by counting the number of times the actors performed an action (e.g., hand clapping or completed passes), they often failed to notice unexpected events in the ignored video stream (e.g., the hand-clapping players stopping their game and shaking hands). As for dichotic listening, the participants were unaware of events happening outside the focus of their attention, even when looking right at them. They could tell that other “stuff” was happening on the screen, but many were unaware of the meaning or substance of that stuff.
To test the power of selective attention to induce failures of awareness, Neisser and colleagues (Neisser, 1979) designed a variant of this task in which participants watched a video of two teams of players, one wearing white shirts and one wearing black shirts. Subjects were asked to press a key whenever the players in white successfully passed a ball, but to ignore the players in black. As for the other videos, the teams were filmed separately and then superimposed so that they literally occupied the same space (they were partially transparent). Partway through the video, a person wearing a raincoat and carrying an umbrella strolled through the scene. People were so intently focused on spotting passes that they often missed the “umbrella woman.” (Pro tip: If you look closely at the video, you’ll see that Ulric Neisser plays on both the black and white teams.)
These surprising findings were well known in the field, but for decades, researchers dismissed their implications because the displays had such an odd, ghostly appearance. Of course, we would notice if the displays were fully opaque and vivid rather than partly transparent and grainy. Surprisingly, no studies were built on Neisser’s method for nearly 20 years. Inspired by these counterintuitive findings and after discussing them with Neisser himself, Christopher Chabris and I revisited them in the late 1990s (Simons & Chabris, 1999). We replicated Neisser’s work, again finding that many people missed the umbrella woman when all of the actors in the video were partially transparent and occupying the same space. But, we added another wrinkle: a version of the video in which all of the actions of both teams of players were choreographed and filmed with a single camera. The players moved in and around each other and were fully visible. In the most dramatic version, we had a woman in a gorilla suit walk into the scene, stop to face the camera, thump her chest, and then walk off the other side after nine seconds on screen. Fully half the observers missed the gorilla when counting passes by the team in white.
This phenomenon is now known as inattentional blindness, the surprising failure to notice an unexpected object or event when attention is focused on something else (Mack & Rock, 1998). The past 15 years has seen a surge of interest in such failures of awareness, and we now have a better handle on the factors that cause people to miss unexpected events as well as the range of situations in which inattentional blindness occurs. People are much more likely to notice unexpected objects that share features with the attended items in a display (Most et al., 2001). For example, if you count passes by the players wearing black, you are more likely to notice the gorilla than if you count passes by the players wearing white because the color of the gorilla more closely matches that of the black-shirted players (Simons & Chabris, 1999). However, even unique items can go unnoticed. In one task, people monitored black shapes and ignored white shapes that moved around a computer window (Most et al., 2001). Approximately 30 percent of them failed to detect the bright red cross traversing the display, even though it was the only colored item and was visible for five seconds.
Another crucial influence on noticing is the effort you put into the attention-demanding task. If you have to keep separate counts of bounce passes and aerial passes, you are less likely to notice the gorilla (Simons & Chabris, 1999), and if you are tracking faster moving objects, you are less likely to notice (Simons & Jensen, 2009). You can even miss unexpected visual objects when you devote your limited cognitive resources to a memory task (Fougnie & Marois, 2007), so the limits are not purely visual. Instead, they appear to reflect limits on the capacity of attention. Without attention to the unexpected event, you are unlikely to become aware of it (Mack & Rock, 1998; Most, Scholl, Clifford, & Simons, 2005).
Inattentional blindness is not just a laboratory curiosity—it also occurs in the real world and under more natural conditions. In a recent study (Chabris, Weinberger, Fontaine, & Simons, 2011), Chabris and colleagues simulated a famous police misconduct case in which a Boston police officer was convicted of lying because he claimed not to have seen a brutal beating (Lehr, 2009). At the time, he had been chasing a murder suspect and ran right past the scene of a brutal assault. In Chabris' simulation, subjects jogged behind an experimenter who ran right past a simulated fight scene. At night, 65 percent missed the fight scene. Even during broad daylight, 44 percent of observers jogged right past it without noticing, lending some plausibility to the Boston officer's claim that he never saw the beating.
Perhaps more importantly, auditory distractions can induce real-world failures to see. Although people believe they can multitask, few can. And, talking on a phone while driving or walking decreases situation awareness and increases the chances that people will miss something important (Strayer & Johnston, 2001). In a dramatic illustration of cell phone–induced inattentional blindness, Ira Hyman observed that people talking on a cell phone as they walked across a college campus were less likely than other pedestrians to notice a unicycling clown who rode across their path (Hyman, Boss, Wise, McKenzie, & Caggiano, 2011).
Recently, the study of this sort of awareness failure has returned to its roots in studies of listening, with studies documenting inattentional deafness: When listening to a set of spatially localized conversations over headphones, people often fail to notice the voice of a person walking through the scene repeatedly stating “I am a gorilla” (Dalton & Fraenkel, 2012). Under conditions of focused attention, we see and hear far less of the unattended information than we might expect (Macdonald & Lavie, 2011; Wayand, Levin, & Varakin, 2005).
We now have a good understanding of the ways in which focused attention affects the detection of unexpected objects falling outside that focus. The greater the demands on attention, the less likely people are to notice objects falling outside their attention (Macdonald & Lavie, 2011; Simons & Chabris, 1999; Simons & Jensen, 2009). The more an unexpected object resembles the ignored elements of a scene, the less likely people are to notice it. And, the more distracted we are, the less likely we are to be aware of our surroundings. Under conditions of distraction, we effectively develop tunnel vision.
Despite this growing understanding of the limits of attention and the factors that lead to more or less noticing, we have relatively less understanding of individual differences in noticing (Simons & Jensen, 2009). Do some people consistently notice the unexpected while others are obliviously unaware of their surroundings? Or, are we all subject to inattentional blindness due to structural limits on the nature of attention? The question remains controversial. A few studies suggest that those people who have a greater working memory capacity are more likely to notice unexpected objects (Hannon & Richards, 2010; Richards, Hannon, & Derakshan, 2010). In effect, those who have more resources available when focusing attention are more likely to spot other aspects of their world. However, other studies find no such relationship: Those with greater working memory capacity are not any more likely to spot an unexpected object or event (Seegmiller, Watson, & Strayer, 2011; Bredemeier & Simons, 2012). There are theoretical reasons to predict each pattern. With more resources available, people should be more likely to notice (see Macdonald & Lavie, 2011). However, people with greater working memory capacity also tend to be better able to maintain their focus on their prescribed task, meaning that they should be less likely to notice. At least one study suggests that the ability to perform a task does not predict the likelihood of noticing (Simons & Jensen, 2009; for a replication, see Bredemeier & Simons, 2012). In a study I conducted with Melinda Jensen, we measured how well people could track moving objects around a display, gradually increasing the speed until people reached a level of 75% accuracy. Tracking ability varied greatly: Some people could track objects at more than twice the speed others could. Yet, the ability to track objects more easily was unrelated to the odds of noticing an unexpected event. Apparently, as long as people try to perform the tracking task, they are relatively unlikely to notice unexpected events.
What makes these findings interesting and important is that they run counter to our intuitions. Most people are confident they would notice the chest-thumping gorilla. In fact, nearly 90% believe they would spot the gorilla (Levin & Angelone, 2008), and in a national survey, 78% agreed with the statement, "People generally notice when something unexpected enters their field of view, even when they're paying attention to something else" (Simons & Chabris, 2010). Similarly, people are convinced that they would spot errors in movies or changes to a conversation partner (Levin & Angelone, 2008). We think we see and remember far more of our surroundings than we actually do. But why do we have such mistaken intuitions?
One explanation for this mistaken intuition is that our experiences themselves mislead us (Simons & Chabris, 2010). We rarely experience a study situation such as the gorilla experiment in which we are forced to confront something obvious that we just missed. That partly explains why demonstrations such as that one are so powerful: We expect that we would notice the gorilla, and we cannot readily explain away our failure to notice it. Most of the time, we are happily unaware of what we have missed, but we are fully aware of those elements of a scene that we have noticed. Consequently, if we assume our experiences are representative of the state of the world, we will conclude that we notice unexpected events. We don’t easily think about what we’re missing.
Given the limits on attention coupled with our mistaken impression that important events will capture our attention, how has our species survived? Why weren’t our ancestors eaten by unexpected predators? One reason is that our ability to focus attention intently might have been more evolutionarily useful than the ability to notice unexpected events. After all, for an event to be unexpected, it must occur relatively infrequently. Moreover, most events don’t require our immediate attention, so if inattentional blindness delays our ability to notice the events, the consequences could well be minimal. In a social context, others might notice that event and call attention to it. Although inattentional blindness might have had minimal consequences over the course of our evolutionary history, it does have consequences now.
At pedestrian speeds and with minimal distraction, inattentional blindness might not matter for survival. But in modern society, we face greater distractions and move at greater speeds, and even a minor delay in noticing something unexpected can mean the difference between a fender-bender and a lethal collision. If talking on a phone increases your odds of missing a unicycling clown, it likely also increases your odds of missing the child who runs into the street or the car that runs a red light. Why, then, do people continue to talk on the phone when driving? The reason might well be the same mistaken intuition that makes inattentional blindness surprising: Drivers simply do not notice how distracted they are when they are talking on a phone, so they believe they can drive just as well when talking on a phone even though they can't (Strayer & Johnston, 2001).
So, what can you do about inattentional blindness? The short answer appears to be, “not much.” There is no magical elixir that will overcome the limits on attention, allowing you to notice everything (and that would not be a good outcome anyway). But, there is something you can do to mitigate the consequences of such limits. Now that you know about inattentional blindness, you can take steps to limit its impact by recognizing how your intuitions will lead you astray.
First, maximize the attention you do have available by avoiding distractions, especially under conditions for which an unexpected event might be catastrophic. The ring of a new call or the ding of a new text are hard to resist, so make it impossible to succumb to the temptation by turning your phone off or putting it somewhere out of reach when you are driving. If you know that you will be tempted and you know that using your phone will increase inattentional blindness, you must be proactive. Second, pay attention to what others might not notice. If you are a bicyclist, don't assume that the driver sees you, even if they appear to make eye contact. Looking is not the same as seeing. Only by understanding the limits of attention and by recognizing our mistaken beliefs about what we "know" to be true can we avoid the modern-day consequences of those limits.
Outside Resources
• Article: Scholarpedia article on inattentional blindness: http://www.scholarpedia.org/article/Inattentional_blindness
• Video: The original gorilla video
• Video: The sequel to the gorilla video
• Web: Website for Chabris & Simons book, The Invisible Gorilla. Includes links to videos and descriptions of the research on inattentional blindness: http://www.theinvisiblegorilla.com
Discussion Questions
1. Many people, upon learning about inattentional blindness, try to think of ways to eliminate it, allowing themselves complete situation awareness. Why might we be far worse off if we were not subject to inattentional blindness?
2. If inattentional blindness cannot be eliminated, what steps might you take to avoid its consequences?
3. Can you think of situations in which inattentional blindness is highly likely to be a problem? Can you think of cases in which inattentional blindness would not have much of an impact?
Vocabulary
• Dichotic listening: A task in which different audio streams are presented to each ear. Typically, people are asked to monitor one stream while ignoring the other.
• Inattentional blindness: The failure to notice a fully visible, but unexpected, object or event when attention is devoted to something else.
• Inattentional deafness: The auditory analog of inattentional blindness. People fail to notice an unexpected sound or voice when attention is devoted to other aspects of a scene.
• Selective listening: A method for studying selective attention in which people focus attention on one auditory stream of information while deliberately ignoring other auditory information.
5.12: Weapon Focus
Weapon focus refers to a factor that affects the reliability of eyewitness testimony. When a weapon is used during a crime, it tends to draw the witness's attention, which impairs the witness's ability to concentrate on other details of the crime. The visual attention a witness gives to a weapon can impair his or her ability to make a reliable identification and to describe what the culprit looks like, particularly if the crime is of short duration.
The focused attention that an eyewitness pays to the weapon that the perpetrator holds during the commission of the alleged crime forms the basis of this concept. The proponents of this view believe that all the visual attention of the eyewitness gets drawn to the weapon, thereby affecting the ability of the eyewitness to observe other details. Chief Justice Rabner acknowledged the results of a meta-analysis undertaken by Nancy Steblay on this topic. In this meta-analysis, data from various studies on this subject was collected and analyzed to determine if the presence of a weapon may actually be a factor affecting the memory or perception of an eyewitness to a real crime. Of the 19 weapon-focus studies that involved more than 2,000 identifications, Steblay found an average decrease in accuracy of about 10 per cent when a weapon was present. In a separate study, half of the witnesses observed a person holding a syringe in a way that was personally threatening to the witness; the other half saw the same person holding a pen. Sixty-four per cent of the witnesses from the first group misidentified a filler from a target-absent lineup, compared to thirty-three per cent from the second group. Weapon focus can also affect a witness's ability to describe a perpetrator. A meta-analysis of ten studies showed that "weapon-absent condition[s] generated significantly more accurate descriptions of the perpetrator than did the weapon-present condition". Thus, especially when the interaction is brief, the presence of a visible weapon can affect the reliability of an identification and the accuracy of a witness's description of the perpetrator.
5.13: Cross Race effect
There are numerous times in our criminal justice system that eyewitness testimony can make the difference between conviction and acquittal. When trials contain eyewitness testimony, jurors rely on it heavily, despite holding some erroneous beliefs about the factors that make eyewitnesses more or less accurate. Because jurors rely on those beliefs in evaluating eyewitness credibility and making trial judgments, false convictions in eyewitness cases are not uncommon. Indeed, eyewitness misidentifications lead to more wrongful convictions than all other causes combined.
The third and final class of explanations deals with retrieval-based processes. Research evidence indicates that the cross-race effect (CRE), the finding that witnesses recognize own-race faces more accurately than other-race faces, reflects different processes and decision strategies occurring at the time of retrieval. More specifically, people rely more on recollection processes, as opposed to familiarity judgments, when deciding whether they have previously seen an own-race (versus an other-race) face. Witnesses also have a lower (i.e., more lenient) response criterion for other-race faces, meaning that they are more willing to make a positive identification for other-race faces than they are for own-race faces. As a result, they make more false alarms for other-race than own-race faces.
In summary, there are a number of different theories hypothesized to explain the CRE, none of which has yet received overwhelming support or resulted in the development of appropriate remedies. There is an important practical advantage of retrieval-based explanations of the CRE, namely, that decision processes at retrieval are amenable to system variables like instructions during the lineup procedure. In contrast, cross-race contact and encoding processes are estimator variables that might predict differential performance with targets of different races, but they are much less susceptible to intervention by the criminal justice system. From an applied perspective, procedures that influence cross-race identifications at the retrieval stage could be readily implemented by lineup administrators (e.g., by providing those in a cross-race situation with specialized instructions before the identification).
5.14: Source Monitoring
One potential error in memory involves mistakes in differentiating the sources of information. Source monitoring refers to the ability to accurately identify the source of a memory. Perhaps you’ve had the experience of wondering whether you really experienced an event or only dreamed or imagined it. If so, you wouldn’t be alone. Rassin, Merkelbach, and Spaan (2001) reported that up to 25% of college students reported being confused about real versus dreamed events. Studies suggest that people who are fantasy-prone are more likely to experience source monitoring errors (Winograd, Peluso, & Glover, 1998), and such errors also occur more often for both children and the elderly than for adolescents and younger adults (Jacoby & Rhodes, 2006).
In other cases we may be sure that we remembered the information from real life but be uncertain about exactly where we heard it. Imagine that you read a news story in a tabloid magazine such as the National Enquirer. Probably you would have discounted the information because you know that its source is unreliable. But what if later you were to remember the story but forget the source of the information? If this happens, you might become convinced that the news story is true because you forget to discount it. The sleeper effect refers to attitude change that occurs over time when we forget the source of information (Pratkanis, Greenwald, Leippe, & Baumgardner, 1988).
In still other cases we may forget where we learned information and mistakenly assume that we created the memory ourselves. Kaavya Viswanathan, the author of the book How Opal Mehta Got Kissed, Got Wild, and Got a Life, was accused of plagiarism when it was revealed that many parts of her book were very similar to passages from other material. Viswanathan argued that she had simply forgotten that she had read the other works, mistakenly assuming she had made up the material herself. And the musician George Harrison claimed that he was unaware that the melody of his song "My Sweet Lord" was almost identical to an earlier song by another composer. The judge in the copyright suit that followed ruled that Harrison didn't intentionally commit the plagiarism. (Please use this knowledge to become extra vigilant about source attributions in your written work, not to try to excuse yourself if you are accused of plagiarism.)
5.15: Memory Techniques
Learning Objectives
• Recognize and apply memory-enhancing strategies, including mnemonics, rehearsal, chunking, and peg-words
Most of us suffer from memory failures of one kind or another, and most of us would like to improve our memories so that we don’t forget where we put the car keys or, more importantly, the material we need to know for an exam. In this section, we’ll look at some ways to help you remember better, and at some strategies for more effective studying.
Memory-Enhancing Strategies
What are some everyday ways we can improve our memory, including recall? To help make sure information goes from short-term memory to long-term memory, you can use memory- enhancing strategies. One strategy is rehearsal, or the conscious repetition of information to be remembered (Craik & Watkins, 1973). Think about how you learned your multiplication tables as a child. You may recall that 6 x 6 = 36, 6 x 7 = 42, and 6 x 8 = 48. Memorizing these facts is rehearsal.
Another strategy is chunking: you organize information into manageable bits or chunks (Bodie, Powers, & Fitch-Hauser, 2006). Chunking is useful when trying to remember information like dates and phone numbers. Instead of trying to remember 5205550467, you remember the number as 520-555-0467. So, if you met an interesting person at a party and you wanted to remember his phone number, you would naturally chunk it, and you could repeat the number over and over, which is the rehearsal strategy.
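As a rough illustration of the idea (added here, not part of the original text), the regrouping can even be written as a few lines of Python; the 3-3-4 grouping simply mirrors the phone-number example above:

```python
def chunk_digits(digits, sizes=(3, 3, 4)):
    """Regroup a digit string into larger chunks, e.g. '5205550467' -> '520-555-0467'."""
    chunks, start = [], 0
    for size in sizes:
        chunks.append(digits[start:start + size])
        start += size
    return "-".join(chunks)

print(chunk_digits("5205550467"))  # prints 520-555-0467: three chunks instead of ten separate digits
```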
Link to Learning
Try this fun activity that employs a memory-enhancing strategy.
You could also enhance memory by using elaborative rehearsal: a technique in which you think about the meaning of the new information and its relation to knowledge already stored in your memory (Tigner, 1999). For example, in this case, you could remember that 520 is an area code for Arizona and the person you met is from Arizona. This would help you better remember the 520 prefix. If the information is retained, it goes into long-term memory.
Mnemonic devices are memory aids that help us organize information for encoding. They are especially useful when we want to recall larger bits of information such as steps, stages, phases, and parts of a system (Bellezza, 1981). Brian needs to learn the order of the planets in the solar system, but he's having a hard time remembering the correct order. His friend Kelly suggests a mnemonic device that can help him remember. Kelly tells Brian to simply remember the name Mr. VEM J. SUN, and he can easily recall the correct order of the planets: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune. You might use a mnemonic device to help you remember someone's name, a mathematical formula, or the seven levels of Bloom's taxonomy.
If you have ever watched the television show Modern Family, you might have seen Phil Dunphy explain how he remembers names:
“The other day I met this guy named Carl. Now, I might forget that name, but he was wearing a Grateful Dead t-shirt. What’s a band like the Grateful Dead? Phish. Where do fish live? The ocean. What else lives in the ocean? Coral. Hello, Co-arl.” (Wrubel & Spiller, 2010)
It seems the more vivid or unusual the mnemonic, the easier it is to remember. The key to using any mnemonic successfully is to find a strategy that works for you.
Link to Learning
Watch this fascinating TED Talk titled “Feats of Memory Anyone Can Do.” The lecture is given by Joshua Foer, a science writer who “accidentally” won the U. S. Memory Championships. He explains a mnemonic device called the memory palace.
Some other strategies that are used to improve memory include expressive writing and saying words aloud. Expressive writing helps boost your short-term memory, particularly if you write about a traumatic experience in your life. Masao Yogo and Shuji Fujihara (2008) had participants write for 20-minute intervals several times per month. The participants were instructed to write about a traumatic experience, their best possible future selves, or a trivial topic. The researchers found that this simple writing task increased short-term memory capacity after five weeks, but only for the participants who wrote about traumatic experiences. Psychologists can’t explain why this writing task works, but it does.
What if you want to remember items you need to pick up at the store? Simply say them out loud to yourself. A series of studies (MacLeod, Gopie, Hourihan, Neary, & Ozubko, 2010) found that saying a word out loud improves your memory for the word because it increases the word’s distinctiveness. Feel silly, saying random grocery items aloud? This technique works equally well if you just mouth the words. Using these techniques increased participants’ memory for the words by more than 10%. These techniques can also be used to help you study.
Using Peg-Words
Consider the case of Simon Reinhard. In 2013, he sat in front of 60 people in a room at Washington University, where he memorized an increasingly long series of digits. On the first round, a computer generated 10 random digits—6 1 9 4 8 5 6 3 7 1—on a screen for 10 seconds. After the series disappeared, Simon typed them into his computer. His recollection was perfect. In the next phase, 20 digits appeared on the screen for 20 seconds. Again, Simon got them all correct. No one in the audience (mostly professors, graduate students, and undergraduate students) could recall the 20 digits perfectly. Then came 30 digits, studied for 30 seconds; once again, Simon didn’t misplace even a single digit. For a final trial, 50 digits appeared on the screen for 50 seconds, and again, Simon got them all right. In fact, Simon would have been happy to keep going. His record in this task—called “forward digit span”—is 240 digits!
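A forward digit span trial of the kind described above is easy to approximate in code. The sketch below is only a rough stand-in for the real procedure; the starting span of four digits and the one-second-per-digit display time are arbitrary assumptions, not the settings used in the actual demonstration.

```python
import random
import time

def digit_span_trial(length):
    """Show a random digit string briefly, then ask for it back; return True if recalled exactly."""
    digits = "".join(random.choice("0123456789") for _ in range(length))
    print(f"Memorize: {digits}")
    time.sleep(length)            # roughly one second per digit, as in the demonstration
    print("\n" * 40)              # crude way to scroll the digits out of view
    return input("Type the digits in order: ").strip() == digits

length = 4
while digit_span_trial(length):   # keep lengthening the series until the first error
    length += 1
print(f"Longest span recalled correctly: {length - 1} digits")
```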
Although it was not obvious, Simon Reinhard used deliberate mnemonic devices to achieve his extraordinary memory for digits. In a typical case, the person learns a set of cues and then applies these cues to learn and remember information. Consider the set of 20 items below that are easy to learn and remember (Bower & Reitman, 1972).
1 is a gun. 11 is penny-one, hot dog bun.
2 is a shoe. 12 is penny-two, airplane glue.
3 is a tree. 13 is penny-three, bumble bee.
4 is a door. 14 is penny-four, grocery store.
5 is knives. 15 is penny-five, big beehive.
6 is sticks. 16 is penny-six, magic tricks.
7 is oven. 17 is penny-seven, go to heaven.
8 is plate. 18 is penny-eight, golden gate.
9 is wine. 19 is penny-nine, ball of twine.
10 is hen. 20 is penny-ten, ballpoint pen.
It would probably take you less than 10 minutes to learn this list and practice recalling it several times (remember to use retrieval practice!). If you were to do so, you would have a set of peg words on which you could “hang” memories. In fact, this mnemonic device is called the peg word technique. If you then needed to remember some discrete items—say a grocery list, or points you wanted to make in a speech—this method would let you do so in a very precise yet flexible way. Suppose you had to remember bread, peanut butter, bananas, lettuce, and so on. The way to use the method is to form a vivid image of what you want to remember and imagine it interacting with your peg words (as many as you need). For example, for these items, you might imagine a large gun (the first peg word) shooting a loaf of bread, then a jar of peanut butter inside a shoe, then large bunches of bananas hanging from a tree, then a door slamming on a head of lettuce with leaves flying everywhere. The idea is to provide good, distinctive cues (the weirder the better!) for the information you need to remember while you are learning it. If you do this, then retrieving it later is relatively easy. You know your cues perfectly (one is gun, etc.), so you simply go through your cue word list and “look” in your mind’s eye at the image stored there (bread, in this case).
This peg word method may sound strange at first, but it works quite well, even with little training (Roediger, 1980). One word of warning, though, is that the items to be remembered need to be presented relatively slowly at first, until you have practice associating each with its cue word. People get faster with time. Another interesting aspect of this technique is that it’s just as easy to recall the items in backwards order as forwards. This is because the peg words provide direct access to the memorized items, regardless of order.
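In spirit, the peg word system behaves like a lookup table: each position has a fixed cue, and each item is stored together with that cue, so any position can be accessed directly and in any order. The toy sketch below (added for illustration) only shows the bookkeeping; the vivid imagery, which does the real mnemonic work, is of course not something code can supply.

```python
pegs = {1: "gun", 2: "shoe", 3: "tree", 4: "door", 5: "knives"}
groceries = ["bread", "peanut butter", "bananas", "lettuce", "milk"]

# "Encoding": pair each item with its peg cue (in practice you would form a vivid image for each pair)
memory = {position: (pegs[position], item) for position, item in enumerate(groceries, start=1)}

# "Retrieval": any position is available directly, forwards, backwards, or out of order
print(memory[3])  # ('tree', 'bananas')
print(memory[1])  # ('gun', 'bread')
```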
How did Simon Reinhard remember those digits? Essentially he has a much more complex system based on these same principles. In his case, he uses “memory palaces” (elaborate scenes with discrete places) combined with huge sets of images for digits. For example, imagine mentally walking through the home where you grew up and identifying as many distinct areas and objects as possible. Simon has hundreds of such memory palaces that he uses. Next, for remembering digits, he has memorized a set of 10,000 images. Every four-digit number for him immediately brings forth a mental image. So, for example, 6187 might recall Michael Jackson.
When Simon hears all the numbers coming at him, he places an image for every four digits into locations in his memory palace. He can do this at an incredibly rapid rate, faster than 4 digits per 4 seconds when they are flashed visually, as in the demonstration at the beginning of the module. As noted, his record is 240 digits, recalled in exact order. Simon also holds the world record in an event called “speed cards,” which involves memorizing the precise order of a shuffled deck of cards. Simon was able to do this in 21.19 seconds! Again, he uses his memory palaces, and he encodes groups of cards as single images.
How to Study Effectively
Based on the information presented in this chapter, here are some strategies and suggestions to help you hone your study techniques (Figure 2). The key with any of these strategies is to figure out what works best for you.
• Use elaborative rehearsal: In a famous article, Craik and Lockhart (1972) discussed their belief that information we process more deeply goes into long-term memory. Their theory is called levels of processing. If we want to remember a piece of information, we should think about it more deeply and link it to other information and memories to make it more meaningful. For example, if we are trying to remember that the hippocampus is involved with memory processing, we might envision a hippopotamus with excellent memory and then we could better remember the hippocampus.
• Apply the self-reference effect: As you go through the process of elaborative rehearsal, it would be even more beneficial to make the material you are trying to memorize personally meaningful to you. In other words, make use of the self-reference effect. Write notes in your own words. Write definitions from the text, and then rewrite them in your own words. Relate the material to something you have already learned for another class, or think how you can apply the concepts to your own life. When you do this, you are building a web of retrieval cues that will help you access the material when you want to remember it.
• Don’t forget the forgetting curve: As you know, the information you learn drops off rapidly with time. Even if you think you know the material, study it again right before test time to increase the likelihood the information will remain in your memory. Overlearning can help prevent storage decay.
• Rehearse, rehearse, rehearse: Review the material over time, in spaced and organized study sessions. Organize and study your notes, and take practice quizzes/exams. Link the new information to other information you already know well.
• Be aware of interference: To reduce the likelihood of interference, study during a quiet time without interruptions or distractions (like television or music).
• Keep moving: Of course you already know that exercise is good for your body, but did you also know it’s also good for your mind? Research suggests that regular aerobic exercise (anything that gets your heart rate elevated) is beneficial for memory (van Praag, 2008). Aerobic exercise promotes neurogenesis: the growth of new brain cells in the hippocampus, an area of the brain known to play a role in memory and learning.
• Get enough sleep: While you are sleeping, your brain is still at work. During sleep the brain organizes and consolidates information to be stored in long-term memory (Abel & Bäuml, 2013).
• Make use of mnemonic devices: As you learned earlier in this chapter, mnemonic devices often help us to remember and recall information. There are different types of mnemonic devices, such as the acronym. An acronym is a word formed by the first letter of each of the words you want to remember. For example, even if you live near one, you might have difficulty recalling the names of all five Great Lakes. What if I told you to think of the word Homes? HOMES is an acronym that represents Huron, Ontario, Michigan, Erie, and Superior: the five Great Lakes. Another type of mnemonic device is an acrostic: you make a phrase of all the first letters of the words. For example, if you are taking a math test and you are having difficulty remembering the order of operations, recalling the following sentence will help you: “Please Excuse My Dear Aunt Sally,” because the order of mathematical operations is Parentheses, Exponents, Multiplication, Division, Addition, Subtraction. There also are jingles, which are rhyming tunes that contain key words related to the concept, such as i before e, except after c.
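As a small illustration of the acronym device (an example added here, not taken from the text), the first letters can be pulled out mechanically; finding a set of words whose initials form something pronounceable, like HOMES, is the part that still takes human judgment:

```python
def acronym(words):
    """Return the first letter of each word, e.g. the Great Lakes -> 'HOMES'."""
    return "".join(word[0].upper() for word in words)

print(acronym(["Huron", "Ontario", "Michigan", "Erie", "Superior"]))  # HOMES
```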
MEMORY TESTS
Apply some of the memory techniques you learned about by completing the memory exercises below:
Go to garyfisk.com/anim/lecture_stm.swf. Do the demonstration.
• How many digits were you able to remember without messing up at all?
• How many digits did you remember out of the last sequence?
Go to the Faces Memory Challenge found here: experiments.wustl.edu/
• How did you do?
• Is it easier for you to remember faces or numbers? Why?
Go to https://www.exploratorium.edu/memory/dont_forget/index.html. Play the memory solitaire game. Then play game #2: Tell Yourself a Story.
• Did your memory improve the second time? Why or why not?
THINK IT OVER
• Create a mnemonic device to help you remember a term or concept from this module.
• What is an effective study technique that you have used? How is it similar to/different from the strategies suggested in this module?
GLOSSARY
• Chunking: organizing information into manageable bits or chunks
• Elaborative rehearsal: thinking about the meaning of the new information and its relation to knowledge already stored in your memory
• Levels of processing: information that is thought of more deeply becomes more meaningful and thus better committed to memory
• Memory-enhancing strategy: technique to help make sure information goes from short-term memory to long-term memory
• Mnemonic device: memory aids that help organize information for encoding
6.01: Types of Problems
Ill-Defined and Well-Defined Problems
Well-Defined Problems
For many abstract problems it is possible to find an algorithmic solution. We call a problem well-defined when it can be properly formalised, which comes with the following properties:
· The problem has a clearly defined given state. This might be the line-up of a chess game, a given formula you have to solve, or the set-up of the towers of Hanoi game (which we will discuss later).
· There is a finite set of operators, that is, of rules you may apply to the given state. For the chess game, e.g., these would be the rules that tell you which piece you may move to which position.
· Finally, the problem has a clear goal state: The equation is solved for x, all discs are moved to the right stack, or the other player is in checkmate.
Not surprisingly, a problem that fulfils these requirements can be implemented algorithmically (also see convergent thinking). Therefore many well-defined problems can be very effectively solved by computers, like playing chess.
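Because a well-defined problem supplies exactly a given state, a finite set of operators, and a goal test, it can be handed to a completely generic search procedure. The sketch below is a toy example added for illustration (the two-jug puzzle, its capacities, and the goal amount are arbitrary choices, not taken from the text); it casts the problem in precisely those three terms and lets breadth-first search do the rest.

```python
from collections import deque

CAP_A, CAP_B = 4, 3        # jug capacities; the given state is (0, 0): both jugs empty
GOAL = 2                   # goal state: exactly 2 units left in jug A

def operators(state):
    """The finite set of legal moves from a state (a, b): fill, empty, or pour."""
    a, b = state
    return {
        (CAP_A, b), (a, CAP_B),                              # fill one jug
        (0, b), (a, 0),                                      # empty one jug
        (a - min(a, CAP_B - b), b + min(a, CAP_B - b)),      # pour A into B
        (a + min(b, CAP_A - a), b - min(b, CAP_A - a)),      # pour B into A
    }

def solve(start=(0, 0)):
    """Breadth-first search over states: returns a shortest sequence of states to the goal."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1][0] == GOAL:                              # the clearly defined goal test
            return path
        for nxt in operators(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

print(solve())  # e.g. [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2), (0, 2), (2, 0)]
```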
Ill-Defined Problems
Though many problems can be properly formalised (sometimes only if we accept an enormous complexity), there are still others where this is not the case. Good examples of this are all kinds of tasks that involve creativity and, generally speaking, all problems for which it is not possible to clearly define a given state and a goal state: Formalising a problem of the kind "Please paint a beautiful picture" may be impossible. Still, this is a problem most people would be able to approach in one way or another, even if the result may be totally different from person to person. And while Knut might judge that picture X is gorgeous, you might completely disagree.
Nevertheless, ill-defined problems often involve sub-problems that can be totally well-defined. On the other hand, many everyday problems that seem to be completely well-defined involve, when examined in detail, a great deal of creativity and ambiguity. If we think of Knut's fairly ill-defined task of writing an essay, he will not be able to complete this task without first understanding the text he has to write about. This step is the first subgoal Knut has to solve.
6.02: Problem Solving Strategies
When you are presented with a problem—whether it is a complex mathematical problem or a broken printer, how do you solve it? Before finding a solution to the problem, the problem must first be clearly identified. After that, one of many problem solving strategies can be applied, hopefully resulting in a solution.
A problem-solving strategy is a plan of action used to find a solution. Different strategies have different action plans associated with them (Table 1). For example, a well-known strategy is trial and error. The old adage, "If at first you don't succeed, try, try again" describes trial and error. In terms of your broken printer, you could try checking the ink levels, and if that doesn't work, you could check to make sure the paper tray isn't jammed. Or maybe the printer isn't actually connected to your laptop. When using trial and error, you would continue to try different solutions until you solved your problem. Although trial and error is not typically one of the most time-efficient strategies, it is a commonly used one.
Table 1. Problem Solving Strategies
· Trial and error: Continue trying different solutions until the problem is solved. Example: restarting your phone, turning off WiFi, and turning off Bluetooth in order to determine why your phone is malfunctioning.
· Algorithm: A step-by-step problem-solving formula. Example: an instruction manual for installing new software on your computer.
· Heuristic: A general problem-solving framework. Example: working backwards; breaking a task into steps.
Another type of strategy is an algorithm. An algorithm is a problem-solving formula that provides you with step-by-step instructions used to achieve a desired outcome (Kahneman, 2011). You can think of an algorithm as a recipe with highly detailed instructions that produce the same result every time they are performed. Algorithms are used frequently in our everyday lives, especially in computer science. When you run a search on the Internet, search engines like Google use algorithms to decide which entries will appear first in your list of results.
Facebook also uses algorithms to decide which posts to display on your newsfeed. Can you identify other situations in which algorithms are used?
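To make the "same result every time" property concrete, here is a deliberately simple algorithm written out in Python (an illustration added here, with an arbitrary sample list):

```python
def find_largest(numbers):
    """Step by step: assume the first number is the largest, then compare it against the rest."""
    largest = numbers[0]
    for n in numbers[1:]:
        if n > largest:
            largest = n
    return largest

print(find_largest([3, 41, 7, 28, 5]))  # always 41; every step is fully specified, no judgment required
```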
A heuristic is another type of problem solving strategy. While an algorithm must be followed exactly to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman, 1974). You can think of these as mental shortcuts that are used to solve problems. A “rule of thumb” is an example of a heuristic. Such a rule saves the person time and energy when making a decision, but despite its time-saving characteristics, it is not always the best method for making a rational decision. Different types of heuristics are used in different types of situations, but the impulse to use a heuristic occurs when one of five conditions is met (Pratkanis, 1989):
· When one is faced with too much information
· When the time to make a decision is limited
· When the decision to be made is unimportant
· When there is access to very little information to use in making the decision
· When an appropriate heuristic happens to come to mind in the same moment
Working backwards is a useful heuristic in which you begin solving the problem by focusing on the end result. Consider this example: You live in Washington, D.C. and have been invited to a wedding at 4 PM on Saturday in Philadelphia. Knowing that Interstate 95 tends to back up any day of the week, you need to plan your route and time your departure accordingly. If you want to be at the wedding service by 3:30 PM, and it takes 2.5 hours to get to Philadelphia without traffic, what time should you leave your house? You use the working backwards heuristic to plan the events of your day on a regular basis, probably without even thinking about it.
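The departure-time reasoning can be written out explicitly as a working-backwards calculation. In the sketch below, the calendar date and the 30-minute traffic cushion are arbitrary assumptions added for illustration:

```python
from datetime import datetime, timedelta

arrival_goal = datetime(2024, 6, 1, 15, 30)    # be seated by 3:30 PM (the date itself is arbitrary)
drive_time = timedelta(hours=2, minutes=30)    # D.C. to Philadelphia without traffic
traffic_buffer = timedelta(minutes=30)         # assumed cushion for I-95 backups

departure = arrival_goal - drive_time - traffic_buffer   # work backwards from the goal
print(departure.strftime("Leave home by %I:%M %p"))      # Leave home by 12:30 PM
```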
Link to Learning
What problem-solving method could you use to solve Einstein’s famous riddle?
Another useful heuristic is the practice of accomplishing a large goal or task by breaking it into a series of smaller steps. Students often use this common method to complete a large research project or long essay for school. For example, students typically brainstorm, develop a thesis or main topic, research the chosen topic, organize their information into an outline, write a rough draft, revise and edit the rough draft, develop a final draft, organize the references list, and proofread their work before turning in the project. The large task becomes less overwhelming when it is broken down into a series of small steps.
Everyday Connections: Solving Puzzles
Problem-solving abilities can improve with practice. Many people challenge themselves every day with puzzles and other mental exercises to sharpen their problem-solving skills. Sudoku puzzles appear daily in most newspapers. Typically, a sudoku puzzle is a 9×9 grid. The simple sudoku below is a 4×4 grid. To solve the puzzle, fill in the empty boxes with a single digit: 1, 2, 3, or 4. Here are the rules: The numbers must total 10 in each bolded box, each row, and each column; however, each digit can only appear once in a bolded box, row, and column. Time yourself as you solve this puzzle and compare your time with a classmate.
Here is another popular type of puzzle that challenges your spatial reasoning skills. Connect all nine dots with four connecting straight lines without lifting your pencil from the paper:
Take a look at the “Puzzling Scales” logic puzzle below. Sam Loyd, a well-known puzzle master, created and refined countless puzzles throughout his lifetime (Cyclopedia of Puzzles, n.d.).
Were you able to determine how many marbles are needed to balance the scales in Figure 3? You need nine. Were you able to solve the problems in Figure 1 and Figure 2?
6.03: Means-End Analysis
In Means-End Analysis you try to reduce the difference between the initial state and the goal state by creating subgoals until a subgoal can be reached directly (you probably know several examples of recursion, which works on this basis).
An example of a problem that can be solved by Means-End Analysis is the "Towers of Hanoi".
The initial state of this problem is described by the different-sized discs being stacked in order of size on the first of three pegs (the "start peg"). The goal state is described by these discs being stacked on the third peg (the "end peg") in exactly the same order.
There are three operators:
· You are allowed to move one single disc from one peg to another one
· You are only able to move a disc if it is on top of one stack
· A disc cannot be put onto a smaller one.
In order to use Means-End Analysis we have to create subgoals. One possible way of doing this is the following:
1. Moving the discs lying on the biggest one onto the second peg.
2. Shifting the biggest disc to the third peg.
3. Moving the other ones onto the third peg, too.
You can apply this strategy again and again in order to reduce the problem to the case where you only have to move a single disc – which is then something you are allowed to do.
Strategies of this kind can easily be formulated for a computer; the respective algorithm for the Towers of Hanoi would look like this:
1. move n-1 discs from A to B
2. move disc #n from A to C
3. move n-1 discs from B to C
Where n is the total number of discs, A is the first peg, B the second, C the third one. Now the problem is reduced by one with each recursive loop.
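Written as runnable Python, the three-step recursion above takes only a few lines; this minimal sketch prints the moves rather than animating them:

```python
def hanoi(n, source="A", spare="B", target="C"):
    """Move n discs from source to target, using spare as the intermediate peg."""
    if n == 0:
        return
    hanoi(n - 1, source, target, spare)                      # 1. move n-1 discs from A to B
    print(f"move disc {n} from {source} to {target}")        # 2. move disc #n from A to C
    hanoi(n - 1, spare, source, target)                      # 3. move n-1 discs from B to C

hanoi(3)  # solves the three-disc puzzle in the minimum 2**3 - 1 = 7 moves
```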
Means-end analysis is also important for solving everyday problems, like getting the right train connection: First of all, you have to figure out where you catch the first train and where you want to arrive. Then you have to look for possible changes, in case you do not get a direct connection. Third, you have to figure out the best times of departure and arrival and the platforms you leave from and arrive at, and make it all fit together.
6.04: Reasoning by Analogy
Analogies describe similar structures and interconnect them to clarify and explain certain relations. In a recent study, for example, a song that got stuck in your head is compared to an itching of the brain that can only be scratched by repeating the song over and over again.
Restructuring by Using Analogies
One special kind of restructuring, already mentioned in the discussion of the Gestalt approach, is analogical problem solving. Here, to find a solution to one problem (the so-called target problem), an analogous solution to another problem (the source problem) is presented. An example of this kind of strategy is the radiation problem posed by K. Duncker in 1945:
"As a doctor you have to treat a patient with a malignant, inoperable tumour, buried deep inside the body. There exists a special kind of ray, which is perfectly harmless at a low intensity, but at a sufficiently high intensity is able to destroy the tumour, as well as the healthy tissue on its way to it. What can be done to avoid the latter?"
When this question was asked to participants in an experiment, most of them couldn't come up with the appropriate answer to the problem. Then they were told a story that went something like this:
A General wanted to capture his enemy's fortress. He gathered a large army to launch a full-scale direct attack, but then learned, that all the roads leading directly towards the fortress were blocked by mines. These roadblocks were designed in such a way, that it was possible for small groups of the fortress-owner's men to pass them safely, but every large group of men would initially set them off. Now the General figured out the following plan: He divided his troops into several smaller groups and made each of them march down a different road, timed in such a way, that the entire army would reunite exactly when reaching the fortress and could hit with full strength.
Here, the story about the General is the source problem, and the radiation problem is the target problem. The fortress is analogous to the tumour and the big army corresponds to the highly intensive ray. Consequently a small group of soldiers represents a ray at low intensity. The solution to the problem is to split the ray up, as the general did with his army, and send the now harmless rays towards the tumour from different angles in such a way that they all meet when reaching it. No healthy tissue is damaged but the tumour itself gets destroyed by the ray at its full intensity. M. Gick and K. Holyoak presented Duncker's radiation problem to a group of participants in 1980 and 1983. Only 10 percent of them were able to solve the problem right away, 30 percent could solve it when they read the story of the general before. After given an additional hint - to use the story as help - 75 percent of them solved the problem.
With these results, Gick and Holyoak concluded that analogical problem solving depends on three steps:
1. Noticing that an analogical connection exists between the source and the target problem.
2. Mapping corresponding parts of the two problems onto each other (fortress → tumour, army → ray, etc.)
3. Applying the mapping to generate a parallel solution to the target problem (using little groups of soldiers approaching from different directions → sending several weaker rays from different directions)
Next, Gick and Holyoak started looking for factors that could be helpful for the noticing and the mapping parts, for example: Discovering the basic linking concept behind the source and the target problem.
Schema
The concept that links the target problem with the analogy (the "source problem") is called a problem schema. Gick and Holyoak induced the activation of a schema in their participants by giving them two stories and asking them to compare and summarise them. This activation of problem schemata is called "schema induction".
The two presented texts were picked from six stories which describe analogical problems and their solution. One of these stories was "The General" (recall the example above).
After completing this task, the participants were asked to solve the radiation problem (described above). The experiment showed that, in order to solve the target problem, reading two stories with analogical problems is more helpful than reading only one story: After reading two stories, 52% of the participants were able to solve the radiation problem (as noted above, only 30% were able to solve it after reading only one story, namely "The General"). Gick and Holyoak also found that the quality of the schemata the participants developed differed. They classified them into three groups:
1. Good schemata: In good schemata it was recognised that the same concept was used in order to solve the problem (21% of the participants created a good schema and 91% of them were able to solve the radiation problem).
2. Intermediate schemata: The creator of an intermediate schema has figured out that the root of the matter is the same in both problems (here: many small forces together solved the problem). (20% created one; 40% of them had the right solution).
3. Poor schemata: The poor schemata were hardly related to the target problem. In many poor schemata the participant only detected that the hero of the story was rewarded for his efforts (59% created one, 30% of them had the right solution).
The process of using a schema or analogy, i.e. applying it to a novel situation, is called transduction. One can use a common strategy to solve problems of a new kind. To create a good schema and finally get to a solution is a problem-solving skill that requires practice and some background knowledge.
6.05: Transformation Problems
What are transformation problems? In cognitive psychology, transformation problems refer to a major modification or shift in an individual's thought and/or behavior patterns. Cognitive psychologists have determined that an individual must carry out a certain sequence of transformations to achieve specific desired goals. A good example of this phenomenon is the Wallas Stage Model of the creative process.
The Wallas Stage Model of the Creative Process
In the Wallas Stage Model, creative insights or illuminations occur in the following stages of work:
1. Preparation (conscious work on a creative problem)
2. Internalisation (internalisation of the problem context and goals into the subconscious) and Incubation (internal processing of the problem unconsciously), and
3. Illumination (the emergence – perhaps dramatically – of the creative insight to consciousness as an ‘aha!’ experience)
4. Verification and Elaboration (checking that the insight is valid and then developing it to a point where it can be used or shared).
As the creative individual (for example a scientist or engineer) works on a problem, if it is a difficult problem they may spend quite some effort, try several different avenues, and clarify or redefine the problem situation. All this activity works to begin to internalize the problem into the subconscious, at which point the ideas about the problem can churn around in the subconscious without necessarily any further conscious input – they are “incubating.” As the problems are incubated, they may begin to coalesce into a solution to the problem, which then dramatically emerges to consciousness as an ‘aha!’ illumination experience. But this experience may or may not be a real solution to the problem – it needs to be verified and tested. It may then need further elaboration and development before it is put to use or shared with other people.
Historically, Wallas's Stage Model was based on the insights of two of the leading scientific minds of the late 19th century. At his 70th birthday celebration, Hermann Ludwig Ferdinand von Helmholtz offered his thoughts on his creative process, consistent with the Wallas Stage Model. These were published in 1896. In 1908, Henri Poincare (the leading mathematician and scientist of his time) published his classic essay Mathematical Creation, in which he put forward his views on and understanding of the creative processes in mathematical work – again broadly consistent with the Wallas model, although Poincare offered his own thoughts on possible psychological mechanisms underlying the broad features of the Wallas model.
Poincare speculated that what happened was that once ideas were internalised, they bounced around in the subconscious somewhat like billiard balls, colliding with each other – but occasionally interlocking to form new stable combinations. When this happened and there was a significant fit, Poincare speculated that the mechanism which identified that a solution to the problem had been found was a sort of aesthetic sense, a “sensibility.” Poincare wrote that in creative illumination experiences:
. . . the privileged unconscious phenomena, those susceptible of becoming conscious, are those which, directly or indirectly, affect most profoundly our emotional sensibility. It may be surprising to see emotional sensibility evoked a propos of mathematical demonstrations which, it would seem, can only be of interest to the intellect. This would be to forget the feeling of mathematical beauty, of the harmony of numbers and forms, of geometric elegance. This is a true aesthetic feeling that all real mathematicians know, and surely it belongs to emotional sensibility.
Poincare argued that in the subconscious, “the useful combinations are precisely the most beautiful, I mean those best able to charm this special sensibility that all mathematicians know.”Such combinations are “capable of touching this special sensibility of the geometer of which I have just spoken, and . . . once aroused, will call our attention to them, and thus give them occasion to become conscious.”
Generalizing from Poincare’s discussion, the conscious mind receives a vast range of information inputs daily. The conscious and subconscious mental faculties sort and internalize this information by making patterns and associations between them and developing a “sense” of how new pieces of information “fit” with patterns, rules, associations and so forth that have already been internalized and may or may not even be consciously understood or brought into awareness.
Which brings us back to a discussion of intuition.
6.06: Incubation
Incubation is the concept of “sleeping on a problem,” or disengaging from actively and consciously trying to solve a problem, in order to allow, as the theory goes, unconscious processes to work on it. Incubation can take a variety of forms, such as taking a break, sleeping, or working on a different kind of task, whether more or less demanding. Findings suggest that incubation can, indeed, have a positive impact on problem-solving outcomes. Interestingly, lower-level cognitive tasks (e.g., simple math or language tasks, vacuuming, putting items away) resulted in higher problem-solving outcomes than more challenging tasks (e.g., crossword puzzles, math problems). Educators have also found that taking active breaks increases children’s creativity and problem-solving abilities in classroom settings.
Fixation
Sometimes, previous experience or familiarity can even make problem solving more difficult. This is the case whenever habitual directions get in the way of finding new directions – an effect called fixation.
Functional Fixedness
Functional fixedness concerns the solution of object-use problems. The basic idea is that when the usual way of using an object is emphasised, it will be far more difficult for a person to use that object in a novel manner. An example of this effect is the candle problem: Imagine you are given a box of matches, some candles and tacks. On the wall of the room there is a corkboard. Your task is to fix the candle to the corkboard in such a way that no wax will drop on the floor when the candle is lit. – Got an idea?
Explanation: When people are confronted with a problem and given certain objects to solve it, it is difficult for them to figure out that they could use those objects in a different (less familiar or obvious) way. In this example the box has to be recognized as a support rather than as a container.
A further example is the two-string problem: Knut is left in a room with a chair and a pair of pliers and given the task of tying together two strings that are hanging from the ceiling. The problem he faces is that he can never reach both strings at the same time because they are too far away from each other. What can Knut do?
Solution: Knut has to recognize that he can use the pliers in a novel function – as a weight for a pendulum. He can tie them to one of the strings, set it swinging, hold the other string and simply wait for the first one to swing towards him. If necessary, Knut can even climb on the chair, but he is not that small, we suppose…
Mental Fixedness
Functional fixedness as involved in the examples above illustrates a mental set – a person’s tendency to respond to a given task in a manner based on past experience. Because Knut maps an object to a particular function, he has difficulty varying the way he uses it (the pliers as a pendulum weight). One approach to studying fixation was to study wrong-answer verbal insight problems. It was shown that, when failing to solve a problem, people tend to give an incorrect answer rather than no answer at all.
A typical example: People are told that on a lake the area covered by water lilies doubles every 24 hours and that it takes 60 days to cover the whole lake. Then they are asked how many days it takes to cover half the lake. The typical response is '30 days' (whereas 59 days is correct).
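A quick way to see why the correct answer is 59 days rather than 30 is to run the doubling backwards from the final day. The following short sketch (Python is used here purely for illustration; only the numbers come from the problem statement) makes the point:

```python
# Water-lily problem: coverage doubles every day and the lake is
# fully covered on day 60. Working backwards, half coverage must
# occur exactly one doubling earlier, i.e. on day 59.
coverage = 1.0          # fraction of the lake covered on day 60
day = 60
while coverage > 0.5:   # step back one day at a time
    coverage /= 2       # undo one doubling
    day -= 1
print(day)              # -> 59: the lake is half covered on day 59
```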
These wrong solutions are due to an inaccurate interpretation, hence representation, of the problem. This can happen because of sloppiness (a quick, shallow reading of the problem and/or weak monitoring of the efforts made to come to a solution). In this case error feedback should help people to reconsider the problem features, note the inadequacy of their first answer, and find the correct solution. If, however, people are truly fixated on their incorrect representation, being told the answer is wrong does not help. In a study by P.I. Dallop and R.L. Dominowski in 1992 these two possibilities were contrasted. In approximately one third of the cases error feedback led to right answers, so only approximately one third of the wrong answers were due to inadequate monitoring. [6] Another approach is the study of examples with and without a preceding analogous task. In cases such as the water-jug task, analogous thinking indeed leads to a correct solution, but taking a different route might make the case much simpler:
Imagine Knut again; this time he is given three jugs with different capacities and is asked to measure a required amount of water. Of course he is not allowed to use anything besides the jugs and as much water as he likes. In the first case the sizes are 127 litres, 21 litres and 3 litres while 100 litres are desired. In the second case Knut is asked to measure 18 litres from jugs of 39, 15 and 3 litres.
In fact, participants who faced the 100 litre task first chose the same complicated route to solve the second one. Others, by contrast, who did not know about that complex task solved the 18 litre case by just adding 3 litres to 15.
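The arithmetic behind the two strategies can be checked directly. A minimal sketch (the helper name and the Python framing are ours, not part of the original study):

```python
def roundabout_route(b, a, c):
    """The complicated route learned on the first problem:
    fill the largest jug, then subtract the middle jug once
    and the smallest jug twice (B - A - 2C)."""
    return b - a - 2 * c

# First problem: jugs of 127, 21 and 3 litres, target 100 litres.
print(roundabout_route(127, 21, 3))   # -> 100

# Second problem: jugs of 39, 15 and 3 litres, target 18 litres.
print(roundabout_route(39, 15, 3))    # -> 18 (the roundabout route still works)
print(15 + 3)                         # -> 18 (the simple route: just add the two smaller jugs)
```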
Pitfalls to Problem Solving
Not all problems are successfully solved, however. What challenges stop us from successfully solving a problem? Albert Einstein once said, “Insanity is doing the same thing over and over again and expecting a different result.” Imagine a person in a room that has four doorways. One doorway that has always been open in the past is now locked. The person, accustomed to exiting the room by that particular doorway, keeps trying to get out through the same doorway even though the other three doorways are open. The person is stuck—but she just needs to go to another doorway, instead of trying to get out through the locked doorway. A mental set is where you persist in approaching a problem in a way that has worked in the past but is clearly not working now. Functional fixedness is a type of mental set where you cannot perceive an object being used for something other than what it was designed for. During the Apollo 13 mission to the moon, NASA engineers at Mission Control had to overcome functional fixedness to save the lives of the astronauts aboard the spacecraft. An explosion in a module of the spacecraft damaged multiple systems. The astronauts were in danger of being poisoned by rising levels of carbon dioxide because of problems with the carbon dioxide filters. The engineers found a way for the astronauts to use spare plastic bags, tape, and air hoses to create a makeshift air filter, which saved the lives of the astronauts.
Researchers have investigated whether functional fixedness is affected by culture. In one experiment, individuals from the Shuar group in Ecuador were asked to use an object for a purpose other than that for which the object was originally intended. For example, the participants were told a story about a bear and a rabbit that were separated by a river and asked to select among various objects, including a spoon, a cup, erasers, and so on, to help the animals. The spoon was the only object long enough to span the imaginary river, but if the spoon was presented in a way that reflected its normal usage, it took participants longer to choose the spoon to solve the problem (German & Barrett, 2005). The researchers wanted to know if exposure to highly specialized tools, as occurs with individuals in industrialized nations, affects their ability to transcend functional fixedness. It was determined that functional fixedness is experienced in both industrialized and non-industrialized cultures (German & Barrett, 2005).
Common obstacles to solving problems
The example also illustrates two common problems that sometimes happen during problem solving. One of these is functional fixedness: a tendency to regard the functions of objects and ideas as fixed (German & Barrett, 2005). Over time, we get so used to one particular purpose for an object that we overlook other uses. We may think of a dictionary, for example, as necessarily something to verify spellings and definitions, but it also can function as a gift, a doorstop, or a footstool. For students working on the nine-dot matrix described in the last section, the notion of “drawing” a line was also initially fixed; they assumed it to be connecting dots but not extending lines beyond the dots. Closely related to functional fixedness is response set, the tendency for a person to frame or think about each problem in a series in the same way as the previous problem, even when doing so is not appropriate to later problems. In the example of the nine-dot matrix described above, students often tried one solution after another, but each solution was constrained by a set response not to extend any line beyond the matrix.
Functional fixedness and the response set are obstacles in problem representation, the way that a person understands and organizes information provided in a problem. If information is misunderstood or used inappropriately, then mistakes are likely—if indeed the problem can be solved at all. With the nine-dot matrix problem, for example, construing the instruction to draw four lines as meaning “draw four lines entirely within the matrix” means that the problem simply could not be solved. For another example, consider this problem: “The number of water lilies on a lake doubles each day. Each water lily covers exactly one square foot. If it takes 100 days for the lilies to cover the lake exactly, how many days does it take for the lilies to cover exactly half of the lake?” If you think that the size of the lilies affects the solution to this problem, you have not represented the problem correctly. Information about lily size is not relevant to the solution, and only serves to distract from the truly crucial information, the fact that the lilies double their coverage each day. (The answer, incidentally, is that the lake is half covered in 99 days; can you think why?)
What do the following have in common: the drug penicillin, the Eiffel Tower, the film Lord of the Rings, the General Theory of Relativity, the hymn Amazing Grace, the iPhone, the novel Don Quixote, the painting The Mona Lisa, a recipe for chocolate fudge, the soft drink Coca-Cola, the video game Wii Sports, the West Coast offense in football, and the zipper? You guessed right!
All of the named items were products of the creative mind. Not one of them existed until somebody came up with the idea. Creativity is not something that you just pick like apples from a tree. Because creative ideas are so special, creators who come up with the best ideas are often highly rewarded with fame, fortune, or both. Nobel Prizes, Oscars, Pulitzers, and other honors bring fame, and big sales and box office bring fortune. Yet what is creativity in the first place?
07: Creativity
Creativity happens when someone comes up with a creative idea. An example would be a creative solution to a difficult problem. But what makes an idea or solution creative? Creativity is the ability to generate, create, or discover new ideas, solutions, and possibilities. Very creative people often have intense knowledge about something, work on it for years, look at novel solutions, seek out the advice and help of other experts, and take risks. Although creativity is often associated with the arts, it is actually a vital form of intelligence that drives people in many disciplines to discover something new. Creativity can be found in every area of life, from the way you decorate your residence to a new way of understanding how a cell works.
Although psychologists have offered several definitions of creativity (Plucker, Beghetto, & Dow, 2004; Runco & Jaeger, 2012), probably the best definition is the one recently adapted from the three criteria that the U.S. Patent Office uses to decide whether an invention can receive patent protection (Simonton, 2012).
The first criterion is originality. The idea must have a low probability. Indeed, it often should be unique. Albert Einstein’s special theory of relativity certainly satisfied this criterion. No other scientist came up with the idea.
The second criterion is usefulness. The idea should be valuable or work. For example, a solution must, in fact, solve the problem. An original recipe that produces a dish that tastes too terrible to eat cannot be creative. In the case of Einstein’s theory, his relativity principle provided explanations for what otherwise would be inexplicable empirical results.
The third and last criterion is surprise. The idea should be surprising, or at least nonobvious (to use the term used by the Patent Office). For instance, a solution that is a straightforward derivation from acquired expertise cannot be considered surprising even if it were original.
Einstein’s relativity theory was not a step-by-step deduction from classical physics but rather the theory was built upon a new foundation that challenged the very basis of traditional physics. When applying these three criteria, it is critical to recognize that originality, usefulness, and surprise are all quantitative rather than qualitative attributes of an idea. Specifically, we really have to speak of the degree to which an idea satisfies each of the three criteria. In addition, the three attributes should have a zero point, that is, it should be possible to speak of an idea lacking any originality, usefulness, or surprise whatsoever. Finally, we have to assume that if an idea scores zero on any one criterion then it must have zero creativity as well. For example, someone who reinvents the wheel is definitely producing a useful idea, but the idea has zero originality and hence no creativity whatsoever. Similarly, someone who invented a parachute made entirely out of steel reinforced concrete would get lots of credit for originality—and surprise!—but none for usefulness.
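One plausible way to formalize the requirement that a zero on any criterion forces zero creativity is to multiply the three scores rather than add them; this is a sketch of the multiplicative idea described above, with the 0-to-1 scaling and the example scores being our own illustrative assumptions:

```python
def creativity(originality, usefulness, surprise):
    """Each attribute is scored on a 0-1 scale; because the scores are
    multiplied, a zero on any single criterion forces creativity to zero."""
    return originality * usefulness * surprise

print(creativity(0.0, 1.0, 0.2))  # reinventing the wheel: useful but unoriginal -> 0.0
print(creativity(0.9, 0.0, 0.9))  # concrete parachute: original, surprising, useless -> 0.0
print(creativity(0.9, 0.8, 0.9))  # an idea scoring well on all three -> clearly nonzero
```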
Definitions of Creativity
We have already introduced a number of ways to solve a problem, mainly strategies that can be used to find the “correct” answer. But there are also problems which do not require a “right answer” to be given – it is time for creative productiveness! Imagine you are given three objects – your task is to invent a completely new object that is related to nothing you know. Then try to describe its function and how it could additionally be used. Difficult? Well, you are free to think creatively and will not be at risk of giving an incorrect answer. For example, think of what can be constructed from a half-sphere, wire and a handle. The result is amazing: a lawn lounger, global earrings, a sled, a water weigher, a portable agitator, ... [10]
Divergent Thinking
The term divergent thinking describes a way of thinking that does not lead to one goal, but is open-ended. Problems that are solved this way can have a large number of potential 'solutions' of which none is exactly 'right' or 'wrong', though some might be more suitable than others.
Solving a problem like this involves indirect and productive thinking and is mostly very helpful when somebody faces an ill-defined problem, i.e. when either the initial state or the goal state cannot be stated clearly and the operators are either insufficient or not given at all.
The process of divergent thinking is often associated with creativity, and it undoubtedly leads to many creative ideas. Nevertheless, research has shown that there is only a modest correlation between performance on divergent thinking tasks and other measures of creativity.
Additionally, it was found that processes resulting in original and practical inventions also rely heavily on things like searching for solutions, being aware of structures and looking for analogies. Thus, divergent thinking alone is not an appropriate tool for making an invention. You also need to analyze the problem in order to make the suggested solution – the invention – appropriate.
Convergent Thinking
Divergent thinking can be contrasted with convergent thinking – thinking that seeks to find the correct answer to a specific problem. This is an adequate strategy for solving most of the well-defined problems (problems with a given initial state, operators and goal state) we have presented so far. To solve the given tasks it was necessary to think directly or reproductively.
It is always helpful to use a strategy to think of a way to come closer to the solution, perhaps using knowledge from previous tasks or sudden insight.
Remote Associates Test & Unusual Uses Task:
Cognitive scientists have long been interested in the thinking processes that lead to creative ideas. Indeed, many so-called “creativity tests” are actually measures of the thought processes believed to underlie the creative act. The following two measures are among the best known. The first is the Remote Associates Test, or RAT, which was introduced by Mednick. Mednick believed that the creative process requires the ability to associate ideas that are considered very far apart conceptually. The RAT consists of items that require the respondent to identify a word that can be associated with three rather distinct stimulus words. For example, what word can be associated with the words “widow, bite, monkey”? The answer is spider (black widow spider, spider bite, spider monkey). This question is relatively easy; others are much more difficult, but it gives you the basic idea.
The second measure is the Unusual Uses Task. Here, the participant is asked to generate alternative uses for a common object, such as a brick. The responses can be scored on four dimensions: (a) fluency, the total number of appropriate uses generated; (b) originality, the statistical rarity of the uses given; (c) flexibility, the number of distinct conceptual categories implied by the various uses; and (d) elaboration, the amount of detail given for the generated uses. For example, using a brick as a paperweight represents a different conceptual category than using its volume to conserve water in a toilet tank.
The capacity to produce unusual uses is but one example of the general cognitive ability to engage in divergent thinking (Guilford, 1967). Unlike convergent thinking, which converges on the single best answer or solution, divergent thinking comes up with multiple possibilities that might vary greatly in usefulness. Unfortunately, many different cognitive processes have been linked to creativity (Simonton & Damian, 2013). That is why we cannot use the singular; there is no such thing as the “creative process.” Nonetheless, the various processes do share one feature: All enable the person to “think outside the box” imposed by routine thinking—to venture into territory that would otherwise be ignored (Simonton, 2011). Creativity requires that you go where you don’t know where you’re going.
7.02: Insight
There are two very different ways of approaching a goal-oriented situation. In one case an organism readily reproduces the response to the given problem from past experience. This is called reproductive thinking. The second way requires something new and different to achieve the goal, prior learning is of little help here. Such productive thinking is (sometimes) argued to involve insight. Gestalt psychologists even state that insight problems are a separate category of problems in their own right.
Tasks that might involve insight usually have certain features – they require something new and non-obvious to be done, and in most cases they are difficult enough that the initial solution attempt can be predicted to be unsuccessful. When you solve a problem of this kind you often have a so-called “aha!” experience – the solution pops up all of a sudden. One moment you have no idea of the answer to the problem and do not even feel you are making any progress trying out different ideas, but the next second the problem is solved. For all those readers who would like to experience such an effect, here is an example of an insight problem: Knut is given four pieces of a chain, each made up of three links. The task is to link them all up into a closed loop, and he has only 15 cents. To open a link costs 2 cents; to close a link costs 3 cents. What should Knut do?
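The budget arithmetic is what makes the insight hard to reach: joining the four pieces end to end seems to require opening and closing one link at each of four joints, which is over budget. The standard insight is to sacrifice one whole piece, opening all three of its links and using them to join the remaining three pieces. A quick cost check (only the cent values come from the problem; the rest is our illustration, and it does give the puzzle away):

```python
OPEN, CLOSE = 2, 3          # cents per link, as given in the problem

# Obvious (failing) plan: open and re-close one link at each of the four joints.
print(4 * (OPEN + CLOSE))   # -> 20 cents, over the 15-cent budget

# Insight plan: open all three links of ONE piece (3 openings) and use
# each opened link to connect the other three pieces into a loop (3 closings).
print(3 * OPEN + 3 * CLOSE) # -> 15 cents, exactly the budget
```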
To show that solving insight problems involves restructuring, psychologists created a number of problems that were more difficult to solve for participants provided with previous experiences, since it was harder for them to change the representation of the given situation (see Fixation).
Sometimes hints may lead to the insight required to solve the problem, and this is also true of hints given involuntarily. For instance, it might help you to solve a memory game if someone accidentally drops a card on the floor and you see the other side. Although such help is not obviously a hint, the effect does not differ from that of intended help. For non-insight problems the opposite is the case. Solving arithmetical problems, for instance, requires schemas, through which one can get to the solution step by step.
Deductive Reasoning
Deductive reasoning is concerned with syllogisms in which the conclusion follows logically from the premises. The following example about Knut makes this process clear:
1. Premise: Knut knows: If it is warm, one needs shorts and T-Shirts.
2. Premise: He also knows that it is warm in Spain during summer.
Conclusion: Therefore, Knut reasons that he needs shorts and T-Shirts in Spain.
In the given example it is obvious that the premises contain rather general information and the resulting conclusion concerns a more special case which can be inferred from the two premises. Hereafter we differentiate between the two major kinds of syllogisms, namely categorical and conditional ones.
Categorical Syllogisms
In categorical syllogisms the statements of the premises typically begin with “all”, “none” or “some” and the conclusion starts with “therefore” or “hence”. These kinds of syllogisms fulfill the task of describing a relationship between two categories. In the example given above in the introduction of deductive reasoning these categories are Spain and the need for shorts and T-Shirts. Two different approaches serve the study of categorical syllogisms: the normative approach and the descriptive approach.
The normative approach
The normative approach is based on logic and deals with the problem of categorizing conclusions as either valid or invalid. “Valid” means that the conclusion follows logically from the premises, whereas “invalid” means the contrary. Two basic principles and a method called Euler Circles (Figure 1) have been developed to help judge validity. The first principle was created by Aristotle and says “If the two premises are true, the conclusion of a valid syllogism must be true” (cp. Goldstein, 2005). The second principle describes that “The validity of a syllogism is determined only by its form, not its content.” These two principles explain why the following syllogism is (surprisingly) valid:
All flowers are animals. All animals can jump. Therefore, all flowers can jump.
Even though it is quite obvious that the first premise is not true and further that the conclusion is not true, the whole syllogism is still valid. Applying formal logic to the syllogism in the example, the conclusion is valid.
Due to this precondition it is possible to display a syllogism formally with symbols or letters and explain its relationship graphically with the help of diagrams. There are various ways to demonstrate a premise graphically. Starting with a circle to represent the first premise and adding one or more circles for the second one (Figure 1), the crucial move is to compare the constructed diagrams with the conclusion, laying out clearly whether the diagrams are contradictory or not. If they agree with one another, the syllogism is valid. The displayed syllogism (Figure 1) is obviously valid: the conclusion shows that everything that can jump includes the animals, which in turn include the flowers. This agrees with the two premises, which point out that flowers are animals and that these are able to jump. The method of Euler Circles is a good device for making syllogisms easier to grasp.
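The claim that validity depends only on form can also be checked mechanically: a syllogism is valid exactly when no possible “world” makes both premises true and the conclusion false. A brute-force sketch over a tiny three-object universe (the universe size and the variable names are our own choices, made purely for illustration):

```python
from itertools import product

# Each object in a tiny three-object universe either is or is not a flower (f),
# an animal (a), and a jumper (j); enumerate every possible "world".
object_types = [(f, a, j) for f in (0, 1) for a in (0, 1) for j in (0, 1)]

counterexample_found = False
for world in product(object_types, repeat=3):
    all_flowers_are_animals = all(a for f, a, j in world if f)
    all_animals_can_jump    = all(j for f, a, j in world if a)
    all_flowers_can_jump    = all(j for f, a, j in world if f)
    # A counterexample is a world where both premises hold but the conclusion fails.
    if all_flowers_are_animals and all_animals_can_jump and not all_flowers_can_jump:
        counterexample_found = True

print(not counterexample_found)   # -> True: the form is valid, whatever the content
```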
The descriptive approach
The descriptive approach is concerned with estimating people’s ability to judge validity and explaining judging errors. This psychological approach uses two methods in order to determine people’s performance:
Method of evaluation: People are given two premises and a conclusion, and the task is to judge whether the syllogism is valid or not (the preferred method).
Method of production: Participants are supplied with two premises and asked to develop a logically valid conclusion (if one is possible).
While using the method of evaluation, researchers found typical misjudgments about syllogisms. Premises starting with “All”, “Some” or “No” create a particular atmosphere and influence a person in the process of decision making. One mistake that often occurs is incorrectly judging as valid a syllogism in which the two premises as well as the conclusion start with “All”. The influence of the provided atmosphere leads to the right decision most of the time, but it is definitely not reliable and guides the person to a rash decision. This phenomenon is called the atmosphere effect.
In addition to the form of a syllogism, the content is likely to influence a person’s decision as well and causes the person to neglect logical thinking. The belief bias states that people tend to judge syllogisms with believable conclusions as valid, while they tend to judge syllogisms with unbelievable conclusions as invalid. Given a conclusion such as “Some bananas are pink”, hardly any participants would judge the syllogism as valid, even though it might be valid according to its premises (e.g. Some bananas are fruits. All fruits are pink.)
Mental models of deductive reasoning
It is still not clear what mental processes might occur when people are trying to determine whether a syllogism is valid. After researchers observed that Euler Circles can be used to determine the validity of a syllogism, Phillip Johnson-Laird (1999) wondered whether people would use such circles naturally, without any instruction in how to use them. At the same time he found that they do not work for some more complex syllogisms and that, while a problem can be solved by applying logical rules, most people solve it by imagining the situation. This is the basic idea of people using mental models – a specific situation represented in a person’s mind that can be used to help determine the validity of syllogisms – to solve deductive reasoning problems. The basic principle behind the Mental Model Theory is: a conclusion is valid only if it cannot be refuted by any model of the premises. This theory is rather popular because it makes predictions that can be tested and because it can be applied without any knowledge about rules of logic. But there are still problems facing researchers trying to determine how people reason about syllogisms. These problems include the fact that people use a variety of different strategies in reasoning and that some people are better at solving syllogisms than others.
Effects of culture on deductive reasoning
People can be influenced by the content of syllogisms rather than focusing on logic when judging their validity. Psychologists have wondered whether people are influenced by their cultures when judging. Therefore, they have done cross-cultural experiments in which reasoning problems were presented to people of different cultures. They observed that people from different cultures respond differently to these problems. People use evidence from their own experience (empirical evidence) and ignore evidence presented in the syllogism (theoretical evidence).
Conditional syllogisms
Another type of syllogism is the “conditional syllogism”. Just like the categorical one, it has two premises and a conclusion. The difference is that the first premise has the form “If … then”.
Syllogisms like this one are common in everyday life. Consider the following example from the story about Knut:
1. Premise: If it is raining, Knut’s wife gets wet.
2. Premise: It is raining.
Conclusion: Therefore, Knut’s wife gets wet.
Conditional syllogisms are typically given in the abstract form: “If p then q”, where “p” is called the antecedent and “q” the consequent.
Forms of conditional syllogisms
There are four major forms of conditional syllogisms, namely Modus Ponens, Modus Tollens, Denying the Antecedent and Affirming the Consequent. Obviously, the validity of syllogisms with valid conclusions is easier to judge correctly than the validity of ones with invalid conclusions. The conclusion in the instance of the modus ponens is apparently valid. In the example it is very clear that Knut’s wife gets wet if it is raining.
The validity of the modus tollens is more difficult to recognize. Referring to the example, in the case that Knut’s wife does not get wet, it cannot be raining, because the first premise says that if it is raining, she gets wet. So the reason for Knut’s wife not getting wet is that it is not raining. Consequently, the conclusion is valid. The validity of the remaining two kinds of conditional syllogisms is judged correctly by only 40% of people. If the antecedent is denied, the second premise says that it is not raining. But from this fact it does not follow logically that Knut’s wife does not get wet – obviously rain is not the only reason for her to get wet. It could also be the case that the sun is shining and Knut tests his new water pistol and makes her wet. So, this kind of conditional syllogism does not lead to a valid conclusion. Affirming the consequent in the case of the given example means that the second premise says that Knut’s wife gets wet. But again, the reason for this can be circumstances other than rain, so it does not follow logically that it is raining. In consequence, the conclusion of this syllogism is invalid. These four kinds of syllogisms show that it is not always easy to make correct judgments concerning the validity of conclusions. The following passages will deal with other errors people make during the process of conditional reasoning.
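The validity or invalidity of the four forms can be verified with a simple truth-table check: an argument form is valid when no assignment of truth values to p and q makes all premises true while the conclusion is false. A minimal sketch (the function names are ours; "if p then q" is treated as the material conditional of formal logic):

```python
from itertools import product

def implies(p, q):
    """Material conditional: 'if p then q' is false only when p is true and q is false."""
    return (not p) or q

def is_valid(premises, conclusion):
    """Valid iff the conclusion holds in every case where all premises hold."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

print(is_valid([implies, lambda p, q: p], lambda p, q: q))          # modus ponens -> True
print(is_valid([implies, lambda p, q: not q], lambda p, q: not p))  # modus tollens -> True
print(is_valid([implies, lambda p, q: not p], lambda p, q: not q))  # denying the antecedent -> False
print(is_valid([implies, lambda p, q: q], lambda p, q: p))          # affirming the consequent -> False
```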
The Wason Selection Task
The Wason Selection Task [1] is a famous experiment which shows that people make more errors in the process of reasoning if it concerns abstract items than if it involves real-world items (Wason, 1966). In the abstract version of the Wason Selection Task four cards are shown to the participants, each with a letter on one side and a number on the other (Figure 3, yellow cards). The task is to indicate the minimum number of cards that have to be turned over to test whether the following rule is observed: “If there is a vowel on one side then there is an even number on the other side”. 53% of participants selected the ‘E’ card, which is correct, because turning this card over is necessary for testing the truth of the rule. However, still another card needs to be turned over. 64% indicated that the ‘4’ card has to be turned over, which is not right. Only 4% of participants answered correctly that the ‘7’ card needs to be turned over in addition to the ‘E’. The correctness of turning over these two cards becomes more obvious if the same task is stated in terms of real-world items instead of vowels and numbers. One of the experiments demonstrating this was the beer/drinking-age problem used by Richard Griggs and James Cox (1982). This experiment is identical to the Wason Selection Task except that instead of numbers and letters on the cards, everyday terms (beer, soda and ages) were used (Figure 3, green cards). Griggs and Cox gave the following rule to the participants: “If a person is drinking beer then he or she must be older than 19 years.” In this case 73% of participants answered correctly, namely that the cards with “Beer” and “14 years” on them have to be turned over to test whether the rule is kept.
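The logic behind the correct choice can be made explicit: a card is worth turning over only if something on its hidden side could falsify the rule. A minimal sketch of that test (the set of candidate hidden faces is an arbitrary choice of ours, used only to illustrate the check):

```python
# Rule: "If there is a vowel on one side, then there is an even number on the other."
vowels = set("AEIOU")

def violates(letter, number):
    """The rule is broken only by a vowel paired with an odd number."""
    return letter in vowels and number % 2 == 1

def must_turn(visible):
    if visible.isdigit():   # hidden side is some letter
        return any(violates(letter, int(visible)) for letter in "AEIOUKZT")
    return any(violates(visible, n) for n in range(10))   # hidden side is some digit

for card in ["E", "K", "4", "7"]:
    print(card, must_turn(card))   # -> E True, K False, 4 False, 7 True
```

The ‘4’ card never needs turning: whatever letter is on its back, a consonant with an even number and a vowel with an even number both satisfy the rule. The ‘7’ card does need turning, because a vowel on its back would break the rule.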
Why is the performance better in the case of real-world items?
There are two different approaches which explain why participants’ performance is significantly better in the case of the beer/drinking-age problem than in the abstract version of the Wason Selection Task, namely one approach concerning permission schemas and an evolutionary approach.
The regulation “If one is 19 years or older then he/she is allowed to drink alcohol” is known by everyone as an experience from everyday life (a so-called permission schema). As this permission schema has already been learned by the participants, it can be applied to the Wason Selection Task with real-world items and improves participants’ performance. By contrast, no such permission schema from everyday life exists for the abstract version of the Wason Selection Task.
The evolutionary approach concerns the important human ability of cheater-detection. This approach states that an important aspect of human behavior, especially in the past, was/is the ability of two persons to cooperate in a way that is beneficial for both of them. As long as each person receives a benefit for whatever he/she does in favor of the other one, everything works well in their social exchange. But if someone cheats and receives benefit from others without giving it back, a problem arises (see also chapter 3. Evolutionary Perspective on Social Cognitions [2]). It is assumed that the ability to detect cheaters has become part of humans’ cognitive makeup during evolution. This cognitive ability improves performance in the beer/drinking-age version of the Wason Selection Task as it allows people to detect a cheating person who does not behave according to the rule. Cheater-detection does not work in the case of the abstract version of the Wason Selection Task, as vowels and numbers do not behave or cheat at all, as opposed to human beings.
Inductive reasoning
In the previous sections deductive reasoning was discussed: reaching conclusions based on logical rules applied to a set of premises. However, many problems cannot be represented in a way that would make it possible to use these rules to get a conclusion. This subchapter is about a way of reaching conclusions for these problems as well: inductive reasoning. [Figure 4: Deductive and inductive reasoning.] Inductive reasoning is the process of making simple observations of a certain kind and applying these observations via generalization to a different problem to make a decision. Hence one infers from a special case to the general principle, which is just the opposite of the procedure of deductive reasoning (Figure 4).
A good example of inductive reasoning is the following:
Premise: All crows Knut and his wife have ever seen are black.
Conclusion: Therefore, they reason that all crows on earth are black.
In this example it is obvious that Knut and his wife infer from a simple observation about the crows they have seen to the general principle about all crows. Considering Figure 4, this means that they infer from the subset (yellow circle) to the whole (blue circle). As in this example, it is typical of a process of inductive reasoning that the premises are believed to support the conclusion but do not ensure it.
Forms of inductive reasoning
The two different forms of inductive reasoning are “strong” and “weak” induction. In the former, the truth of the conclusion is very likely if the assumed premises are true. An example of this form of reasoning is the one given in the previous section. In this case it is obvious that the premise (“All crows Knut and his wife have ever seen are black”) gives good evidence for the conclusion (“All crows on earth are black”) to be true. But nevertheless it is still possible, although very unlikely, that not all crows are black.
On the contrary, conclusions reached by "weak induction" are supported by the premises in a rather weak manner. In this approach the truth of the premises makes the truth of the conclusion possible, but not likely.
An example of this kind of reasoning is the following:
Premise: Knut always hears music with his IPod.
Conclusion: Therefore, he reasons that all music is only heard with IPods.
In this instance the conclusion is obviously false. The information the premise contains is not very representative, and although it is true, it does not give decisive evidence for the truth of the conclusion. To sum up, strong inductive reasoning leads to conclusions which are very probable, whereas the conclusions reached through weak inductive reasoning on the basis of the premises are unlikely to be true.
Reliability of conclusions
If the strength of the conclusion of an inductive argument has to be determined, three factors concerning the premises play a decisive role. The following example, which refers to Knut and his wife and the observations they made about the crows (see previous sections), displays these factors: When Knut and his wife observe not only the black crows in Germany but also the crows in Spain, the number of observations they make concerning the crows obviously increases. Furthermore, the representativeness of these observations is supported if Knut and his wife observe the crows at all different times of day and night and see that they are black every time. Theoretically it may be that the crows change their color at night, which would make the conclusion that all crows are black wrong. The quality of the evidence for all crows being black increases if Knut and his wife add scientific measurements which support the conclusion. For example, they could find out that the crows’ genes determine that the only color they can have is black. Conclusions reached through a process of inductive reasoning are never definitely true, as no one has seen all crows on earth and as it is possible, although very unlikely, that there is a green or brown specimen. The three mentioned factors contribute decisively to the strength of an inductive argument. So, the stronger these factors are, the more reliable are the conclusions reached through induction.
Processes and constraints
In a process of inductive reasoning people often make use of certain heuristics which in many cases lead quickly to adequate conclusions but sometimes may cause errors. In the following, two of these heuristics (the availability heuristic and the representativeness heuristic) are explained. Subsequently, the confirmation bias is introduced, which sometimes influences people’s reasoning in line with their own opinion without them realising it.
The availability heuristic
Things that are more easily remembered are judged to be more prevalent. An example of this is an experiment done by Lichtenstein et al. (1978). The participants were asked to choose from two different lists the causes of death which occur more often. Because of the availability heuristic, people judged more “spectacular” causes like homicide or tornado to cause more deaths than others, like asthma. The reason for the subjects answering in such a way is that, for example, films and news on television are very often about spectacular and interesting causes of death. This is why this information is much more available to the subjects in the experiment. Another effect of the use of the availability heuristic is called illusory correlation. People tend to judge according to stereotypes. It seems to them that there are correlations between certain events which in reality do not exist. This is what is known by the term “prejudice”: a much oversimplified generalization about a group of people is made. Usually a correlation seems to exist between negative features and a certain class of people (often fringe groups). If, for example, one’s neighbour is jobless and very lazy, one tends to correlate these two attributes and to create the prejudice that all jobless people are lazy.
This illusory correlation occurs because one takes into account information which is available and judges this to be prevalent in many cases.
The representativeness heuristic
If people have to judge the probability of an event, they try to find a comparable event and assume that the two events have a similar probability. Amos Tversky and Daniel Kahneman (1974) presented the following task to their participants in an experiment: “We randomly chose a man from the population of the U.S., Robert, who wears glasses, speaks quietly and reads a lot. Is it more likely that he is a librarian or a farmer?” Most of the participants answered that Robert is a librarian, which is an effect of the representativeness heuristic. The comparable event which the participants chose was that of a typical librarian, as Robert with his attributes of speaking quietly and wearing glasses resembles this event more than the event of a typical farmer. So, the event of a typical librarian is more comparable with Robert than that of a typical farmer. Of course this effect may lead to errors: Robert was randomly chosen from the population, it is perfectly possible that he is a farmer although he speaks quietly and wears glasses, and, since farmers far outnumber librarians in the population, the base rates alone make “farmer” the more likely answer.
The representativeness heuristic also leads to errors in reasoning in cases where the conjunction rule is violated. This rule states that the conjunction of two events is never more likely to be the case than the single events alone. An example for this is the case of the feminist bank teller (Tversky & Kahneman, 1983). If we are introduced to a woman of whom we know that she is very interested in women’s rights and has participated in many political activities in college and we are to decide whether it is more likely that she is a bank teller or a feminist bank teller, we are drawn to conclude the latter as the facts we have learnt about her resemble the event of a feminist bank teller more than the event of only being a bank teller.
But it is in fact much more likely that somebody is just a bank teller than it is that someone is a feminist in addition to being a bank teller. This effect is illustrated in figure 6 where the green square, which stands for just being a bank teller, is much larger and thus more probable than the smaller violet square, which displays the conjunction of bank tellers and feminists, which is a subset of bank tellers.
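The conjunction rule itself is a simple consequence of the subset relation and can be checked with any numbers at all. A minimal sketch (the counts below are purely illustrative, not real statistics):

```python
# Conjunction rule: P(A and B) can never exceed P(A), because the people who
# are both bank tellers and feminists are a subset of all bank tellers.
population = 10_000
bank_tellers = 100            # illustrative count
feminist_bank_tellers = 30    # necessarily a subset of the bank tellers

p_teller = bank_tellers / population
p_feminist_teller = feminist_bank_tellers / population

print(p_teller, p_feminist_teller)    # 0.01 versus 0.003
print(p_feminist_teller <= p_teller)  # -> True, whatever the exact numbers
```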
The confirmation bias
This phenomenon describes the fact that people tend to decide in terms of what they themselves believe to be true or good. If, for example, someone believes that one has bad luck on Friday the thirteenth, he will especially look for every negative happening on this particular date but will be inattentive to negative happenings on other days. This behaviour strengthens the belief that there is a relationship between Friday the thirteenth and having bad luck.
This example shows that the actual information is not taken into account to come to a conclusion but only the information which supports one's own belief. This effect leads to errors as people tend to reason in a subjective manner, if personal interests and beliefs are involved. All the mentioned factors influence the subjective probability of an event so that it differs from the actual probability (probability heuristic). Of course all of these factors do not always appear alone, but they influence one another and can occur in combination during the process of reasoning.
Why inductive reasoning at all?
All the described constraints show how prone to errors inductive reasoning is, so the question arises: why do we use it at all? Inductive reasoning is important nevertheless because it acts as a shortcut for our reasoning. It is much easier and faster to apply the availability heuristic or the representativeness heuristic to a problem than to take into account all information concerning the current topic and draw a conclusion by using logical rules. The following list of very ordinary events involves a lot of inductive reasoning, although one does not realize it at first sight; it points out the importance of this cognitive ability: the sunrise every morning and the sunset in the evening, the change of seasons, the TV program, the fact that a chair does not collapse when we sit on it, or the light bulb that lights up after we have pushed a button.
All of these cases are conclusions derived from processes of inductive reasoning. Accordingly, one assumes that the chair one is sitting on does not collapse, as the chairs on which one sat before did not collapse. This does not ensure that the chair will not break into pieces, but nevertheless it is a rather helpful conclusion to assume that the chair remains stable, as this is very probable. To sum up, inductive reasoning is rather advantageous in situations where deductive reasoning is just not applicable because only evidence, but no proven facts, is available. As these situations occur rather often in everyday life, living without the use of inductive reasoning is inconceivable.
Induction vs. deduction
The table below (Figure 6) summarizes the most prevalent properties and differences between deductive and inductive reasoning which are important to keep in mind.
Decision making
Depending on the level of consequences, each process of making a decision requires appropriate effort and various aspects to be considered. The following excerpt from the story about Knut makes this obvious: “After considering facts like the warm weather in Spain and shirts and shorts being much more comfortable in this case (information gathering and likelihood estimation), Knut reasons that he needs them for his vacation. In consequence, he finally makes the decision to pack mainly shirts and shorts in his bag (final act of choosing).” Now it seems like there cannot be any decision making without previous reasoning, but that is not true. Of course there are situations in which someone decides to do something spontaneously, with no time to reason about it. We will not go into detail here, but you might think about questions like “Why do we choose one or another option in that case?”
Choosing among alternatives
The psychological process of decision making constantly accompanies situations in daily life. Thinking about Knut again, we can imagine him deciding between packing more blue or more green shirts for his vacation (which would have only minor consequences) but also deciding about applying for a specific job or having children with his wife (which would have a relevant influence on important circumstances of his future life). The mentioned examples are both personal decisions, whereas professional decisions, dealing for example with economic or political issues, are just as important.
The utility approach
There are three different ways to analyze decision making. The normative approach assumes a rational decision-maker with well-defined preferences. While the rational choice theory is based on a priori considerations, the descriptive approach is based on empirical observations and on experimental studies of choice behavior. The prescriptive enterprise develops methods in order to improve decision making. According to Manktelow and Reber’s definition, “utility refers to outcomes that are desirable because they are in the person’s best interest” (Reber, A. S., 1995; Manktelow, K., 1999). This normative/descriptive approach characterizes optimal decision making by the maximum expected utility in terms of monetary value. This can be helpful in gambling theories, but it also has several disadvantages. People do not necessarily focus on the monetary payoff, since they find value in things other than money, such as fun, free time, family, health and others. But that is not a big problem, because it is possible to apply the graph (Figure 7), which shows the relation between (monetary) gains/losses and their subjective value/utility, which covers all the valuable things mentioned above. Therefore, not choosing the maximal monetary value does not automatically indicate an irrational decision process.
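As a concrete illustration of the gap between monetary value and utility, consider a choice between a sure $40 and a 50% chance of $100 (the figures, and the square-root utility function, are our own illustrative assumptions, not part of the text): the gamble has the higher expected monetary value, yet with a concave utility curve the sure amount can still come out ahead, which is why declining the gamble need not be irrational.

```python
import math

# Expected (monetary) value versus expected utility for a simple gamble.
# Utility is modelled here, purely for illustration, as the square root of money:
# a concave function, so each extra dollar adds less subjective value.
sure_amount = 40
gamble = [(0.5, 100), (0.5, 0)]   # 50% chance of $100, 50% chance of nothing

expected_value = sum(p * x for p, x in gamble)
expected_utility_gamble = sum(p * math.sqrt(x) for p, x in gamble)
utility_sure = math.sqrt(sure_amount)

print(expected_value)            # 50.0  -> the gamble "wins" in money terms
print(expected_utility_gamble)   # 5.0   -> but in utility terms ...
print(utility_sure)              # ~6.32 -> ... the sure $40 is preferred
```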
Misleading effects
But even respecting the considerations above there might still be problems to make the “right” decision because of different misleading effects, which mainly arise because of the constraints of inductive reasoning. In general this means that our model of a situation/problem might not be ideal to solve it in an optimal way. The following three points are typical examples for such effects.
Subjective models
This effect is rather similar to the illusory correlations mentioned before in the part about the constraints of inductive reasoning. The problem is that the models people create might be misleading, since they rely on subjective speculations. An example could be deciding where to move by considering typical prejudices about the countries (e.g. always good pizza, nice weather and a relaxed life-style in Italy in contrast to somewhat boring food and steady rain in Great Britain). The events predicted in this way are not equal to the events that actually occur (Kahneman & Tversky, 1982; Dunning & Parpal, 1989).
Focusing illusion
Another misleading effect is the so-called focusing illusion. By considering only the most obvious aspects in order to make a certain decision (e.g. the weather), people often neglect various really important outcomes (e.g. circumstances at work). This effect occurs more often when people make judgments about others than when they judge their own lives.
Framing effect
A problem can be described in different ways and therefore evoke different decision strategies. If a problem is specified in terms of gains, people tend to use a risk-aversion strategy, while a problem description in terms of losses leads them to apply a risk-taking strategy. An example of the same problem producing predictably different choices is the following experiment: A group of people asked to imagine themselves $300 richer than they are is confronted with the choice of a sure gain of $100 or an equal chance to gain $200 or nothing. Most people avoid the risk and take the sure gain, which means they take the risk-aversion strategy. Alternatively, if people are asked to assume themselves to be $500 richer than in reality, given the options of a sure loss of $100 or an equal chance to lose $200 or nothing, the majority opts for the risk of losing $200, taking the risk-seeking or risk-taking strategy. This phenomenon is known as the framing effect and can also be illustrated by Figure 8 above, which shows a concave function for gains and a convex one for losses (Foundations of Cognitive Psychology, Levitin, D. J., 2002).
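What makes the framing effect striking is that the two descriptions pick out exactly the same final wealth positions, which is easy to verify from the amounts given above (a quick arithmetic check, written in Python only for convenience):

```python
# Gain frame: start $300 up, then either a sure +$100 or a 50/50 gamble of +$200 / +$0.
gain_sure   = 300 + 100
gain_gamble = [300 + 200, 300 + 0]

# Loss frame: start $500 up, then either a sure -$100 or a 50/50 gamble of -$200 / -$0.
loss_sure   = 500 - 100
loss_gamble = [500 - 200, 500 - 0]

print(gain_sure, loss_sure)                       # 400 400 -> identical sure outcomes
print(sorted(gain_gamble), sorted(loss_gamble))   # [300, 500] [300, 500] -> identical gambles
```

Although the final outcomes are identical, most people choose the sure option in the gain frame and the gamble in the loss frame; only the framing differs.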
Justification in decision making
Decision making often includes the need to assign a reason for the decision and thereby justify it. This factor is illustrated by an experiment by A. Tversky and E. Shafir (1992): A very attractive vacation package has been offered to a group of students who have just passed an exam and to another group of students who have just failed the exam and have the chance to rewrite it after the upcoming holidays. All students have the options of buying the ticket straight away, staying at home, or paying $5 to keep open the option of buying it later. At this point, there is no difference between the two groups, since the number of students who passed the exam and decided to book the flight (with the justification of deserving a reward) is the same as the number of students who failed and booked the flight (justified as consolation and as time to recuperate). A third group of students, who were told they would receive their results in two more days, was confronted with the same problem. The majority decided to pay $5 and keep the option open until they got their results. The conclusion is that even though the actual exam result does not influence the decision, it is required in order to provide a rationale.
Executive functions
Subsequently, the question arises how this cognitive ability of making decisions is realized in the human brain. As we already know that there are a couple of different tasks involved in the whole process, there has to be something that coordinates and controls those brain activities – namely the executive functions. They are the brain's conductor, instructing other brain regions to perform, or be silenced, and generally coordinating their synchronized activity (Goldberg, 2001). Thus, they are responsible for optimizing the performance of all “multi-threaded” cognitive tasks.
Locating those executive functions is rather difficult, as they cannot be assigned to a single brain region. Traditionally, they have been equated with the frontal lobes, or rather the prefrontal regions of the frontal lobes; but it is still an open question whether all of their aspects can be associated with these regions.
Nevertheless, we will concentrate on the prefrontal regions of the frontal lobes, to get an impression of the important role of the executive functions within cognition. Moreover, it is possible to subdivide these regions into functional parts. But it is to be noted that not all researchers regard the prefrontal cortex as containing functionally different regions.
Executive functions in practice
According to Norman and Shallice, there are five types of situations in which executive functions may be needed in order to optimize performance, as the automatic activation of behavior would be insufficient. These are situations involving...
1. planning or decision making.
2. error correction or trouble shooting.
3. responses containing novel sequences of actions.
4. technical difficulties or dangerous circumstances.
5. the control of action or the overcoming of strong habitual responses.
The following parts will take a closer look at each of these points, mainly referring to brain-damaged individuals. Surprisingly, intelligence in general is not affected in cases of frontal lobe injuries (Warrington, James & Maciejewski, 1986). However, dividing intelligence into crystallised intelligence (based on previously acquired knowledge) and fluid intelligence (the current ability to solve problems) emphasizes the executive power of the frontal lobes, as patients with lesions in these regions performed significantly worse in tests of fluid intelligence (Duncan, Burgess & Emslie, 1995).
1. Planning or decision making: Impairments in abstract and conceptual thinking
To solve many tasks it is important that one is able to use given information. In many cases, this means that material has to be processed in an abstract rather than in a concrete manner.
Patients with executive dysfunction have abstraction difficulties. This is demonstrated by a card sorting experiment (Delis et al., 1992): The cards show names of animals and black or white triangles placed above or below the word. The cards can be sorted with attention to different attributes of the animals (living on land or in water, domestic or dangerous, large or small) or of the triangles (black or white, above or below the word). People with frontal lobe damage fail to solve the task because they cannot even conceptualize the properties of the animals or the triangles and thus are not able to deduce a sorting rule for the cards (in contrast, some individuals merely perseverate: they find a sorting criterion but are unable to switch to a new one). These problems might be due to a general difficulty in strategy formation.
Goal directed behavior
Let us again consider Knut to get an insight into the field of goal-directed behavior – in principle, this is nothing but problem solving, since it is about organizing behavior towards a goal. Thus, when Knut is packing his bag for his holiday, he obviously has a goal in mind (in other words: he wants to solve a problem) – namely, to get ready before the plane departs. Several steps are necessary during the process of reaching a certain goal:
Goal must be kept in mind: Knut should never forget that he has to pack his bag in time.
Dividing into subtasks and sequencing: Knut packs his bag in a structured way. He starts with the crucial things and then goes on with the rest.
Completed portions must be kept in mind: If Knut has already packed enough underwear into his bag, he does not need to search for more.
Flexibility and adaptability: Imagine that Knut wants to pack his favourite T-shirt but realizes that it is dirty. In this case, Knut has to adapt to the situation and pick another T-shirt that was not in his original plan.
Evaluation of actions: On the way to his ultimate goal, Knut constantly has to evaluate his performance in terms of ‘How am I doing, considering that I have the goal of packing my bag?’.
Executive dysfunction and goal directed behavior
The breakdown of executive functions impairs goal-directed behavior to a large extent. Exactly how cannot be stated in general; it depends on the specific brain regions that are damaged. So it is quite possible that an individual with a particular lesion has problems with two or three of the five points described above and performs within the normal range when the other abilities are tested. However, if only one link is missing from the chain, the whole plan might become very hard or even impossible to master. Furthermore, the particular hemisphere affected plays a role as well.
Another interesting result was that lesions in the frontal lobes of the left and right hemispheres impaired different abilities. While a lesion in the right hemisphere caused trouble in making recency judgements, a lesion in the left hemisphere impaired the patient’s performance only when the presented material was verbal, or in a variation of the experiment that required self-ordered sequencing. This indicates that the ability to sequence behavior is not only located in the frontal lobes but, particularly when it comes to motor action, depends on the left hemisphere.
Problems in sequencing
In an experiment by Milner (1982), people were shown a sequence of cards with pictures. The experiment included two different tasks: recognition trials and recency trials. In the former, participants were shown two different pictures, one of which had appeared in the sequence before, and they had to decide which one it was. In the latter, they were shown two pictures that had both appeared before and had to name the one that had been shown more recently.
The results of this experiment showed that people with lesions in temporal regions have more trouble with the recognition trials, whereas patients with frontal lesions have difficulties with the recency trials, since anterior regions are important for sequencing. This is because the recognition trials demand a properly functioning recognition memory, and the recency trials a properly functioning memory for item order. These two are dissociable and seem to be processed in different areas of the brain. The frontal lobe is not only important for sequencing but is also thought to play a major role in working memory. This idea is supported by the fact that lesions in the lateral regions of the frontal lobe are much more likely to impair the ability of 'keeping things in mind' than damage to other areas of the frontal cortex is. But this is not all there is to sequencing: to reach a goal in the best possible way, it is important that a person is able to figure out which sequence of actions – which strategy – best suits the purpose, in addition to just being able to develop a correct sequence.
This has been shown with an experiment called the 'Tower of London' (Shallice, 1982), which is similar to the famous 'Tower of Hanoi' task. The difference is that here three balls have to be placed onto three poles of different lengths – one pole can hold three balls, the second two, and the third only one – so that a changeable goal position is reached from a fixed initial position in as few moves as possible. Patients with damage to the left frontal lobe in particular worked inefficiently and ineffectively on this task: they needed many moves and engaged in actions that did not lead toward the goal.
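The planning demand of such tower tasks is easiest to see in the related Tower of Hanoi, whose optimal solution can be written as a short recursion. The sketch below is only an illustration of what "as few moves as possible" means; the pole labels and disk count are arbitrary, and it does not model the unequal pole lengths of the Tower of London.

```python
def hanoi(n, source, target, spare, moves):
    """Append the minimal move sequence for n disks to `moves`."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)   # clear the smaller disks out of the way
    moves.append((source, target))               # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)   # rebuild the smaller disks on top of it

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves), moves)   # 7 moves, the minimum for 3 disks (2**n - 1)
```

A healthy planner approaches this minimum; the patients described above typically need many more moves than necessary.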
Problems with the interpretation of available information
Quite often, if we want to reach a goal, we get hints on how to do it best. This means we have to be able to interpret the available information in terms of what the appropriate strategy would be. For many patients of executive dysfunction this is not an easy thing to do either.
They have trouble using this information and engage in inefficient actions. Thus, it takes them much longer to solve a task than healthy people, who use the extra information and develop an effective strategy.
Problems with self-criticism and -monitoring
The last problem for people with frontal lobe damage we want to present here is the last point in the above list of properties important for proper goal-directed behavior: the ability to evaluate one's actions, an ability that is missing in most patients. These people are therefore very likely to 'wander off task' and engage in behavior that does not help them attain their goal. In addition, they are not able to determine whether their task has been completed at all. Reasons for this are thought to be a lack of motivation or a lack of concern about one's performance (frontal lobe damage is usually accompanied by changes in emotional processing), but these are probably not the only explanations for these problems. Another important brain region in this context – the medial portion of the frontal lobe – is responsible for detecting behavioral errors made while working towards a goal. This has been shown in ERP experiments, in which an error-related negativity appears about 100 ms after an error has been made. If this area is damaged, the mechanism cannot work properly anymore and the patient loses the ability to detect errors and thus to monitor his own behavior. In the end, however, we must add that although executive dysfunction causes an enormous number of problems in behaving appropriately towards a goal, most patients, when assigned a task, are indeed eager to solve it but are simply unable to do so.
2. Error correction and trouble shooting
The most famous experiment used to investigate error correction and trouble shooting is the Wisconsin Card Sorting Test (WCST). A participant is presented with cards that show certain objects, defined by the shape, color and number of the objects on them. These cards have to be sorted according to a rule based on one of these three criteria. The participant does not know which rule is the right one but has to infer it from the experimenter's positive or negative feedback. At some point, after the participant has found the correct rule for sorting the cards, the experimenter changes the rule and the previously correct sorting leads to negative feedback. The participant has to notice the change and adapt to it by sorting the cards according to the new rule.
Patients with executive dysfunction have problems identifying the rule in the first place. It takes them noticeably longer because they have trouble using the information already given to reach a conclusion. But once they have started sorting correctly and the rule changes, they keep sorting the cards according to the old rule, even though many of them notice the negative feedback. They are simply not able to switch to another sorting principle, or at least they need many tries to learn the new one. They perseverate.
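To make the structure of the task concrete, here is a minimal Python sketch of a stripped-down WCST. The attribute names, the number of trials, and the point at which the hidden rule changes are illustrative assumptions, and the "flexible" sorter is simply hard-coded to switch where a healthy participant would typically have learned the new rule from feedback.

```python
def run_wcst(choose_attribute, n_trials=40, switch_at=20):
    """Return the number of correctly sorted cards; the hidden rule changes mid-way."""
    score = 0
    for trial in range(n_trials):
        rule = "color" if trial < switch_at else "shape"   # experimenter's hidden rule
        guess = choose_attribute(trial)                     # attribute the participant sorts by
        score += (guess == rule)                            # positive feedback counts as correct
    return score

flexible      = lambda trial: "color" if trial < 20 else "shape"  # adapts after the switch
perseverating = lambda trial: "color"                             # sticks to the old rule

print("flexible sorter:     ", run_wcst(flexible), "/ 40")        # 40 / 40
print("perseverating sorter:", run_wcst(perseverating), "/ 40")   # 20 / 40
```

The perseverating sorter keeps earning negative feedback after the switch, which is exactly the error pattern seen in many patients with executive dysfunction.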
Problems in shifting and modifying strategies
Intact neuronal tissue in the frontal lobe is also crucial for another executive function connected with goal-directed behavior that we described above: flexibility and adaptability. This means that persons with frontal lobe damage have difficulty shifting their way of thinking – that is, creating a new plan after recognizing that the original one cannot be carried out for some reason. Thus, they are not able to modify their strategy according to the new problem. Even when it is clear that one hypothesis cannot be the right one to solve a task, patients will stick to it nevertheless and are unable to abandon it (so-called 'tunnel vision').
Moreover, such persons do not use as many appropriate hypotheses for creating a strategy as people with damage to other brain regions do. In what particular way this can be observed in patients can again not be stated in general but depends on the nature of the shift that has to be made.
These problems of 'redirecting' one's strategies stand in contrast to the actual 'act of switching' between tasks, which is yet another problem for patients with frontal lobe damage. Since the control system that governs task switching as such is independent of the parts that actually perform the tasks, task switching is particularly impaired in patients with lesions to the dorsolateral prefrontal cortex, while at the same time they have no trouble performing the single tasks alone. This, of course, causes a lot of problems in goal-directed behavior because, as was said before, most tasks consist of smaller subtasks that have to be completed.
3. Responses containing novel sequences of actions
Many clinical tests have been developed that require patients to devise strategies for dealing with novel situations. In the Cognitive Estimation Task (Shallice & Evans, 1978), patients are presented with questions whose answers are unlikely to be known. People with damage to the prefrontal cortex have major difficulty producing estimates for questions like “How many camels are in Holland?”. In the FAS Test (Miller, 1984), subjects have to generate sequences of words (not proper names) beginning with a certain letter (“F”, “A” or “S”) in a one-minute period. This test involves developing new strategies, selecting between alternatives, and avoiding repetition of previously given answers. Patients with left lateral prefrontal lesions are often impaired (Stuss et al., 1998).
4. Technical difficulties or dangerous circumstances
A single mistake in a dangerous situation may easily lead to serious injury, while a mistake in a technically difficult situation (e.g. building a house of cards) will obviously lead to failure. Thus, in such situations, automatic activation of responses clearly would be insufficient and executive functions seem to be the only solution. Wilkins, Shallice and McCarthy (1987) demonstrated a connection between dangerous or difficult situations and the prefrontal cortex, as patients with lesions to this area were impaired in experiments involving dangerous or difficult situations. The ventromedial and orbitofrontal cortex may be particularly important for these aspects of executive functions.
5. Control of action or the overcoming of strong habitual responses
Deficits in initiation, cessation and control of action
We start by describing the effects of losing the ability to start something, to initiate an action. A person with executive dysfunction is likely to have trouble beginning to work on a task without strong help from the outside; people with left frontal lobe damage often show impaired spontaneous speech, while people with right frontal lobe damage tend to show poor nonverbal fluency. Of course, one reason is that such a person will have no intention, desire or concern of his or her own to solve the task, since this is yet another characteristic of executive dysfunction. But it is also due to a psychological effect often connected with the loss of proper executive functioning: psychological inertia. As in physics, inertia in this case means that an action is very hard to initiate, but once started, it is again very hard to shift or stop. This phenomenon, characterized by engagement in repetitive behavior, is called perseveration (cf. the WCST above).
Another problem caused by executive dysfunction can be observed in patients suffering from the so-called environmental dependency syndrome: their actions are impelled or obligated by their physical or social environment. This manifests itself in many different ways and depends to a large extent on the individual’s personal history. Examples are patients who begin to type when they see a computer keyboard, who start washing the dishes upon seeing a dirty kitchen, or who hang up pictures on the walls when finding hammer, nails and pictures on the floor. These people appear as if they were acting impulsively or as if they had lost their ‘free will’; their behavior shows a lack of control over their actions. This is because an impairment in their executive functions causes a disconnection between thought and action. Such patients know that their actions are inappropriate but, as in the WCST, they cannot control what they are doing. Even if they are told by which attribute to sort the cards, they will keep sorting them according to the old rule, due to major difficulties in translating these directions into action.
What is needed to avoid problems like these are the abilities to start, stop or change an action but very likely also the ability to use information to direct behavior.
Deficits in cognitive estimation
In addition to their difficulty producing estimates for questions whose answers are unlikely to be known, patients with lesions to the frontal lobes have problems with cognitive estimation in general. Cognitive estimation is the ability to use known information to make reasonable judgments or deductions about the world, and its impairment is a third type of deficit often observed in individuals with executive dysfunction. People with executive dysfunction typically have a relatively unaffected knowledge base; the problem is not that they cannot retain information, but that they are unable to make inferences based on it. This shows up in various ways. For example, patients with frontal lobe damage have difficulty estimating the length of an average woman's spine.
Making such realistic estimates requires inferences based on other knowledge – in this case, knowing that the height of the average woman is about 5 ft 6 in (168 cm) and considering that the spine runs about one third to one half the length of the body, and so on. Patients with such a dysfunction have difficulties not only in their estimates of cognitive information but also in their estimates of their own capacities (such as their ability to direct activity in a goal-oriented manner or to control their emotions). Prigatano, Altman and O’Brien (1990) reported that when patients with anterior lesions associated with diffuse axonal injury to other brain areas are asked how capable they are of performing tasks such as scheduling their daily activities or preventing their emotions from affecting daily activities, they grossly overestimate their abilities. In several experiments, Smith and Milner (1988) found that individuals with frontal lobe damage have no difficulty determining whether an item appeared in a specific inspection series, but find it difficult to estimate how frequently an item occurred. This may reflect difficulties not only in cognitive estimation but also in memory tasks that place a premium on remembering temporal information. Thus both difficulties (in cognitive estimation and in temporal sequencing) may contribute to a reduced ability to estimate frequency of occurrence.
Despite these impairments, not all estimation abilities are lost in patients with frontal lobe damage. Although such patients have problems estimating how well they can keep their emotions from affecting their daily activities, they are as good as patients with temporal lobe damage or neurologically intact people at judging how many clues they will need to solve a puzzle.
Theories of frontal lobe function in executive control
To explain why patients with frontal lobe damage have difficulties in performing executive functions, four major approaches have been developed. Each of them improves our understanding of the role of frontal regions in executive functions, but none of these theories covers all of the observed deficits.
Role of working memory
The most anatomically specific approach assumes the dorsolateral prefrontal area of the frontal lobe to be critical for working memory. Working memory, which must be clearly distinguished from long-term memory, keeps information online for use in performing a task. This approach was not designed to account for the broad array of dysfunctions; it focuses on the three following deficits:
1. Sequencing information and directing behavior toward a goal
2. Understanding of temporal relations between items and events
3. Some aspects of environmental dependency and perseveration
Research on monkeys has been helpful in developing this approach (the delayed-response paradigm, Goldman-Rakic, 1987, serves as a classic example).
Role of Controlled Versus Automatic Processes
There are two theories based on the underlying assumption that the frontal lobes are especially important for controlling behavior in non-experienced situations and for overriding stimulus-response associations, but contribute little to automatic and effortless behavior (Banich, 1997). Stuss and Benson (1986) consider control over behavior to occur in a hierarchical manner. They distinguish between three different levels, each associated with a particular brain region. At the first level, sensory information is processed automatically by posterior regions; at the next level (associated with the executive functions of the frontal lobe), conscious control is needed to direct behavior toward a goal; and at the highest level, controlled self-reflection takes place in the prefrontal cortex. This model is appropriate for explaining deficits in goal-oriented behavior, in dealing with novelty, the lack of cognitive flexibility and the environmental dependency syndrome. Furthermore, it can explain the inability to control action consciously and to criticise oneself.

The second model, developed by Shallice (1982), proposes a system consisting of two parts that influence the choice of behavior. The first part, a cognitive system called contention scheduling, is in charge of more automatic processing. Various links and processing schemes cause a single stimulus to result in an automatic string of actions. Once an action is initiated, it remains active until inhibited. The second cognitive system is the supervisory attentional system, which directs attention and guides action through decision processes and is only active “when no processing schemes are available, when the task is technically difficult, when problem solving is required and when certain response tendencies must be overcome” (Banich, 1997). This theory accounts for the observation that patients show few deficits in routine situations but relevant problems in dealing with novel tasks (e.g. the Tower of London task, Shallice, 1982), since no schemes exist in contention scheduling for dealing with them.
Impulsive action is another characteristic of patients with frontal lobe damage that can be explained by this theory. Even when asked not to do certain things, such patients stick to their routines and cannot control their automatic behavior.
Use of Scripts
The approach based on scripts – sets of events, actions and ideas that are linked to form a unit of knowledge – was developed by Schank (1982), among others. A script contains information about the setting in which an event occurs, the set of events needed to achieve the goal, and the end event that terminates the action. Such managerial knowledge units (MKUs) are supposed to be stored in the prefrontal cortex. They are organized in a hierarchical manner, being abstract at the top and more specific at the bottom. Damage to the scripts leads to an inability to behave in a goal-directed manner, to greater ease in coping with familiar than with novel situations (due to the difficulty of retrieving an MKU for a novel event), and to deficits in the initiation and cessation of action (because MKUs specify the beginning and ending of an action).
Role of a goal list
The perspective of artificial intelligence and machine learning introduced an approach which assumes that each person has a goal list containing the task's requirements or goals. This list is fundamental to guiding behavior, and since frontal lobe damage disrupts the ability to form a goal list, the theory helps to explain difficulties in abstract thinking, perceptual analysis, verbal output and staying on task. It can also account for the strong environmental influence on patients with frontal lobe damage, due to the lack of internal goals and the difficulty of organizing actions toward a goal.
| Brain Region | Possible Function (left hemisphere) | Possible Function (right hemisphere) | Brodmann's Areas involved |
|---|---|---|---|
| ventrolateral prefrontal cortex (VLPFC) | Retrieval and maintenance of semantic and/or linguistic information | Retrieval and maintenance of visuospatial information | 44, 45, 47 (44 & 45 = Broca's Area) |
| dorsolateral prefrontal cortex (DLPFC) | Selecting a range of responses and suppressing inappropriate ones; manipulating the contents of working memory | Monitoring and checking of information held in mind, particularly in conditions of uncertainty; vigilance and sustained attention | 9, 46 |
| anterior prefrontal cortex; frontal pole; rostral prefrontal cortex | Multitasking; maintaining future intentions & goals while currently performing other tasks or subgoals | Same | 10 |

Table 3
Summary
It is important to keep in mind that reasoning and decision making are closely connected to each other: decision making is in many cases preceded by a process of reasoning. People's everyday lives are decisively shaped by the joint operation of these two cognitive capacities. Their coordination, in turn, is realized by the executive functions, which seem to be mainly located in the frontal lobes of the brain.
Deductive Reasoning + Inductive Reasoning
There is more than one way to start with information and arrive at an inference; thus, there is more than one way to reason. Each has its own strengths, weaknesses, and applicability to the real world.
Deduction
In this form of reasoning a person starts with a known claim or general belief, and from there determines what follows. Essentially, deduction starts with a hypothesis and examines the possibilities within that hypothesis to reach a conclusion. Deductive reasoning has the advantage that, if your original premises are true in all situations and your reasoning is correct, your conclusion is guaranteed to be true. However, deductive reasoning has limited applicability in the real world because there are very few premises which are guaranteed to be true all of the time.
A syllogism is a form of deductive reasoning in which two statements reach a logical conclusion. An example of a syllogism is, “All dogs are mammals; Kirra is a dog; therefore, Kirra is a mammal.”
Induction
Inductive reasoning makes broad inferences from specific cases or observations. In this process of reasoning, general assertions are made based on specific pieces of evidence. Scientists use inductive reasoning to create theories and hypotheses. An example of inductive reasoning is, “The sun has risen every morning so far; therefore, the sun rises every morning.” Inductive reasoning is more practical to the real world because it does not rely on a known claim; however, for this same reason, inductive reasoning can lead to faulty conclusions. A faulty example of inductive reasoning is, “I saw two brown cats; therefore, the cats in this neighborhood are brown.”
Interactive Element
Sherlock Holmes, master of reasoning: In this video, we see the famous literary character Sherlock Holmes use both inductive and deductive reasoning to form inferences about his friends. As you can see, inductive reasoning can lead to erroneous conclusions. Can you distinguish between his deductive (general to specific) and inductive (specific to general) reasoning?
Interactive Element
Cindy Sifonis is a professor of psychology at Oakland University; listen to her 4:44-minute podcast on Propositional Reasoning to better understand the role it plays in cognitive psychology.
8.04: Venn Diagrams
To visualize the interaction of sets, John Venn in 1880 thought to use overlapping circles, building on a similar idea used by Leonhard Euler in the 18th century. These illustrations are called Venn diagrams.
Definition: Venn Diagrams
A Venn diagram represents each set by a circle, usually drawn inside of a containing box representing the universal set. Overlapping areas indicate elements common to both sets.
Basic Venn diagrams can illustrate the interaction of two or three sets.
Example 1
Create Venn diagrams to illustrate A ∪ B, A ∩ B, and Ac ∩ B.
A ∪ B contains all elements in either set.
A ∩ B contains only those elements in both sets – in the overlap of the circles.
Ac will contain all elements not in the set A. Ac ∩ B will contain the elements in set B that are not in set A.
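These same regions can be computed directly with Python's built-in set type, which is a handy way to check your reading of a diagram. The example elements and the small universal set below are arbitrary illustrations, not part of the original examples.

```python
U = set(range(1, 11))   # a small universal set, needed for complements
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

print(A | B)            # A ∪ B: elements in either set            -> {1, 2, 3, 4, 5, 6}
print(A & B)            # A ∩ B: elements in both sets             -> {3, 4}
print((U - A) & B)      # Ac ∩ B: elements of B that are not in A  -> {5, 6}
```

More complicated expressions, such as (H ∩ F)c ∩ W in the next example, translate the same way: (U - (H & F)) & W.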
Example 2
Use a Venn diagram to illustrate (H ∩ F)c ∩ W
We’ll start by identifying everything in the set H ∩ F.
Now, (H ∩ F)c ∩ W will contain everything not in the set identified above that is also in set W.
Example 3
Create an expression to represent the outlined part of the Venn diagram shown.
The elements in the outlined set are in sets H and F, but are not in set W. So we could represent this set as H ∩ F ∩ Wc.
Try it Now
Create an expression to represent the outlined portion of the Venn diagram shown
8.05: Syllogisms
Syllogisms are an example of deductive reasoning, which derives specifics from what is already known. It was the preferred form of reasoning used by ancient rhetoricians like Aristotle to make logical arguments. A syllogism, commonly used when teaching logic, is a form of deductive reasoning in which a conclusion is supported by major and minor premises; the conclusion of a valid argument can be deduced from those premises. A commonly used example of a syllogism is “All humans are mortal. Socrates is a human. Socrates is mortal.” In this case, the conclusion, “Socrates is mortal,” is derived from the major premise, “All humans are mortal,” and the minor premise, “Socrates is a human.” In some cases, the major and minor premises of a syllogism may be taken for granted as true. In the previous example, the major premise is presumed true because we have no knowledge of an immortal person to disprove the statement. The minor premise is presumed true because Socrates looks and acts like other individuals we know to be human. Detectives or scientists using such logic would want to test their conclusion. We could test our conclusion by stabbing Socrates to see if he dies, but since the logic of the syllogism is sound, it may be better to cut Socrates a break and deem the argument valid. Since most arguments are more sophisticated than the previous example, speakers need to support their premises with research and evidence to establish their validity before deducing their conclusion.
A syllogism can lead to incorrect conclusions if one of the premises isn’t true, as in the following example:
· All presidents have lived in the White House. (Major premise)
· George Washington was president. (Minor premise)
· George Washington lived in the White House. (Conclusion)
In the previous example, the major premise was untrue, since John Adams, our second president, was the first president to live in the White House. This causes the conclusion to be false. A syllogism can also exhibit faulty logic even if the premises are both true but are unrelated, as in the following example:
· Penguins are black and white. (Major premise)
· Some old television shows are black and white. (Minor premise)
· Some penguins are old television shows. (Conclusion)
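One way to see why the penguin argument fails is to model each premise as a relationship between sets; the small example sets below are made-up stand-ins, not real data. A valid form such as "All A are B; x is an A; therefore x is a B" corresponds to a subset check plus a membership check, whereas the faulty form only shows that two sets share a common superset.

```python
# Valid form: All humans are mortal; Socrates is a human; therefore Socrates is mortal.
humans = {"Socrates", "Hypatia"}
mortals = humans | {"Fido"}          # every human is also in the set of mortals

assert humans <= mortals             # major premise: all humans are mortal
assert "Socrates" in humans          # minor premise: Socrates is a human
assert "Socrates" in mortals         # the conclusion follows necessarily

# Faulty form: penguins and some old TV shows are both black-and-white things,
# but sharing a superset says nothing about whether the two sets overlap.
penguins = {"emperor penguin", "adelie penguin"}
old_tv_shows = {"I Love Lucy", "The Twilight Zone"}
black_and_white = penguins | old_tv_shows

assert penguins <= black_and_white and old_tv_shows <= black_and_white
print(penguins & old_tv_shows)       # set() – the "conclusion" does not follow
```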
Humans pay particular attention to stimuli that are salient—things that are unique, negative, colorful, bright, and moving. In many cases, we base our judgments on information that seems to represent, or match, what we expect will happen. When we do so, we are using the representativeness heuristic.
Cognitive accessibility refers to the extent to which knowledge is activated in memory and thus likely to be used to guide our reactions to others. The tendency to overuse accessible social constructs can lead to errors in judgment, such as the availability heuristic and the false consensus bias. Counterfactual thinking about what might have happened and the tendency to anchor on an initial construct and not adjust sufficiently from it are also influenced by cognitive accessibility.
You can use your understanding of social cognition to better understand how you think accurately—but also sometimes inaccurately—about yourself and others.
9.02: Availability
Although which characteristics we use to think about objects or people is determined in part by the salience of their characteristics (our perceptions are influenced by our social situation), individual differences in the person who is doing the judging are also important (our perceptions are influenced by person variables). People vary in the schemas that they find important to use when judging others and when thinking about themselves. One way to consider this importance is in terms of the cognitive accessibility of the schema. Cognitive accessibility refers to the extent to which a schema is activated in memory and thus likely to be used in information processing.
You probably know people who are golf nuts (or maybe tennis or some other sport nuts). All they can talk about is golf. For them, we would say that golf is a highly accessible construct. Because they love golf, it is important to their self-concept; they set many of their goals in terms of the sport, and they tend to think about things and people in terms of it (“if he plays golf, he must be a good person!”). Other people have highly accessible schemas about eating healthy food, exercising, environmental issues, or really good coffee, for instance. In short, when a schema is accessible, we are likely to use it to make judgments of ourselves and others.
Although accessibility can be considered a person variable (a given idea is more highly accessible for some people than for others), accessibility can also be influenced by situational factors. When we have recently or frequently thought about a given topic, that topic becomes more accessible and is likely to influence our judgments. This is in fact the explanation for the results of the priming study you read about earlier—people walked slower because the concept of elderly had been primed and thus was currently highly accessible for them.
Because we rely so heavily on our schemas and attitudes—and particularly on those that are salient and accessible—we can sometimes be overly influenced by them. Imagine, for instance, that I asked you to close your eyes and determine whether there are more words in the English language that begin with the letter R or that have the letter R as the third letter. You would probably try to solve this problem by thinking of words that have each of the characteristics. It turns out that most people think there are more words that begin with R, even though there are in fact more words that have R as the third letter.
You can see that this error can occur as a result of cognitive accessibility. To answer the question, we naturally try to think of all the words that we know that begin with R and that have R in the third position. The problem is that when we do that, it is much easier to retrieve the former than the latter, because we store words by their first, not by their third, letter. We may also think that our friends are nice people because we see them primarily when they are around us (their friends). And the traffic might seem worse in our own neighborhood than we think it is in other places, in part because nearby traffic jams are more accessible for us than are traffic jams that occur somewhere else. And do you think it is more likely that you will be killed in a plane crash or in a car crash? Many people fear the former, even though the latter is much more likely: Your chances of being involved in an aircraft accident are about 1 in 11 million, whereas your chances of being killed in an automobile accident are 1 in 5,000—over 50,000 people are killed on U.S. highways every year.
In this case, the problem is that plane crashes, which are highly salient, are more easily retrieved from our memory than are car crashes, which are less extreme.
The tendency to make judgments of the frequency of an event, or the likelihood that an event will occur, on the basis of the ease with which the event can be retrieved from memory is known as the availability heuristic (Schwarz & Vaughn, 2002; Tversky & Kahneman, 1973).
The idea is that things that are highly accessible (in this case, the term availability is used) come to mind easily and thus may overly influence our judgments. Thus, despite the clear facts, it may be easier to think of plane crashes than of car crashes because the former are so highly salient. If so, the availability heuristic can lead to errors in judgments.
Still another way that the cognitive accessibility of constructs can influence information processing is through their effects on processing fluency. Processing fluency refers to the ease with which we can process information in our environments. When stimuli are highly accessible, they can be quickly attended to and processed, and they therefore have a large influence on our perceptions. This influence is due, in part, to the fact that our body reacts positively to information that we can process quickly, and we use this positive response as a basis of judgment (Reber, Winkielman, & Schwarz, 1998; Winkielman & Cacioppo, 2001).
In one study demonstrating this effect, Norbert Schwarz and his colleagues (Schwarz et al., 1991) asked one set of college students to list 6 occasions when they had acted either assertively or unassertively and asked another set of college students to list 12 such examples. Schwarz determined that for most students, it was pretty easy to list 6 examples but pretty hard to list 12.
The researchers then asked the participants to indicate how assertive or unassertive they actually were. You can see from Figure 1 “Processing Fluency” that the ease of processing influenced judgments. The participants who had an easy time listing examples of their behavior (because they only had to list 6 instances) judged that they did in fact have the characteristics they were asked about (either assertive or unassertive), in comparison with the participants who had a harder time doing the task (because they had to list 12 instances). Other research has found similar effects—people rate that they ride their bicycles more often after they have been asked to recall only a few rather than many instances of doing so (Aarts & Dijksterhuis, 1999), and they hold an attitude with more confidence after being asked to generate few rather than many arguments that support it (Haddock, Rothman, Reber, & Schwarz, 1999).
When it was relatively easy to complete the questionnaire (only 6 examples were required), the student participants rated that they had more of the trait than when the task was more difficult (12 answers were required). Data are from Schwarz et al. (1991).
We are likely to use this type of quick and “intuitive” processing, based on our feelings about how easy it is to complete a task, when we don’t have much time or energy for more in-depth processing, such as when we are under time pressure, tired, or unwilling to process the stimulus in sufficient detail. Of course, it is very adaptive to respond to stimuli quickly (Sloman, 2002; Stanovich & West, 2002; Winkielman, Schwarz, & Nowak, 2002), and it is not impossible that in at least some cases, we are better off making decisions based on our initial responses than on a more thoughtful cognitive analysis (Loewenstein, Weber, Hsee, & Welch, 2001). For instance, Dijksterhuis, Bos, Nordgren, and van Baaren (2006) found that when participants were given tasks requiring decisions that were very difficult to make on the basis of a cognitive analysis of the problem, they made better decisions when they didn’t try to analyze the details carefully but simply relied on their unconscious intuition.
In sum, people are influenced not only by the information they get but by how they get it. We are more highly influenced by things that are salient and accessible and thus easily attended to, remembered, and processed. On the other hand, information that is harder to access from memory, is less likely to be attended to, or takes more effort to consider is less likely to be used in our judgments, even if this information is statistically equally informative or even more informative.
How tall do you think Mt. Everest is? (Don’t Google it—that kind of defeats the purpose) You probably don’t know the exact number, but do you think it’s taller or shorter than 150 feet? Assuming you said “taller”, make a guess. How tall do you think it is?
Mt. Everest is roughly 29,000 ft. in height.
How’d you do? If I were to guess, based on the psychology I’m about to share with you, you probably undershot it. Even if you didn’t, most people would. The reason is what’s called the anchoring heuristic.
Back in the 1970s, Amos Tversky and Daniel Kahneman identified a few reliable mental shortcuts people use when they have to make judgments. Oh, you thought people were completely rational every time they make a decision? It’s nice to think, but it’s not always what happens. We’re busy! We have lives! I can’t sit around and do the math anytime I want to know how far away a stop sign is, so I make estimates based on pretty reliable rules of thumb. The tricks we use to do that are called heuristics.
The Basics of the Anchoring Heuristic
The basic idea of anchoring is that when we’re making a numerical estimate, we’re often biased by the number we start at. In the case of the Mt. Everest estimate, I gave you the starting point of 150 feet. You thought, “Well, it’s taller than that,” so you likely adjusted the estimate from 150 feet to something taller. The tricky thing, though, is that we often don’t adjust far enough away from the anchor.
Let’s jump into an alternate timeline and think about how things could have gone differently. Instead of starting you at 150 feet, this time I ask you whether Mt. Everest is taller or shorter than 300,000 feet. This time you’d probably end up at a final estimate that’s bigger than the correct answer. The reason is you’d start at 300,000 and start adjusting down, but you’d probably stop before you got all the way down to the right answer.
Coming Up With Your Own Anchors
In general, this is a strategy that tends to work for people. After all, when we don’t know an exact number, how are we supposed to figure it out? It seems pretty reasonable to start with a concrete anchor and go from there.
In fact, some research has shown that this is how people make these estimates when left to their own devices. Rather than work from an anchor that’s given to them (like in the Mt. Everest example), people will make their own anchor—a “self-generated anchor.”
For example, if you ask someone how many days it takes Mercury to orbit the sun, she’ll likely start at 365 (the number of days it takes Earth to do so) and then adjust downward. But of course, people usually don’t adjust far enough.
Biased By Completely Arbitrary Starting Points
This paints an interesting picture of how we strive to be reasonable by adopting a pretty decent strategy for coming up with numerical estimates. When you think about it, even though we’re biased by the starting point, it sounds like a decent strategy. After all, you’ve got to start somewhere!
But what if the starting point is totally arbitrary? Sure, the “150 feet” anchor from before probably seems pretty arbitrary, but at the time you might have thought “Why would he have started me at 150 feet? It must be a meaningful starting point.”
The truth is that these anchors bias judgments even when everyone realizes how arbitrary they are. To test this idea, one study asked people to guess the percentage of African countries in the United Nations. To generate a starting point, though, the researchers spun a “Wheel-of-Fortune” type of wheel with numbers between 0 and 100.
For whichever number the wheel landed on, people said whether they thought the real answer was more or less than that number. Even these random anchors ended up biasing people’s estimates. If the wheel had landed on 10, people tended to say about 25% of countries in the UN are African, but if the wheel had landed on 65, they tended to say about 45% of countries in the UN are African. That’s a pretty big difference in estimates, and it comes from a random change in a completely arbitrary value.
People’s judgments can even be biased by anchors based on their own social security numbers.
Biased By Numbers in the Air
Through all of these examples, the anchor has been a key part of the judgment process. That is, someone says “Is it higher or lower than this anchor?” and then you make a judgment. But what if the starting point is in the periphery?
Even when some irrelevant number is just hanging out in the environment somewhere, it can still bias your judgments! These have been termed “incidental anchors.”
For example, participants in one study were given a description of a restaurant and asked to report how much money they would be willing to spend there. Two groups of people made this judgment, and the only difference between them is that for one group, the restaurant’s name was “Studio 17” and for the other group, the restaurant’s name was “Studio 97.” When the restaurant was “Studio 97,” people said they’d spend more money (an average of about $32) than when the restaurant was “Studio 17” (where they reported a willingness to spend about $24).
Other research has shown that people were willing to pay more money for a CD when a totally separate vendor was selling $80 sweatshirts, compared to when that other vendor was selling $10 sweatshirts.
In both of these examples, the anchor was completely irrelevant to the number judgments, and people weren’t even necessarily focused on the anchor. Even still, just having a number in the environment could bias people’s final judgments.
Raising the Anchor and Saying “Ahoy”
Across all of these studies, a consistent pattern emerges: even arbitrary starting points end up biasing numerical judgments. Whether we’re judging prices, heights, ages, or percentages, the number we start at keeps us from reaching the most accurate final answer.
This has turned out to be a well-studied phenomenon as psychologists have explored the limits of its effects. Some results have shown that anchoring effects depend on your personality, and others have shown that they depend on your mood.
In fact, there’s still some debate over how anchoring works. Whereas some evidence argues for the original conception that people adjust their estimates from a starting point, others argue for a “selective accessibility” model in which people entertain a variety of specific hypotheses before settling on an answer. Still others have provided evidence suggesting that anchoring works similarly to persuasion.
Overall, however, the anchoring effect appears robust, and when you’re in the throes of numerical estimates, think about whether your answer could have been biased by other numbers floating around.
Problem 2 (adapted from Joyce & Biddle, 1981):
We know that executive fraud occurs and that it has been associated with many recent financial scandals. And, we know that many cases of management fraud go undetected even when annual audits are performed. Do you think that the incidence of significant executive-level management fraud is more than 10 in 1,000 firms (that is, 1 percent) audited by Big Four accounting firms?
a. Yes, more than 10 in 1,000 Big Four clients have significant executive-level management fraud.
b. No, fewer than 10 in 1,000 Big Four clients have significant executive-level management fraud.
What is your estimate of the number of Big Four clients per 1,000 that have significant executive-level management fraud? (Fill in the blank below with the appropriate number.)
_____ in 1,000 Big Four clients have significant executive-level management fraud.
Regarding the second problem, people vary a great deal in their final assessment of the level of executive-level management fraud, but most think that 10 out of 1,000 is too low. When I run this exercise in class, half of the students respond to the question that I asked you to answer.
The other half receive a similar problem, but instead are asked whether the correct answer is higher or lower than 200 rather than 10. Most people think that 200 is high. But, again, most people claim that this “anchor” does not affect their final estimate. Yet, on average, people who are presented with the question that focuses on the number 10 (out of 1,000) give answers that are about one-half the size of the estimates of those facing questions that use an anchor of 200. When we are making decisions, any initial anchor that we face is likely to influence our judgments, even if the anchor is arbitrary. That is, we insufficiently adjust our judgments away from the anchor.
Sunk cost is a term used in economics referring to non-recoverable investments of time or money. The trap occurs when a person’s aversion to loss impels them to throw good money after bad, because they don’t want to waste their earlier investment. This is vulnerable to manipulation. The more time and energy a cult recruit can be persuaded to spend with the group, the more “invested” they will feel, and, consequently, the more of a loss it will feel to leave that group. Consider the advice of billionaire investor Warren Buffet: “When you find yourself in a hole, the best thing you can do is stop digging” (Levine, 2003).
9.06: Hindsight Bias
Hindsight bias is the opposite of overconfidence bias: it occurs when, looking backward in time, mistakes that were made seem obvious after they have already occurred. In other words, after a surprising event has occurred, many individuals are likely to think that they already knew it was going to happen, perhaps because they are selectively reconstructing the events. Hindsight bias becomes a problem especially when judging someone else’s decisions. For example, let’s say a company driver hears the engine making unusual sounds before starting her morning routine. Being familiar with this car in particular, the driver may conclude that the probability of a serious problem is small and continue to drive the car. During the day, the car malfunctions, stranding her away from the office. It would be easy to criticize her decision to continue to drive the car because, in hindsight, the noises heard in the morning would make us believe that she should have known something was wrong and taken the car in for service. However, the driver may have heard similar sounds before with no consequences, so based on the information available to her at the time, she may have made a reasonable choice. Therefore, it is important for decision makers to remember this bias before passing judgment on other people’s actions.
9.07: Illusory Correlations
The temptation to make erroneous cause-and-effect statements based on correlational research is not the only way we tend to misinterpret data. We also tend to make the mistake of illusory correlations, especially with unsystematic observations. Illusory correlations, or false correlations, occur when people believe that relationships exist between two things when no such relationship exists. One well-known illusory correlation is the supposed effect that the moon’s phases have on human behavior. Many people passionately assert that human behavior is affected by the phase of the moon, and specifically, that people act strangely when the moon is full (Figure 3).
There is no denying that the moon exerts a powerful influence on our planet. The ebb and flow of the ocean’s tides are tightly tied to the gravitational forces of the moon. Many people believe, therefore, that it is logical that we are affected by the moon as well. After all, our bodies are largely made up of water. A meta-analysis of nearly 40 studies consistently demonstrated, however, that the relationship between the moon and our behavior does not exist (Rotton & Kelly, 1985). While we may pay more attention to odd behavior during the full phase of the moon, the rates of odd behavior remain constant throughout the lunar cycle.
Why are we so apt to believe in illusory correlations like this? Often we read or hear about them and simply accept the information as valid. Or, we have a hunch about how something works and then look for evidence to support that hunch, ignoring evidence that would tell us our hunch is false; this is known as confirmation bias. Other times, we find illusory correlations based on the information that comes most easily to mind, even if that information is severely limited. And while we may feel confident that we can use these relationships to better understand and predict the world around us, illusory correlations can have significant drawbacks. For example, research suggests that illusory correlations—in which certain behaviors are inaccurately attributed to certain groups—are involved in the formation of prejudicial attitudes that can ultimately lead to discriminatory behavior.
9.08: Confirmation Bias
Confirmation bias is a person’s tendency to seek, interpret and use evidence in a way that conforms to their existing beliefs. This can lead a person to make poor judgments that limit their ability to learn, to change beliefs in order to justify past actions, and to act in a hostile manner towards people who disagree with them. Confirmation bias can lead a person to perpetuate stereotypes or cause a doctor to inaccurately diagnose a condition.
What is noteworthy about confirmation bias is that it supports the argumentative theory of reasoning. Although confirmation bias is almost universally deplored as a regrettable failing of reason in others, the argumentative theory explains that this bias is adaptive because it aids in forming persuasive arguments by preventing us from being distracted by useless evidence and unhelpful stories.
Interestingly, Charles Darwin made a practice of recording evidence against his theory in a special notebook, because he found that this contradictory evidence was particularly difficult to remember.
9.09: Belief Perseverance Bias
Belief perseverance bias occurs when a person holds on to a previous belief even when there is clear evidence against it. Many people in the skeptic community are often frustrated when, after they have laid out so many sound arguments based on clear reasoning, they still can’t seem to change what someone believes. Once you believe something, it is easy for you to see the reasons for holding that belief, while for others they seem invisible. Try as you might to share your beliefs with others, you still fail at winning them to your side.
“The human understanding when it has once adopted an opinion draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects.”
– Francis Bacon
What we are talking about here is, at the least, confirmation bias, the tendency to seek only information that supports one’s previous belief and to reject information that refutes it. But there is also the issue of belief perseverance. Much of this stems from people’s preference for certainty and continuity. We like our knowledge to be consistent, linear, and absolute. “I already came to a conclusion and am absolutely certain that what I believe is true. I no longer want to think about it. If I exert all of the work required to admit that I am wrong and was wrong, there will be a lot of additional work to learn and integrate that new information. In the meantime, I will have a very difficult time functioning. My life will be much easier if I simply accept that my previous belief was true.” Or as Daniel Kahneman says:
“Sustaining doubt is harder work than sliding into certainty.”
– Daniel Kahneman
Problem 1 (adapted from Alpert & Raiffa, 1969):
Listed below are 10 uncertain quantities. Do not look up any information on these items. For each, write down your best estimate of the quantity. Next, put a lower and upper bound around your estimate, such that you are 98 percent confident that your range surrounds the actual quantity. Respond to each of these items even if you admit to knowing very little about these quantities.
1. The first year the Nobel Peace Prize was awarded
2. The date the French celebrate "Bastille Day"
3. The distance from the Earth to the Moon
4. The height of the Leaning Tower of Pisa
5. Number of students attending Oxford University (as of 2014)
6. Number of people who have traveled to space (as of 2013)
7. 2012-2013 annual budget for the University of Pennsylvania
8. Average life expectancy in Bangladesh (as of 2012)
9. World record for pull-ups in a 24-hour period
10. Number of colleges and universities in the Boston metropolitan area
On the first problem, if you set your ranges so that you were justifiably 98 percent confident, you should expect that approximately 9.8, or nine to 10, of your ranges would include the actual value. So, let’s look at the correct answers:
1. 1901
2. 14th of July
3. 384,403 km (238,857 mi)
4. 56.67 m (183 ft)
5. 22,384 (as of 2014)
6. 536 people (as of 2013)
7. $6.007 billion
8. 70.3 years (as of 2012)
9. 4,321
10. 52
Count the number of your 98% ranges that actually surrounded the true quantities. If you surrounded nine to 10, you were appropriately confident in your judgments. But most readers surround only between three (30%) and seven (70%) of the correct answers, despite claiming 98% confidence that each range would surround the true value. As this problem shows, humans tend to be overconfident in their judgments.
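To put a number on how surprising such scores would be if you really were calibrated, the short sketch below treats the ten items as independent 98%-confidence ranges; that independence is a simplifying assumption made for illustration.

```python
from math import comb

def prob_at_most(k, n=10, p=0.98):
    """P(at most k of n independent 98% ranges contain the true value)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

print(f"expected hits: {10 * 0.98:.1f} out of 10")
print(f"P(7 or fewer hits): {prob_at_most(7):.5f}")   # about 0.00086 – overconfidence, not bad luck
```

In other words, surrounding only seven or fewer of the true values is less than a one-in-a-thousand outcome for someone whose 98% ranges really deserve that label.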
In 1984, Jennifer Thompson was raped. During the attack, she studied the attacker's face, determined to identify him if she survived the attack. When presented with a photo lineup, she identified Cotton as her attacker. Twice, she testified against him, even after seeing Bobby Poole, the man who boasted to fellow inmates that he had committed the crimes for which Cotton was convicted. After Cotton's serving 10.5 years of his sentence, DNA testing conclusively proved that Poole was indeed the rapist.
Thompson has since become a critic of the reliability of eyewitness testimony. She was remorseful after learning that Cotton was an innocent man who was sent to prison. Upon release, Cotton was awarded \$110,000 compensation from the state of North Carolina. Cotton and Thompson have reconciled to become close friends, and tour in support of eyewitness testimony reform.
One of the most remarkable aspects of Jennifer Thompson’s mistaken identity of Ronald Cotton was her certainty. But research reveals a pervasive cognitive bias toward overconfidence, which is the tendency for people to be too certain about their ability to accurately remember events and to make judgments. David Dunning and his colleagues (Dunning, Griffin, Milojkovic, & Ross, 1990) asked college students to predict how another student would react in various situations. Some participants made predictions about a fellow student whom they had just met and interviewed, and others made predictions about their roommates whom they knew very well. In both cases, participants reported their confidence in each prediction, and accuracy was determined by the responses of the people themselves. The results were clear: Regardless of whether they judged a stranger or a roommate, the participants consistently overestimated the accuracy of their own predictions.
Eyewitnesses to crimes are also frequently overconfident in their memories, and there is only a small correlation between how accurate and how confident an eyewitness is. The witness who claims to be absolutely certain about his or her identification (e.g., Jennifer Thompson) is not much more likely to be accurate than one who appears much less sure, making it almost impossible to determine whether a particular witness is accurate or not (Wells & Olson, 2003).
I am sure that you have a clear memory of when you first heard about the 9/11 attacks in 2001, and perhaps also when you heard that Princess Diana was killed in 1997 or when the verdict of the O. J. Simpson trial was announced in 1995. This type of memory, which we experience along with a great deal of emotion, is known as a flashbulb memory: a vivid and emotional memory of an unusual event that people believe they remember very well (Brown & Kulik, 1977).
People are very certain of their memories of these important events, and frequently overconfident. Talarico and Rubin (2003) tested the accuracy of flashbulb memories by asking students to write down their memory of how they had heard the news about either the September 11, 2001, terrorist attacks or about an everyday event that had occurred to them during the same time frame. These recordings were made on September 12, 2001. Then the participants were asked again, either 1, 6, or 32 weeks later, to recall their memories. The participants became less accurate in their recollections of both the emotional event and the everyday events over time. But the participants’ confidence in the accuracy of their memory of learning about the attacks did not decline over time. After 32 weeks the participants were overconfident; they were much more certain about the accuracy of their flashbulb memories than they should have been. Schmolck, Buffalo, and Squire (2000) found similar distortions in memories of news about the verdict in the O. J. Simpson trial.
10.01: Sensation vs. Perception
Sensation and perception are two separate processes that are very closely related. Sensation is input about the physical world obtained by our sensory receptors, and perception is the process by which the brain selects, organizes, and interprets these sensations. In other words, the senses are the physiological basis of perception. Perception of the same sensory information may vary from one person to another because each person’s brain interprets stimuli differently based on that individual’s learning, memory, emotions, and expectations.
Imagine standing on a city street corner. You might be struck by movement everywhere as cars and people go about their business, by the sound of a street musician’s melody or a horn honking in the distance, by the smell of exhaust fumes or of food being sold by a nearby vendor, and by the sensation of hard pavement under your feet.
We rely on our sensory systems to provide important information about our surroundings. We use this information to successfully navigate and interact with our environment so that we can find nourishment, seek shelter, maintain social relationships, and avoid potentially dangerous situations. But while sensory information is critical to our survival, there is so much information available at any given time that we would be overwhelmed if we were forced to attend to all of it. In fact, we are aware of only a fraction of the sensory information taken in by our sensory systems at any given time.
This section will provide an overview of how sensory information is received and processed by the nervous system and how that affects our conscious experience of the world. We begin with the distinction between sensation and perception. Then we consider the physical properties of light and sound stimuli, along with an overview of the basic structure and function of the major sensory systems. The section will close with a discussion of a historically important theory of perception called Gestalt theory, which attempts to explain some underlying principles of perception.
Interactive Element
Seeing something is not the same thing as making sense of what you see. Why is it that our senses are so easily fooled? In this video, you will come to see how our perceptions are not infallible, and they can be influenced by bias, prejudice, and other factors. Psychologists are interested in how these false perceptions influence our thoughts and behavior.
10.02: Classic View of Perception
Distal stimulus, proximal stimulus, percept:
To understand what perception does, you must understand the difference between the proximal (~ approximate = close) stimulus and the distal (~ distant) stimulus or object.
• distal stimuli are objects and events out in the world about you.
• proximal stimuli are the patterns of stimuli from these objects and events that actually reach your senses (eyes, ears, etc.)
Most of the time, perception reflects the properties of the distal objects and events very accurately, much more accurately than you might expect from the apparently limited, varying, unstable pattern of proximal stimulation the brain/mind gets. The problem of perception is to understand how the mind/brain extracts accurate stable perceptions of objects and events from such apparently limited, inadequate information.
In vision, light rays from distal objects form a sharply focused array on the retina at the back of the eye. But this array continually varies as the eyes move, as the observer gets different views of the same object, as the amount of light varies, etc. Although this proximal stimulus array is what actually triggers the neural signals to the brain, we are quite unaware of it or pay little attention to it (most of the time). Instead we are aware of and respond to the distal objects that the proximal stimulus represents. This is completely reasonable: the distal object is what is important.
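To see how dramatically the proximal stimulus can change while the distal object stays the same, here is a small illustrative sketch (not from the original text) that computes the visual angle subtended by an object at several viewing distances. The 1.7 m object height and the distances are arbitrary example values.

```python
import math

def visual_angle_deg(object_size_m: float, distance_m: float) -> float:
    """Visual angle (in degrees) subtended by an object of a given size
    viewed from a given distance."""
    return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

# The same distal object: a person 1.7 m tall, viewed from different distances
for d in (1, 5, 20, 100):
    angle = visual_angle_deg(1.7, d)
    print(f"distance {d:>3} m -> proximal image spans {angle:6.2f} degrees of visual angle")
```

The distal object never changes, yet its projection shrinks from roughly 80 degrees of visual angle at a distance of 1 m to under a degree at 100 m. Perception's task is to recover the stable properties of the distal object from this constantly changing proximal input.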