John Stuart Mill modified his father’s theory of associationism (Mill & Mill, 1869; Mill, 1848) in many ways, including proposing a mental chemistry “in which it is proper to say that the simple ideas generate, rather than . . . compose, the complex ones” (Mill, 1848, p. 533). Mill’s mental chemistry is an early example of emergence, where the properties of a whole (i.e., a complex idea) are more than the sum of the properties of the parts (i.e., a set of associated simple ideas).
The generation of one class of mental phenomena from another, whenever it can be made out, is a highly interesting fact in mental chemistry; but it no more supersedes the necessity of an experimental study of the generated phenomenon than a knowledge of the properties of oxygen and sulphur enables us to deduce those of sulphuric acid without specific observation and experiment. (Mill, 1848, p. 534)
Mathematically, emergence results from nonlinearity (Luce, 1999). If a system is linear, then its whole behaviour is exactly equal to the sum of the behaviours of its parts. The standard pattern associator that was illustrated in Figure 4-1 is an example of such a system. Each output unit in the standard pattern associator computes a net input, which is the sum of all of the individual signals that it receives from the input units. Output unit activity is exactly equal to net input. In other words, output activity is exactly equal to the sum of input signals in the standard pattern associator. In order to increase the power of this type of pattern associator—in order to facilitate emergence—a nonlinear relationship between input and output must be introduced.
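In symbols (the notation here is illustrative rather than taken from the source), with a_i the activity of input unit i, w_ji the weight of the connection from input unit i to output unit j, and a_j the resulting activity of that output unit, the linearity of the standard pattern associator amounts to:

\[
net_j = \sum_i w_{ji}\, a_i, \qquad a_j = net_j
\]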
Neurons demonstrate one powerful type of nonlinear processing. The inputs to a neuron are weak electrical signals, called graded potentials, which stimulate and travel through the dendrites of the receiving neuron. If enough of these weak graded potentials arrive at the neuron’s soma at roughly the same time, then their cumulative effect disrupts the neuron’s resting electrical state. This results in a massive depolarization of the membrane of the neuron’s axon, called an action potential, which is a signal of constant intensity that travels along the axon to eventually stimulate some other neuron.
A crucial property of the action potential is that it is an all-or-none phenomenon, representing a nonlinear transformation of the summed graded potentials. The neuron converts continuously varying inputs into a response that is either on (action potential generated) or off (action potential not generated). This has been called the all-or-none law (Levitan & Kaczmarek, 1991, p. 43): “The all-or-none law guarantees that once an action potential is generated it is always full size, minimizing the possibility that information will be lost along the way.” The all-or-none output of neurons is a nonlinear transformation of summed, continuously varying input, and it is the reason that the brain can be described as digital in nature (von Neumann, 1958).
The all-or-none behaviour of a neuron makes it logically equivalent to the relays or switches that were discussed in Chapter 2. This logical interpretation was exploited in an early mathematical account of the neural information processing (McCulloch & Pitts, 1943). McCulloch and Pitts used the all-or-none law to justify describing neurons very abstractly as devices that made true or false logical assertions about input information:
The all-or-none law of nervous activity is sufficient to insure that the activity of any neuron may be represented as a proposition. Physiological relations existing among nervous activities correspond, of course, to relations among the propositions; and the utility of the representation depends upon the identity of these relations with those of the logical propositions. To each reaction of any neuron there is a corresponding assertion of a simple proposition. (McCulloch & Pitts, 1943, p. 117)
McCulloch and Pitts (1943) invented a connectionist processor, now known as the McCulloch-Pitts neuron (Quinlan, 1991), that used the all-or-none law. Like the output units in the standard pattern associator (Figure 4-1), a McCulloch-Pitts neuron first computes its net input by summing all of its incoming signals. However, it then uses a nonlinear activation function to transform net input into internal activity. The activation function used by McCulloch and Pitts was the Heaviside step function, named after nineteenth-century electrical engineer Oliver Heaviside. This function compares the net input to a threshold. If the net input is less than the threshold, the unit’s activity is equal to 0. Otherwise, the unit’s activity is equal to 1. (In other artificial neural networks [Rosenblatt, 1958, 1962], below-threshold net inputs produced activity of –1.)
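A minimal Python sketch of such a unit, using the 0/1 convention described above; the weights and threshold in the example calls are illustrative choices rather than values from McCulloch and Pitts (they anticipate the AND perceptron discussed later in the chapter).

```python
def heaviside(net, threshold):
    """All-or-none activation: 1 if the net input reaches the threshold, else 0."""
    return 1 if net >= threshold else 0

def mcculloch_pitts(inputs, weights, threshold):
    """Sum the weighted input signals (net input), then apply the step function."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return heaviside(net, threshold)

# Illustrative calls: two unit-weight inputs and a threshold of 1.5 behave like AND.
print(mcculloch_pitts([1, 1], [1, 1], 1.5))  # 1
print(mcculloch_pitts([1, 0], [1, 1], 1.5))  # 0
```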
The output units in the standard pattern associator (Figure 4-1) can be described as using the linear identity function to convert net input into activity, because output unit activity is equal to net input. If one replaced the identity function with the Heaviside step function in the standard pattern associator, it would then become a different kind of network, called a perceptron (Dawson, 2004), which was invented by Frank Rosenblatt during the era in which cognitive science was born (Rosenblatt, 1958, 1962).
Perceptrons (Rosenblatt, 1958, 1962) were artificial neural networks that could be trained to be pattern classifiers: given an input pattern, they would use their nonlinear outputs to decide whether or not the pattern belonged to a particular class. In other words, the nonlinear activation function used by perceptrons allowed them to assign perceptual predicates; standard pattern associators do not have this ability. The nature of the perceptual predicates that a perceptron could learn to assign was a central issue in an early debate between classical and connectionist cognitive science (Minsky & Papert, 1969; Papert, 1988).
The Heaviside step function is nonlinear, but it is also discontinuous. This was problematic when modern researchers sought methods to train more complex networks. Both the standard pattern associator and the perceptron are one-layer networks, meaning that they have only one layer of connections, the direct connections between input and output units (Figure 4-1). More powerful networks arise if intermediate processors, called hidden units, are used to preprocess input signals before sending them on to the output layer. However, it was not until the mid-1980s that learning rules capable of training such networks were invented (Ackley, Hinton, & Sejnowski, 1985; Rumelhart, Hinton, & Williams, 1986b). The use of calculus to derive these new learning rules became possible when the discontinuous Heaviside step function was replaced by a continuous approximation of the all-or-none law (Rumelhart, Hinton, & Williams, 1986b).
One continuous approximation of the Heaviside step function is the sigmoid-shaped logistic function. It asymptotes to a value of 0 as its net input approaches negative infinity, and asymptotes to a value of 1 as its net input approaches positive infinity. When the net input is equal to the threshold (or bias) of the logistic, activity is equal to 0.5. Because the logistic function is continuous, its derivative can be calculated, and calculus can be used as a tool to derive new learning rules (Rumelhart, Hinton, & Williams, 1986b). However, it is still nonlinear, so logistic activities can still be interpreted as truth values assigned to propositions.
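One standard way of writing the logistic, consistent with the properties just listed (an asymptote of 0 for very negative net inputs, an asymptote of 1 for very positive net inputs, and activity of 0.5 when the net input equals the bias θ), is the following; particular simulations may add a gain parameter, so treat this form as a sketch:

\[
a_j = f(net_j) = \frac{1}{1 + e^{-(net_j - \theta_j)}}
\]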
Modern connectionist networks employ many different nonlinear activation functions. Processing units that employ the logistic activation function have been called integration devices (Ballard, 1986) because they compute a sum (net input) and "squash" it into the range between 0 and 1. Other processing units might be tuned to generate maximum responses to a narrow range of net inputs. Ballard (1986) called such processors value units. A different nonlinear continuous function, the Gaussian equation, can be used to mathematically define a value unit, and calculus can be used to derive a learning rule for this type of artificial neural network (Dawson, 1998, 2004; Dawson & Schopflocher, 1992b).
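As a hedged illustration, a value unit's activity can be written as a Gaussian of its net input that peaks when the net input equals the unit's mean μ; the constant in the exponent is an assumption here, and the exact parameterization used by Dawson and Schopflocher should be taken from the original source:

\[
a_j = G(net_j) = e^{-\pi\,(net_j - \mu_j)^2}
\]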
Many other activation functions exist. One review paper has identified 640 different activation functions employed in connectionist networks (Duch & Jankowski, 1999). One characteristic shared by the vast majority of these activation functions is their nonlinearity. Connectionist cognitive science is associationist, but it is also nonlinear.
Both the McCulloch-Pitts neuron (McCulloch & Pitts, 1943) and the perceptron (Rosenblatt, 1958, 1962) used the Heaviside step function to implement the all-or-none law. As a result, both of these architectures generated a "true" or "false" judgment about each input pattern. Thus both of these architectures are digital, and their basic function is pattern recognition or pattern classification.
The two-valued logic that was introduced in Chapter 2 can be cast in the context of such digital pattern recognition. In the two-valued logic, functions are computed over two input propositions, p and q, which themselves can either be true or false. As a result, there are only four possible combinations of p and q, which are given in the first two columns of Table 4-1. Logical functions in the two-valued logic are themselves judgments of true or false that depend on combinations of the truth values of the input propositions p and q. As a result, there are 16 different logical operations that can be defined in the two-valued logic; these were provided in Table 2-2.
The truth tables for two of the sixteen possible operations in the two-valued logic are provided in the last two columns of Table 4-1. One is the AND operation (p·q), which is only true when both propositions are true. The other is the XOR operation (p∧q), which is true only when one or the other of the propositions, but not both, is true.
p   q   p·q   p∧q
1   1    1     0
1   0    0     1
0   1    0     1
0   0    0     0
Table 4-1. Truth tables for the logical operations AND (p·q) and XOR (p ∧ q), where the truth value of each operation is given as a function of the truth of each of two propositions, p and q. ‘1’ indicates “true” and ‘0’ indicates “false.” The logical notation is taken from McCulloch (1988b).
That AND or XOR are examples of digital pattern recognition can be made more explicit by representing their truth tables graphically as pattern spaces. In a pattern space, an entire row of a truth table is represented as a point on a graph. The coordinates of a point in a pattern space are determined by the truth values of the input propositions. The colour of the point represents the truth value of the operation computed over the inputs.
Figure 4-2A illustrates the pattern space for the AND operation of Table 4-1. Note that it has four graphed points, one for each row of the truth table. The coordinates of each graphed point—(1,1), (1,0), (0,1), and (0,0)—indicate the truth values of the propositions p and q. The AND operation is only true when both of these propositions are true. This is represented by colouring the point at coordinate (1,1) black. The other three points are coloured white, indicating that the logical operator returns a “false” value for each of them.
Figure 4-2. (A) Pattern space for AND; (B) Pattern space for XOR.
Pattern spaces are used for digital pattern recognition by carving them into decision regions. If a point that represents a pattern falls in one decision region, then it is classified in one way. If that point falls in a different decision region, then it is classified in a different way. Learning how to classify a set of patterns involves learning how to correctly carve the pattern space up into the desired decision regions.
The AND problem is an example of a linearly separable problem. This is because a single straight cut through the pattern space divides it into two decision regions that generate the correct pattern classifications. The dashed line in Figure 4-2A indicates the location of this straight cut for the AND problem. Note that the one “true” pattern falls on one side of this cut, and that the three “false” patterns fall on the other side of this cut.
Not all problems are linearly separable. A linearly nonseparable problem is one in which a single straight cut is not sufficient to separate all of the patterns of one type from all of the patterns of another type. An example of a linearly nonseparable problem is the XOR problem, whose pattern space is illustrated in Figure 4-2B. Note that the positions of the four patterns in Figure 4-2B are identical to the positions in Figure 4-2A, because both pattern spaces involve the same propositions. The only difference is the colouring of the points, indicating that XOR involves making a different judgment than AND. However, this difference between graphs is important, because now it is impossible to separate all of the black points from all of the white points with a single straight cut. Instead, two different cuts are required, as shown by the two dashed lines in Figure 4-2B. This means that XOR is not linearly separable.
Linear separability defines the limits of what can be computed by a Rosenblatt perceptron (Rosenblatt, 1958, 1962) or by a McCulloch-Pitts neuron (McCulloch & Pitts, 1943). That is, if some pattern recognition problem is linearly separable, then either of these architectures is capable of representing a solution to that problem. For instance, because AND is linearly separable, it can be computed by a perceptron, such as the one illustrated in Figure 4-3.
Figure 4-3. A Rosenblatt perceptron that can compute the AND operation.
This perceptron consists of two input units whose activities respectively represent the state (i.e., either 0 or 1) of the propositions p and q. Each of these input units sends a signal through a connection to an output unit; the figure indicates that the weight of each connection is 1. The output unit performs two operations. First, it computes its net input by summing the two signals that it receives (the Σ component of the output unit). Second, it transforms the net input into activity by applying the Heaviside step function. The figure indicates in the second component of the output unit that the threshold for this activation function (θ) is 1.5. This means that output unit activity will only be 1 if net input is greater than or equal to 1.5; otherwise, output unit activity will be equal to 0.
If one considers the four different combinations of input unit activities that would be presented to this device—(1,1), (1,0), (0,1), and (0,0)—then it is clear that the only time that output unit activity will equal 1 is when both input units are activated with 1 (i.e., when p and q are both true). This is because this situation will produce a net input of 2, which exceeds the threshold. In all other cases, the net input will either be 1 or 0, which will be less than the threshold, and which will therefore produce output unit activity of 0.
The ability of the Figure 4-3 perceptron to compute AND can be described in terms of the pattern space in Figure 4-2A. The threshold and the connection weights of the perceptron provide the location and orientation of the single straight cut that carves the pattern space into decision regions (the dashed line in Figure 4-2A). Activating the input units with some pattern presents a pattern space location to the perceptron. The perceptron examines this location to decide on which side of the cut the location lies, and responds accordingly.
This pattern space account of the Figure 4-3 perceptron also points to a limitation. When the Heaviside step function is used as an activation function, the perceptron only defines a single straight cut through the pattern space and therefore can only deal with linearly separable problems. A perceptron akin to the one illustrated in Figure 4-3 would not be able to compute XOR (Figure 4-2B) because the output unit is incapable of making the two required cuts in the pattern space.
How does one extend computational power beyond the perceptron? One approach is to add additional processing units, called hidden units, which are intermediaries between input and output units. Hidden units can detect additional features that transform the problem by increasing the dimensionality of the pattern space. As a result, the use of hidden units can convert a linearly nonseparable problem into a linearly separable one, permitting a single binary output unit to generate the correct responses.
Figure 4-4 shows how the AND circuit illustrated in Figure 4-3 can be added as a hidden unit to create a multilayer perceptron that can compute the linearly nonseparable XOR operation (Rumelhart, Hinton, & Williams, 1986a). This perceptron also has two input units whose activities respectively represent the state of the propositions p and q. Each of these input units sends a signal through a connection to an output unit; the figure indicates that the weight of each connection is 1. The threshold of the output's activation function (θ) is 0.5. If we were to ignore the hidden unit in this network, the output unit would be computing OR, turning on when one or both of the input propositions are true.
However, this network does not compute OR, because the input units are also connected to a hidden unit, which in turn sends a third signal to be added into the output unit’s net input. The hidden unit is identical to the AND circuit from Figure 4-3. The signal that it sends to the output unit is strongly inhibitory; the weight of the connection between the two units is –2.
Figure 4-4. A multilayer perceptron that can compute XOR.
The action of the hidden unit is crucial to the behaviour of the system. When neither or only one of the input units activates, the hidden unit does not respond, so it sends a signal of 0 to the output unit. As a result, in these three situations the output unit turns on when either of the inputs is on (because the net input is over the threshold) and turns off when neither input unit is on. When both input units are on, they send an excitatory signal to the output unit. However, they also send a signal that turns on the hidden unit, causing it to send inhibition to the output unit. In this situation, the net input of the output unit is 1 + 1 – 2 = 0, which is below the threshold, producing zero output unit activity. The entire circuit therefore performs the XOR operation.
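Tracing the Figure 4-4 circuit in a few lines of code makes its behaviour easy to verify. This is a sketch of the network as described in the text (connection weights of 1, a hidden-to-output weight of –2, and thresholds of 1.5 and 0.5), not code from the original source.

```python
def step(net, threshold):
    """Heaviside activation: 1 if the net input reaches the threshold, else 0."""
    return 1 if net >= threshold else 0

def xor_network(p, q):
    hidden = step(1 * p + 1 * q, 1.5)             # the AND unit from Figure 4-3
    return step(1 * p + 1 * q - 2 * hidden, 0.5)  # output unit with threshold 0.5

for p in (0, 1):
    for q in (0, 1):
        print(p, q, xor_network(p, q))  # reproduces the XOR column of Table 4-1
```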
The behaviour of the Figure 4-4 multilayer perceptron can also be related to the pattern space of Figure 4-2B. The lower cut in that pattern space is provided by the output unit. The upper cut in that pattern space is provided by the hidden unit. The coordination of the two units permits the circuit to solve this linearly nonseparable problem.
Interpreting networks in terms of the manner in which they carve a pattern space into decision regions suggests that learning can be described as determining where cuts in a pattern space should be made. Any hidden or output unit that uses a nonlinear, monotonic function like the Heaviside or the logistic can be viewed as making a single cut in a space. The position and orientation of this cut is determined by the weights of the connections feeding into the unit, as well as the threshold or bias (θ) of the unit. A learning rule modifies all of these components. (The bias of a unit can be trained as if it were just another connection weight by assuming that it is the signal coming from a special, extra input unit that is always turned on [Dawson, 2004, 2005].)
The multilayer network illustrated in Figure 4-4 is atypical because it directly connects input and output units. Most modern networks eliminate such direct connections by using at least one layer of hidden units to isolate the input units from the output units, as shown in Figure 4-5. In such a network, the hidden units can still be described as carving a pattern space, with point coordinates provided by the input units, into a decision region. However, because the output units do not have direct access to input signals, they do not carve the pattern space. Instead, they divide an alternate space, the hidden unit space, into decision regions. The hidden unit space is similar to the pattern space, with the exception that the coordinates of the points that are placed within it are provided by hidden unit activities.
Figure 4-5. A typical multilayer perceptron has no direct connections between input and output units.
When there are no direct connections between input and output units, the hidden units provide output units with an internal representation of input unit activity. Thus it is proper to describe a network like the one illustrated in Figure 4-5 as being just as representational (Horgan & Tienson, 1996) as a classical model. That connectionist representations can be described as a nonlinear transformation of the input unit representation, permitting higher-order nonlinear features to be detected, is why a network like the one in Figure 4-5 is far more powerful than one in which no hidden units appear (e.g., Figure 4-3).
When there are no direct connections between input and output units, the representations held by hidden units conform to the classical sandwich that characterized classical models (Hurley, 2001)—a connectionist sandwich (Calvo & Gomila, 2008, p. 5): “Cognitive sandwiches need not be Fodorian. A feed forward connectionist network conforms equally to the sandwich metaphor. The input layer is identified with a perception module, the output layer with an action one, and hidden space serves to identify metrically, in terms of the distance relations among patterns of activation, the structural relations that obtain among concepts. The hidden layer this time contains the meat of the connectionist sandwich.”
A difference between classical and connectionist cognitive science is not that the former is representational and the latter is not. Both are representational, but they disagree about the nature of mental representations. "The major lesson of neural network research, I believe, has been to thus expand our vision of the ways a physical system like the brain might encode and exploit information and knowledge" (Clark, 1997, p. 58).
In the preceding sections some of the basic characteristics of connectionist networks were presented. These elements of connectionist cognitive science have emerged as a reaction against key assumptions of classical cognitive science. Connectionist cognitive scientists replace rationalism with empiricism, and recursion with chains of associations.
Although connectionism reacts against many of the elements of classical cognitive science, there are many similarities between the two. In particular, the multiple levels of analysis described in Chapter 2 apply to connectionist cognitive science just as well as they do to classical cognitive science (Dawson, 1998). The next two sections of this chapter focus on connectionist research in terms of one of these, the computational level of investigation.
Connectionism’s emphasis on both empiricism and associationism has raised the spectre, at least in the eyes of many classical cognitive scientists, of a return to the behaviourism that cognitivism itself revolted against. When cognitivism arose, some of its early successes involved formal proofs that behaviourist and associationist theories were incapable of accounting for fundamental properties of human languages (Bever, Fodor, & Garrett, 1968; Chomsky, 1957, 1959b, 1965, 1966). With the rise of modern connectionism, similar computational arguments have been made against artificial neural networks, essentially claiming that they are not sophisticated enough to belong to the class of universal machines (Fodor & Pylyshyn, 1988).
In Section 4.6, “Beyond the Terminal Meta-postulate,” we consider the in-principle power of connectionist networks, beginning with two different types of tasks that networks can be used to accomplish. One is pattern classification: assigning an input pattern in an all-or-none fashion to a particular category. A second is function approximation: generating a continuous response to a set of input values.
Section 4.6 then proceeds to computational analyses of how capable networks are of accomplishing these tasks. These analyses prove that networks are as powerful as need be, provided that they include hidden units. They can serve as arbitrary pattern classifiers, meaning that they can solve any pattern classification problem with which they are faced. They can also serve as universal function approximators, meaning that they can fit any continuous function to an arbitrary degree of precision. This computational power suggests that artificial neural networks belong to the class of universal machines. The section ends with a brief review of computational analyses, which conclude that connectionist networks indeed can serve as universal Turing machines and are therefore computationally sophisticated enough to serve as plausible models for cognitive science.
Computational analyses need not limit themselves to considering the general power of artificial neural networks. Computational analyses can be used to explore more specific questions about networks. This is illustrated in Section 4.7, “What Do Output Unit Activities Represent?” in which we use formal methods to answer the question that serves as the section’s title. The section begins with a general discussion of theories that view biological agents as intuitive statisticians who infer the probability that certain events may occur in the world (Peterson & Beach, 1967; Rescorla, 1967, 1968). An empirical result is reviewed that suggests artificial neural networks are also intuitive statisticians, in the sense that the activity of an output unit matches the probability that a network will be “rewarded” (i.e., trained to turn on) when presented with a particular set of cues (Dawson et al., 2009).
The section then ends by providing an example computational analysis: a formal proof that output unit activity can indeed literally be interpreted as a conditional probability. This proof takes advantage of known formal relations between neural networks and the Rescorla-Wagner learning rule (Dawson, 2008; Gluck & Bower, 1988; Sutton & Barto, 1981), as well as known formal relations between the Rescorla-Wagner learning rule and contingency theory (Chapman & Robbins, 1990).
Connectionist networks are associationist devices that map inputs to outputs, systems that convert stimuli into responses. However, we saw in Chapter 3 that classical cognitive scientists had established that the stimulus-response theories of behaviourist psychology could not adequately deal with the recursive structure of natural language (Chomsky, 1957, 1959b, 1965, 1966). In the terminal meta-postulate argument (Bever, Fodor, and Garrett, 1968), it was noted that the rules of associative theory defined a “terminal vocabulary of a theory, i.e., over the vocabulary in which behavior is described” (p. 583). Bever, Fodor, and Garrett then proceeded to prove that the terminal vocabulary of associationism is not powerful enough to accept or reject languages that have recursive clausal structure.
If connectionist cognitive science is another instance of associative or behaviourist theory, then it stands to reason that it too is subject to these same problems and therefore lacks the computational power required of cognitive theory. One of the most influential criticisms of connectionism has essentially made this point, arguing against the computational power of artificial neural networks because they lack the componentiality and systematicity associated with recursive rules that operate on components of symbolic expressions (Fodor & Pylyshyn, 1988). If artificial neural networks do not belong to the class of universal machines, then they cannot compete against the physical symbol systems that define classical cognitive science (Newell, 1980; Newell & Simon, 1976).
What tasks can artificial neural networks perform, and how well can they perform them? To begin, let us consider the most frequent kind of problem that artificial neural networks are used to solve: pattern recognition (Pao, 1989; Ripley, 1996). Pattern recognition is a process by which varying input patterns, defined by sets of features which may have continuous values, are assigned to discrete categories in an all-or-none fashion (Harnad, 1987). In other words, it requires that a system perform a mapping from continuous inputs to discrete outputs. Artificial neural networks are clearly capable of performing this kind of mapping, provided either that their output units use a binary activation function like the Heaviside, or that their continuous output is extreme enough to be given a binary interpretation. In this context, the pattern of “on” and “off ” responses in a set of output units represents the digital name of the class to which an input pattern has been assigned.
We saw earlier that pattern recognition problems can be represented using pattern spaces (Figure 4-2). To classify patterns, a system carves a pattern space into decision regions that separate all of the patterns belonging to one class from the patterns that belong to others. An arbitrary pattern classifier would be a system that could, in principle, solve any pattern recognition problem with which it was faced. In order to have such ability, such a system must have complete flexibility in carving a pattern space into decision regions: it must be able to slice the space into regions of any required shape or number.
Artificial neural networks can categorize patterns. How well can they do so? It has been shown that a multilayer perceptron with three layers of connections—two layers of hidden units intervening between the input and output layers—is indeed an arbitrary pattern classifier (Lippmann, 1987, 1989). This is because the two layers of hidden units provide the required flexibility in carving pattern spaces into decision regions, assuming that the hidden units use a sigmoid-shaped activation function such as the logistic. "No more than three layers are required in perceptron-like feed-forward nets" (Lippmann, 1987, p. 16).
When output unit activity is interpreted digitally—as delivering “true” or “false” judgments—artificial neural networks can be interpreted as performing one kind of task, pattern classification. However, modern networks use continuous activation functions that do not need to be interpreted digitally. If one applies an analog interpretation to output unit activity, then networks can be interpreted as performing a second kind of input-output mapping task, function approximation.
In function approximation, an input is a set of numbers that represents the values of variables passed into a function, i.e., the values of the set x1, x2, x3, . . ., xN. The output is a single value y that is the result of computing some function of those variables, i.e., y = f(x1, x2, x3, . . ., xN). Many artificial neural networks have been trained to approximate functions (Girosi & Poggio, 1990; Hartman, Keeler, & Kowalski, 1989; Moody & Darken, 1989; Poggio & Girosi, 1990; Renals, 1989). In these networks, the value of each input variable is represented by the activity of an input unit, and the continuous value of an output unit's activity represents the computed value of the function of those input variables.
A system that is most powerful at approximating functions is called a universal function approximator. Consider taking any continuous function and examining a region of this function from a particular starting point (e.g., one set of input values) to a particular ending point (e.g., a different set of input values). A universal function approximator is capable of approximating the shape of the function between these bounds to an arbitrary degree of accuracy.
Artificial neural networks can approximate functions. How well can they do so? A number of proofs have shown that a multilayer perceptron with two layers of connections—in other words, a single layer of hidden units intervening between the input and output layers—is capable of universal function approximation (Cotter, 1990; Cybenko, 1989; Funahashi, 1989; Hartman, Keeler, & Kowalski, 1989; Hornik, Stinchcombe, & White, 1989). "If we have the right connections from the input units to a large enough set of hidden units, we can always find a representation that will perform any mapping from input to output" (Rumelhart, Hinton, & Williams, 1986a, p. 319).
That multilayered networks have the in-principle power to be arbitrary pattern classifiers or universal function approximators suggests that they belong to the class “universal machine,” the same class to which physical symbol systems belong (Newell, 1980). Newell (1980) proved that physical symbol systems belonged to this class by showing how a universal Turing machine could be simulated by a physical symbol system. Similar proofs exist for artificial neural networks, firmly establishing their computational power.
The Turing equivalence of connectionist networks has long been established. McCulloch and Pitts (1943) proved that a network of McCulloch-Pitts neurons could be used to build the machine head of a universal Turing machine; universal power was then achieved by providing this system with an external memory. “To psychology, however defined, specification of the net would contribute all that could be achieved in that field” (p. 131). More modern results have used the analog nature of modern processors to internalize the memory, indicating that an artificial neural network can simulate the entire Turing machine (Siegelmann, 1999; Siegelmann & Sontag, 1991, 1995).
Modern associationist psychologists have been concerned about the implications of the terminal meta-postulate and have argued against it in an attempt to free their theories from its computational shackles (Anderson & Bower, 1973; Paivio, 1986). The hidden units of modern artificial neural networks break these shackles by capturing higher-order associations—associations between associations—that are not defined in a vocabulary restricted to input and output activities. The presence of hidden units provides enough power to modern networks to firmly plant them in the class "universal machine" and to make them viable alternatives to classical simulations.
When McCulloch and Pitts (1943) formalized the information processing of neurons, they did so by exploiting the all-or-none law. As a result, whether a neuron responded could be interpreted as assigning a "true" or "false" value to some proposition computed over the neuron's inputs. McCulloch and Pitts were able to design artificial neurons capable of acting as 14 of the 16 possible primitive functions on the two-valued logic that was described in Chapter 2.
McCulloch and Pitts (1943) formalized the all-or-none law by using the Heaviside step equation as the activation function for their artificial neurons. Modern activation functions such as the logistic equation provide a continuous approximation of the step function. It is also quite common to interpret the logistic function in digital, step function terms. This is done by interpreting a modern unit as being “on” or “off ” if its activity is sufficiently extreme. For instance, in simulations conducted with my laboratory software (Dawson, 2005) it is typical to view a unit as being “on” if its activity is 0.9 or higher, or “off ” if its activity is 0.1 or lower.
Digital activation functions, or digital interpretations of continuous activation functions, mean that pattern recognition is a primary task for artificial neural networks (Pao, 1989; Ripley, 1996). When a network performs pattern recognition, it is trained to generate a digital or binary response to an input pattern, where this response is interpreted as representing a class to which the input pattern is unambiguously assigned.
What does the activity of a unit in a connectionist network mean? Under the strict digital interpretation described above, activity is interpreted as the truth value of some proposition represented by the unit. However, modern activation functions such as the logistic or Gaussian equations have continuous values, which permit more flexible kinds of interpretation. Continuous activity might model the frequency with which a real unit (i.e., a neuron) generates action potentials. It could represent a degree of confidence in asserting that a detected feature is present, or it could represent the amount of a feature that is present (Waskan & Bechtel, 1997).
In this section, a computational-level analysis is used to prove that, in the context of modern learning theory, continuous unit activity can be unambiguously interpreted as one candidate measure of degree of confidence: conditional probability (Waskan & Bechtel, 1997).
In experimental psychology, some learning theories are motivated by the ambiguous or noisy nature of the world. Cues in the real world do not signal outcomes with complete certainty (Dewey, 1929). It has been argued that adaptive systems deal with worldly uncertainty by becoming “intuitive statisticians,” whether these systems are humans (Peterson & Beach, 1967) or animals (Gallistel, 1990; Shanks, 1995). An agent that behaves like an intuitive statistician detects contingency in the world, because cues signal the likelihood (and not the certainty) that certain events (such as being rewarded) will occur (Rescorla, 1967, 1968).
Evidence indicates that a variety of organisms are intuitive statisticians. For example, the matching law is a mathematical formalism that was originally used to explain variations in response frequency. It states that the rate of a response reflects the rate of its obtained reinforcement. For instance, if response A is reinforced twice as frequently as response B, then A will appear twice as frequently as B (Herrnstein, 1961). The matching law also predicts how response strength varies with reinforcement frequency (de Villiers & Herrnstein, 1976). Many results show that the matching law governs numerous tasks in psychology and economics (Davison & McCarthy, 1988; de Villiers, 1977; Herrnstein, 1997).
Another phenomenon that is formally related (Herrnstein & Loveland, 1975) to the matching law is probability matching, which concerns choices made by agents faced with competing alternatives. Under probability matching, the likelihood that an agent makes a choice amongst different alternatives mirrors the probability associated with the outcome or reward of that choice (Vulkan, 2000). Probability matching has been demonstrated in a variety of organisms, including insects (Fischer, Couvillon, & Bitterman, 1993; Keasar et al., 2002; Longo, 1964; Niv et al., 2002), fish (Behrend & Bitterman, 1961), turtles (Kirk & Bitterman, 1965), pigeons (Graf, Bullock, & Bitterman, 1964), and humans (Estes & Straughan, 1954).
Perceptrons, too, can match probabilities (Dawson et al., 2009). Dawson et al. used four different cues, or discriminative stimuli (DSs), but did not “reward” them 100 percent of the time. Instead, they rewarded one DS 20 percent of the time, another 40 percent, a third 60 percent, and a fourth 80 percent. After 300 epochs, where each epoch involved presenting each cue alone 10 different times in random order, these contingencies were inverted (i.e., subtracted from 100). The dependent measure was perceptron activity when a cue was presented; the activation function employed was the logistic. Some results of this experiment are presented in Figure 4-6. It shows that after a small number of epochs, the output unit activity becomes equal to the probability that a presented cue was rewarded. It also shows that perceptron responses quickly readjust when contingencies are suddenly modified, as shown by the change in Figure 4-6 around epoch 300. In short, perceptrons are capable of probability matching.
Figure 4-6. Probability matching by perceptrons. Each line shows the perceptron activation when a different cue (or discriminative stimulus, DS) is presented. Activity levels quickly become equal to the probability that each cue was reinforced (Dawson et al., 2009).
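The following Python sketch reproduces the logic of this kind of experiment; it is not Dawson et al.'s (2009) code, and the learning rate, weight initialization, number of epochs, and the decision to hold the unit's bias at 0 are illustrative assumptions. A logistic perceptron is trained with the delta rule on four discriminative stimuli rewarded with probabilities 0.2, 0.4, 0.6, and 0.8; after training, each cue's activity should settle near its reward probability, with some trial-to-trial fluctuation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cues, eta = 4, 0.1
reward_p = np.array([0.2, 0.4, 0.6, 0.8])  # probability that each DS is "rewarded"
w = rng.normal(0.0, 0.1, n_cues)           # one connection weight per cue; bias fixed at 0

def logistic(net):
    return 1.0 / (1.0 + np.exp(-net))

for _ in range(300):
    # each epoch presents every cue alone 10 times, in random order
    for cue in rng.permutation(np.repeat(np.arange(n_cues), 10)):
        x = np.zeros(n_cues)
        x[cue] = 1.0
        o = logistic(w @ x)                      # perceptron activity for this cue
        t = float(rng.random() < reward_p[cue])  # probabilistic reward (1) or no reward (0)
        w += eta * (t - o) * o * (1.0 - o) * x   # delta rule for a logistic output unit

for cue in range(n_cues):
    x = np.zeros(n_cues)
    x[cue] = 1.0
    print(f"DS{cue + 1}: activity {logistic(w @ x):.2f} vs. reward probability {reward_p[cue]}")
```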
That perceptrons match probabilities relates them to contingency theory. Formal statements of this theory formalize contingency as a contrast between conditional probabilities (Allan, 1980; Cheng, 1997; Cheng & Holyoak, 1995; Cheng & Novick, 1990, 1992; Rescorla, 1967, 1968).
For instance, consider the simple situation in which a cue can either be presented, C, or not, ~C. Associated with either of these states is an outcome (e.g., a reward) that can either occur, O, or not, ~O. In this simple situation, involving a single cue and a single outcome, the contingency between the cue and the outcome is formally defined as the difference in conditional probabilities, ΔP, where ΔP = P(O|C) – P(O|~C) (Allan, 1980). More sophisticated models, such as the probabilistic contrast model (e.g., Cheng & Novick, 1990) or the power PC theory (Cheng, 1997), define more complex probabilistic contrasts that are possible when multiple cues occur and can be affected by the context in which they are presented.
Empirically, the probability matching of perceptrons, illustrated in Figure 4-6, suggests that their behaviour can represent ΔP. When a cue is presented, activity is equal to the probability that the cue signals reinforcement—that is, P(O|C). This implies that the difference between a perceptron’s activity when a cue is presented and its activity when a cue is absent must be equal to ΔP. Let us now turn to a computational analysis to prove this claim.
What is the formal relationship between formal contingency theories and theories of associative learning (Shanks, 2007)? Researchers have compared the predictions of an influential account of associative learning, the Rescorla-Wagner model (Rescorla & Wagner, 1972), to formal theories of contingency (Chapman & Robbins, 1990; Cheng, 1997; Cheng & Holyoak, 1995). It has been shown that while in some instances the Rescorla-Wagner model predicts the conditional contrasts defined by a formal contingency theory, in other situations it fails to generate these predictions (Cheng, 1997).
Comparisons between contingency learning and Rescorla-Wagner learning typically involve determining equilibria of the Rescorla-Wagner model. An equilibrium of the Rescorla-Wagner model is a set of associative strengths defined by the model, at the point where the asymptote of changes in error defined by Rescorla-Wagner learning approaches zero (Danks, 2003). In the simple case described earlier, involving a single cue and a single outcome, the Rescorla-Wagner model is identical to contingency theory. This is because at equilibrium, the associative strength between cue and outcome is exactly equal to ΔP (Chapman & Robbins, 1990).
There is also an established formal relationship between the Rescorla-Wagner model and the delta rule learning of a perceptron (Dawson, 2008; Gluck & Bower, 1988; Sutton & Barto, 1981). Thus by examining the equilibrium state of a perceptron facing a simple contingency problem, we can formally relate this kind of network to contingency theory and arrive at a formal understanding of what output unit activity represents.
When a continuous activation function is used in a perceptron, calculus can be used to determine the equilibrium of the perceptron. Let us do so for a single-cue situation in which some cue, C, when presented, is rewarded on a trials and is not rewarded on b trials. Similarly, when the cue is not presented, the perceptron is rewarded on c trials and is not rewarded on d trials. Note that to reward a perceptron is to train it to generate a desired response of 1, and that to not reward a perceptron is to train it to generate a desired response of 0, because the desired response indicates the presence or absence of the unconditioned stimulus (Dawson, 2008).
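Written as a traditional contingency table over these frequencies, the conditional probabilities, and the contrast ΔP defined earlier, are:

\[
P(O \mid C) = \frac{a}{a + b}, \qquad P(O \mid \sim C) = \frac{c}{c + d}, \qquad \Delta P = \frac{a}{a + b} - \frac{c}{c + d}
\]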
Assume that when the cue is present, the logistic activation function computes an activation value that we designate as o_C, and that when the cue is absent it returns the activation value designated as o_~C. We can now define the total error of responding for the perceptron, that is, its total error for the (a + b + c + d) patterns that represent a single epoch, in which each instance of the contingency problem is presented once. For instance, on a trial in which C is presented and the perceptron is reinforced, the perceptron's error for that trial is the squared difference between the reward, 1, and o_C. As there are a of these trials, the total contribution of this type of trial to overall error is a(1 – o_C)². Applying this logic to the other three pairings of cue and outcome, total error E can be defined as follows:
\[
E = a(1 - o_C)^2 + b(0 - o_C)^2 + c(1 - o_{\sim C})^2 + d(0 - o_{\sim C})^2
\]
\[
E = a(1 - o_C)^2 + b(o_C)^2 + c(1 - o_{\sim C})^2 + d(o_{\sim C})^2
\]
For a perceptron to be at equilibrium, it must have reached a state in which total error has been optimized, so that the error can no longer be decreased by using the delta rule to alter the perceptron's weight. To determine the equilibrium of the perceptron for the single-cue contingency problem, we begin by taking the derivative of the error equation with respect to the activity of the perceptron when the cue is present, o_C:

\[
\frac{\partial E}{\partial o_C} = -2a(1 - o_C) + 2b\,o_C
\]
One condition of the perceptron at equilibrium is that o_C is a value that causes this derivative to be equal to 0. The equation below sets the derivative to 0 and solves for o_C. The result is a/(a + b), which is equal to the conditional probability P(O|C) if the single-cue experiment is represented with a traditional contingency table:

\[
0 = -2a(1 - o_C) + 2b\,o_C \;\;\Rightarrow\;\; o_C = \frac{a}{a + b} = P(O \mid C)
\]
Similarly, we can take the derivative of the error equation with respect to the activity of the perceptron when the cue is not present, o_~C:

\[
\frac{\partial E}{\partial o_{\sim C}} = -2c(1 - o_{\sim C}) + 2d\,o_{\sim C}
\]

A second condition of the perceptron at equilibrium is that o_~C is a value that causes the derivative above to be equal to 0. As before, we can set the derivative to 0 and solve for the value of o_~C. This time the result is c/(c + d), which in a traditional contingency table is equal to the conditional probability P(O|~C):

\[
0 = -2c(1 - o_{\sim C}) + 2d\,o_{\sim C} \;\;\Rightarrow\;\; o_{\sim C} = \frac{c}{c + d} = P(O \mid \sim C)
\]

The main implication of the above equations is that they show that perceptron activity is literally a conditional probability. This provides a computational proof for the empirical hypothesis about perceptron activity that was generated from examining Figure 4-6.

A second implication of the proof is that when faced with the same contingency problem, a perceptron's equilibrium is not the same as that of the Rescorla-Wagner model. At equilibrium, the associative strength for the cue C that is determined by Rescorla-Wagner training is literally ΔP (Chapman & Robbins, 1990). This is not the case for the perceptron. For the perceptron, ΔP must be computed by taking the difference between its output when the cue is present and its output when the cue is absent. That is, ΔP is not directly represented as a connection weight, but instead is the difference between perceptron behaviours under different cue situations—that is, the difference between the conditional probability output by the perceptron when a cue is present and the conditional probability output by the perceptron when the cue is absent.

Importantly, even though the perceptron and the Rescorla-Wagner model achieve different equilibria for the same problem, it is clear that both are sensitive to contingency when it is formally defined as ΔP. Differences between the two reflect an issue that was raised in Chapter 2: there exist many different possible algorithms for computing the same function. Key differences between the perceptron and the Rescorla-Wagner model—in particular, the fact that the former performs a nonlinear transformation on internal signals, while the latter does not—cause them to adopt very different structures, as indicated by their different equilibria. Nonetheless, these very different systems are equally sensitive to exactly the same contingency.

This last observation has implications for the debate between contingency theory and associative learning (Cheng, 1997; Cheng & Holyoak, 1995; Shanks, 2007). In the current phase of this debate, modern contingency theories have been proposed as alternatives to Rescorla-Wagner learning. While in some instances equilibria for the Rescorla-Wagner model predict the conditional contrasts defined by a formal contingency theory like the power PC model, in other situations this is not the case (Cheng, 1997). However, the result above indicates that differences in equilibria do not necessarily reflect differences in system abilities. Clearly, equilibrium differences cannot be used as the sole measure when different theories of contingency are compared.
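The equilibrium result can also be checked numerically. The sketch below uses illustrative frequencies and learning settings (none of which come from the original source): it performs gradient descent on the error E defined above for a logistic perceptron with one cue input and a trainable bias, and confirms that the unit's activity converges to P(O|C) and P(O|~C), with ΔP recovered as the difference between the two outputs.

```python
import numpy as np

# Illustrative single-cue frequencies: cue+reward, cue+no reward, no cue+reward, no cue+no reward
a, b, c, d = 30, 10, 8, 32

def logistic(net):
    return 1.0 / (1.0 + np.exp(-net))

w, bias, eta = 0.0, 0.0, 0.01  # one weight for the cue input, plus a trainable bias

for _ in range(5000):
    o_c, o_nc = logistic(w + bias), logistic(bias)  # activity with and without the cue
    # gradients of E = a(1 - o_c)^2 + b(o_c)^2 + c(1 - o_nc)^2 + d(o_nc)^2
    g_c = (-2 * a * (1 - o_c) + 2 * b * o_c) * o_c * (1 - o_c)
    g_nc = (-2 * c * (1 - o_nc) + 2 * d * o_nc) * o_nc * (1 - o_nc)
    w -= eta * g_c               # only cue-present trials involve the cue's weight
    bias -= eta * (g_c + g_nc)   # the bias behaves like an always-on input

o_c, o_nc = logistic(w + bias), logistic(bias)
print(o_c, a / (a + b))   # both approximately 0.75 = P(O|C)
print(o_nc, c / (c + d))  # both approximately 0.20 = P(O|~C)
print(o_c - o_nc)         # approximately 0.55 = delta-P
```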
In the last several sections we have explored connectionist cognitive science at the computational level of analysis. Claims about linear separability, the in-principle power of multilayer networks, and the interpretation of output unit activity have all been established using formal analyses.
In the next few sections we consider connectionist cognitive science from another perspective that it shares with classical cognitive science: the use of algorithmic-level investigations. The sections that follow explore how modern networks, which develop internal representations with hidden units, are trained, and also describe how one might interpret the internal representations of a network after it has learned to accomplish a task of interest. Such interpretations answer the question "How does a network convert an input pattern into an output response?" and thus provide information about network algorithms.
The need for algorithmic-level investigations is introduced by noting, in the next section (Section 4.9), that most modern connectionist networks are multilayered, meaning that they have at least one layer of hidden units lying between the input units and the output units. This section introduces a general technique for training such networks, called the generalized delta rule. This rule extends empiricism to systems that can have powerful internal representations.
Section 4.10 provides one example of how the internal representations created by the generalized delta rule can be interpreted. It describes the analysis of a multilayered network that has learned to classify different types of musical chords. An examination of the connection weights between the input units and the hidden units reveals a number of interesting ways in which this network represents musical regularities. An examination of the network’s hidden unit space shows how these musical regularities permit the network to rearrange different types of chord types so that they may then be carved into appropriate decision regions by the output units.
In Section 4.11 a biologically inspired approach to discovering network algorithms is introduced. This approach involves wiretapping the responses of hidden units when the network is presented with various stimuli, and then using these responses to determine the trigger features that the hidden units detect. It is also shown that changing the activation function of a hidden unit can lead to interesting complexities in defining the notion of a trigger feature, because some kinds of hidden units capture families of trigger features that require further analysis.
In Section 4.12 we describe how interpreting the internal structure of a network begins to shed light on the relationship between algorithms and architectures. Also described is a network that, as a result of training, translates a classical model of a task into a connectionist one. This illustrates an intertheoretic reduction between classical and connectionist theories, raising the possibility that both types of theories can be described in the same architecture.
4.10: Empiricism and Internal Representations
The ability of hidden units to increase the computational power of artificial neural networks was well known to Old Connectionism (McCulloch & Pitts, 1943). Its problem was that while a learning rule could be used to train networks with no hidden units (Rosenblatt, 1958, 1962), no such rule existed for multilayered networks. The reason that a learning rule did not exist for multilayered networks was because learning was defined in terms of minimizing the error of unit responses. While it was straightforward to define output unit error, no parallel definition existed for hidden unit error. A hidden unit’s error could not be defined because it was not related to any directly observable outcome (e.g., external behaviour). If a hidden unit’s error could not be defined, then Old Connectionist rules could not be used to modify its connections.
The need to define and compute hidden unit error is an example of the credit assignment problem:
In playing a complex game such as chess or checkers, or in writing a computer program, one has a definite success criterion—the game is won or lost. But in the course of play, each ultimate success (or failure) is associated with a vast number of internal decisions. If the run is successful, how can we assign credit for the success among the multitude of decisions? (Minsky, 1963, p. 432)
The credit assignment problem that faced Old Connectionism was the inability to assign the appropriate credit—or more to the point, the appropriate blame— to each hidden unit for its contribution to output unit error. Failure to solve this problem prevented Old Connectionism from discovering methods to make their most powerful networks belong to the domain of empiricism and led to its demise (Papert, 1988).
The rebirth of connectionist cognitive science in the 1980s (McClelland & Rumelhart, 1986; Rumelhart & McClelland, 1986c) was caused by the discovery of a solution to Old Connectionism's credit assignment problem. By employing a nonlinear but continuous activation function, calculus could be used to explore changes in network behaviour (Rumelhart, Hinton, & Williams, 1986b). In particular, calculus could reveal how an overall network error was altered by changing a component deep within the network, such as a single connection between an input unit and a hidden unit. This led to the discovery of the "backpropagation of error" learning rule, sometimes known as the generalized delta rule (Rumelhart, Hinton, & Williams, 1986b). The calculus underlying the generalized delta rule revealed that hidden unit error could be defined as the sum of weighted errors being sent backwards through the network from output units to hidden units.
The generalized delta rule is an error-correcting method for training multilayered networks that shares many characteristics with the original delta rule for perceptrons (Rosenblatt, 1958, 1962; Widrow, 1962; Widrow & Hoff, 1960). A more detailed mathematical treatment of this rule, and its relationship to other connectionist learning rules, is provided by Dawson (2004). A less technical account of the rule is given below.
The generalized delta rule is used to train a multilayer perceptron to mediate a desired input-output mapping. It is a form of supervised learning, in which a finite set of input-output pairs is presented iteratively, in random order, during training. Prior to training, a network is a “pretty blank” slate; all of its connection weights, and all of the biases of its activation functions, are initialized as small, random numbers. The generalized delta rule involves repeatedly presenting input-output pairs and then modifying weights. The purpose of weight modification is to reduce overall network error.
A single presentation of an input-output pair proceeds as follows. First, the input pattern is presented, which causes signals to be sent to hidden units, which in turn activate and send signals to the output units, which finally activate to represent the network’s response to the input pattern. Second, the output unit responses are compared to the desired responses, and an error term is computed for each output unit. Third, an output unit’s error is used to modify the weights of its connections. This is accomplished by adding a weight change to the existing weight. The weight change is computed by multiplying four different numbers together: a learning rate, the derivative of the unit’s activation function, the output unit’s error, and the current activity at the input end of the connection. Up to this point, learning is functionally the same as performing gradient descent training on a perceptron (Dawson, 2004).
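To express this weight change in symbols (the notation here is illustrative and is not taken from the sources cited above), let \(\eta\) be the learning rate, \(f'(\text{net}_j)\) the derivative of output unit \(j\)'s activation function evaluated at its current net input, \((t_j - a_j)\) the unit's error (desired response minus observed response), and \(a_i\) the activity at the input end of the connection. The change applied to the weight from unit \(i\) to output unit \(j\) is then

\[ \Delta w_{ji} = \eta \, f'(\text{net}_j) \, (t_j - a_j) \, a_i , \]

and the modified weight is simply \(w_{ji} + \Delta w_{ji}\).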
The fourth step differentiates the generalized delta rule from older rules: each hidden unit computes its error. This is done by treating an output unit’s error as if it were activity and sending it backwards as a signal through a connection to a hidden unit. As this signal is sent, it is multiplied by the weight of the connection. Each hidden unit computes its error by summing together all of the error signals that it receives from the output units to which it is connected. Fifth, once the hidden unit error has been computed, the weights of the hidden units can be modified using the same equation that was used to alter the weights of each of the output units.
This procedure can be repeated iteratively if there is more than one layer of hidden units. That is, the error of each hidden unit in one layer can be propagated backwards to an adjacent layer as an error signal once the hidden unit weights have been modified. Learning about this pattern stops once all of the connections have been modified. Then the next training pattern can be presented to the input units, and the learning process occurs again.
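To make these five steps concrete, the following is a minimal Python sketch of a single pattern presentation for a network with one layer of hidden units. It assumes logistic activation functions and squared error; the function and variable names are placeholders, and the sketch illustrates the general procedure rather than reproducing Rumelhart, Hinton, and Williams' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def present_pattern(x, t, W_hid, b_hid, W_out, b_out, lr=0.1):
    """One presentation of an input-output pair (x, t) for a one-hidden-layer network."""
    # Step 1: forward pass from input units through hidden units to output units.
    h = sigmoid(W_hid @ x + b_hid)            # hidden unit activities
    o = sigmoid(W_out @ h + b_out)            # output unit activities

    # Step 2: output unit error, scaled by the derivative of the logistic function.
    delta_out = (t - o) * o * (1.0 - o)

    # Step 3: modify the output units' connection weights
    # (learning rate x scaled error x activity at the input end of each connection).
    W_out_old = W_out.copy()                  # keep the weights used during the forward pass
    W_out += lr * np.outer(delta_out, h)
    b_out += lr * delta_out

    # Step 4: each hidden unit's error is the sum of the output errors sent backwards
    # through its connections, each multiplied by the weight of that connection.
    delta_hid = (W_out_old.T @ delta_out) * h * (1.0 - h)

    # Step 5: modify the hidden units' connection weights with the same kind of equation.
    W_hid += lr * np.outer(delta_hid, x)
    b_hid += lr * delta_hid

    return float(np.sum((t - o) ** 2))        # squared error for this presentation
```

In practice, the weights and biases passed to this function would first be initialized to small random values, and the function would then be called repeatedly, once per training pattern, until overall network error becomes acceptably small.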
There are a variety of different ways in which the generic algorithm given above can be realized. For instance, in stochastic training, connection weights are updated after each pattern is presented (Dawson, 2004). This approach is called stochastic because each pattern is presented once per epoch of training, but the order of presentation is randomized for each epoch. Another approach, batch training, is to accumulate error over an epoch and to only update weights once at the end of the epoch, using accumulated error (Rumelhart, Hinton, & Williams, 1986a). As well, variations of the algorithm exist for different continuous activation functions. For instance, an elaborated error term is required to train units that have Gaussian activation functions, but when this is done, the underlying mathematics are essentially the same as in the original generalized delta rule (Dawson & Schopflocher, 1992b).
New Connectionism was born when the generalized delta rule was invented. Interestingly, the precise date of its birth and the names of its parents are not completely established. The algorithm was independently discovered more than once. Rumelhart, Hinton, and Williams (1986a, 1986b) are its most famous discoverers and popularizers. It was also discovered by David Parker in 1985 and by Yann LeCun in 1986 (Anderson, 1995). More than a decade earlier, the algorithm was reported in Paul Werbos’ (1974) doctoral thesis. The mathematical foundations of the generalized delta rule can be traced to an earlier decade, in a publication by Shun-Ichi Amari (1967).
In an interview (Anderson & Rosenfeld, 1998), neural network pioneer Stephen Grossberg stated that “Paul Werbos, David Parker, and Shun-Ichi Amari should have gotten credit for the backpropagation model, instead of Rumelhart, Hinton, and Williams” (pp. 179–180). Regardless of the credit assignment problem associated with the scientific history of this algorithm, it transformed cognitive science in the mid-1980s, demonstrating “how the lowly concepts of feedback and derivatives are the essential building blocks needed to understand and replicate higher-order phenomena like learning, emotion and intelligence at all levels of the human mind” (Werbos, 1994, p. 1).
Artificial neural networks provide a medium in which to explore empiricism, for they acquire knowledge via experience. This knowledge is used to mediate an input-output mapping and usually takes the form of a distributed representation. Distributed representations provide some of the putative connectionist advantages over classical cognitive science: damage resistance, graceful degradation, and so on. Unfortunately, distributed representations are also tricky to interpret, making it difficult for them to provide new theories for cognitive science.
However, interpreting the internal structures of multilayered networks, though difficult, is not impossible. To illustrate this, let us consider a multilayer perceptron trained to classify different types of musical chords. The purpose of this section is to discuss the role of hidden units, to demonstrate that networks that use hidden units can also be interpreted, and to introduce a decidedly connectionist notion called the coarse code.
Chords are combinations of notes that are related to musical scales, where a scale is a sequence of notes that is subject to certain constraints. A chromatic scale is one in which every note played is one semitone higher than the previous note. If one were to play the first thirteen numbered piano keys of Figure \(1\) in order, then the result would be a chromatic scale that begins on a low C and ends on another C an octave higher.
A major scale results by constraining a chromatic scale such that some of its notes are not played. For instance, the C major scale is produced if only the white keys numbered from 1 to 13 in Figure \(1\) are played in sequence (i.e., if the black keys numbered 2, 4, 7, 9, and 11 are not played).
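To make the construction concrete, here is a small Python sketch (the names are placeholders) that generates a major scale by stepping up the chromatic scale in the familiar pattern of two semitones, two semitones, one semitone, and so on:

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def major_scale(root):
    """Build a major scale by stepping 2, 2, 1, 2, 2, 2, 1 semitones up from the root."""
    start = NOTE_NAMES.index(root)
    pitches = [start]
    for step in [2, 2, 1, 2, 2, 2, 1]:
        pitches.append(pitches[-1] + step)
    return [NOTE_NAMES[p % 12] for p in pitches]

print(major_scale("C"))   # ['C', 'D', 'E', 'F', 'G', 'A', 'B', 'C']
```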
The musical notation for the C major scale is provided in the sequence of notes illustrated in the first part of Figure \(2\). The Greeks defined a variety of modes for each scale; different modes were used to provoke different aesthetic experiences (Hanslick, 1957). The C major scale in the first staff of Figure \(2\) is in the Ionian mode because it begins on the note C, which is the root note, designated I, for the C major key.
One can define various musical chords in the context of C major in two different senses. First, the key signature of each chord is the same as C major (i.e., no sharps or flats). Second, each of these chords is built on the root of the C major scale (the note C). For instance, one basic chord is the major triad. In the key of C major, the root of this chord—the chord’s lowest note—is C (e.g., piano key #1 in Figure \(1\)). The major triad for this key is completed by adding two other notes to this root. The second note in the triad is 4 semitones higher than C, which is the note E (the third note in the major scale in Figure \(2\)). The third note in the triad is 3 semitones higher than the second note, which in this case is G (the fifth note in the major scale in Figure \(2\)). Thus the notes C-E-G define the major triad for the key of C; this is the first chord illustrated in Figure \(2\).
A fourth note can be added to any major triad to create an “added note” tetrachord (Baker, 1982). The type of added note chord that is created depends upon the relationship between the added note and the third note of the major triad. If the added note is 4 semitones higher than the third note, the result is a major 7th chord, such as the Cmaj7 illustrated in Figure \(2\). If the added note is 3 semitones higher than the third note, the result is a dominant 7th chord such as the C7 chord presented in Figure \(2\). If the added note is 2 semitones higher than the third note, then the result is a 6th chord, such as the C6 chord illustrated in Figure \(2\).
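The same semitone arithmetic can be used to build all four chord types on any root. The sketch below is illustrative (the NOTE_NAMES list is repeated so the sketch stands alone): it constructs the major triad as root, root + 4, and root + 7 semitones, and then adds a fourth note 4, 3, or 2 semitones above the triad's third note.

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def chords_for_root(root):
    """Build the major triad and the three added-note tetrachords on a given root."""
    r = NOTE_NAMES.index(root)
    third = r + 4              # 4 semitones above the root
    fifth = third + 3          # 3 semitones above the second note of the triad
    triad = [r, third, fifth]

    def name(pitches):
        return [NOTE_NAMES[p % 12] for p in pitches]

    return {
        "major": name(triad),
        "maj7":  name(triad + [fifth + 4]),   # added note 4 semitones above the third note
        "dom7":  name(triad + [fifth + 3]),   # added note 3 semitones above the third note
        "6th":   name(triad + [fifth + 2]),   # added note 2 semitones above the third note
    }

print(chords_for_root("C"))
# {'major': ['C', 'E', 'G'], 'maj7': ['C', 'E', 'G', 'B'],
#  'dom7': ['C', 'E', 'G', 'A#'], '6th': ['C', 'E', 'G', 'A']}
```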
The preceding paragraphs described the major triad and some added note chords for the key of C major. In Western music, C major is one of twelve possible major keys. The set of all possible major keys is provided in Figure \(3\), which organizes them in an important cyclic structure, called the circle of fifths.
The circle of fifths includes all 12 notes in a chromatic scale, but arranges them so that adjacent notes in the circle are a musical interval of a perfect fifth (i.e., 7 semitones) apart. The circle of fifths is a standard topic for music students, and it is foundational to many concepts in music theory. It is provided here, though, to be contrasted later with “strange circles” that are revealed in the internal structure of a network trained to identify musical chords.
Any one of the notes in the circle of fifths can be used to define a musical key and therefore can serve as the root note of a major scale. Similarly, any one of these notes can be the root of a major triad created using the pattern of root + 4 semitones + 3 semitones that was described earlier for the key of C major (Baker, 1982). Furthermore, the rules described earlier can also be applied to produce added note chords for any of the 12 major key signatures. These possible major triads and added note chords were used as inputs for training a network to correctly classify different types of chords, ignoring musical key.
A training set of 48 chords was created by building the major triad, as well as the major 7th, dominant 7th, and 6th chord for each of the 12 possible major key signatures (i.e., using each of the notes in Figure \(3\) as a root). When presented with a chord, the network was trained to classify it into one of the four types of interest: major triad, major 7th, dominant 7th, or 6th. To do so, the network had 4 output units, one for each type of chord. For any input, the network learned to turn the correct output unit on and to turn the other three output units off.
The input chords were encoded with a pitch class representation (Laden & Keefe, 1989; Yaremchuk & Dawson, 2008). In a pitch class representation, only 12 input units are employed, one for each of the 12 different notes that can appear in a scale. Different versions of the same note (i.e., the same note played at different octaves) are all mapped onto the same input representation. For instance, notes 1, 13, 25, and 37 in Figure \(1\) all correspond to different pitches but belong to the same pitch class—they are all C notes, played at different octaves of the keyboard. In a pitch class representation, the playing of any of these input notes would be encoded by turning on a single input unit—the one unit used to represent the pitch class of C.
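In code, the pitch class encoding amounts to turning a chord into a 12-element binary vector, one element per pitch class (an illustrative sketch; the note ordering is a placeholder convention):

```python
import numpy as np

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pitch_class_vector(notes):
    """Encode a chord as activity over 12 input units, one per pitch class.
    Octave information is discarded: a C in any octave activates the same unit."""
    x = np.zeros(12)
    for note in notes:
        x[NOTE_NAMES.index(note)] = 1.0
    return x

print(pitch_class_vector(["C", "E", "G"]))   # the C major triad
# [1. 0. 0. 0. 1. 0. 0. 1. 0. 0. 0. 0.]
```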
A pitch class representation of chords was used for two reasons. First, it requires a very small number of input units to represent all of the possible stimuli. Second, it is a fairly abstract representation that makes the chord classification task difficult, which in turn requires using hidden units in a network faced with this task.
Why chord classification might be difficult for a network when pitch class encoding is employed becomes evident by thinking about how we might approach the problem if faced with it ourselves. Classifying the major chords is simple: they are the only input stimuli that activate three input units instead of four. However, classifying the other chord types is very challenging. One first has to determine what key the stimulus is in, identify which three notes define its major chord component, and then determine the relationship between the third note of the major chord component and the fourth “added” note. This is particularly difficult because of the pitch class representation, which throws away note-order information that might be useful in identifying chord type.
It was decided that the network that would be trained on the chord classification task would be a network of value units (Dawson & Schopflocher, 1992b). The hidden units and output units in a network of value units use a Gaussian activation function, which means that they behave as if they carve two parallel planes through a pattern space. Such networks can be trained with a variation of the generalized delta rule. This type of network was chosen for this problem for two reasons. First, networks of value units have emergent properties that make them easier to interpret than other types of networks trained on similar problems (Dawson, 2004; Dawson et al., 1994). One reason for this is because value units behave as if they are “tuned” to respond to very particular input signals. Second, previous research on different versions of chord classification problems had produced networks that revealed elegant internal structure (Yaremchuk & Dawson, 2005, 2008).
The simplest network of value units that could learn to solve the chord classification problem required three hidden units. At the start of training, the value of µ for each unit was initialized as 0. (The value of µ for a value unit is analogous to a threshold in other types of units [Dawson, Kremer, & Gannon, 1994; Dawson & Schopflocher, 1992b]; if a value unit’s net input is equal to µ, then the unit generates a maximum activity of 1.00.) All connection weights were set to values randomly selected from the range between –0.1 and 0.1. The network was trained with a learning rate of 0.01 until it produced a “hit” for every output unit on every pattern. Because of the continuous nature of the activation function, a hit was defined as follows: a value of 0.9 or higher when the desired output was 1, and a value of 0.1 or lower when the desired output was 0. The network that is interpreted below learned the chord classification task after 299 presentations of the training set.
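For concreteness, the Gaussian activation function of a value unit can be written along the lines of Dawson and Schopflocher's (1992b) formulation (the exact constant in the exponent is not essential here):

\[ G(\text{net}_j) = e^{-\pi\,(\text{net}_j - \mu_j)^2} , \]

so that activity reaches its maximum of 1.00 when \(\text{net}_j = \mu_j\) and drops off rapidly as the net input moves away from \(\mu_j\) in either direction.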
What is the role of a layer of hidden units? In a perceptron, which has no hidden units, input patterns can only be represented in a pattern space. Recall from the discussion of Figure 4-2 that a pattern space represents each pattern as a point in space. The dimensionality of this space is equal to the number of input units. The coordinates of each pattern’s point in this space are given by the activities of the input units. For some networks, the positioning of the points in the pattern space prevents some patterns from being correctly classified, because the output units are unable to adequately carve the pattern space into the appropriate decision regions.
In a multilayer perceptron, the hidden units serve to solve this problem. They do so by transforming the pattern space into a hidden unit space (Dawson, 2004). The dimensionality of a hidden unit space is equal to the number of hidden units in the layer. Patterns are again represented as points in this space; however, in this space their coordinates are determined by the activities they produce in each hidden unit. The hidden unit space is a transformation of the pattern space that involves detecting higher-order features. This usually produces a change in dimensionality—the hidden unit space often has a different number of dimensions than does the pattern space—and a repositioning of the points in the new space. As a result, the output units are able to carve the hidden unit space into a set of decision regions that permit all of the patterns, repositioned in the hidden unit space, to be correctly classified.
This account of the role of hidden units indicates that the interpretation of the internal structure of a multilayer perceptron involves answering two different questions. First, what kinds of features are the hidden units detecting in order to map patterns from the pattern space into the hidden unit space? Second, how do the output units process the hidden unit space to solve the problem of interest? The chord classification network can be used to illustrate how both questions can be addressed.
First, when mapping the input patterns into the hidden unit space, the hidden units must be detecting some sorts of musical regularities. One clue as to what these regularities may be is provided by simply examining the connection weights that feed into them, provided in Table \(1\).
Input Note | Hidden 1 | Hidden 1 Class | Hidden 2 | Hidden 2 Class | Hidden 3 | Hidden 3 Class
B  | 0.53  | Circle of Major Thirds 1 | 0.12  | Circle of Major Thirds 1 | 0.75  | Circle of Major Seconds 1
D# | 0.53  | Circle of Major Thirds 1 | 0.12  | Circle of Major Thirds 1 | 0.75  | Circle of Major Seconds 1
G  | 0.53  | Circle of Major Thirds 1 | 0.12  | Circle of Major Thirds 1 | 0.75  | Circle of Major Seconds 1
A  | -0.53 | Circle of Major Thirds 2 | -0.12 | Circle of Major Thirds 2 | 0.75  | Circle of Major Seconds 1
C# | -0.53 | Circle of Major Thirds 2 | -0.12 | Circle of Major Thirds 2 | 0.75  | Circle of Major Seconds 1
F  | -0.53 | Circle of Major Thirds 2 | -0.12 | Circle of Major Thirds 2 | 0.75  | Circle of Major Seconds 1
C  | 0.12  | Circle of Major Thirds 3 | -0.53 | Circle of Major Thirds 3 | -0.77 | Circle of Major Seconds 2
G# | 0.12  | Circle of Major Thirds 3 | -0.53 | Circle of Major Thirds 3 | -0.77 | Circle of Major Seconds 2
E  | 0.12  | Circle of Major Thirds 3 | -0.53 | Circle of Major Thirds 3 | -0.77 | Circle of Major Seconds 2
F# | -0.12 | Circle of Major Thirds 4 | 0.53  | Circle of Major Thirds 4 | -0.77 | Circle of Major Seconds 2
A# | -0.12 | Circle of Major Thirds 4 | 0.53  | Circle of Major Thirds 4 | -0.77 | Circle of Major Seconds 2
D  | -0.12 | Circle of Major Thirds 4 | 0.53  | Circle of Major Thirds 4 | -0.77 | Circle of Major Seconds 2
Table \(1\). Connection weights from the 12 input units to each of the three hidden units. Note that the first two hidden units adopt weights that assign input notes to the four circles of major thirds. The third hidden unit adopts weights that assign input notes to the two circles of major seconds.
In the pitch class representation used for this network, each input unit stands for a distinct musical note. As far as the hidden units are concerned, the “name” of each note is provided by the connection weight between the input unit and the hidden unit. Interestingly, Table \(1\) reveals that all three hidden units take input notes that we would take as being different (because they have different names, as in the circle of fifths in Figure \(3\)) and treat them as being identical. That is, the hidden units assign the same “name,” or connection weight, to input notes that we would give different names to.
Furthermore, the hidden units do not assign the same “name” to different notes at random. Notes are assigned according to strange circles, that is, circles of major thirds and circles of major seconds. Let us briefly describe these circles, and then return to an analysis of Table \(1\).
The circle of fifths (Figure \(3\)) is not the only way in which notes can be arranged geometrically. One can produce other circular arrangements by exploiting other musical intervals. These are strange circles in the sense that they would very rarely be taught to music students as part of a music theory curriculum. However, these strange circles are formal devices that can be as easily defined as can be the circle of fifths.
For instance, if one starts with the note C and moves up a major second (2 semitones) then one arrives at the note D. From here, moving up another major second arrives at the note E. This can continue until one circles back to C but an octave higher than the original, which is a major second higher than A#. This circle of major seconds captures half of the notes in the chromatic scale, as is shown in the top part of Figure \(4\). A complementary circle of major seconds can also be constructed (bottom circle of Figure \(4\)); this circle contains all the remaining notes that are not part of the first circle.
An alternative set of musical circles can be defined by exploiting a different musical interval. In each circle depicted in Figure \(5\), adjacent notes are a major third (4 semitones) apart. As shown in Figure \(5\) four such circles are possible.
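All of these circles, including the circle of fifths, can be generated by the same simple procedure: repeatedly step a fixed number of semitones and wrap around at the octave. The following Python sketch (names are placeholders) partitions the 12 pitch classes into circles for any chosen interval:

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def interval_circles(step):
    """Partition the 12 pitch classes into circles whose adjacent members are `step` semitones apart."""
    remaining = set(range(12))
    circles = []
    while remaining:
        pitch = min(remaining)
        circle = []
        while pitch not in circle:
            circle.append(pitch)
            pitch = (pitch + step) % 12
        circles.append([NOTE_NAMES[p] for p in circle])
        remaining -= set(circle)
    return circles

print(len(interval_circles(7)))   # 1: a perfect fifth generates the single circle of fifths
print(interval_circles(2))        # 2 circles of major seconds, 6 notes each
print(interval_circles(4))        # 4 circles of major thirds, 3 notes each
```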
What do these strange circles have to do with the internal structure of the network trained to classify the different types of chords? A close examination of Table \(1\) indicates that these strange circles are reflected in the connection weights that feed into the network’s hidden units. For Hidden Units 1 and 2, if notes belong to the same circle of major thirds (Figure \(5\)), then they are assigned the same connection weight. For Hidden Unit 3, if notes belong to the same circle of major seconds (Figure \(4\)), then they are assigned the same connection weight. In short, each of the hidden units replaces the 12 possible different note names with a much smaller set, which equates notes that belong to the same circle of intervals and differentiates notes that belong to different circles.
Further inspection of Table \(1\) reveals additional regularities of interest. Qualitatively, both Hidden Units 1 and 2 assign input notes to equivalence classes based on circles of major thirds. They do so by using the same note “names”: 0.53, 0.12, –0.12, and –0.53. However, the two hidden units have an important difference: they assign the same names to different sets of input notes. That is, notes that are assigned one connection weight by Hidden Unit 1 are assigned a different connection weight by Hidden Unit 2.
This difference in weight assignment is important because the behaviour of each hidden unit is not governed by a single incoming signal; instead, it is governed by the combination of the three or four signals sent by whichever input units a chord activates. The connection weights used by the hidden units place meaningful constraints on how these signals are combined.
Let us consider the role of the particular connection weights used by the hidden units. Given the binary nature of the input encoding, the net input of any hidden unit is simply the sum of the weights associated with each of the activated input units. For a value unit, if the net input is equal to the value of the unit’s µ, then the output generates a maximum value of 1.00. As the net input moves away from µ in either a positive or negative direction, activity quickly decreases. At the end of training, the values of µ for the three hidden units were 0.00, 0.00, and –0.03 for Hidden Units 1, 2, and 3, respectively. Thus for each hidden unit, if the incoming signals are essentially zero—that is if all the incoming signals cancel each other out—then high activity will be produced.
Why then do Hidden Units 1 and 2 use the same set of four connection weights but assign these weights to different sets of input notes? The answer is that these hidden units capture similar chord relationships but do so using notes from different strange circles.
This is shown by examining the responses of each hidden unit to each input chord after training. Table \(2\) summarizes these responses, and shows that each hidden unit generated identical responses to different subsets of input chords.
Chord  | Chord Root             | Hid1 | Hid2 | Hid3
Major  | C, D, E, F#, G#, A#    | 0.16 | 0.06 | 0.16
Major  | C#, D#, F, G, A, B     | 0.06 | 0.16 | 0.16
Major7 | C, D, E, F#, G#, A#    | 0.01 | 0.12 | 1.00
Major7 | C#, D#, F, G, A, B     | 0.12 | 0.01 | 1.00
Dom7   | C, D, E, F#, G#, A#    | 0.27 | 0.59 | 0.00
Dom7   | C#, D#, F, G, A, B     | 0.59 | 0.27 | 0.00
6th    | C, D, E, F#, G#, A#    | 0.84 | 0.03 | 1.00
6th    | C#, D#, F, G, A, B     | 0.03 | 0.84 | 1.00
Table \(2\). The activations produced in each hidden unit by different subsets of input chords.
From Table \(2\), one can see that the activity of Hidden Unit 3 is the simplest to describe: it produces an activation of 0.00 when presented with a dominant 7th chord, only weak activation (0.16) when presented with a major triad, and maximum activity when presented with either a major 7th or a 6th chord. This pattern of activation is easily explained by considering the weights that feed into Hidden Unit 3 (Table \(1\)). Any major 7th or 6th chord is created out of two notes from one circle of major seconds and two notes from the other circle. The sums of pairs of weights from different circles cancel each other out, producing near-zero net input and causing maximum activation.
In contrast, the dominant 7th chords use three notes from one circle of major seconds and only one from the other circle. As a result, the signals do not cancel out completely, given the weights in Table \(1\). Instead, a strong non-zero net input is produced, and the result is zero activity.
Finally, any major triad involves only three notes: two from one circle of major seconds and one from the other. Because of the odd number of input signals, cancellation to zero is not possible. However, the weights have been selected so that the net input produced by a major triad is close enough to µ to produce weak activity.
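A worked example, using the chords built on C, the Hidden Unit 3 weights in Table \(1\), and the value of µ reported above, makes the cancellation explicit (the arithmetic is ours, but every number comes from the tables above):

Cmaj7 (C, E, G, B): net input = −0.77 − 0.77 + 0.75 + 0.75 = −0.04, essentially equal to µ = −0.03, so activity is near its maximum of 1.00.
C6 (C, E, G, A): net input = −0.77 − 0.77 + 0.75 + 0.75 = −0.04, again near µ, so activity is near 1.00.
C7 (C, E, G, A#): net input = −0.77 − 0.77 + 0.75 − 0.77 = −1.56, far from µ, so activity is effectively 0.00.
C major triad (C, E, G): net input = −0.77 − 0.77 + 0.75 = −0.79, moderately far from µ, which yields the weak activity (0.16) reported in Table \(2\).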
The activation patterns for Hidden Units 1 and 2 are more complex. It is possible to explain all of them in terms of balancing (or failing to balance) signals associated with different circles of major thirds. However, it is more enlightening to consider these two units at a more general level, focusing on the relationship between their activations.
In general terms, Hidden Units 1 and 2 generate activations of different intensities to different classes of chords; they produce the highest activity to 6th chords and the lowest activity to major 7th chords. Importantly, they do not generate the same activity to all chords of the same type. For instance, of the 12 possible 6th chords, Hidden Unit 1 generates activity of 0.84 to 6 of them but activity of only 0.03 to the other 6. An inspection of Table \(2\) indicates that, for every chord type, both Hidden Units 1 and 2 generate one level of activity to half of the chords and another level of activity to the other half.
The varied responses of these two hidden units to different chords of the same type are related to the circle of major seconds (Figure \(4\)). For example, Hidden Unit 1 generates a response of 0.84 to 6th chords whose root note belongs to the top circle of Figure \(4\), and a response of 0.03 to 6th chords whose root note belongs to the bottom circle of Figure \(4\). Indeed, for all of the chord types, both of these hidden units generate one response if the root note belongs to one circle of major seconds and a different response if the root note belongs to the other circle.
Furthermore, the responses of Hidden Units 1 and 2 complement one another: for any chord type, those chords that produce low activity in Hidden Unit 1 produce higher activity in Hidden Unit 2. As well, those chords that produce low activity in Hidden Unit 2 produce higher activity in Hidden Unit 1. This complementing is again related to the circles of major seconds: Hidden Unit 1 generates higher responses to chords whose root belongs to one circle, while Hidden Unit 2 generates higher responses to chords whose roots belong to the other. Which circle is “preferred” by a hidden unit depends on chord type.
Clearly each of the three hidden units is sensitive to musical properties. However, it is not clear how these properties support the network’s ability to classify chords. For instance, none of the hidden units by themselves pick out a set of properties that uniquely define a particular type of chord. Instead, hidden units generate some activity to different chord types, suggesting the existence of a coarse code.
In order to see how the activities of the hidden units serve as a distributed representation that mediates chord classification, we must examine the hidden unit space. The hidden unit space plots each input pattern as a point in a space whose dimensionality is determined by the number of hidden units. The coordinates of the point in the hidden unit space are the activities produced by an input pattern in each hidden unit. The three-dimensional hidden unit space for the chord classification network is illustrated in Figure \(6\).
Because the hidden units generate identical responses to many of the chords, instead of 48 different visible points in this graph (one for each input pattern), there are only 8. Each point represents 6 different chords that fall in exactly the same location in the hidden unit space.
The hidden unit space reveals that each chord type is represented by two different points. That these points capture the same class is represented in Figure \(6\) by joining a chord type’s points with a dashed line. Two points are involved in defining a chord class in this space because, as already discussed, each hidden unit is sensitive to the organization of notes according to the two circles of major seconds. For each chord type, chords whose root belongs to one of these circles are mapped to one point, and chords whose root belongs to the other are mapped to the other point. Interestingly, there is no systematic relationship in the graph that maps onto the two circles. For instance, it is not the case that the four points toward the back of the Figure \(6\) cube all map onto the same circle of major seconds.
Figure \(7\) illustrates how the output units can partition the points in the hidden unit space in order to classify chords. Each output unit in this network is a value unit, which carves two parallel hyperplanes through a pattern space. To solve the chord classification problem, the connection weights and the bias of each output unit must take on values that permit these two planes to isolate the two points associated with one chord type from all of the other points in the space. Figure \(7\) shows how this would be accomplished by the output unit that signals that a 6th chord has been detected.
For more than half a century, neuroscientists have studied vision by mapping the receptive fields of individual neurons (Hubel & Wiesel, 1959; Lettvin, Maturana, McCulloch, & Pitts, 1959). To do this, they use a method called microelectrode recording or wiretapping (Calvin & Ojemann, 1994), in which the responses of single neurons are measured while stimuli are being presented to an animal. With this technique, it is possible to describe a neuron as being sensitive to a trigger feature, a specific pattern that when detected produces maximum activity in the cell.
That individual neurons may be described as detecting trigger features has led some to endorse a neuron doctrine for perceptual psychology. This doctrine has the goal of discovering the trigger features for all neurons (Barlow, 1972, 1995). This is because,
a description of that activity of a single nerve cell which is transmitted to and influences other nerve cells, and of a nerve cell’s response to such influences from other cells, is a complete enough description for functional understanding of the nervous system. (Barlow, 1972, p. 380)
The validity of the neuron doctrine is a controversial issue (Bowers, 2009; Gross, 2002). Regardless, there is a possibility that identifying trigger features can help to interpret the internal workings of artificial neural networks.
For some types of hidden units, trigger features can be identified analytically, without requiring any wiretapping of hidden unit activities (Dawson, 2004). For instance, the activation function for an integration device (e.g., the logistic equation) is monotonic, which means that increases in net input always produce increases in activity. As a result, if one knows the maximum and minimum possible values for input signals, then one can define an integration device’s trigger feature simply by inspecting the connection weights that feed into it (Dawson, Kremer, & Gannon, 1994). The trigger feature is that pattern which sends the minimum signal through every inhibitory connection and the maximum signal through every excitatory connection. The monotonicity of an integration device’s activation function ensures that it will have only one trigger feature.
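In code, this analytic procedure is a one-liner (an illustrative sketch, assuming inputs range between known minimum and maximum values):

```python
def trigger_feature(weights, min_val=0.0, max_val=1.0):
    """For a monotonic (integration device) unit, the trigger feature sends the maximum
    signal through every excitatory connection and the minimum through every inhibitory one."""
    return [max_val if w > 0 else min_val for w in weights]

print(trigger_feature([0.8, -0.3, 0.1, -1.2]))   # [1.0, 0.0, 1.0, 0.0]
```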
The notion of a trigger feature for other kinds of hidden units is more complex. Consider a value unit whose bias, µ, in its Gaussian activation function is equal to 0. The trigger feature for this unit will be the feature that causes it to produce maximum activation. For this value unit, this will occur when the net input to the unit is equal to 0 (i.e., equal to the value of µ) (Dawson & Schopflocher, 1992b). The net input of a value unit is defined by a particular linear algebra operation, called the inner product, between a vector that represents a stimulus and a vector that represents the connection weights that fan into the unit (Dawson, 2004). So, when net input equals 0, this means that the inner product is equal to 0.
However, when an inner product is equal to 0, this indicates that the two vectors being combined are orthogonal to one another (that is, there is an angle of 90° between the two vectors). Geometrically speaking, then, the trigger feature for a value unit is an input pattern represented by a vector of activities that is at a right angle to the vector of connection weights.
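In symbols (the notation is ours): if \(\mathbf{w}\) is the vector of incoming connection weights and \(\mathbf{a}\) is the vector of input activities, then

\[ \text{net} = \mathbf{w} \cdot \mathbf{a} = \|\mathbf{w}\|\,\|\mathbf{a}\|\cos\theta , \]

so a net input of 0 (and hence maximum activity when \(\mu = 0\)) is obtained whenever \(\cos\theta = 0\), that is, whenever the input vector lies at an angle of 90° to the weight vector.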
This geometric observation raises complications, because it implies that a hidden value unit will not have a single trigger feature. This is because there are many input patterns that are orthogonal to a vector of connection weights. Any input vector that lies in the hyperplane that is perpendicular to the vector of connection weights will serve as a trigger feature for the hidden value unit (Dawson, 2004); this is illustrated in Figure \(1\).
Another consequence of the geometric account provided above is that there should be families of other input patterns that share the property of producing the same hidden unit activity, but one that is lower than the maximum activity produced by one of the trigger features. These will be patterns that all fall into the same hyperplane, but this hyperplane is not orthogonal to the vector of connection weights.
The upshot of all of this is that if one trains a network of value units and then wiretaps its hidden units, the resulting hidden unit activities should be highly organized. Instead of having a rectangular distribution of activation values, there should be regular groups of activations, where each group is related to a different family of input patterns (i.e., families related to different hyperplanes of input patterns).
Empirical support for this analysis was provided by the discovery of activity banding when a hidden unit’s activities were plotted using a jittered density plot (Berkeley et al., 1995). A jittered density plot is a two-dimensional scatterplot of points; one such plot can be created for each hidden unit in a network. Each plotted point represents one of the patterns presented to the hidden unit during wiretapping. The \(x\)-value of the point’s position in the graph is the activity produced in that hidden unit by the pattern. The \(y\)-value of the point’s position in the scatterplot is a random value that is assigned to reduce overlap between points.
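A jittered density plot is straightforward to produce; the sketch below uses matplotlib and invented activity values (the three bands are fabricated purely for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

def jittered_density_plot(activities):
    """x = the activity a hidden unit produced for each pattern; y = random jitter
    whose only purpose is to reduce overlap between plotted points."""
    jitter = np.random.uniform(0.0, 1.0, size=len(activities))
    plt.scatter(activities, jitter, s=8)
    plt.xlabel("Hidden unit activity")
    plt.xlim(0.0, 1.0)
    plt.yticks([])            # the vertical position carries no information
    plt.show()

# Invented activities that happen to fall into three distinct bands.
fake_activities = np.clip(np.concatenate([
    np.random.normal(0.05, 0.01, 50),
    np.random.normal(0.50, 0.02, 50),
    np.random.normal(0.95, 0.01, 50)]), 0.0, 1.0)
jittered_density_plot(fake_activities)
```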
An example of a jittered density plot for a hidden value unit is provided in Figure \(2\). Note that the points in this plot are organized into distinct bands, which is consistent with the geometric analysis. This particular unit belongs to a network of value units trained on a logic problem discussed in slightly more detail below (Bechtel & Abrahamsen, 1991), and was part of a study that examined some of the implications of activity banding (Dawson & Piercey, 2001).
Bands in jittered density plots of hidden value units can be used to reveal the kinds of features that are being detected by these units. For instance, Berkeley et al. (1995) reported that all of the patterns that fell into the same band on a single jittered density plot in the networks did so because they shared certain local properties or features, which are called definite features.
There are two types of definite features. The first is called a definite unary feature. When a definite unary feature exists, it means that a single feature has the same value for every pattern in the band. The second is called a definite binary feature. With this kind of definite feature, an individual feature is not constant within a band. However, its relationship to some other feature is constant—variations in one feature are perfectly correlated with variations in another. Berkeley et al. (1995) showed how definite features could be both objectively defined and easily discovered using simple descriptive statistics (see also Dawson, 2005).
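The descriptive statistics involved can be sketched as follows (this is an illustration of the idea, not Berkeley et al.'s actual procedure): a unary definite feature is an input feature whose value is constant across every pattern in a band, and a binary definite feature is a pair of varying features whose values are perfectly correlated within the band.

```python
import numpy as np

def definite_features(band_patterns):
    """band_patterns: 2-D array with one row per input pattern assigned to a band
    and one column per input feature."""
    X = np.asarray(band_patterns, dtype=float)
    constant = [j for j in range(X.shape[1]) if np.all(X[:, j] == X[0, j])]
    unary = [(j, X[0, j]) for j in constant]              # same value for every pattern in the band
    varying = [j for j in range(X.shape[1]) if j not in constant]
    binary = []
    for a in varying:
        for b in varying:
            if a < b:
                r = np.corrcoef(X[:, a], X[:, b])[0, 1]
                if abs(r) > 0.999:                        # perfectly (anti-)correlated within the band
                    binary.append((a, b, int(round(r))))
    return unary, binary
```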
Definite features are always expressed in terms of the values of input unit activities. As a result, they can be assigned meanings using knowledge of a network’s input unit encoding scheme.
One example of using this approach was presented in Berkeley et al.’s (1995) analysis of a network on the Bechtel and Abrahamsen (1991) logic task. This task consists of a set of 576 logical syllogisms, each of which can be expressed as a pattern of binary activities using 14 input units. Each problem is represented as a first sentence that uses two variables and a connective, a second sentence that states a variable, and a conclusion that states a variable. Four different problem types were created in this format: modus ponens, modus tollens, disjunctive syllogism, and alternative syllogism. Each problem type was created using one of three different connectives and four different variables: the connectives were If…then, Or, or Not Both… And; the variables were A, B, C, and D. An example of a valid modus ponens argument in this format is “Sentence 1: ‘If A then B’; Sentence 2: ‘A’; Conclusion: ‘B’.”
For this problem, a network’s task is to classify an input problem into one of the four types and to classify it as being either a valid or an invalid example of that problem type. Berkeley et al. (1995) successfully trained a network of value units that employed 10 hidden units. After training, each of these units was wiretapped using the entire training set as stimulus patterns, and a jittered density plot was produced for each hidden unit. All but one of these plots revealed distinct banding. Berkeley et al. were able to provide a very detailed set of definite features for each of the bands.
After assigning definite features, Berkeley et al. (1995) used them to explore how the internal structure of the network was responsible for making the correct logical judgments. They expressed input logic problems in terms of which band of activity they belonged to for each jittered density plot. They then described each pattern as the combination of definite features from each of these bands, and they found that the internal structure of the network represented rules that were very classical in nature.
For example, Berkeley et al. (1995) found that every valid modus ponens problem was represented as the following features: having the connective If…then, having the first variable in Sentence 1 identical to Sentence 2, and having the second variable in Sentence 1 identical to the Conclusion. This is essentially the rule for valid modus ponens that could be taught in an introductory logic class (Bergmann, Moor, & Nelson, 1990). Berkeley et al. found several such rules; they also found a number that were not so traditional, but which could still be expressed in a classical form. This result suggests that artificial neural networks might be more symbolic in nature than connectionist cognitive scientists care to admit (Dawson, Medler, & Berkeley, 1997).
Importantly, the Berkeley et al. (1995) analysis was successful because the definite features that they identified were local. That is, by examining a single band in a single jittered density plot, one could determine a semantically interpretable set of features. However, activity bands are not always local. In some instances hidden value units produce nicely banded jittered density plots that possess definite features, but these features are difficult to interpret semantically (Dawson & Piercey, 2001). This occurs when the semantic interpretation is itself distributed across different bands for different hidden units; an interpretation of such a network requires definite features from multiple bands to be considered in concert.
While the geometric argument provided earlier motivated a search for the existence of bands in the hidden units of value unit networks, banding has been observed in networks of integration devices as well (Berkeley & Gunay, 2004). That being said, banding is not seen in every value unit network either. The existence of banding is likely an interaction between network architecture and problem representation; banding is useful when discovered, but it is only one tool available for network interpretation.
The important point is that practical tools exist for interpreting the internal structure of connectionist networks. Many of the technical issues concerning the relationship between classical and connectionist cognitive science may hinge upon network interpretations: “In our view, questions like ‘What is a classical rule?’ and ‘Can connectionist networks be classical in nature?’ are also hopelessly unconstrained. Detailed analysis of the internal structure of particular connectionist networks provide a specific framework in which these questions can be fruitfully pursued” (Dawson, Medler, & Berkeley, 1997, p. 39).
One of the prototypical architectures for classical cognitive science is the production system (Anderson, 1983; Kieras & Meyer, 1997; Meyer et al., 2001; Meyer & Kieras, 1997a, 1997b; Newell, 1973, 1990; Newell & Simon, 1972). A production system is a set of condition-action pairs. Each production works in parallel, scanning working memory for a pattern that matches its condition. If a production finds such a match, then it takes control, momentarily disabling the other productions, and performs its action, which typically involves adding, deleting, copying, or moving symbols in the working memory.
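A minimal sketch of this control structure (illustrative only; real production system architectures use far more elaborate matching and conflict resolution) might look like this:

```python
def run_production_system(working_memory, productions, max_cycles=100):
    """productions: a list of (condition, action) pairs, where condition is a predicate
    over working memory and action returns an updated working memory. On each cycle,
    the first production whose condition matches takes control and fires."""
    for _ in range(max_cycles):
        for condition, action in productions:
            if condition(working_memory):
                working_memory = action(working_memory)
                break
        else:
            return working_memory        # no production matched: halt
    return working_memory

# A toy production that rewrites the symbol "A" as "B".
productions = [(lambda wm: "A" in wm,
                lambda wm: (wm - {"A"}) | {"B"})]
print(run_production_system({"A", "C"}, productions))    # {'B', 'C'} (set order may vary)
```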
Production systems have been proposed as a lingua franca for cognitive science, capable of describing any connectionist or embodied cognitive science theory and therefore of subsuming such theories under the umbrella of classical cognitive science (Vera & Simon, 1993). This is because Vera and Simon (1993) argued that any situation-action pairing can be represented either as a single production in a production system or, for complicated situations, as a set of productions. “Productions provide an essentially neutral language for describing the linkages between information and action at any desired (sufficiently high) level of aggregation” (p. 42). Other philosophers of cognitive science have endorsed similar positions. For instance, von Eckardt (1995) suggested that if one considers distributed representations in artificial neural networks as being “higher-level” representations, then connectionist networks can be viewed as being analogous to classical architectures. This is because when examined at this level, connectionist networks have the capacity to input and output represented information, to store represented information, and to manipulate represented information. In other words, the symbolic properties of classical architectures may emerge from what are known as the subsymbolic properties of networks (Smolensky, 1988).
However, the view that artificial neural networks are classical in general or examples of production systems in particular is not accepted by all connectionists. It has been claimed that connectionism represents a Kuhnian paradigm shift away from classical cognitive science (Schneider, 1987). With respect to Vera and Simon’s (1993) particular analysis, their definition of symbol has been deemed too liberal by some neural network researchers (Touretzky & Pomerleau, 1994). Touretzky and Pomerleau (1994) claimed of a particular neural network discussed by Vera and Simon, ALVINN (Pomerleau, 1991), that its hidden unit “patterns are not arbitrarily shaped symbols, and they are not combinatorial. Its hidden unit feature detectors are tuned filters” (Touretzky & Pomerleau, 1994, p. 348). Others have viewed ALVINN from a position of compromise, noting that “some of the processes are symbolic and some are not” (Greeno & Moore, 1993, p. 54).
Are artificial neural networks equivalent to production systems? In the philosophy of science, if two apparently different theories are in fact identical, then one theory can be translated into the other. This is called intertheoretic reduction (Churchland, 1985, 1988; Hooker, 1979, 1981). The widely accepted view that classical and connectionist cognitive science are fundamentally different (Schneider, 1987) amounts to the claim that intertheoretic reduction between a symbolic model and a connectionist network is impossible. One research project (Dawson et al., 2000) directly examined this issue by investigating whether a production system model could be translated into an artificial neural network.
Dawson et al. (2000) investigated intertheoretic reduction using a benchmark problem in the machine learning literature, classifying a very large number (8,124) of mushrooms as being either edible or poisonous on the basis of 21 different features (Schlimmer, 1987). Dawson et al. (2000) used a standard machine learning technique, the ID3 algorithm (Quinlan, 1986), to induce a decision tree for the mushroom problem. A decision tree is a set of tests that are performed in sequence to classify patterns. After performing a test, one either reaches a terminal branch of the tree, at which point the pattern being tested can be classified, or a node of the decision tree, which is to say another test that must be performed. The decision tree is complete for a pattern set if every pattern eventually leads the user to a terminal branch. Dawson et al. (2000) discovered that a decision tree consisting of only five different tests could solve the Schlimmer mushroom classification task. Their decision tree is provided in Table \(1\).
Table \(1\). Dawson et al.’s (2000) five-step decision tree for classifying mushrooms. Decision points in this tree where mushrooms are classified (e.g., "Rule 1 Edible") are given in parentheses.
Step 1. What is the mushroom’s odour?
    If it is almond or anise then it is edible. (Rule 1 Edible)
    If it is creosote or fishy or foul or musty or pungent or spicy then it is poisonous. (Rule 1 Poisonous)
    If it has no odour then proceed to Step 2.
Step 2. Obtain the spore print of the mushroom.
    If the spore print is black or brown or buff or chocolate or orange or yellow then it is edible. (Rule 2 Edible)
    If the spore print is green or purple then it is poisonous. (Rule 2 Poisonous)
    If the spore print is white then proceed to Step 3.
Step 3. Examine the gill size of the mushroom.
    If the gill size is broad, then it is edible. (Rule 3 Edible)
    If the gill size is narrow, then proceed to Step 4.
Step 4. Examine the stalk surface above the mushroom’s ring.
    If the surface is fibrous then it is edible. (Rule 4 Edible)
    If the surface is silky or scaly then it is poisonous. (Rule 4 Poisonous)
    If the surface is smooth then proceed to Step 5.
Step 5. Examine the mushroom for bruises.
    If it has no bruises then it is edible. (Rule 5 Edible)
    If it has bruises then it is poisonous. (Rule 5 Poisonous)
The decision tree provided in Table \(1\) is a classical theory of how mushrooms can be classified. It is not surprising, then, that one can translate this decision tree into the lingua franca: Dawson et al. (2000) rewrote the decision tree as an equivalent set of production rules. They did so by using the features of mushrooms that must be true at each terminal branch of the decision tree as the conditions for a production. The action of this production is to classify the mushroom (i.e., to assert that a mushroom is either edible or poisonous). For instance, at the Rule 1 Edible decision point in Table \(1\), one could create the following production rule: “If the odour is anise or almond, then the mushroom is edible.” Similar productions can be created for later decision points in the algorithm; these productions will involve a longer list of mushroom features. The complete set of productions that were created for the decision tree algorithm is provided in Table \(2\).
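The decision tree (or, equivalently, the production set in Table \(2\)) is simple enough to transcribe directly into code; the sketch below is illustrative, and the feature values are plain strings rather than Schlimmer's original encoding:

```python
def classify_mushroom(odor, spore_print, gill_size, stalk_surface, bruises):
    """A direct transcription of the five-step decision tree in Table 1."""
    if odor in {"almond", "anise"}:
        return "edible"                 # Rule 1 Edible
    if odor != "none":
        return "poisonous"              # Rule 1 Poisonous
    if spore_print in {"green", "purple"}:
        return "poisonous"              # Rule 2 Poisonous
    if spore_print != "white":
        return "edible"                 # Rule 2 Edible
    if gill_size == "broad":
        return "edible"                 # Rule 3 Edible
    if stalk_surface == "fibrous":
        return "edible"                 # Rule 4 Edible
    if stalk_surface in {"silky", "scaly"}:
        return "poisonous"              # Rule 4 Poisonous
    return "edible" if bruises == "no" else "poisonous"   # Rule 5

print(classify_mushroom("none", "white", "narrow", "smooth", "no"))   # edible (Rule 5 Edible)
```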
Dawson et al. (2000) trained a network of value units to solve the mushroom classification problem and to determine whether a classical model (such as the decision tree from Table \(1\) or the production system from Table \(2\)) could be translated into a network. To encode mushroom features, their network used 21 input units, 5 hidden value units, and 10 output value units. One output unit encoded the edible/poisonous classification—if a mushroom was edible, this unit was trained to turn on; otherwise this unit was trained to turn off.
Decision Point from Table \(1\) | Equivalent Production | Network Cluster
Rule 1 Edible | P1: if (odor = anise) \(\lor\) (odor = almond) → edible | 2 or 3
Rule 1 Poisonous | P2: if (odor \(\neq\) anise) \(\land\) (odor \(\neq\) almond) \(\land\) (odor \(\neq\) none) → not edible | 1
Rule 2 Edible | P3: if (odor = none) \(\land\) (spore print color \(\neq\) green) \(\land\) (spore print color \(\neq\) purple) \(\land\) (spore print color \(\neq\) white) → edible | 9
Rule 2 Poisonous | P4: if (odor = none) \(\land\) ((spore print color = green) \(\lor\) (spore print color = purple)) → not edible | 6
Rule 3 Edible | P5: if (odor = none) \(\land\) (spore print color = white) \(\land\) (gill size = broad) → edible | 4
Rule 4 Edible | P6: if (odor = none) \(\land\) (spore print color = white) \(\land\) (gill size = narrow) \(\land\) (stalk surface above ring = fibrous) → edible | 7 or 11
Rule 4 Poisonous | P7: if (odor = none) \(\land\) (spore print color = white) \(\land\) (gill size = narrow) \(\land\) ((stalk surface above ring = silky) \(\lor\) (stalk surface above ring = scaly)) → not edible | 5
Rule 5 Edible | P8: if (odor = none) \(\land\) (spore print color = white) \(\land\) (gill size = narrow) \(\land\) (stalk surface above ring = smooth) \(\land\) (bruises = no) → edible | 8 or 12
Rule 5 Poisonous | P9: if (odor = none) \(\land\) (spore print color = white) \(\land\) (gill size = narrow) \(\land\) (stalk surface above ring = smooth) \(\land\) (bruises = yes) → not edible | 10
Table \(2\). Dawson et al.’s (2000) production system translation of the decision tree in Table \(1\). Conditions are given as sets of features. The Network Cluster column pertains to their artificial neural network trained on the mushroom problem and is described later in the text.
The other nine output units were used to provide extra output learning, which was the technique employed to insert a classical theory into the network. Normally, a pattern classification system is only provided with information about what correct pattern labels to assign. For instance, in the mushroom problem, the system would typically only be taught to generate the label edible or the label poisonous. However, more information about the pattern classification task is frequently available. In particular, it is often known why an input pattern belongs to one class or another. It is possible to incorporate this information into the pattern classification problem by teaching the system not only to assign a pattern to a class (e.g., “edible”, “poisonous”) but also to generate a reason for making this classification (e.g., “passed Rule 1”, “failed Rule 4”). Elaborating a classification task along such lines is called the injection of hints or extra output learning (Abu-Mostafa, 1990; Suddarth & Kergosien, 1990).
Dawson et al. (2000) hypothesized that extra output learning could be used to insert the decision tree from Table \(1\) into a network. Table \(1\) provides nine different terminal branches of the decision tree at which mushrooms are assigned to categories (“Rule 1 edible”, “Rule 1 poisonous”, “Rule 2 edible”, etc.). The network learned to “explain” why it classified an input pattern in a particular way by turning on one of the nine extra output units to indicate which terminal branch of the decision tree was involved. In other words, the network (which required 8,699 epochs of training on the 8,124 different input patterns!) classified mushrooms “for the same reasons” as would the decision tree. In this way, Dawson et al. hoped that this classical theory would literally be translated into the network.
Apart from the output unit behavior, how could one support the claim that a classical theory had been translated into a connectionist network? Dawson et al. (2000) interpreted the internal structure of the network in an attempt to see whether such a network analysis would reveal an internal representation of the classical algorithm. If this were the case, then standard training practices would have succeeded in translating the classical algorithm into a PDP network.
One method that Dawson et al. (2000) used to interpret the trained network was a multivariate analysis of the network’s hidden unit space. They represented each mushroom as the vector of five hidden unit activation values that it produced when presented to the network. They then performed a k-means clustering of this data. The k-means clustering is an iterative procedure that assigns data points to k different clusters in such a way that each member of a cluster is closer to the centroid of that cluster than to the centroid of any other cluster to which other data points have been assigned.
However, whenever cluster analysis is performed, one question must be answered: How many clusters should be used? In other words, what should the value of k be? An answer to this question is called a stopping rule. Unfortunately, no single stopping rule has been agreed upon (Aldenderfer & Blashfield, 1984; Everitt, 1980). As a result, there exist many different types of methods for determining k (Milligan & Cooper, 1985).
While no general method exists for determining the optimal number of clusters, one can take advantage of heuristic information concerning the domain being clustered in order to come up with a satisfactory stopping rule for this domain. Dawson et al. (2000) argued that when the hidden unit activities of a trained network are being clustered, there must be a correct mapping from these activities to output responses, because the trained network itself has discovered one such mapping. They used this position to create the following stopping rule: “Extract the smallest number of clusters such that every hidden unit activity vector assigned to the same cluster produces the same output response in the network.” They used this rule to determine that the k-means analysis of the network’s hidden unit activity patterns required the use of 12 different clusters.
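A sketch of this stopping rule, using scikit-learn's k-means implementation (an assumption of convenience, not the software used in the original study), simply increases k until every cluster is response-consistent:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_hidden_space(hidden_vectors, responses, max_k=30):
    """hidden_vectors: one row of hidden unit activities per pattern.
    responses: one network output label per pattern (e.g., edible vs. poisonous).
    Returns the smallest k for which every cluster maps onto a single response."""
    H = np.asarray(hidden_vectors)
    R = np.asarray(responses)
    for k in range(1, max_k + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(H)
        if all(len(set(R[labels == c])) == 1 for c in range(k)):
            return k, labels
    raise ValueError("no satisfactory clustering found up to max_k")
```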
Dawson et al. (2000) then proceeded to examine the mushroom patterns that belonged to each cluster in order to determine what they had in common. For each cluster, they determined the set of descriptive features that each mushroom shared. They realized that each set of shared features could be thought of as a condition, represented internally by the network as a vector of hidden unit activities, which results in the network producing a particular action: the edible/poisonous judgment represented by the first output unit.
For example, mushrooms that were assigned to Cluster 2 had an odour that was either almond or anise, which is represented by the network’s five hidden units adopting a particular vector of activities. These activities serve as a condition that causes the network to assert that the mushroom is edible.
By interpreting a hidden unit vector in terms of condition features that are prerequisites to network responses, Dawson et al. (2000) discovered an amazing relationship between the clusters and the set of productions in Table \(2\). They determined that each distinct class of hidden unit activities (i.e., each cluster) corresponded to one, and only one, of the productions listed in the table. This mapping is provided in the last column of Table \(2\). In other words, when one describes the network as generating a response because its hidden units are in one state of activity, one can translate this into the claim that the network is executing a particular production. This shows that the extra output learning translated the classical algorithm into a network model.
The translation of a network into a production system, or vice versa, is an example of new wave reductionism (Bickle, 1996; Endicott, 1998). In new wave reductionism, one does not reduce a secondary theory directly to a primary theory. Instead, one takes the primary theory and constructs from it a structure that is analogous to the secondary theory, but which is created in the vocabulary of the primary theory. Theory reduction involves constructing a mapping between the secondary theory and its image constructed from the primary theory. “The older theory, accordingly, is never deduced; it is just the target of a relevantly adequate mimicry” (Churchland, 1985, p. 10).
Dawson et al.’s (2000) interpretation is a new wave intertheoretic reduction because the production system of Table \(2\) represents the intermediate structure that is analogous to the decision tree of Table \(1\). “Adequate mimicry” was established by mapping different classes of hidden unit states to the execution of particular productions. In turn, there is a direct mapping from any of the productions back to the decision tree algorithm. Dawson et al. concluded that they had provided an exact translation of a classical algorithm into a network of value units.
The relationship between hidden unit activities and productions in Dawson et al.’s (2000) mushroom network is in essence an example of equivalence between symbolic and subsymbolic accounts. This implies that one cannot assume that classical models and connectionist networks are fundamentally different at the algorithmic level, because one type of model can be translated into the other. It is possible to have a classical model that is exactly equivalent to a PDP network.
This result provides very strong support for the position proposed by Vera and Simon (1993). The detailed analysis provided by Dawson et al. (2000) permitted them to make claims of the type “Network State \(x\) is equivalent to Production \(y\).” Of course, this one result cannot by itself validate Vera and Simon’s argument. For instance, can any classical theory be translated into a network? This is one type of algorithmic-level issue that requires a great deal of additional research. As well, the translation works both ways: perhaps artificial neural networks provide a biologically plausible lingua franca for classical architectures!
The notion of representation in classical cognitive science is tightly linked to the structure/process distinction that is itself inspired by the digital computer. An explicit set of rules is proposed to operate on a set of symbols whose structure permits its components to be identified, digitally, as tokens that belong to particular symbol types.
In contrast, artificial neural networks dispense (at first glance) with the sharp distinction between structure and process that characterizes classical cognitive science. Instead, networks themselves take the form of dynamic symbols that represent information at the same time as they transform it. The dynamic, distributed nature of artificial neural networks appears to make them more likely to be explained using statistical mechanics than using propositional logic.
One of the putative advantages of connectionist cognitive science is that it can inspire alternative notions of representation. The blurring of the structure/process distinction and the seemingly amorphous internal structure that characterizes many multilayer networks lead to one such proposal, called coarse coding.
A coarse code is one in which an individual unit is very broadly tuned, sensitive to either a wide range of features or at least to a wide range of values for an individual feature (Churchland & Sejnowski, 1992; Hinton, McClelland, & Rumelhart, 1986). In other words, individual processors are themselves very inaccurate devices for measuring or detecting a feature. The accurate representation of a feature can become possible, though, by pooling or combining the responses of many such inaccurate detectors, particularly if their perspectives are slightly different (e.g., if they are sensitive to different ranges of features, or if they detect features from different input locations).
A familiar example of coarse coding is provided by the nineteenth-century trichromatic theory of colour perception (Helmholtz, 1968; Wasserman, 1978). According to this theory, colour perception is mediated by three types of retinal cone receptors. One is maximally sensitive to short (blue) wavelengths of light, another is maximally sensitive to medium (green) wavelengths, and the third is maximally sensitive to long (red) wavelengths. None of these receptor types is capable of representing, by itself, the rich rainbow of perceptible hues.
However, these receptors are broadly tuned and have overlapping sensitivities. As a result, most light will activate all three channels simultaneously, but to different degrees.

Actual colored light does not produce sensations of absolutely pure color; that red, for instance, even when completely freed from all admixture of white light, still does not excite those nervous fibers which alone are sensitive to impressions of red, but also, to a very slight degree, those which are sensitive to green, and perhaps to a still smaller extent those which are sensitive to violet rays. (Helmholtz, 1968, p. 97)
The pooling of different activities of the three channels permits a much greater variety of colours to be represented and perceived.
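The logic of coarse coding can be conveyed with a small numerical sketch. The Gaussian tuning curves, peak wavelengths, and bandwidth below are illustrative assumptions rather than measured cone sensitivities; the point is only that three broadly tuned, individually ambiguous detectors jointly assign a distinct pattern of responses to each input.

```python
import numpy as np

# Three broadly tuned "cone" detectors (illustrative peak wavelengths, in nm).
peaks = np.array([440.0, 530.0, 600.0])   # short, medium, long
bandwidth = 80.0                          # broad tuning: the curves overlap heavily

def responses(wavelength):
    """Graded response of each detector to a monochromatic light."""
    return np.exp(-((wavelength - peaks) ** 2) / (2 * bandwidth ** 2))

for wl in (480, 500, 520, 570):
    print(wl, np.round(responses(wl), 2))
# Any single response value is ambiguous (many different wavelengths produce it),
# but the pattern across all three detectors changes from wavelength to wavelength,
# so the pooled code identifies the input far more precisely than any one unit can.
```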
We have already seen examples of coarse coding in some of the network analyses that were presented earlier in this chapter. For instance, consider the chord recognition network. It was shown in Table 4.10.2 that none of its hidden units were accurate chord detectors. Hidden Units 1 and 2 did not achieve maximum activity when presented with any chord. When Hidden Unit 3 achieved maximum activity, this did not distinguish a 6th chord from a major 7th chord. However, when patterns were represented as points in a three-dimensional space, where the coordinates of each point were defined by a pattern’s activity in each of the three hidden units (Figures 4.10.6 and 4.10.7), perfect chord classification was possible.
Other connectionist examples of coarse coding are found in studies of networks trained to accomplish navigational tasks, such as making judgments about the distance or direction between pairs of cities on a map (Dawson & Boechler, 2007; Dawson, Boechler, & Orsten, 2005; Dawson, Boechler, & Valsangkar-Smyth, 2000). For instance, Dawson and Boechler (2007) trained a network to judge the heading from one city on a map of Alberta to another. Seven hidden value units were required to accomplish this task. Each of these hidden units could be described as being sensitive to heading. However, this sensitivity was extremely coarse—some hidden units could resolve directions only to the nearest 180°. Nevertheless, a linear combination of the activities of all seven hidden units represented the desired direction between cities with a high degree of accuracy.
Similarly, Dawson, Boechler, and Valsangkar-Smyth (2000) trained a network of value units to make distance judgments between all possible pairs of 13 Albertan cities. This network required six hidden units to accomplish this task. Again, these units provided a coarse coding solution to the problem. Each hidden unit could be described as occupying a location on the map of Alberta through which a line was drawn at a particular orientation. This oriented line provided a one-dimensional map of the cities: connection weights encoded the projections of the cities from the two-dimensional map onto each hidden unit’s one-dimensional representation. However, because the hidden units provided maps of reduced dimensionality, they were wildly inaccurate. Depending on the position of the oriented line, two cities that were far apart in the actual map could lie close together on a hidden unit’s representation. Fortunately, because each of these inaccurate hidden unit maps encoded projections from different perspectives, the combination of their activities was able to represent the actual distance between all city pairs with a high degree of accuracy.
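The geometry behind this kind of coarse code can be illustrated with a small sketch. This is not the learned combination of activities that the Dawson, Boechler, and Valsangkar-Smyth (2000) network actually discovered; it is only an illustration, using made-up city coordinates, of why several coarse one-dimensional maps taken from different orientations can jointly carry exact metric information.

```python
import numpy as np

rng = np.random.default_rng(0)
cities = rng.uniform(0, 100, size=(13, 2))      # hypothetical 2-D "map" coordinates

n_units = 6                                     # six coarse, one-dimensional "maps"
angles = np.arange(n_units) * np.pi / n_units   # evenly spaced line orientations
directions = np.column_stack([np.cos(angles), np.sin(angles)])

# Each unit "sees" only the projection of the map onto its own oriented line.
projections = cities @ directions.T             # shape (13, 6)

def true_distance(i, j):
    return np.linalg.norm(cities[i] - cities[j])

def pooled_estimate(i, j):
    # Any single unit badly distorts distances (two far-apart cities can project
    # close together), but for evenly spaced directions the mean squared projected
    # distance is exactly half the true squared distance, so pooling recovers it.
    d_proj = projections[i] - projections[j]
    return np.sqrt(2 * np.mean(d_proj ** 2))

print(true_distance(0, 1), pooled_estimate(0, 1))   # the two values agree
```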
The discovery of coarse coding in navigational networks has important theoretical implications. Since the discovery of place cells in the hippocampus (O’Keefe & Dostrovsky, 1971), it has been thought that one function of the hippocampus is to instantiate a cognitive map (O’Keefe & Nadel, 1978). One analogy used to explain cognitive maps is that they are like graphical maps (Kitchin, 1994). From this, one might predict that the cognitive map is a metric, topographically organized, two-dimensional array in which each location in the map (i.e., each place in the external world) is associated with the firing of a particular place cell, and neighboring place cells represent neighboring places in the external world.
However, this prediction is not supported by anatomical evidence. First, place cells do not appear to be topographically organized (Burgess, Recce, & O’Keefe, 1995; McNaughton et al., 1996). Second, the receptive fields of place cells are at best locally metric: because the receptive fields of cells representing locations more than about a dozen body lengths apart do not overlap, the distance between such points cannot be measured (Touretzky, Wan, & Redish, 1994). Some researchers now propose that the cognitive map doesn’t really exist, but that map-like properties emerge when place cells are coordinated with other types of cells, such as head direction cells, which fire when an animal’s head is pointed in a particular direction, regardless of the animal’s location in space (McNaughton et al., 1996; Redish, 1999; Redish & Touretzky, 1999; Touretzky, Wan, & Redish, 1994).
Dawson et al. (2000) observed that their navigational network is also subject to the same criticisms that have been leveled against the notion of a topographically organized cognitive map. The hidden units did not exhibit topographic organization, and their inaccurate responses suggest that they are at best locally metric.
Nevertheless, the behavior of the Dawson et al. (2000) network indicated that it represented information about a metric space. That such behavior can be supported by the type of coarse coding discovered in this network suggests that metric, spatial information can be encoded in a representational scheme that is not isomorphic to a graphical map. This raises the possibility that place cells represent spatial information using a coarse code which, when its individual components are inspected, is not very map-like at all. O’Keefe and Nadel (1978, p. 78) were explicitly aware of this kind of possibility: “The cognitive map is not a picture or image which ‘looks like’ what it represents; rather, it is an information structure from which map-like images can be reconstructed and from which behavior dependent upon place information can be generated.”
What are the implications of the ability to interpret the internal structure of artificial neural networks to the practice of connectionist cognitive science?
When New Connectionism arose in the 1980s, interest in it was fuelled by two complementary perspectives (Medler, 1998). First, there was growing dissatisfaction with the progress being made in classical cognitive science and symbolic artificial intelligence (Dreyfus, 1992; Dreyfus & Dreyfus, 1988). Second, seminal introductions to artificial neural networks (McClelland & Rumelhart, 1986; Rumelhart & McClelland, 1986c) gave the sense that the connectionist architecture was a radical alternative to its classical counterpart (Schneider, 1987).
The apparent differences between artificial neural networks and classical models led to an early period of research in which networks were trained to accomplish tasks that had typically been viewed as prototypical examples of classical cognitive science (Bechtel, 1994; Rumelhart & McClelland, 1986a; Seidenberg & McClelland, 1989; Sejnowski & Rosenberg, 1988). These networks were then used as “existence proofs” to support the claim that non-classical models of classical phenomena are possible. However, detailed analyses of these networks were not provided, which meant that, apart from intuitions that connectionism is not classical, there was no evidence to support claims about the non-classical nature of the networks’ solutions to the classical problems. Because of this, this research perspective has been called gee whiz connectionism (Dawson, 2004, 2009).
Of course, at around the same time, prominent classical researchers were criticizing the computational power of connectionist networks (Fodor & Pylyshyn, 1988), arguing that connectionism was a throwback to less powerful notions of associationism that classical cognitive science had already vanquished (Bever, Fodor, & Garrett, 1968; Chomsky, 1957, 1959b, 1965). Thus gee whiz connectionism served an important purpose: providing empirical demonstrations that connectionism might be a plausible medium in which cognitive science can be fruitfully pursued.
However, it was noted earlier that there exists a great deal of research on the computational power of artificial neural networks (Girosi & Poggio, 1990; Hartman, Keeler, & Kowalski, 1989; Lippmann, 1989; McCulloch & Pitts, 1943; Moody & Darken, 1989; Poggio & Girosi, 1990; Renals, 1989; Siegelmann, 1999; Siegelmann & Sontag, 1991); the conclusion from this research is that multilayered networks have the same in-principle power as any universal machine. This leads, though, to the demise of gee whiz connectionism, because if connectionist systems belong to the class of universal machines, “it is neither interesting nor surprising to demonstrate that a network can learn a task of interest” (Dawson, 2004, p. 118). If a network’s ability to learn to perform a task is not of interest, then what is?
It can be extremely interesting, surprising, and informative to determine what regularities the network exploits. What kinds of regularities in the input patterns has the network discovered? How does it represent these regularities? How are these regularities combined to govern the response of the network? (Dawson, 2004, p. 118)
By uncovering the properties of representations that networks have discovered for mediating an input-output relationship, connectionist cognitive scientists can discover new properties of cognitive phenomena.
In the last several sections, we have been concerned with interpreting the internal structure of multilayered artificial neural networks. While some have claimed that all that can be found within brains and networks is goo (Mozer & Smolensky, 1989), the preceding examples have shown that detailed interpretations of internal network structure are both possible and informative. These interpretations reveal algorithmic-level details about how artificial neural networks use their hidden units to mediate mappings from inputs to outputs.
If the goal of connectionist cognitive science is to make new representational discoveries, then this suggests that it be practiced as a form of synthetic psychology (Braitenberg, 1984; Dawson, 2004) that incorporates both synthesis and analysis, and that involves both forward engineering and reverse engineering.
The analytic aspect of connectionist cognitive science involves peering inside a network in order to determine how its internal structure represents solutions to problems. The preceding pages of this chapter have provided several examples of this approach, which seems identical to the reverse engineering practiced by classical cognitive scientists.
The reverse engineering phase of connectionist cognitive science is also linked to classical cognitive science, in the sense that the results of these analyses are likely to provide the questions that drive algorithmic-level investigations. Once a novel representational format is discovered in a network, a key issue is to determine whether it also characterizes human or animal cognition. One would expect that when connectionist cognitive scientists evaluate their representational discoveries, they should do so by gathering the same kind of relative complexity, intermediate state, and error evidence that classical cognitive scientists gather when seeking strong equivalence.
Before one can reverse engineer a network, one must create it. And if the goal of such a network is to discover surprising representational regularities, then it should be created by minimizing representational assumptions as much as possible. One takes the building blocks available in a particular connectionist architecture, creates a network from them, encodes a problem for this network in some way, and attempts to train the network to map inputs to outputs.
This synthetic phase of research involves exploring different network structures (e.g., different design decisions about numbers of hidden units, or types of activation functions) and different approaches to encoding inputs and outputs. The idea is to give the network as many degrees of freedom as possible to discover representational regularities that have not been imposed or predicted by the researcher. These decisions all involve the architectural level of investigation.
One issue, though, is that networks are greedy, in the sense that they will exploit whatever resources are available to them. As a result, fairly idiosyncratic and specialized detectors are likely to be found if too many hidden units are provided to the network, and the network’s performance may not transfer well when presented with novel stimuli. To deal with this, one must impose constraints by looking for the simplest network that will reliably learn the mapping of interest. The idea here is that such a network might be the one most likely to discover a representation general enough to transfer the network’s ability to new patterns.
Importantly, sometimes when one makes architectural decisions to seek the simplest network capable of solving a problem, one discovers that the required network is merely a perceptron that does not employ any hidden units. In the remaining sections of this chapter I provide some examples of simple networks that are capable of performing interesting tasks. First, the relevance of perceptrons to modern theories of associative learning is described. Next, I present a perceptron model of the reorientation task. Finally, an interpretation is given for the structure of a perceptron that learns a seemingly complicated progression of musical chords.
The history of artificial neural networks can be divided into two periods, Old Connectionism and New Connectionism (Medler, 1998). New Connectionism studies powerful networks consisting of multiple layers of units whose connections are trained to perform complex tasks. Old Connectionism studied networks that belonged to one of two classes. One was powerful multilayer networks that were hand-wired, not trained (McCulloch & Pitts, 1943). The other was less powerful networks that did not have hidden units but were trained (Rosenblatt, 1958, 1962; Widrow, 1962; Widrow & Hoff, 1960).
Perceptrons (Rosenblatt, 1958, 1962) belong to Old Connectionism. A perceptron is a standard pattern associator whose output units employ a nonlinear activation function. Rosenblatt’s perceptrons used the Heaviside step function to convert net input into output unit activity. Modern perceptrons use continuous nonlinear activation functions, such as the logistic or the Gaussian (Dawson, 2004, 2005, 2008; Dawson et al., 2009; Dawson et al., 2010).
Perceptrons are trained using an error-correcting variant of Hebb-style learning (Dawson, 2004). Perceptron training associates input activity with output unit error as follows. First, a pattern is presented to the input units, producing output unit activity via the existing connection weights. Second, output unit error is computed by taking the difference between actual output unit activity and desired output unit activity for each output unit in the network. This kind of training is called supervised learning, because it requires an external trainer to provide the desired output unit activities. Third, Hebb-style learning is used to associate input unit activity with output unit error: weight change is equal to a learning rate times input unit activity times output unit error. (In modern perceptrons, this triple product can also be multiplied by the derivative of the output unit’s activation function, resulting in gradient descent learning [Dawson, 2004]).
The supervised learning of a perceptron is designed to reduce output unit errors as training proceeds. Weight changes are proportional to the amount of generated error. If no errors occur, then weights are not changed. If a task’s solution can be represented by a perceptron, then repeated training using pairs of input-output stimuli is guaranteed to eventually produce zero error, as proven in Rosenblatt’s perceptron convergence theorem (Rosenblatt, 1962).
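To make the procedure concrete, here is a minimal sketch of this kind of error-correcting training for a single logistic output unit. The logical AND training set, the learning rate, and the epoch count are illustrative assumptions of mine rather than details from any of the cited studies.

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_perceptron(inputs, targets, lr=0.5, epochs=1000):
    """Error-correcting (delta rule) training of a single logistic output unit."""
    w = np.zeros(inputs.shape[1])
    bias = 0.0
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            a = logistic(np.dot(w, x) + bias)   # respond to the input pattern
            error = t - a                       # desired minus actual activity
            grad = error * a * (1 - a)          # error times derivative of the logistic
            w += lr * grad * x                  # weight change: rate * input * error term
            bias += lr * grad
    return w, bias

# Example: logical AND, a linearly separable mapping a perceptron can learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)
w, b = train_perceptron(X, t)
print(np.round(logistic(X @ w + b), 2))         # values close to 0, 0, 0, 1
```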
Being a product of Old Connectionism, there are limits to the range of input-output mappings that can be mediated by perceptrons. In their famous computational analyses of what perceptrons could and could not learn to compute, Minsky and Papert (1969) demonstrated that perceptrons could not learn to distinguish some basic topological properties easily discriminated by humans, such as the difference between connected and unconnected figures. As a result, interest in and funding for Old Connectionist research decreased dramatically (Medler, 1998; Papert, 1988).
However, perceptrons are still capable of providing new insights into phenomena of interest to cognitive science. The remainder of this section illustrates this by exploring the relationship between perceptron learning and classical conditioning.
The primary reason that connectionist cognitive science is related to empiricism is that the knowledge of an artificial neural network is typically acquired via experience. For instance, in supervised learning a network is presented with pairs of patterns that define an input-output mapping of interest, and a learning rule is used to adjust connection weights until the network generates the desired response to a given input pattern.
In the twentieth century, prior to the birth of artificial neural networks (McCulloch & Pitts, 1943), empiricism was the province of experimental psychology. A detailed study of classical conditioning (Pavlov, 1927) explored the subtle regularities of the law of contiguity. Pavlovian, or classical, conditioning begins with an unconditioned stimulus (US) that is capable, without training, of producing an unconditioned response (UR). Also of interest is a conditioned stimulus (CS) that when presented will not produce the UR. In classical conditioning, the CS is paired with the US for a number of trials. As a result of this pairing, which places the CS in contiguity with the UR, the CS becomes capable of eliciting the UR on its own. When this occurs, the UR is then known as the conditioned response (CR).
Classical conditioning is a very basic kind of learning, but experiments revealed that the mechanisms underlying it were more complex than the simple law of contiguity. For example, one phenomenon found in classical conditioning is blocking (Kamin, 1968). Blocking involves two conditioned stimuli, CSA and CSB. Either stimulus is capable of being conditioned to produce the CR. However, if training begins with a phase in which only CSA is paired with the US and is then followed by a phase in which both CSA and CSB are paired with the US, then CSB fails to produce the CR. The prior conditioning involving CSA blocks the conditioning of CSB, even though in the second phase of training CSB is contiguous with the UR.
The explanation of phenomena such as blocking required a new model of associative learning. Such a model was proposed in the early 1970s by Robert Rescorla and Allen Wagner (Rescorla & Wagner, 1972). This mathematical model of learning has been described as being cognitive, because it defines associative learning in terms of expectation. Its basic idea is that a CS is a signal about the likelihood that a US will soon occur. Thus the CS sets up expectations of future events. If these expectations are met, then no learning will occur. However, if these expectations are not met, then associations between stimuli and responses will be modified. “Certain expectations are built up about the events following a stimulus complex; expectations initiated by that complex and its component stimuli are then only modified when consequent events disagree with the composite expectation” (p. 75).
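In its standard formulation, this idea is captured by a simple update rule: the change in associative strength between a conditioned stimulus \(X\) and the US on a given trial is \(\Delta V_X = \alpha_X \beta (\lambda - \sum V)\), where \(\lambda\) is the maximum associative strength the US can support, \(\sum V\) is the combined strength of all conditioned stimuli present on that trial, and \(\alpha_X\) and \(\beta\) are learning rate parameters associated with the CS and the US. The difference \(\lambda - \sum V\) is the surprise term: when the composite expectation matches what actually occurs, the difference is zero and no learning takes place.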
The expectation-driven learning that was formalized in the Rescorla-Wagner model explained phenomena such as blocking. In the second phase of learning in the blocking paradigm, the coming US was already signaled by CSA. Because there was no surprise, no conditioning of CSB occurred. The Rescorla-Wagner model has had many other successes; though it is far from perfect (Miller, Barnet, & Grahame, 1995; Walkenbach & Haddad, 1980), it remains an extremely influential, if not the most influential, mathematical model of learning.
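A short simulation shows how the update rule produces blocking. The parameter values, trial counts, and function below are illustrative assumptions of mine, not values taken from Rescorla and Wagner (1972) or Dawson (2008).

```python
import numpy as np

def rescorla_wagner(trials, alpha, beta=0.2, lam=1.0, n_cs=2):
    """trials: list of (present, reinforced), where present is a 0/1 vector of CSs."""
    V = np.zeros(n_cs)
    for present, reinforced in trials:
        present = np.asarray(present, dtype=float)
        surprise = (lam if reinforced else 0.0) - np.dot(present, V)
        V += alpha * beta * surprise * present   # only stimuli present on a trial change
    return V

alpha = np.array([0.5, 0.5])

# Phase 1: CSA alone is paired with the US; CSA's strength approaches lambda.
phase1 = [([1, 0], True)] * 100
# Phase 2: CSA and CSB are presented together and paired with the US.
phase2 = [([1, 1], True)] * 100

print(np.round(rescorla_wagner(phase1 + phase2, alpha), 3))
# CSA ends near 1.0; CSB stays near 0 because CSA already predicts the US: blocking.

# Control: compound training without prior CSA training conditions both stimuli.
print(np.round(rescorla_wagner(phase2, alpha), 3))   # both end near 0.5
```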
The Rescorla-Wagner proposal that learning depends on the amount of surprise parallels the notion in supervised training of networks that learning depends on the amount of error. What is the relationship between Rescorla-Wagner learning and perceptron learning?
Proofs of the equivalence between the mathematics of Rescorla-Wagner learning and the mathematics of perceptron learning have a long history. Early proofs demonstrated that one learning rule could be translated into the other (Gluck & Bower, 1988; Sutton & Barto, 1981). However, these proofs assumed that the networks had linear activation functions. More recently, it has been proven that when it is more properly assumed that networks employ a nonlinear activation function, one can still translate Rescorla-Wagner learning into perceptron learning, and vice versa (Dawson, 2008).
One would imagine that the existence of proofs of the computational equivalence between Rescorla-Wagner learning and perceptron learning would mean that perceptrons would not be able to provide any new insights into classical conditioning. However, this is not correct. Dawson (2008) has shown that if one puts aside the formal comparison of the two types of learning and uses perceptrons to simulate a wide variety of different classical conditioning paradigms, then some puzzling results occur. On the one hand, perceptrons generate the same results as the Rescorla-Wagner model for many different paradigms. Given the formal equivalence between the two types of learning, this is not surprising. On the other hand, for some paradigms, perceptrons generate different results than those predicted from the Rescorla-Wagner model (Dawson, 2008, Chapter 7). Furthermore, in many cases these differences represent improvements over Rescorla-Wagner learning. If the two types of learning are formally equivalent, then how is it possible for such differences to occur?
Dawson (2008) used this perceptron paradox to motivate a more detailed comparison between Rescorla-Wagner learning and perceptron learning. He found that while these two models of learning were equivalent at the computational level of investigation, there were crucial differences between them at the algorithmic level. In order to train a perceptron, the network must first behave (i.e., respond to an input pattern) in order for error to be computed to determine weight changes. In contrast, Dawson showed that the Rescorla-Wagner model defines learning in such a way that behaviour is not required!
Dawson’s (2008) algorithmic analysis of Rescorla-Wagner learning is consistent with Rescorla and Wagner’s (1972) own understanding of their model: “Independent assumptions will necessarily have to be made about the mapping of associative strengths into responding in any particular situation” (p. 75). Later, they make this same point much more explicitly:
We need to provide some mapping of [associative] values into behavior. We are not prepared to make detailed assumptions in this instance. In fact, we would assume that any such mapping would necessarily be peculiar to each experimental situation, and depend upon a large number of ‘performance’ variables. (Rescorla & Wagner, 1972, p. 77)
Some knowledge is tacit: we can know more than we can tell (Polanyi, 1966). Dawson (2008) noted that the Rescorla-Wagner model presents an interesting variant of this theme: if there is no explicit need for a behavioural theory, then there is no need to specify it explicitly. Instead, researchers can ignore Rescorla and Wagner’s (1972) call for explicit models to convert associative strengths into behaviour and instead assume unstated, tacit theories such as “strong associations produce stronger, or more intense, or faster behavior.” Researchers evaluate the Rescorla-Wagner model (Miller, Barnet, & Grahame, 1995; Walkenbach & Haddad, 1980) by agreeing that associations will eventually lead to behaviour, without actually stating how this is done. In the Rescorla-Wagner model, learning comes first and behaviour comes later—maybe.
Using perceptrons to study classical conditioning paradigms contributes to the psychological understanding of such learning in three ways. First, at the computational level, it demonstrates equivalences between independent work on learning conducted in computer science, electrical engineering, and psychology (Dawson, 2008; Gluck & Bower, 1988; Sutton & Barto, 1981).
Second, the results of training perceptrons in these paradigms raise issues that lead to a more sophisticated understanding of learning theories. For instance, the perceptron paradox led to the realization that when the Rescorla-Wagner model is typically used, accounts of converting associations into behavior are unspecified. Recall that one of the advantages of computer simulation research is exposing tacit assumptions (Lewandowsky, 1993).
Third, the activation functions that are a required property of a perceptron serve as explicit theories of behavior to be incorporated into the Rescorla-Wagner model. More precisely, changes in activation function result in changes to how the perceptron responds to stimuli, indicating the importance of choosing a particular architecture (Dawson & Spetch, 2005). The wide variety of activation functions that are available for artificial neural networks (Duch & Jankowski, 1999) offers a great opportunity to explore how changing theories of behaviour—or altering architectures—affect the nature of associative learning.
The preceding paragraphs have shown how the perceptron can be used to inform theories of a very old psychological phenomenon, classical conditioning. We now consider how perceptrons can play a role in exploring a more modern topic, reorientation, which was described from a classical perspective in Chapter 3 (Section 3.12).
In the reorientation task, an agent learns that a particular place—usually a corner of a rectangular arena—is a goal location. The agent is then removed from the arena, disoriented, and returned to an arena. Its task is to use the available cues to relocate the goal. Theories of reorientation assume that there are two types of cues available for reorienting: local feature cues and relational geometric cues. Studies indicate that both types of cues are used for reorienting, even in cases where geometric cues are irrelevant (Cheng & Newcombe, 2005). As a result, some theories have proposed that a geometric module guides reorienting behaviour (Cheng, 1986; Gallistel, 1990).
The existence of a geometric module has been proposed because different kinds of results indicate that the processing of geometric cues is mandatory. First, in some cases agents continue to make rotational errors (i.e., the agent does not go to the goal location, but goes instead to an incorrect location that is geometrically identical to the goal location) even when a feature disambiguates the correct corner (Cheng, 1986; Hermer & Spelke, 1994). Second, when features are removed following training, agents typically revert to choosing both of the geometrically correct locations (Kelly et al., 1998; Sovrano et al., 2003). Third, when features are moved, agents generate behaviours that indicate that both types of cues were processed (Brown, Spetch, & Hurd, 2007; Kelly, Spetch, & Heth, 1998).
Recently, some researchers have begun to question the existence of geometric modules. One reason for this is that the most compelling evidence for claims of modularity comes from neuroscience (Dawson, 1998; Fodor, 1983), but such evidence about the modularity of geometry in the reorientation task is admittedly sparse (Cheng & Newcombe, 2005). This has led some researchers to propose alternative notions of modularity when explaining reorientation task regularities (Cheng, 2005, 2008; Cheng & Newcombe, 2005).
Still other researchers have explored how to abandon the notion of the geometric module altogether. They have proceeded by creating models that produce the main findings from the reorientation task, but they do so without using a geometric module. A modern perceptron that uses the logistic activation function has been shown to provide just such a model (Dawson et al., 2010).
The perceptrons used by Dawson et al. (2010) had a single output unit that, when the perceptron was “placed” in the original arena, was trained to turn on to the goal location and turn off to all of the other locations. A set of input units was used to represent the various cues—featural and geometric—available at each location. Both feature cues and geometric cues were treated in an identical fashion by the network; no geometric module was built into it.
After training, the perceptron was “placed” into a new arena; this approach was used to simulate the standard variations of the reorientation task in which geometric cues and feature cues could be placed in conflict. In the new arena, the perceptron was “shown” all of the possible goal locations by activating its input units with the features available at each location. The resulting output unit activity was interpreted as representing the likelihood that there was a reward at any of the locations in the new arena.
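The mechanics of this kind of simulation can be sketched as follows. The cue encoding, arena layouts, and training parameters below are placeholders of my own, not the actual stimuli or settings used by Dawson et al. (2010); the sketch is only meant to show the general procedure of training a logistic perceptron on one arena and then reading its output activity for each location of a test arena.

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(patterns, targets, lr=0.5, epochs=2000):
    """Delta-rule training of a single logistic output unit (no hidden units)."""
    w = np.zeros(patterns.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(patterns, targets):
            a = logistic(w @ x + b)
            g = (t - a) * a * (1 - a)
            w += lr * g * x
            b += lr * g
    return w, b

# Hypothetical cue encoding for the four corners of a rectangular arena:
# [long wall to the left, long wall to the right, distinctive feature present]
training_arena = np.array([
    [1, 0, 1],   # goal corner: correct geometry plus the feature
    [0, 1, 0],
    [1, 0, 0],   # rotational corner: same geometry as the goal, no feature
    [0, 1, 0],
], dtype=float)
rewarded = np.array([1, 0, 0, 0], dtype=float)

w, b = train(training_arena, rewarded)

# Probe a test arena (here the feature has been moved to a different corner);
# output activity at each corner is read as the likelihood of responding there.
test_arena = np.array([[1, 0, 0], [0, 1, 1], [1, 0, 0], [0, 1, 0]], dtype=float)
print(np.round(logistic(test_arena @ w + b), 2))
```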
The results of the Dawson et al. (2010) simulations replicated the standard reorientation task findings that have been used to argue for the existence of a geometric module. However, this was accomplished without using such a module. These simulations also revealed new phenomena that have typically not been explored in the reorientation task, relating to the difference between excitatory cues, which indicate the presence of a reward, and inhibitory cues, which indicate the absence of a reward. In short, perceptrons have been used to create an associative, nonmodular theory of reorientation.
We have seen that a particular type of network from Old Connectionism, the perceptron, can be usefully applied to the study of classical conditioning and reorientation. In the current section we see that it can also be used to explore musical regularities. Also illustrated is the interpretation of the internal structure of such a network, which demonstrates that even simple networks can reveal some interesting algorithmic properties.
Jazz progressions are sequences of chords. Consider the C major scale presented earlier, in Figure 4-8. If one takes the first note of the scale, C, as the root and adds every second note in the scale—E, G, and B—the result is a four-note chord—a tetrachord—called the C major 7th chord (Cmaj7). Because the root of this chord is the first note of the scale, this is identified as the I chord for C major. Other tetrachords can also be built for this key. Starting with the second note in the scale, D, and adding the notes F, A, and C produces D minor 7th (Dm7). Because its root is the second note of the scale, this is identified as the II chord for the key of C major. Using G as the root and adding the notes B, D, and F creates the G dominant 7th chord (G7). It is the V chord of the key of C major because its root is the fifth note of the C major scale.
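The construction of these tetrachords is easy to express in pitch-class terms. The sketch below assumes the usual numbering of pitch classes (C = 0 through B = 11); the helper function and names are mine.

```python
# Pitch classes: C=0, C#=1, D=2, ..., B=11.
NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
C_MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]    # C D E F G A B

def tetrachord(scale, degree):
    """Stack every second scale note on top of the given scale degree (0-based)."""
    return [scale[(degree + step) % len(scale)] for step in (0, 2, 4, 6)]

for degree, label in [(0, 'I  (Cmaj7)'), (1, 'II (Dm7)'), (4, 'V  (G7)')]:
    chord = tetrachord(C_MAJOR_SCALE, degree)
    print(label, [NOTE_NAMES[pc] for pc in chord])
# I  (Cmaj7) ['C', 'E', 'G', 'B']
# II (Dm7)   ['D', 'F', 'A', 'C']
# V  (G7)    ['G', 'B', 'D', 'F']
```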
The I, II, and V chords are the three most commonly played jazz chords, and in jazz they often appear in the context of the II-V-I progression (Levine, 1989). This chord progression involves playing these chords in a sequence that begins with the II chord, moves to the V chord, and ends on the I chord. The II-V-I progression is important for several reasons.
First, chord progressions are used to establish tonality, that is, to specify to the listener the musical key in which a piece is being played. They do so by setting up expectancies about what is to be played next. For any major key, the most stable tones are notes I, IV, and V (Krumhansl, 1990), and the most stable chords are the ones built on those three notes.
Second, in the perception of chord sequences there are definite preferences for the IV chord to resolve into the V chord and for the V chord to resolve into the I chord, producing the IV-V-I progression that is common in cadences in classical music (Bharucha, 1984; Jarvinen, 1995; Katz, 1995; Krumhansl, Bharucha, & Kessler, 1982; Rosner & Narmour, 1992). There is a similar relationship between the IV chord and the II chord if the latter is minor (Steedman, 1984). Thus the II-V-I progression is a powerful tool for establishing the tonality of a musical piece.
Third, the II-V-I progression lends itself to a further set of chord progressions that move from key to key, providing variety but also establishing tonality. After playing the Cmaj7 chord to end the II-V-I progression for C major, one can change two notes to transform Cmaj7 into Cm7, which is the II chord of a different musical key, A# major. As a result, one can move from performing the II-V-I progression in C major to performing the same progression in a major key one tone lower. This process can be repeated; the full set of chord changes is provided in Table \(1\). Note that this progression eventually returns to the starting key of C major, providing another powerful cue of tonality.
          Chord Progression For Key
Key       II        V         I
C         Dm7       G7        Cmaj7
A#        Cm7       F7        A#maj7
G#        A#m7      D#7       G#maj7
F#        G#m7      C#7       F#maj7
E         F#m7      B7        Emaj7
D         Em7       A7        Dmaj7
C         Dm7       G7        Cmaj7
Table \(1\). A progression of II-V-I progressions, descending from the key of C major. The chords in each row are played in sequence, and after playing one row, the next row is played.
A connectionist network can be taught the II-V-I chord progression. During training, one presents, in pitch class format, a chord belonging to the progression. The network learns to output the next chord to be played in the progression, again using pitch class format. Surprisingly, this problem is very simple: it is linearly separable and can be solved by a perceptron!
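In pitch class format each chord becomes a 12-element binary vector, with a 1 for every pitch class the chord contains; successive chords in Table \(1\) then supply the input and target patterns. A brief sketch of one training pair (the helper function and names are mine):

```python
NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def pitch_class_vector(note_names):
    """Encode a chord as a 12-element binary pitch-class vector."""
    vec = [0] * 12
    for name in note_names:
        vec[NOTE_NAMES.index(name)] = 1
    return vec

# One step of the progression: given Dm7, the network must output G7.
x = pitch_class_vector(['D', 'F', 'A', 'C'])      # Dm7, the current chord
t = pitch_class_vector(['G', 'B', 'D', 'F'])      # G7, the chord to play next
print(x)
print(t)
```

Each of the network's twelve output units, one per pitch class, can then be trained on such pairs with the error-correcting rule sketched earlier.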
How does a perceptron represent this jazz progression? Because a perceptron has no hidden units, its representation must be stored in the set of connection weights between the input and output units. However, this matrix of connection weights is too complex to reveal its musical representations simply by inspecting it. Instead, multivariate statistics must be used.
First, one can convert the raw connection weights into a correlation matrix. That is, one can compute the similarity of each pair of output units by computing the correlation between the connection weights that feed into them. Once the weights have been converted into correlations, further analyses are available to interpret network representations. Multidimensional scaling (MDS) can summarize the relationships within a correlation matrix by making them visible in a map (Kruskal & Wish, 1978; Romney, Shepard, & Nerlove, 1972; Shepard, Romney, & Nerlove, 1972). Items are positioned in the map in such a way that the more similar two items are, the closer together they appear.
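These two steps can be sketched as follows. The stand-in random weight matrix, the conversion of correlations to dissimilarities as \(1 - r\), and the use of scikit-learn's MDS are assumptions of mine rather than details of the original analysis.

```python
import numpy as np
from sklearn.manifold import MDS

NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

# Stand-in for the trained 12 x 12 weight matrix: row i holds the connection
# weights feeding into output unit i (one output unit per pitch class).
rng = np.random.default_rng(0)
weights = rng.normal(size=(12, 12))

# Similarity of each pair of output units: correlate their incoming weights.
corr = np.corrcoef(weights)

# One-dimensional MDS map: similar output units receive nearby coordinates.
coords = MDS(n_components=1, dissimilarity='precomputed',
             random_state=0).fit_transform(1.0 - corr)

for name, c in sorted(zip(NOTE_NAMES, coords[:, 0]), key=lambda pair: pair[1]):
    print(f'{name:>2} {c: .3f}')
```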
The MDS of the jazz progression network’s correlations produced a one-dimensional map that provided a striking representation of musical relationships amongst the notes. In a one-dimensional MDS solution, each data point is assigned a single number, which is its coordinate on the single axis that is the map. The coordinate for each note is presented in a bar chart in Figure \(1\).
The first regularity evident from Figure \(1\) is that half of the notes have negative coordinates, while the other half have positive coordinates. That is, the perceptron’s connection weights separate musical notes into two equal-sized classes. These classes reflect a basic property of the chord progressions learned by the network: all of the notes that have positive coordinates were also used as major keys in which the II-V-I progression was defined, while none of the notes with negative coordinates were used in this fashion.
Another way to view the two classes of notes revealed by this analysis is in terms of the two circles of major seconds that were presented in Figure 4.10.4. The first circle of major seconds contains only those notes that have positive coordinates in Figure \(1\). The other circle of major seconds captures the set of notes that have negative coordinates in Figure \(1\). In other words, the jazz progression network acts as if it has classified notes in terms of the circles of major seconds!
The order in which the notes are arranged in the one-dimensional map is also related to the four circles of major thirds that were presented in Figure 4.10.5. The bars in Figure \(1\) have been colored to reveal four sets of three notes each. Each of these sets of notes defines a circle of major thirds. The MDS map places notes in such a way that the notes of one such circle are listed in order, followed by the notes of another circle of major thirds.
To summarize, one musical formalism is the II-V-I jazz progression. Interestingly, this formalism can be learned by a network from Old Connectionism, the perceptron. Even though this network is simple, interpreting its representations is not straightforward and requires the use of multivariate statistics. However, when such analysis is performed, it appears that the network captures the regularities of this jazz progression using the strange circles that were encountered in the earlier section on chord classification. That is, the connection weights of the perceptron reveal circles of major seconds and circles of major thirds.
The purpose of the current chapter was to introduce the elements of connectionist cognitive science, the “flavour” of cognitive science that was seen first as Old Connectionism in the 1940s (McCulloch & Pitts, 1943) and which peaked by the late 1950s (Rosenblatt, 1958, 1962; Widrow, 1962; Widrow & Hoff, 1960). Criticisms concerning the limitations of such networks (Minsky & Papert, 1969) caused connectionist research to almost completely disappear until the mid-1980s (Papert, 1988), when New Connectionism arose in the form of techniques capable of training powerful multilayered networks (McClelland & Rumelhart, 1986; Rumelhart & McClelland, 1986c).
Connectionism is now well established as part of mainstream cognitive science, although its relationship to classical cognitive science is far from clear. Artificial neural networks have been used to model a dizzying variety of phenomena including animal learning (Enquist & Ghirlanda, 2005; Schmajuk, 1997), cognitive development (Elman et al., 1996), expert systems (Gallant, 1993), language (Mammone, 1993; Sharkey, 1992), pattern recognition and perception (Pao, 1989; Ripley, 1996; Wechsler, 1992), and musical cognition (Griffith & Todd, 1999; Todd & Loy, 1991).
Given the breadth of connectionist cognitive science, only a selection of its elements have been introduced in this chapter; capturing all of the important contributions of connectionism in a single chapter is not possible. A proper treatment of connectionism requires a great deal of further reading; fortunately connectionism is described in a rich and growing literature (Amit, 1989; Anderson, 1995; Anderson & Rosenfeld, 1998; Bechtel & Abrahamsen, 2002; Carpenter & Grossberg, 1992; Caudill & Butler, 1992a, 1992b; Churchland, 1986; Churchland & Sejnowski, 1992; Clark, 1989, 1993; Dawson, 2004, 2005; Grossberg, 1988; Horgan & Tienson, 1996; Quinlan, 1991; Ramsey, Stich, & Rumelhart, 1991; Ripley, 1996; Rojas, 1996).
Connectionist cognitive science is frequently described as a reaction against the foundational assumptions of classical cognitive science. The roots of classical cognitive science draw inspiration from the rationalist philosophy of Descartes, with an emphasis on nativism and logicism (Chomsky, 1966; Devlin, 1996). In contrast, the foundations of connectionist cognitive science are the empiricist philosophy of Locke and the associationist psychology that can be traced from the early British empiricists to the more modern American behaviourists. Connectionist networks acquire structure or knowledge via experience; they often begin as blank slates (Pinker, 2002) and acquire structure as they learn about their environments (Bechtel, 1985; Clark, 1989, 1993; Hillis, 1988).
Classical cognitive science departed from Cartesian philosophy by seeking materialist accounts of mentality. This view was inspired by the digital computer and the fact that electronic switches could be assigned abstract logical interpretations (Shannon, 1938).
Connectionism is materialist as well, but arguably in a more restricted sense than classical cognitive science. The classical approach appeals to the multiple realization argument when it notes that under the proper interpretation, almost any physical substrate could instantiate information processing or symbol manipulation (Hillis, 1998). In contrast, connectionism views the digital computer metaphor as mistaken. Connectionists claim that the operations of such a device—regardless of its material nature—are too slow, brittle, and inflexible to be appropriate for modelling cognition. Connectionism posits instead that the brain is the only appropriate material for realizing the mind and researchers attempt to frame its theories in terms of information processing that is biologically plausible or neuronally inspired (Amit, 1989; Burnod, 1990; Gluck & Myers, 2001).
In adopting the digital computer metaphor and the accompanying logicist view that cognition is the result of rule-governed symbol manipulation, classical cognitive science is characterized by a marked structure/process distinction. That is, classical models—typified by Turing machines (Turing, 1936) or production systems (Newell & Simon, 1972)—distinguish between the symbols being manipulated and the explicit rules doing the manipulating. This distinction is usually marked in models by having separate locations for structure and process, such as a memory that holds symbols and a central controller that holds the processes.
In abandoning the digital computer metaphor and adopting a notion of information processing that is biologically inspired, connectionist cognitive science abandons or blurs the structure/process distinction. Neural networks can be viewed as both structure and process; they have been called active data structures (Hillis, 1985). This has led to an extensive debate about whether theories of cognition require explicit rules (Ramsey, Stich, & Rumelhart, 1991).
The digital computer metaphor adopted by classical cognitive science leads it to also adopt a particular notion of control. In particular, classical models invoke a notion of serial control in which representations can only be manipulated one rule at a time. When classical problem solvers search a problem space in order to solve a problem (Newell & Simon, 1972), they do so to discover a sequence of operations to perform.
In contrast, when connectionist cognitive science abandons the digital computer metaphor, it abandons with it the assumption of centralized serial control. It does so because it views this as a fatal flaw in classical models, generating a “von Neumann bottleneck” that makes classical theories too slow to be useful in real time (Feldman & Ballard, 1982; Hillis, 1985). In the stead of centralized serial control, connectionists propose decentralized control in which many simple processes can be operating in parallel (see Dawson & Schopflocher, 1992a).
Clearly, from one perspective, there are obvious and important differences between connectionist and classical cognitive science. However, a shift in perspective can reveal a view in which striking similarities between these two approaches are evident. We saw earlier that classical cognitive science is performed at multiple levels of analysis, using formal methods to explore the computational level, behavioural methods to investigate the algorithmic level, and a variety of behavioural and biological techniques to elaborate the architectural and implementational levels. It is when connectionist cognitive science is examined from this same multiple-levels viewpoint that its relationship to classical cognitive science is made apparent (Dawson, 1998).
Analyses at the computational level involve using some formal language to make proofs about cognitive systems. Usually these proofs concern statements about what kind of computation is being performed or what the general capabilities of a system are. Computational-level analyses have had a long and important history in connectionist cognitive science, and they have been responsible, for example, for proofs that particular learning rules will converge to desired least-energy or low-error states (Ackley, Hinton, & Sejnowski, 1985; Hopfield, 1982; Rosenblatt, 1962; Rumelhart, Hinton, & Williams, 1986b). Other examples of computational analyses were provided earlier in this chapter, in the discussion of carving pattern spaces into decision regions and the determination that output unit activities could be interpreted as being conditional probabilities.
That computational analysis is possible for both connectionist and classical cognitive science highlights one similarity between these two approaches. The results of some computational analyses, though, reveal a more striking similarity. One debate in the literature has concerned whether the associationist nature of artificial neural networks limits their computational power, to the extent that they are not appropriate for cognitive science. For instance, there has been considerable debate about whether PDP networks demonstrate appropriate systematicity and componentiality (Fodor & McLaughlin, 1990; Fodor & Pylyshyn, 1988; Hadley, 1994a, 1994b, 1997; Hadley & Hayward, 1997), two characteristics important for the use of recursion in classical models. However, beginning with the mathematical analyses of Warren McCulloch (McCulloch & Pitts, 1943) and continuing with modern computational analyses (Girosi & Poggio, 1990; Hartman, Keeler, & Kowalski, 1989; Lippmann, 1989; McCulloch & Pitts, 1943; Moody & Darken, 1989; Poggio & Girosi, 1990; Renals, 1989; Siegelmann, 1999; Siegelmann & Sontag, 1991), we have seen that artificial neural networks belong to the class of universal machines. Classical and connectionist cognitive science are not distinguishable at the computational level of analysis (Dawson, 1998, 2009).
Let us now turn to the next level of analysis, the algorithmic level. For classical cognitive science, the algorithmic level involves detailing the specific information processing steps that are involved in solving a problem. In general, this almost always involves analyzing behaving systems in order to determine how representations are being manipulated, an approach typified by examining human problem solving with the use of protocol analysis (Ericsson & Simon, 1984; Newell & Simon, 1972). Algorithmic-level analyses for connectionists also involve analyzing the internal structure of intact systems—trained networks—in order to determine how they mediate stimulus-response regularities. We have seen examples of a variety of techniques that can and have been used to uncover the representations that are hidden within network structures, and which permit networks to perform desired input-output mappings. Some of these representations, such as coarse codes, look like alternatives to classical representations. Thus one of connectionist cognitive science’s contributions may be to permit new kinds of representations to be discovered and explored.
Nevertheless, algorithmic-level analyses also reveal further similarities between connectionist and classical cognitive science. While these two approaches may propose different kinds of representations, they still are both representational. There is no principled difference between the classical sandwich and the connectionist sandwich (Calvo & Gomila, 2008). Furthermore, it is not even guaranteed that the contents of these two types of sandwiches will differ. One can peer inside an artificial neural network and find classical rules for logic (Berkeley et al., 1995) or even an entire production system (Dawson et al., 2000).
At the architectural level of analysis, stronger differences between connectionist and classical cognitive science can be established. Indeed, the debate between these two approaches is in essence a debate about architecture. This is because many of the dichotomies introduced earlier—rationalism vs. empiricism, digital computer vs. analog brain, structure/process vs. dynamic data, serialism vs. parallelism—are differences in opinion about cognitive architecture.
In spite of these differences, and in spite of connectionism’s search for biologically plausible information processing, there is a key similarity at the architectural level between connectionist and classical cognitive science: at this level, both propose architectures that are functional, not physical. The connectionist architecture consists of a set of building blocks: units and their activation functions, modifiable connections, learning rules. But these building blocks are functional accounts of the information processing properties of neurons; other brain-like properties are ignored. Consider one response (Churchland & Churchland, 1990) to the claim that the mind is the product of the causal powers of the brain (Searle, 1990):
We presume that Searle is not claiming that a successful artificial mind must have all the causal powers of the brain, such as the power to smell bad when rotting, to harbor slow viruses such as kuru, to stain yellow with horseradish peroxidase and so forth. Requiring perfect parity would be like requiring that an artificial flying device lay eggs. (Churchland & Churchland, 1990, p. 37)
It is the functional nature of the connectionist architecture that enables it to be almost always studied by simulating it—on a digital computer!
The functional nature of the connectionist architecture raises some complications when the implementational level of analysis is considered. On the one hand, many researchers view connectionism as providing implementational-level theories of cognitive phenomena. At this level, one finds researchers exploring relationships between biological receptive fields and patterns of connectivity and similar properties of artificial networks (Ballard, 1986; Bankes & Margoliash, 1993; Bowers, 2009; Guzik, Eaton, & Mathis, 1999; Keith, Blohm, & Crawford, 2010; Moorhead, Haig, & Clement, 1989; Poggio, Torre, & Koch, 1985; Zipser & Andersen, 1988). One also encounters researchers finding biological mechanisms that map onto architectural properties such as learning rules. For example, there is a great deal of interest in relating the actions of certain neurotransmitters to Hebb learning (Brown, 1990; Gerstner & Kistler, 2002; van Hemmen & Senn, 2002). Similarly, it has been argued that connectionist networks provide an implementational account of associative learning (Shanks, 1995), a position that ignores its potential contributions at other levels of analysis (Dawson, 2008).
On the other hand, the functional nature of the connectionist architecture has resulted in its biological status being questioned or challenged. There are many important differences between biological and artificial neural networks (Crick & Asanuma, 1986; Douglas & Martin, 1991; McCloskey, 1991). There is very little biological evidence in support of important connectionist learning rules such as backpropagation of error (Mazzoni, Andersen, & Jordan, 1991; O’Reilly, 1996; Shimansky, 2009). Douglas and Martin (1991, p. 292) dismissed artificial neural networks as merely being “stick and ball models.” Thus whether connectionist cognitive science is a biologically plausible alternative to classical cognitive science remains an open issue.
That connectionist cognitive science has established itself as a reaction against classical cognitive science cannot be denied. However, as we have seen in this section, it is not completely clear whether connectionism represents a radical alternative to the classical approach (Schneider, 1987), or whether it is instead much more closely related to classical cognitive science than a brief glance at some of the literature might suggest (Dawson, 1998). It is certainly the case that connectionist cognitive science has provided important criticisms of the classical approach and has therefore been an important contributor to theory of mind.
Interestingly, many of the criticisms that have been highlighted by connectionist cognitive science—slowness, brittleness, biological implausibility, overemphasis of logicism and disembodiment—have been echoed by a third school, embodied cognitive science. Furthermore, related criticisms have been applied by embodied cognitive scientists against connectionist cognitive science. Not surprisingly, then, embodied cognitive science has generated a very different approach to deal with these issues than has connectionist cognitive science.
In Chapter 5 we turn to the elements of this third “flavour” of cognitive science. As has been noted in this final section of Chapter 4, there appears to be ample room for finding relationships between connectionism and classicism such that the umbrella term cognitive science can be aptly applied to both. We see that embodied cognitive science poses some interesting and radical challenges, and that its existence calls many of the core features shared by connectionism and classicism into question.
One of the key reactions against classical cognitive science was connectionism. A second reaction against the classical approach has also emerged. This second reaction is called embodied cognitive science, and the purpose of this chapter is to introduce its key elements.
Embodied cognitive science explicitly abandons the disembodied mind that serves as the core of classical cognitive science. It views the purpose of cognition not as building representations of the world, but instead as directing actions upon the world. As a result, the structure of an agent’s body and how this body can sense and act upon the world become core elements. Embodied cognitive science emphasizes the embodiment and situatedness of agents.
Embodied cognitive science’s emphasis on embodiment, situatedness, and action upon the world is detailed in the early sections of the chapter. This emphasis leads to a number of related elements: feedback between agents and environments, stigmergic control of behaviour, affordances and enactive perception, and cognitive scaffolding. In the first half of this chapter these notions are explained, showing how they too can be traced back to some of the fundamental assumptions of cybernetics. Also illustrated is how such ideas are radical departures from the ideas emphasized by classical cognitive scientists.
Not surprisingly, such differences in fundamental ideas lead to embodied cognitive science adopting methodologies that are atypical of classical cognitive science. Reverse engineering is replaced with forward engineering, as typified by behaviour-based robotics. These methodologies use an agent’s environment to increase or leverage its abilities, and in turn they have led to novel accounts of complex human activities. For instance, embodied cognitive science can construe social interactions either as sense-act cycles in a social environment or as mediated by simulations that use our own brains or bodies as physical stand-ins for other agents.
In spite of such differences, it is still the case that there are structural similarities between embodied cognitive science and the other two approaches that have been introduced in the preceding chapters. The current chapter ends with a consideration of embodied cognitive science in light of Chapter 2’s multiple levels of investigation, which were earlier used as a context in which to consider the research of both classical and of connectionist cognitive science.
5.02: Abandoning Methodological Solipsism
The goal of Cartesian philosophy was to provide a core of incontestable truths to serve as an anchor for knowledge (Descartes, 1960, 1996). Descartes believed that he had achieved this goal. However, the cost of this accomplishment was a fundamental separation between mind and body. Cartesian dualism disembodied the mind, because Descartes held that the mind’s existence was independent of the existence of the body.
I am not that structure of limbs which is called a human body, I am not even some thin vapor which permeates the limbs—a wind, fire, air, breath, or whatever I depict in my imagination, for these are things which I have supposed to be nothing. (Descartes, 1996, p. 18)
Cartesian dualism permeates a great deal of theorizing about the nature of mind and self, particularly in our current age of information technology. One such theory is posthumanism (Dewdney, 1998; Hayles, 1999). Posthumanism results when the content of information is more important than the physical medium in which it is represented, when consciousness is considered to be epiphenomenal, and when the human body is simply a prosthetic. Posthumanism is rooted in the pioneering work of cybernetics (Ashby, 1956, 1960; MacKay, 1969; Wiener, 1948), and is sympathetic to such futuristic views as uploading our minds into silicon bodies (Kurzweil, 1999, 2005; Moravec, 1988, 1999), because, in this view, the nature of the body is irrelevant to the nature of the mind. Hayles uncomfortably notes that a major implication of posthumanism is its “systematic devaluation of materiality and embodiment” (Hayles, 1999, p. 48); “because we are essentially information, we can do away with the body” (Hayles, 1999, p. 12).
Some would argue that similar ideas pervade classical cognitive science. American psychologist Sylvia Scribner wrote that cognitive science “is haunted by a metaphysical spectre. The spectre goes by the familiar name of Cartesian dualism, which, in spite of its age, continues to cast a shadow over inquiries into the nature of human nature” (Scribner & Tobach, 1997, p. 308).
In Chapter 3 we observed that classical cognitive science departed from the Cartesian approach by seeking materialist explanations of cognition. Why then should it be haunted by dualism?
To answer this question, we examine how classical cognitive science explains, for instance, how a single agent produces different behaviors. Because classical cognitive science appeals to the representational theory of mind (Pylyshyn, 1984), it must claim that different behaviors are ultimately rooted in different mental representations.
If different behaviors are caused by differences between representations, then classical cognitive science must be able to distinguish or individuate representational states. How is this done? The typical position adopted by classical cognitive science is called methodological solipsism (Fodor, 1980). Methodological solipsism individuates representational states only in terms of their relations to other representational states. Relations of the states to the external world—the agent’s environment—are not considered. “Methodological solipsism in psychology is the view that psychological states should be construed without reference to anything beyond the boundary of the individual who has those states” (Wilson, 2004, p. 77).
The methodological solipsism that accompanies the representational theory of mind is an example of the classical sandwich (Hurley, 2001). The classical sandwich is the view that links between a cognitive agent’s perceptions and a cognitive agent’s actions must be mediated by internal thinking or planning. In the classical sandwich, models of cognition take the form of sense-think-act cycles (Brooks, 1999; Clark, 1997; Pfeifer & Scheier, 1999). Furthermore, these theories tend to place a strong emphasis on the purely mental part of cognition—the thinking—and at the same time strongly de-emphasize the physical—the action. In the classical sandwich, perception, thinking, and action are separate and unequal.
On this traditional view, the mind passively receives sensory input from its environment, structures that input in cognition, and then marries the products of cognition to action in a peculiar sort of shotgun wedding. Action is a by-product of genuinely mental activity. (Hurley, 2001, p. 11)
Although connectionist cognitive science is a reaction against classical cognitivism, this reaction does not include a rejection of the separation of perception and action via internal representation. Artificial neural networks typically have undeveloped models of perception (i.e., input unit encodings) and action (i.e., output unit encodings), and in modern networks communication between the two must be mediated by representational layers of hidden units.
Highly artificial choices of input and output representations and poor choices of problem domains have, I believe, robbed the neural network revolution of some of its initial momentum. . . . The worry is, in essence, that a good deal of the research on artificial neural networks leaned too heavily on a rather classical conception of the nature of the problems. (Clark, 1997, p. 58)
The purpose of this chapter is to introduce embodied cognitive science, a fairly modern reaction against classical cognitive science. This approach is an explicit rejection of methodological solipsism. Embodied cognitive scientists argue that a cognitive theory must include an agent’s environment as well as the agent’s experience of that environment (Agre, 1997; Chemero, 2009; Clancey, 1997; Clark, 1997; Dawson, Dupuis, & Wilson, 2010; Dourish, 2001; Gibbs, 2006; Johnson, 2007; Menary, 2008; Pfeifer & Scheier, 1999; Shapiro, 2011; Varela, Thompson, & Rosch, 1991). They recognize that this experience depends on how the environment is sensed, which is situatedness; that an agent’s situation depends upon its physical nature, which is embodiment; and that an embodied agent can act upon and change its environment (Webb & Consi, 2001). The embodied approach replaces the notion that cognition is representation with the notion that cognition is the control of actions upon the environment. As such, it can also be viewed as a reaction against a great deal of connectionist cognitive science.
In embodied cognitive science, the environment contributes in such a significant way to cognitive processing that some would argue that an agent’s mind has leaked into the world (Clark, 1997; Hutchins, 1995; Menary, 2008, 2010; Noë, 2009; Wilson, 2004). For example, research in behaviour-based robotics eliminates resource-consuming representations of the world by letting the world serve as its own representation, one that can be accessed by a situated agent (Brooks, 1999). This robotics tradition has also shown that nonlinear interactions between an embodied agent and its environment can produce surprisingly complex behavior, even when the internal components of an agent are exceedingly simple (Braitenberg, 1984; Grey Walter, 1950a, 1950b, 1951, 1963; Webb & Consi, 2001).
In short, embodied cognitive scientists argue that classical cognitive science’s reliance on methodological solipsism—its Cartesian view of the disembodied mind—is a deep-seated error. “Classical rule-and-symbol-based AI may have made a fundamental error, mistaking the cognitive profile of the agent plus the environment for the cognitive profile of the naked brain” (Clark, 1997, p. 61).
In reacting against classical cognitive science, the embodied approach takes seriously the idea that Simon’s (1969) parable of the ant might also be applicable to human cognition: “A man, viewed as a behaving system, is quite simple. The apparent complexity of his behavior over time is largely a reflection of the complexity of the environment in which he finds himself” (p. 25). However, when it comes to specifics about applying such insight, embodied cognitive science is frustratingly fractured. “Embodied cognition, at this stage in its very brief history, is better considered a research program than a well-defined theory” (Shapiro, 2011, p. 2). Shapiro (2011) went on to note that this is because embodied cognitive science “exhibits much greater latitude in its subject matter, ontological commitment, and methodology than does standard cognitive science” (p. 2).
Shapiro (2011) distinguished three key themes that are present, often to differing degrees, in a variety of theories that belong to embodied cognitive science. The first of Shapiro’s themes is conceptualization. According to this theme, the concepts that an agent requires to interact with its environment depend on the form of the agent’s body. If different agents have different bodies, then their understanding or engagement with the world will differ as well. We explore the theme of conceptualization later in this chapter, in the discussion of concepts such as umwelten, affordances, and enactive perception.
Shapiro’s (2011) second theme of embodied cognitive science is replacement: “An organism’s body in interaction with its environment replaces the need for representational processes thought to have been at the core of cognition” (p. 4). The theme of replacement is central to the idea of cognitive scaffolding, in which agents exploit environmental resources for problem representation and solution.
The biological brain takes all the help it can get. This help includes the use of external physical structures (both natural and artifactual), the use of language and cultural institutions, and the extensive use of other agents. (Clark, 1997, p. 80)
Shapiro’s (2011) third theme of embodied cognitive science is constitution. According to this theme, the body or the world has more than a causal role in cognition—they are literally constituents of cognitive processing. The constitution hypothesis leads to one of the more interesting and radical proposals from embodied cognitive science, the extended mind. According to this hypothesis, which flies in the face of the Cartesian mind, the boundary of the mind is not the skin or the skull (Clark, 1997, p. 53): “Mind is a leaky organ, forever escaping its ‘natural’ confines and mingling shamelessly with body and with world.”
One reason that Shapiro (2011) argued that embodied cognitive science is not a well-defined theory, but is instead a more ambiguous research program, is because these different themes are endorsed to different degrees by different embodied cognitive scientists. For example, consider the replacement hypothesis. On the one hand, some researchers, such as behaviour-based roboticists (Brooks, 1999) or radical embodied cognitive scientists (Chemero, 2009), are strongly anti-representational; their aim is to use embodied insights to expunge representational issues from cognitive science. On the other hand, some other researchers, such as philosopher Andy Clark (1997), have a more moderate view in which both representational and non-representational forms of cognition might be present in the same agent.
Shapiro’s (2011) three themes of conceptualization, replacement, and constitution characterize important principles that are the concern of the embodied approach. These principles also have important effects on the practice of embodied cognitive science. Because of their concern with environmental contributions to behavioral complexity, embodied cognitive scientists are much more likely to practice forward engineering or synthetic psychology (Braitenberg, 1984; Dawson, 2004; Dawson, Dupuis, & Wilson, 2010; Pfeifer & Scheier, 1999). In this approach, devices are first constructed and placed in an environment, to examine what complicated or surprising behaviors might emerge. Thus while in reverse engineering behavioral observations are the source of models, in forward engineering models are the source of behavior to observe. Because of their concern about how engagement with the world is dependent upon the physical nature and abilities of agents, embodied cognitive scientists actively explore the role that embodiment plays in cognition. For instance, their growing interest in humanoid robots is motivated by the realization that human intelligence and development require human form (Breazeal, 2002; Brooks et al., 1999).
In the current chapter we introduce some of the key elements that characterize embodied cognitive science. These ideas are presented in the context of reactions against classical cognitive science in order to highlight their innovative nature. However, it is important to keep potential similarities between embodied cognitive science and the other two approaches in mind; while they are not emphasized here, the possibility of such similarities is a central theme of Part II of this book.
The traveling salesman problem is a vital optimization problem (Gutin & Punnen, 2002; Lawler, 1985). It involves determining the order in which a salesman should visit a sequence of cities, stopping at each city only once, such that the shortest total distance is traveled. The problem is tremendously important: a modern bibliography cites 500 studies on how to solve it (Laporte & Osman, 1995).
One reason for the tremendous amount of research on the traveling salesman problem is that its solution can be applied to a dizzying array of real-world problems and situations (Punnen, 2002), including scheduling tasks, minimizing interference amongst a network of transmitters, data analysis in psychology, X-ray crystallography, overhauling gas turbine engines, warehouse order-picking problems, and wallpaper cutting. It has also attracted so much attention because it is difficult. The traveling salesman problem is an NP-complete problem (Kirkpatrick, Gelatt, & Vecchi, 1983), which means that as the number of cities involved in the salesman’s tour increases linearly, the computational effort for finding the shortest route increases exponentially.
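To make the growth in effort concrete: a closed tour of n cities can be ordered in (n - 1)!/2 distinct ways once the starting point and the direction of travel are ignored, so exhaustive search quickly becomes infeasible. The Python sketch below uses made-up city coordinates and simply enumerates every tour; it is offered only to illustrate this explosion, not as a stand-in for any of the algorithms discussed here.

```python
from itertools import permutations
from math import dist, factorial

# Hypothetical city coordinates; any small set will do for illustration.
cities = [(0, 0), (2, 1), (5, 3), (1, 4), (4, 0), (3, 5)]

def tour_length(order):
    """Total distance of a closed tour visiting the cities in the given order."""
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def brute_force_tsp(n):
    """Examine every tour that starts at city 0 and return the shortest one found."""
    best_order, best_length = None, float("inf")
    for rest in permutations(range(1, n)):
        order = (0,) + rest
        length = tour_length(order)
        if length < best_length:
            best_order, best_length = order, length
    return best_order, best_length

n = len(cities)
print("orderings examined:", factorial(n - 1))  # each distinct tour is counted twice
print("shortest tour found:", brute_force_tsp(n))
```

Adding a single city multiplies the number of orderings to examine by roughly the current number of cities, which is why the heuristic and biologically inspired methods cited below are of such interest.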
Because of its importance and difficulty, a number of different approaches to solving the traveling salesman problem have been explored. These include a variety of numerical optimization algorithms (Bellmore & Nemhauser, 1968). Some other algorithms, such as simulated annealing, are derived from physical metaphors (Kirkpatrick, Gelatt, & Vecchi, 1983). Still other approaches are biologically inspired and include neural networks (Hopfield & Tank, 1985; Siqueira, Steiner, & Scheer, 2007), genetic algorithms (Braun, 1991; Fogel, 1988), and molecular computers built using DNA molecules (Lee et al., 2004).
Given the difficulty of the traveling salesman problem, it might seem foolish to suppose that cognitively simple agents are capable of solving it. However, evidence shows that a colony of ants is capable of solving a version of this problem, which has inspired new algorithms for solving the traveling salesman problem (Dorigo & Gambardella, 1997)!
One study of the Argentine ant Iridomyrmex humilis used a system of bridges to link the colony’s nest to a food supply (Goss et al., 1989). The ants had to choose between two different routes at two different locations in the network of bridges; some of these routes were shorter than others. When food was initially discovered, ants traversed all of the routes with equal likelihood. However, shortly afterwards, a strong preference emerged: almost all of the ants chose the path that produced the shortest journey between the nest and the food.
The ants’ solution to the traveling salesman problem involved an interaction between the world and a basic behavior: as Iridomyrmex humilis moves, it deposits a pheromone trail; the potency of this trail fades over time. An ant that by chance chooses the shortest path will add to the pheromone trail at the decision points sooner than will an ant that has taken a longer route. This means that as other ants arrive at a decision point they will find a stronger pheromone trail in the shorter direction, they will be more likely to choose this direction, and they will also add to the pheromone signal.
Each ant that passes the choice point modifies the following ant’s probability of choosing left or right by adding to the pheromone on the chosen path. This positive feedback system, after initial fluctuation, rapidly leads to one branch being ‘selected.’ (Goss et al., 1989, p. 581)
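A minimal simulation conveys how this positive feedback selects the shorter branch. The sketch below is not the model reported by Goss et al. (1989); it simply assumes that each ant picks a branch with probability proportional to the square of the pheromone currently on it (a nonlinear choice rule of the general sort used to model such experiments), that the short branch takes less time to cross, that every completed crossing deposits a fixed amount of pheromone, and that pheromone slowly evaporates. All parameter values are invented.

```python
import random

# Hypothetical travel times (in time steps) for the two branches of the bridge.
TRAVEL_TIME = {"short": 1, "long": 2}
pheromone = {"short": 1.0, "long": 1.0}   # small initial amount on each branch
in_transit = []                           # (finish_time, branch) for crossing ants
choices = {"short": 0, "long": 0}

def choose_branch():
    """Pick a branch with probability proportional to the square of its pheromone."""
    w_short = pheromone["short"] ** 2
    w_long = pheromone["long"] ** 2
    return "short" if random.random() < w_short / (w_short + w_long) else "long"

for t in range(200):                      # one ant leaves the nest per time step
    # Ants that finish crossing now deposit pheromone on the branch they used.
    for finish_time, branch in [a for a in in_transit if a[0] == t]:
        pheromone[branch] += 1.0
    in_transit = [a for a in in_transit if a[0] > t]

    for branch in pheromone:              # pheromone slowly fades over time
        pheromone[branch] *= 0.99

    branch = choose_branch()
    choices[branch] += 1
    in_transit.append((t + TRAVEL_TIME[branch], branch))

print(choices)
```

Because ants on the short branch finish sooner, that branch accumulates pheromone faster, so later ants are increasingly biased toward it; on most runs the final tally is heavily in favour of the short branch, although an early accident can occasionally lock in the long branch instead, a familiar feature of positive feedback systems.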
The ability of ants to choose shortest routes does not require a great deal of individual computational power. The solution to the traveling salesman problem emerges from the actions of the ant colony as a whole.
The selection of the shortest branch is not the result of individual ants comparing the different lengths of each branch, but is instead a collective and self-organizing process, resulting from the interactions between the ants marking in both directions. (Goss et al., 1989, p. 581)
To compute solutions to the traveling salesman problem, ants from a colony interact with and alter their environment in a fairly minimal way: they deposit a pheromone trail that can be later detected by other colony members. However, impressive examples of richer interactions between social insects and their world are easily found.
For example, wasps are social insects that house their colonies in nests of intricate structure that exhibit, across species, tremendous variability in size, shape, and location (Downing & Jeanne, 1986). The size of nests ranges from a mere dozen to nearly a million cells or combs (Theraulaz, Bonabeau, & Deneubourg, 1998). The construction of some nests requires that specialized labor be coordinated (Jeanne, 1996, p. 473): “In the complexity and regularity of their nests and the diversity of their construction techniques, wasps equal or surpass many of the ants and bees.”
More impressive nests are constructed by other kinds of insect colonies, such as termites, whose vast mounds are built over many years by millions of individual insects. A typical termite mound has a height of 2 meters, while some as high as 7 meters have been observed (von Frisch, 1974). Termite mounds adopt a variety of structural innovations to control their internal temperature, including ventilation shafts or shape and orientation to minimize the effects of sun or rain. Such nests,
seem [to be] evidence of a master plan which controls the activities of the builders and is based on the requirements of the community. How this can come to pass within the enormous complex of millions of blind workers is something we do not know. (von Frisch, 1974, p. 150)
How do colonies of simple insects, such as wasps or termites, coordinate the actions of individuals to create their impressive, intricate nests? “One of the challenges of insect sociobiology is to explain how such colony-level behavior emerges from the individual decisions of members of the colony” (Jeanne, 1996, p. 473).
One theoretical approach to this problem is found in the pioneering work of entomologist William Morton Wheeler, who argued that biology had to explain how organisms cope with complex and unstable environments. With respect to social insects, Wheeler (1911) proposed that a colony of ants, considered as a whole, is actually an organism, calling the colony-as-organism the superorganism: “The animal colony is a true organism and not merely the analogue of the person” (p. 310).
Wheeler (1926) agreed that the characteristics of a superorganism must emerge from the actions of its parts, that is, its individual colony members. However, Wheeler also argued that higher-order properties could not be reduced to properties of the superorganism’s components. He endorsed ideas that were later popularized by Gestalt psychology, such as the notion that the whole is not merely the sum of its parts (Koffka, 1935; Köhler, 1947).
The unique qualitative character of organic wholes is due to the peculiar nonadditive relations or interactions among their parts. In other words, the whole is not merely a sum, or resultant, but also an emergent novelty, or creative synthesis. (Wheeler, 1926, p. 433)
Wheeler’s theory is an example of holism (Sawyer, 2002), in which the regularities governing a whole system cannot be easily reduced to a theory that appeals to the properties of the system’s parts. Holistic theories have often been criticized as being nonscientific (Wilson & Lumsden, 1991). The problem with these theories is that in many instances they resist traditional, reductionist approaches to defining the laws responsible for emerging regularities. “Holism is an idea that has haunted biology and philosophy for nearly a century, without coming into clear focus” (Wilson & Lumsden, 1991, p. 401).
Theorists who rejected Wheeler’s proposal of the superorganism proposed alternative theories that reduced colonial intelligence to the actions of individual colony members. A pioneer of this alternative was a contemporary of Wheeler, French biologist Etienne Rabaud. “His entire work on insect societies was an attempt to demonstrate that each individual insect in a society behaves as if it were alone” (Theraulaz & Bonabeau, 1999). Wilson and Lumsden adopted a similar position:
It is tempting to postulate some very complex force distinct from individual repertories and operating at the level of the colony. But a closer look shows that the superorganismic order is actually a straightforward summation of often surprisingly simple individual responses. (Wilson & Lumsden, 1991, p. 402)
Of interest to embodied cognitive science are theories which propose that dynamic environmental control guides the construction of the elaborate nests.
The first concern of such a theory is the general account that it provides of the behavior of each individual. For example, consider one influential theory of wasp behavior (Evans, 1966; Evans & West-Eberhard, 1970), in which a hierarchy of internal drives serves to release behaviors. For instance, high-level drives might include mating, feeding, and brood-rearing. Such drives set in motion lower-level sequences of behavior, which in turn might activate even lower-level behavioral sequences. In short, Evans views wasp behavior as being rooted in innate programs, where a program is a set of behaviors that are produced in a particular sequence, and where the sequence is dictated by the control of a hierarchical arrangement of drives. For example, a brood-rearing drive might activate a drive for capturing prey, which in turn activates a set of behaviors that produces a hunting flight.
Critically, though, Evans’ programs are also controlled by releasing stimuli that are external to the wasp. In particular, one behavior in the sequence is presumed to produce an environmental signal that serves to initiate the next behavior in the sequence. For instance, in Evans’ (1966) model of the construction of a burrow by a solitary digger wasp, the digging behavior of a wasp produces loosened soil, which serves as a signal for the wasp to initiate scraping behavior. This behavior in turn causes the burrow to be clogged, which serves as a signal for clearing behavior. Having a sequence of behaviors under the control of both internal drives and external releasers provides a balance between rigidity and flexibility; the internal drives serve to provide a general behavioral goal, while variations in external releasers can produce variations in behaviors: e.g., resulting in an atypical nest structure when nest damage elicits a varied behavioral sequence. “Each element in the ‘reaction chain’ is dependent upon that preceding it as well as upon certain factors in the environment (often gestalts), and each act is capable of a certain latitude of execution” (p. 144).
If an individual’s behavior is a program whose actions are under some environmental control (Evans, 1966; Evans & West-Eberhard, 1970), then it is a small step to imagine how the actions of one member of a colony can affect the later actions of other members, even in the extreme case where there is absolutely no direct communication amongst colony members; an individual in the colony simply changes the environment in such a way that new behaviors are triggered by other colony members.
This kind of theorizing is prominent in modern accounts of nest construction by social paper wasps (Theraulaz & Bonabeau, 1999). A nest for such wasps consists of a lattice of cells, where each cell is essentially a comb created from a hexagonal arrangement of walls. When a large nest is under construction, where will new cells be added?
Theraulaz and Bonabeau (1999) answered this question by assuming that the addition of new cells was under environmental control. They hypothesized that an individual wasp’s decision about where to build a new cell wall was driven by its perception of existing walls. Their theory consisted of two simple rules. First, if at some location on the nest three walls of a cell already existed, then this served as a stimulus that caused a wasp to add another wall there with high probability. Second, if only two walls of a cell existed, this too was a stimulus to add a wall, but one that produced the action with a much lower probability.
The crucial characteristic of this approach is that behavior is controlled, and the activities of the members of a colony are coordinated, by a dynamic environment. That is, when an individual is triggered to add a cell wall to the nest, then the nest structure changes. Such changes in nest appearance in turn affect the behavior of other wasps, affecting choices about the locations where walls will be added next. Theraulaz and Bonabeau (1999) created a nest building simulation that only used these two rules, and demonstrated that it created simulated nests that were very similar in structure to real wasp nests.
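The decision rule itself can be sketched in a few lines of Python. The probability values below are invented for illustration, and the lattice representation of the nest used in the actual simulation is omitted.

```python
import random

# Hypothetical probabilities: a site already enclosed by three walls is a much
# stronger building stimulus than a site with only two walls.
P_THREE_WALLS = 0.9
P_TWO_WALLS = 0.05

def maybe_add_wall(walls_already_present):
    """Decide stigmergically whether a wasp adds a new wall at this site."""
    if walls_already_present >= 3:
        return random.random() < P_THREE_WALLS
    if walls_already_present == 2:
        return random.random() < P_TWO_WALLS
    return False   # sites with fewer walls provide no building stimulus
```

The wasp 'reads' the building stimulus simply by counting the walls already present at a site; no blueprint and no direct communication with other wasps are required.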
In addition to adding cells laterally to the nest, wasps must also lengthen existing walls to accommodate the growth of larvae that live inside the cells. Karsai (1999) proposed another environmentally controlled model of this aspect of nest building. His theory is that wasps perceive the relative difference between the longest and the shortest wall of a cell. If this difference was below a threshold value, then the cell was untouched. However, if this difference exceeded a certain threshold, then this would cause a wasp to lengthen the shortest wall. Karsai used a computer simulation to demonstrate that this simple model provided an accurate account of the three-dimensional growth of a wasp nest over time.
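Karsai's (1999) wall-lengthening rule can be sketched in the same style; the threshold value used here is arbitrary rather than taken from his model.

```python
THRESHOLD = 0.2   # hypothetical limit on the relative difference in wall lengths

def wall_to_lengthen(wall_lengths):
    """Return the index of the shortest wall if the cell is uneven enough to act on,
    or None if the cell should be left untouched."""
    longest, shortest = max(wall_lengths), min(wall_lengths)
    if longest > 0 and (longest - shortest) / longest > THRESHOLD:
        return wall_lengths.index(shortest)
    return None
```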
The externalization of control illustrated in theories of wasp nest construction is called stigmergy (Grasse, 1959). The term comes from the Greek stigma, meaning “sting,” and ergon, meaning “work,” capturing the notion that the environment is a stimulus that causes particular work, or behaviour, to occur. It was first used in theories of termite mound construction proposed by French zoologist Pierre-Paul Grassé (Theraulaz & Bonabeau, 1999). Grassé demonstrated that the termites themselves do not coordinate or regulate their building behaviour, but that this is instead controlled by the mound structure itself.
Stigmergy is appealing because it can explain how very simple agents create extremely complex products, particularly in the case where the final product, such as a termite mound, is extended in space and time far beyond the life expectancy of the organisms that create it. As well, it accounts for the building of large, sophisticated nests without the need for a complete blueprint and without the need for direct communication amongst colony members (Bonabeau et al., 1998; Downing & Jeanne, 1988; Grasse, 1959; Karsai, 1999; Karsai & Penzes, 1998; Karsai & Wenzel, 2000; Theraulaz & Bonabeau, 1995). Stigmergy places an emphasis on the importance of the environment that is typically absent in the classical sandwich that characterizes theories in both classical and connectionist cognitive science. However, early classical theories were sympathetic to the role of stigmergy (Simon, 1969). In Simon’s famous parable of the ant, observers recorded the path travelled by an ant along a beach. How might we account for the complicated twists and turns of the ant’s route? Cognitive scientists tend to explain complex behaviours by invoking complicated representational mechanisms (Braitenberg, 1984). In contrast, Simon (1969) noted that the path might result from simple internal processes reacting to complex external forces— the various obstacles along the natural terrain of the beach: “Viewed as a geometric figure, the ant’s path is irregular, complex, hard to describe. But its complexity is really a complexity in the surface of the beach, not a complexity in the ant” (p. 24).
Similarly, Braitenberg (1984) argued that when researchers explain behaviour by appealing to internal processes, they ignore the environment: “When we analyze a mechanism, we tend to overestimate its complexity” (p. 20). He suggested an alternative approach, synthetic psychology, in which simple agents (such as robots) are built and then observed in environments of varying complexity. This approach can provide cognitive science with more powerful, and much simpler, theories by taking advantage of the fact that not all of the intelligence must be placed inside an agent.
Embodied cognitive scientists recognize that the external world can be used to scaffold cognition and that working memory—and other components of a classical architecture—have leaked into the world (Brooks, 1999; Chemero, 2009; Clark, 1997, 2003; Hutchins, 1995; Pfeifer & Scheier, 1999). In many respects, embodied cognitive science is primarily a reaction against the overemphasis of internal processing that is imposed by the classical sandwich.
Theories that incorporate stigmergy demonstrate the plausibility of removing central cognitive control; perhaps embodied cognitive science could replace the classical sandwich’s sense-think-act cycle with sense-act reflexes.
The realization was that the so-called central systems of intelligence—or core AI as it has been referred to more recently—was perhaps an unnecessary illusion, and that all the power of intelligence arose from the coupling of perception and actuation systems. (Brooks, 1999, p. viii)
For a stigmergic theory to have any power at all, agents must exhibit two critical abilities. First, they must be able to sense their world. Second, they must be able to physically act upon the world. For instance, stigmergic control of nest construction would be impossible if wasps could neither sense local attributes of nest structure nor act upon the nest to change its appearance.
In embodied cognitive science, an agent’s ability to sense its world is called situatedness. For the time being, we will simply equate situatedness with the ability to sense. However, situatedness is more complicated than this, because it depends critically upon the physical nature of an agent, including its sensory apparatus and its bodily structure. These issues will be considered in more detail in the next section.
In embodied cognitive science, an agent’s ability to act upon and alter its world depends upon its embodiment. In the most general sense, to say that an agent is embodied is to say that it is an artifact, that it has physical existence. Thus while neither a thought experiment (Braitenberg, 1984) nor a computer simulation (Wilhelms & Skinner, 1990) for exploring a Braitenberg vehicle is embodied, a physical robot that acts like a Braitenberg vehicle (Dawson, Dupuis, & Wilson, 2010) is embodied. The physical structure of the robot itself is important in the sense that it is a source of behavioral complexity. Computer simulations of Braitenberg vehicles are idealizations in which all motors and sensors work perfectly. This is impossible in a physically realized robot. In an embodied agent, one motor will be less powerful than another, or one sensor may be less effective than another. Such differences will alter robot behavior. These imperfections are another important source of behavioral complexity, but are absent when such vehicles are created in simulated and idealized worlds.
However, embodiment is more complicated than mere physical existence. Physically existing agents can be embodied to different degrees (Fong, Nourbakhsh, & Dautenhahn, 2003). This is because some definitions of embodiment relate to the extent to which an agent can alter its environment. For instance, Fong, Nourbakhsh, & Dautenhahn (2003, p. 149) argued that “embodiment is grounded in the relationship between a system and its environment. The more a robot can perturb an environment, and be perturbed by it, the more it is embodied.” As a result, not all robots are equally embodied (Dawson, Dupuis, & Wilson, 2010). A robot that is more strongly embodied than another is a robot that is more capable of affecting, and being affected by, its environment.
The power of embodied cognitive science emerges from agents that are both situated and embodied. This is because these two characteristics provide a critical source of nonlinearity called feedback (Ashby, 1956; Wiener, 1948). Feedback occurs when information about an action’s effect on the world is used to inform the progress of that action. As Ashby (1956, p. 53) noted, “‘feedback’ exists between two parts when each affects the other,” when “circularity of action exists between the parts of a dynamic system.”
Wiener (1948) realized that feedback was central to a core of problems involving communication, control, and statistical mechanics, and that it was crucial to both biological agents and artificial systems. He provided a mathematical framework for studying communication and control, defining the discipline that he called cybernetics. The term cybernetics was derived from the Greek word for “steersman” or “governor.” “In choosing this term, we wish to recognize that the first significant paper on feedback mechanisms is an article on governors, which was published by Clerk Maxwell in 1868” (Wiener, 1948, p. 11). Interestingly, engine governors make frequent appearances in formal discussions of the embodied approach (Clark, 1997; Port & van Gelder, 1995b; Shapiro, 2011).
The problem with the nonlinearity produced by feedback is that it makes computational analyses extraordinarily difficult. This is because the mathematics of feedback relationships between even small numbers of components is essentially intractable. For instance, Ashby (1956) realized that feedback within a machine consisting of only four simple components could not be analyzed:
When there are only two parts joined so that each affects the other, the properties of the feedback give important and useful information about the properties of the whole. But when the parts rise to even as few as four, if every one affects the other three, then twenty circuits can be traced through them; and knowing the properties of all the twenty circuits does not give complete information about the system. (Ashby, 1956, p. 54)
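Ashby's figure of twenty circuits can be checked directly. A feedback circuit running through k of the four parts can be chosen in C(4, k) ways, and the k chosen parts can be arranged into (k - 1)! distinct directed loops, giving 6 + 8 + 6 = 20 circuits of length two, three, and four. The short Python calculation below reproduces the count and shows how quickly it grows for slightly larger systems.

```python
from math import comb, factorial

def count_feedback_circuits(n):
    """Count the directed feedback loops among n parts when every part affects
    every other: choose k parts, then count the (k - 1)! distinct loops through them."""
    return sum(comb(n, k) * factorial(k - 1) for k in range(2, n + 1))

print(count_feedback_circuits(4))    # 6 + 8 + 6 = 20, matching Ashby's figure
print(count_feedback_circuits(10))   # the count explodes for a modestly larger system
```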
For this reason, embodied cognitive science is often practised using forward engineering, which is a kind of synthetic methodology (Braitenberg, 1984; Dawson, 2004; Pfeifer & Scheier, 1999). That is, researchers do not take a complete agent and reverse engineer it into its components. Instead, they take a small number of simple components, compose them into an intact system, set the components in motion in an environment of interest, and observe the resulting behaviors.
For instance, Ashby (1960) investigated the complexities of his four-component machine not by dealing with intractable mathematics, but by building and observing a working device, the Homeostat. It comprised four identical machines (electrical input-output devices), incorporated mutual feedback, and permitted him to observe the behavior, which was the movement of indicators for each machine. Ashby discovered that the Homeostat could learn; he reinforced its responses by physically manipulating the dial of one component to “punish” an incorrect response (e.g., for moving one of its needles in the incorrect direction). Ashby also found that the Homeostat could adapt to two different environments that were alternated from trial to trial. This knowledge was unattainable from mathematical analyses. “A better demonstration can be given by a machine, built so that we know its nature exactly and on which we can observe what will happen in various conditions” (p. 99).
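A rough flavour of the Homeostat's strategy can be conveyed in simulation. The sketch below is not Ashby's circuit: it couples four units through a weight matrix, and whenever a unit's state drifts outside an arbitrary 'essential' range, it makes a random step-change in that unit's incoming weights, much as the Homeostat's uniselectors did, and restarts that unit near zero. All numerical values are invented.

```python
import random

N, LIMIT, STEPS = 4, 1.0, 2000
weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
state = [random.uniform(-0.1, 0.1) for _ in range(N)]
step_changes = 0

for _ in range(STEPS):
    # Mutual feedback: each unit's next state depends on every unit's current state.
    state = [sum(weights[i][j] * state[j] for j in range(N)) for i in range(N)]
    for i in range(N):
        if abs(state[i]) > LIMIT:
            # An "essential variable" has left its safe range, so make a random
            # step-change in this unit's incoming weights and restart it near zero.
            weights[i] = [random.uniform(-1, 1) for _ in range(N)]
            state[i] = random.uniform(-0.1, 0.1)
            step_changes += 1

print("random step-changes made during the run:", step_changes)
print("final state of the four units:", [round(s, 3) for s in state])
```

Even this toy version makes the general point: the behaviour of four mutually coupled parts is far easier to watch than to derive analytically.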
Braitenberg (1984) has argued that an advantage of forward engineering is that it will produce theories that are simpler than those that will be attained by reverse engineering. This is because when complex or surprising behaviors emerge, preexisting knowledge of the components—which were constructed by the researcher—can be used to generate simpler explanations of the behavior.
Analysis is more difficult than invention in the sense in which, generally, induction takes more time to perform than deduction: in induction one has to search for the way, whereas in deduction one follows a straightforward path. (Braitenberg, 1984, p. 20)
Braitenberg called this the law of uphill analysis and downhill synthesis.
Another way in which to consider the law of uphill analysis and downhill synthesis is to apply Simon’s (1969) parable of the ant. If the environment is taken seriously as a contributor to the complexity of the behavior of a situated and embodied agent, then one can take advantage of the agent’s world and propose less complex internal mechanisms that still produce the desired intricate results. This idea is central to the replacement hypothesis that Shapiro (2011) has argued is a fundamental characteristic of embodied cognitive science.
The situatedness of an agent is not merely perception; the nature of an agent’s perceptual apparatus is a critical component of situatedness. Clearly agents can only experience the world in particular ways because of limits, or specializations, in their sensory apparatus (Uexküll, 2001). Ethologist Jakob von Uexküll coined the term umwelt to denote the “island of the senses” produced by the unique way in which an organism is perceptually engaged with its world. Uexküll realized that because different organisms experience the world in different ways, they can live in the same world but at the same time exist in different umwelten. Similarly, the ecological theory of perception (Gibson, 1966, 1979) recognized that one could not separate the characteristics of an organism from the characteristics of its environment. “It is often neglected that the words animal and environment make an inseparable pair” (Gibson, 1979, p. 8).
The inseparability of animal and environment can at times even be rooted in the structure of an agent’s body. For instance, bats provide a prototypical example of an active-sensing system (MacIver, 2008) because they emit a high-frequency sound and detect the location of targets by processing the echo. The horizontal position of a target (e.g., a prey insect) is uniquely determined by the difference in time between the echo’s arrival to the left and right ears. However, this information is not sufficient to specify the vertical position of the target. The physical nature of bat ears solves this problem. The visible external structure (the pinna and the tragus) of the bat’s ear has an extremely intricate shape. As a result, returning echoes strike the ear at different angles of entry. This provides additional auditory cues that vary systematically with the vertical position of the target (Wotton, Haresign, & Simmons, 1995; Wotton & Simmons, 2000). In other words, the bat’s body—in particular, the shape of its ears—is critical to its umwelt.
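The horizontal cue itself can be made concrete with the standard far-field approximation used in sound localization, in which the interaural time difference for a source at azimuth theta is roughly d sin(theta) / c, where d is the separation between the ears and c is the speed of sound. The snippet below inverts this relation; the ear separation is an illustrative value, not a measured bat parameter.

```python
from math import asin, degrees

SPEED_OF_SOUND = 343.0   # metres per second in air
EAR_SEPARATION = 0.012   # metres; an illustrative value, not a bat measurement

def azimuth_from_time_difference(delta_t):
    """Horizontal angle (in degrees) implied by an interaural time difference
    (in seconds), using the far-field approximation delta_t = d * sin(theta) / c."""
    return degrees(asin(SPEED_OF_SOUND * delta_t / EAR_SEPARATION))

# A 20-microsecond difference corresponds to a source roughly 35 degrees off-centre.
print(round(azimuth_from_time_difference(20e-6), 1))
```

Notice that the same time difference is produced by any source lying on a cone around the axis through the two ears, which is why timing alone leaves the vertical position unspecified and why the extra spectral cues created by the ear's intricate shape are needed.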
Passive and active characteristics of an agent’s body are central to theories of perception that are most consistent with embodied cognitive science (Gibson, 1966, 1979; Noë, 2004). This is because embodied cognitive science has arisen as part of a reaction against the Cartesian view of mind that inspired classical cognitive science. In particular, classical cognitive science inherited Descartes’ notion (Descartes, 1960, 1996) of the disembodied mind that had descended from Descartes’ claim of Cogito ergo sum. Embodied cognitive scientists have been strongly influenced by philosophical positions which arose as reactions against Descartes, such as Martin Heidegger’s Being and Time (Heidegger, 1962), originally published in 1927. Heidegger criticized Descartes for adopting many of the terms of older philosophies but failing to recognize a critical element, their interactive relationship to the world: “The ancient way of interpreting the Being of entities is oriented towards the ‘world’ or ‘Nature’ in the widest sense” (Heidegger, 1962, p. 47). Heidegger argued instead for Being-in-the-world as a primary mode of existence. Being-in-the-world is not just being spatially located in an environment, but is a mode of existence in which an agent is actively engaged with entities in the world.
Dawson, Dupuis, and Wilson (2010) used a passive dynamic walker to illustrate this inseparability of agent and environment. A passive dynamic walker is an agent that walks without requiring active control: its walking gait is completely due to gravity and inertia (McGeer, 1990). Their simplicity and low energy requirements have made them very important models for the development of walking robots (Alexander, 2005; Collins et al., 2005; Kurz et al., 2008; Ohta, Yamakita, & Furuta, 2001; Safa, Saadat, & Naraghi, 2007; Wisse, Schwab, & van der Helm, 2004). Dawson, Dupuis, and Wilson constructed a version of McGeer’s (1990) original walker from LEGO. The walker itself was essentially a straight-legged hinge that would walk down an inclined ramp. However, the ramp had to be of a particular slope and had to have properly spaced platforms with gaps in between to permit the agent’s legs to swing. Thus the LEGO hinge that Dawson, Dupuis, and Wilson (2010) built had the disposition to walk, but it required a specialized environment to have this disposition realized. The LEGO passive dynamic walker is only a walker when it interacts with the special properties of its ramp. Passive dynamic walking is not a characteristic of a device, but is instead a characteristic of a device being in a particular world.
Being-in-the-world is related to the concept of affordances developed by psychologist James J. Gibson (Gibson, 1979). In general terms, the affordances of an object are the possibilities for action that a particular object permits a particular agent. “The affordances of the environment are what it offers the animal, what it provides or furnishes, either for good or ill” (p. 127). Again, affordances emerge from an integral relationship between an object’s properties and an agent’s abilities to act.
Note that the four properties listed—horizontal, flat, extended, and rigid—would be physical properties of a surface if they were measured with the scales and standard units used in physics. As an affordance of support for a species of animal, however, they have to be measured relative to the animal. They are unique for that animal. They are not just abstract physical properties. (p. 127)
Given that affordances are defined in terms of an organism’s potential actions, it is not surprising that action is central to Gibson’s (1966, 1979) ecological approach to perception. Gibson (1966, p. 49) noted that “when the ‘senses’ are considered as active systems they are classified by modes of activity not by modes of conscious quality.” Gibson’s emphasis on action and the world caused his theory to be criticized by classical cognitive science (Fodor & Pylyshyn, 1981). Perhaps it is not surprising that the embodied reaction to classical cognitive science has been accompanied by a modern theory of perception that has descended from Gibson’s work: the enactive approach to perception (Noë, 2004).
Enactive perception reacts against the traditional view that perception is constructing internal representations of the external world. Enactive perception argues instead that the role of perception is to access information in the world when it is needed. That is, perception is not a representational process, but is instead a sensorimotor skill (Noë, 2004). “Perceiving is a way of acting. Perception is not something that happens to us, or in us. It is something we do” (p. 1).
Action plays multiple central roles in the theory of enactive perception (Noë, 2004). First, the purpose of perception is not viewed as building internal representations of the world, but instead as controlling action on the world. Second, and related to the importance of controlling action, our perceptual understanding of objects is sensorimotor, much like Gibson’s (1979) notion of affordance. That is, we obtain an understanding of the external world that is related to its changes in appearance that would result from changing our position—by acting on an object, or by moving to a new position. Third, perception is an intrinsically exploratory process. As a result, we do not construct complete visual representations of the world. Instead, perceptual objects are virtual—we have access to properties in the world when needed, and only through action.
Our sense of the perceptual presence of the cat as a whole now does not require us to be committed to the idea that we represent the whole cat in consciousness at once. What it requires, rather, is that we take ourselves to have access, now, to the whole cat. The cat, the tomato, the bottle, the detailed scene, all are present perceptually in the sense that they are perceptually accessible to us. (Noë, 2004, p. 63)
Empirical support for the virtual presence of objects is provided by the phenomenon of change blindness. Change blindness occurs when a visual change occurs in plain sight of a viewer, but the viewer does not notice the change. For instance, in one experiment (O’Regan et al., 2000), subjects inspect an image of a Paris street scene. During this inspection, the color of a car in the foreground of the image changes, but a subject does not notice this change! Change blindness supports the view that representations of the world are not constructed. “The upshot of this is that all detail is present in experience not as represented, but rather as accessible” (Noë, 2004, p. 193). Accessibility depends on action, and action also depends on embodiment. “To perceive like us, it follows, you must have a body like ours” (p. 25).
Classical cognitive science usually assumes that the primary purpose of cognition is planning (Anderson, 1983; Newell, 1990); this planning is used to mediate perception and action. As a result, classical theories take the form of the sense-think-act cycle (Pfeifer & Scheier, 1999). Furthermore, the “thinking” component of this cycle is emphasized far more than either the “sensing” or the “acting.” “One problem with psychology’s attempt at cognitive theory has been our persistence in thinking about cognition without bringing in perceptual and motor processes” (Newell, 1990, p. 15).
Embodied cognitive science (Agre, 1997; Brooks, 1999, 2002; Chemero, 2009; Clancey, 1997; Clark, 1997, 2003, 2008; Pfeifer & Scheier, 1999; Robbins & Aydede, 2009; Shapiro, 2011; Varela, Thompson, & Rosch, 1991) recognizes the importance of sensing and acting, and reacts against central cognitive control. Its more radical proponents strive to completely replace the sense-think-act cycle with sense-act mechanisms.
This reaction is consistent with several themes in the current chapter: the importance of the environment, degrees of embodiment, feedback between the world and the agent, and the integral relationship between an agent’s body and its umwelt. Given these themes, it becomes quite plausible to reject the proposal that cognition is used to plan, and to posit instead that the purpose of cognition is to guide action:
The brain should not be seen as primarily a locus of inner descriptions of external states of affairs; rather, it should be seen as a locus of internal structures that act as operators upon the world via their role in determining actions. (Clark, 1997, p. 47)
Importantly, these structures do not stand between sensing and acting, but instead provide direct links between them.
The action-based reaction against classical cognitivism is typified by pioneering work in behavior-based robotics (Brooks, 1989, 1991, 1999, 2002; Brooks & Flynn, 1989). Roboticist Rodney Brooks construes the classical sandwich as a set of vertical processing layers that separate perception and action. His alternative is a hierarchical arrangement of horizontal processing layers that directly connect perception and action.
Brooks’ action-based approach to behavior is called the subsumption architecture (Brooks, 1999). The subsumption architecture is a set of modules. However, these modules are somewhat different in nature than those that were discussed in Chapter 3 (see also Fodor, 1983). This is because each module in the subsumption architecture can be described as a sense-act mechanism. That is, every module can have access to sensed information, as well as to actuators. This means that modules in the subsumption architecture do not separate perception from action. Instead, each module is used to control some action on the basis of sensed information.
The subsumption architecture arranges modules hierarchically. Lower-level modules provide basic, general-purpose, sense-act functions. Higher-level modules provide more complex and more specific sense-act functions that can exploit the operations of lower-level modules. For instance, in an autonomous robot the lowest-level module might simply activate motors to move a robot forward (e.g., Dawson, Dupuis, & Wilson, 2010, Chapter 7). The next level might activate a steering mechanism. This second level causes the robot to wander by taking advantage of the movement provided by the lower level. If the lower level were not operating, then wandering would not occur, because although the steering mechanism would still be operating, the vehicle would not be moving forward.
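A toy version of this layering, written in Python around an invented motor interface, conveys the idea: each layer is a complete module with access to the actuators, and the higher 'wander' layer works only by modulating the output of the lower 'move forward' layer rather than by consulting any central planner.

```python
import random

class MoveForwardLayer:
    """Lowest level: always drive both motors ahead at a base speed."""
    def act(self):
        return {"left_motor": 1.0, "right_motor": 1.0}

class WanderLayer:
    """Higher level: occasionally steer by damping one motor's output.
    It exploits, rather than replaces, the layer beneath it."""
    def __init__(self, lower_layer):
        self.lower = lower_layer

    def act(self):
        command = self.lower.act()          # begin with the forward motion below
        if random.random() < 0.2:           # now and then perturb the heading
            side = random.choice(["left_motor", "right_motor"])
            command[side] *= 0.3            # slowing one wheel turns the robot
        return command

robot = WanderLayer(MoveForwardLayer())
for _ in range(5):
    print(robot.act())
```

If the lower layer produced no forward motion, the wander layer's occasional steering adjustments would have nothing to work on, mirroring the dependence described above.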
Horizontal sense-act modules, which are the foundation of the subsumption architecture, also appear to exist in the human brain (Goodale, 1988, 1990, 1995; Goodale & Humphrey, 1998; Goodale, Milner, Jakobson, & Carey, 1991; Jakobson et al., 1991).
There is a long-established view that two distinct physiological pathways exist in the human visual system (Livingstone & Hubel, 1988; Maunsell & Newsome, 1987; Ungerleider & Mishkin, 1982): one, the ventral stream, for processing the appearance of objects; the other, the dorsal stream, for processing their locations. In short, in object perception the ventral stream delivers the “what,” while the dorsal stream delivers the “where.” This view is supported by double dissociation evidence observed in clinical patients: brain injuries can cause severe problems in seeing motion but leave form perception unaffected, or vice versa (Botez, 1975; Hess, Baker, & Zihl, 1989; Zihl, von Cramon, & Mai, 1983).
There has been a more recent reconceptualization of this classic distinction: the duplex approach to vision (Goodale & Humphrey, 1998), which maintains the physiological distinction between the ventral and dorsal streams but reinterprets their functions. In the duplex theory, the ventral stream creates perceptual representations, while the dorsal stream mediates the visual control of action.
The functional distinction is not between ‘what’ and ‘where,’ but between the way in which the visual information about a broad range of object parameters are transformed either for perceptual purposes or for the control of goal-directed actions. (Goodale & Humphrey, 1998, p. 187)
The duplex theory can be seen as a representational theory that is elaborated in such a way that fundamental characteristics of the subsumption architecture are present. These results can be used to argue that the human brain is not completely structured as a “classical sandwich.” On the one hand, in the duplex theory the purpose of the ventral stream is to create a representation of the perceived world (Goodale & Humphrey, 1998). On the other hand, in the duplex theory the purpose of the dorsal stream is the control of action, because it functions to convert visual information directly into motor commands. In the duplex theory, the dorsal stream is strikingly similar to the horizontal layers of the subsumption architecture.
Double dissociation evidence from cognitive neuroscience has been used to support the duplex theory. The study of one brain-injured subject (Goodale et al., 1991) revealed normal basic sensation. However, the patient could not describe the orientation or shape of any visual contour, no matter what visual information was used to create it. While this information could not be consciously reported, it was available, and could control actions. The patient could grasp objects, or insert objects through oriented slots, in a fashion indistinguishable from control subjects, even to the fine details that are observed when such actions are initiated and then carried out. This pattern of evidence suggests that the patient’s ventral stream was damaged, but that the dorsal stream was unaffected and controlled visual actions. “At some level in normal brains the visual processing underlying ‘conscious’ perceptual judgments must operate separately from that underlying the ‘automatic’ visuomotor guidance of skilled actions of the hand and limb” (p. 155).
Other kinds of brain injuries produce a very different pattern of abnormalities, establishing the double dissociation that supports the duplex theory. For instance, damage to the posterior parietal cortex—part of the dorsal stream—can cause optic ataxia, in which visual information cannot be used to control actions towards objects presented in the part of the visual field affected by the brain injury (Jakobson et al., 1991). Optic ataxia, however, does not impair the ability to perceive the orientation and shapes of visual contours.
Healthy subjects can also provide support for the duplex theory. For instance, in one study subjects reached toward an object whose position changed during a saccadic eye movement (Pelisson et al., 1986). As a result, subjects were not conscious of the target’s change in location. Nevertheless, they compensated for the object’s new position when they reached towards it. “No perceptual change occurred, while the hand pointing response was shifted systematically, showing that different mechanisms were involved in visual perception and in the control of the motor response” (p. 309). This supports the existence of “horizontal” sense-act modules in the human brain.
The robotics work of Grey Walter has been accurately described as an inspiration to modern studies of autonomous systems (Reeve & Webb, 2003). Indeed, the kind of research conducted by Grey Walter seems remarkably similar to the “new wave” of behavior-based or biologically inspired robotics (Arkin, 1998; Breazeal, 2002; Sharkey, 1997; Webb & Consi, 2001).
In many respects, this represents an important renaissance of Grey Walter’s search for “mimicry of life” (Grey Walter, 1963, p. 114). Although the Tortoises were described in his very popular 1963 book The Living Brain, they essentially disappeared from the scientific picture for about a quarter of a century. Grey Walter was involved in a 1970 motorcycle accident that ended his career; after this accident, the whereabouts of most of the Tortoises was lost. One remained in the possession of his son after Grey Walter’s death in 1977; it was located in 1995 after an extensive search by Owen Holland. This discovery renewed interest in Grey Walter’s work (Hayward, 2001; Holland, 2003a, 2003b), and has re-established its important place in modern research.
The purpose of the current section is to briefly introduce one small segment of robotics research that has descended from Grey Walter’s pioneering work. In Chapter 3, we introduced the reorientation task that is frequently used to study how geometric and feature cues are used by an agent to navigate through its world. We also described a classical theory, the geometric module (Cheng, 1986; Gallistel, 1990), which has been used to explain some of the basic findings concerning this task. In Chapter 4, we noted that the reorientation task has also been approached from the perspective of connectionist cognitive science. A simple artificial neural network, the perceptron, has been offered as a viable alternative to classical theory (Dawson et al., 2010). In this section we briefly describe a third approach to the reorientation task, because embodied cognitive science has studied it in the context of behavior-based robotics.
Classical and connectionist cognitive science provide very different accounts of the co-operative and competitive interactions between geometric and featural cues when an agent attempts to relocate the target location in a reorientation arena. However, these different accounts are both representational. One of the themes pervading embodied cognitive science is a reaction against representational explanations of intelligent behavior (Shapiro, 2011). One field that has been a test bed for abandoning internal representations is known as new wave robotics (Sharkey, 1997).
New wave roboticists strive to replace representation with reaction (Brooks, 1999), to use sense-act cycles in the place of representational sense-think-act processing. This is because “embodied and situated systems can solve rather complicated tasks without requiring internal states or internal representations” (Nolfi & Floreano, 2000, p. 93). One skill that has been successfully demonstrated in new wave robotics is navigation in the context of the reorientation task (Lund & Miglino, 1998).
The Khepera robot (Bellmore & Nemhauser, 1968; Boogaarts, 2007) is a standard platform for the practice of new wave robotics. It has the appearance of a motorized hockey puck, uses two motor-driven wheels to move about, and has eight sensors distributed around its chassis that allow it to detect the proximity of obstacles. Roboticists have the goal of combining the proximity detector signals to control motor speed in order to produce desired dynamic behaviour. One approach to achieving this goal is to employ evolutionary robotics (Nolfi & Floreano, 2000). Evolutionary robotics involves using a genetic algorithm (Holland, 1992; Mitchell, 1996) to find a set of weights between each proximity detector and each motor.
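Before turning to how such weights are found, it helps to see how small a "control system" of this kind can be. The sketch below is an illustration under assumed conventions (normalized sensor readings, a weight matrix with one column per motor), not actual Khepera code: each motor's speed is simply a weighted sum of the eight proximity readings.

```python
# Illustrative sketch: a reactive controller is just a sensor-to-motor weight matrix.

def motor_speeds(proximity, weights):
    """proximity: 8 sensor readings; weights: 8 rows x 2 columns (left, right)."""
    left = sum(p * w[0] for p, w in zip(proximity, weights))
    right = sum(p * w[1] for p, w in zip(proximity, weights))
    return left, right
```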
In general, evolutionary robotics proceeds as follows (Nolfi & Floreano, 2000). First, a fitness function is defined, to evaluate the quality of robot performance. Evolution begins with an initial population of different control systems, such as different sets of sensor-to-motor weights. The fitness function is used to assess each of these control systems, and those that produce higher fitness values “survive.” Survivors are used to create the next generation of control systems via prescribed methods of “mutation.” The whole process of evaluate-survive-mutate is iterated; average fitness is expected to improve with each new generation. The evolutionary process ends when improvements in fitness stabilize. When evolution stops, the result is a control system that should be quite capable of performing the task that was evaluated by the fitness function.
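The evaluate-survive-mutate cycle can be sketched schematically as follows. Every specific choice here, including the population size, the mutation operator, and especially the fitness function, is a placeholder assumption; in evolutionary robotics the fitness function would score the behaviour of a real or simulated robot (for instance, its closeness to a goal corner), which is not reproduced in this fragment.

```python
# A schematic sketch of an evolutionary loop over sensor-to-motor weight matrices.
# All parameters and the dummy fitness function are illustrative assumptions.
import random

N_SENSORS, N_MOTORS = 8, 2

def random_controller():
    """A controller is a sensor-to-motor weight matrix, as in the sketch above."""
    return [[random.uniform(-1, 1) for _ in range(N_MOTORS)]
            for _ in range(N_SENSORS)]

def fitness(controller):
    """Placeholder: a real fitness function would evaluate robot behaviour in an arena."""
    return -sum(abs(w) for row in controller for w in row)

def mutate(controller, rate=0.1):
    return [[w + random.gauss(0, rate) for w in row] for row in controller]

population = [random_controller() for _ in range(20)]
for generation in range(30):
    ranked = sorted(population, key=fitness, reverse=True)               # evaluate
    survivors = ranked[: len(ranked) // 2]                               # survive
    offspring = [mutate(random.choice(survivors)) for _ in survivors]    # mutate
    population = survivors + offspring                                   # next generation

best = max(population, key=fitness)
```

Here the loop simply runs for a fixed number of generations; in practice the process would be stopped once improvements in average fitness stabilize.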
Lund and Miglino (1998) used this procedure to evolve a control system that enabled Khepera robots to perform the reorientation task in a rectangular arena without feature cues. Their goal was to see whether a standard result—rotational error—could be produced in an agent that did not employ the geometric module, and indeed which did not represent arena properties at all. Lund and Miglino’s fitness function simply measured a robot’s closeness to the goal location. After 30 generations of evolution, they produced a system that would navigate a robot to the goal location from any of 8 different starting locations with a 41 percent success rate. Their robots also produced rotational error, for they incorrectly navigated to the corner 180° from the goal in another 41 percent of the test trials. These results were strikingly similar to those observed when rats perform reorientation in featureless rectangular arenas (e.g., Gallistel, 1990).
Importantly, the control system that was evolved by Lund and Miglino (1998) was simply a set of weighted connections between proximity detectors and motors, and not an encoding of arena shape.
The geometrical properties of the environment can be assimilated in the sensory-motor schema of the robot behavior without any explicit representation. In general, our work, in contrast with traditional cognitive models, shows how environmental knowledge can be reached without any form of direct representation. (Lund & Miglino, 1998, p. 198)
If arena shape is not explicitly represented, then how does the control system developed by Lund and Miglino (1998) produce reorientation task behaviour? When the robot is far enough from the arena walls that none of the sensors are detecting an obstacle, the controller weights are such that the robot moves in a gentle curve to the left. As a result, it never encounters a short wall when it leaves from any of its eight starting locations! When a long wall is (inevitably) encountered, the robot turns left and follows the wall until it stops in a corner. The result is that the robot will be at either the target location or its rotational equivalent.
The control system evolved by Lund and Miglino (1998) is restricted to rectangular arenas of a set size. If one of their robots is placed in an arena of even a slightly different size, its performance suffers (Nolfi, 2002). Nolfi used a much longer evolutionary process (500 generations), and also placed robots in different sized arenas, to successfully produce devices that would generate typical results not only in a featureless rectangular arena, but also in arenas of different dimensions. Again, these robots did so without representing arena shape or geometry.
Nolfi’s (2002) more general control system worked as follows. His robots would begin by moving forwards and avoiding walls, which would eventually lead them into a corner. When facing a corner, signals from the corner’s two walls caused the robot to first turn to orient itself at an angle of 45° from one of the corner’s walls. Then the robot would make an additional turn that was either clockwise or counterclockwise, depending upon whether the sensed wall was to the robot’s left or the right.
The final turn away from the corner necessarily pointed the robot in a direction that would cause it to follow a long wall, because sensing a wall at 45° is an indirect measurement of wall length:
If the robot finds a wall at about 45° on its left side and it previously left a corner, it means that the actual wall is one of the two longer walls. Conversely, if it encounters a wall at 45° on its right side, the actual wall is necessarily one of the two shorter walls. What is interesting is that the robot “measures” the relative length of the walls through action (i.e., by exploiting sensory–motor coordination) and it does not need any internal state to do so. (Nolfi, 2002, p. 141)
As a result, the robot sensed the long wall in a rectangular arena without representing wall length. It followed the long wall, which necessarily led the robot to either the goal corner or the corner that results in a rotational error, regardless of the actual dimensions of the rectangular arena.
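Although Nolfi's controller was an evolved neural network rather than a hand-written rule, the sense-act logic described above can be restated explicitly. The sketch below is only such a restatement; the function name and the string labels are invented for illustration.

```python
# An illustrative restatement of the corner-leaving strategy described above.

def after_leaving_corner(wall_at_45_side):
    """wall_at_45_side: 'left' or 'right', the side on which a wall is sensed
    at roughly 45 degrees just after the robot has left a corner."""
    if wall_at_45_side == "left":
        # A wall at ~45 degrees on the left must be one of the two longer walls:
        # following it leads to the goal corner or its rotational equivalent.
        return "follow this wall"
    else:
        # A wall at ~45 degrees on the right must be one of the two shorter walls:
        # keep turning until a longer wall is picked up instead.
        return "keep turning"
```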
Robots simpler than the Khepera can also perform the reorientation task, and they can at the same time generate some of its core results. The subsumption architecture has been used to design a simple LEGO robot, antiSLAM (Dawson, Dupuis, & Wilson, 2010), that demonstrates rotational error and illustrates how a new wave robot can combine geometric and featural cues, an ability not included in the evolved robots that have been discussed above.
The ability of autonomous robots to navigate is fundamental to their success. In contrast to the robots described in the preceding paragraphs, one of the major approaches to providing such navigation is called SLAM, which is an acronym for a representational approach named “simultaneous localization and mapping” (Jefferies & Yeap, 2008). Representationalists assumed that agents navigate their environment by sensing their current location and referencing it on some internal map. How is such navigation to proceed if an agent is placed in a novel environment for which no such map exists? SLAM is an attempt to answer this question. It proposes methods that enable an agent to build a new map of a novel environment and at the same time use this map to determine the agent’s current location.
The representational assumptions that underlie approaches such as SLAM have recently raised concerns in some researchers who study animal navigation (Alerstam, 2006). To what extent might a completely reactive, sense-act robot be capable of demonstrating interesting navigational behaviour? The purpose of antiSLAM (Dawson, Dupuis, & Wilson, 2010) was to explore this question in an incredibly simple platform—the robot’s name provides some sense of the motivation for its construction.
AntiSLAM is an example of a Braitenberg Vehicle 3 (Braitenberg, 1984), because it uses six different sensors, each of which contributes to the speed of two motors that propel and steer it. Two are ultrasonic sensors that are used as sonar to detect obstacles, two are rotation detectors that are used to determine when the robot has stopped moving, and two are light sensors that are used to attract the robot to locations of bright illumination. The sense-act reflexes of antiSLAM were not evolved but were instead created using the subsumption architecture.
The lowest level of processing in antiSLAM is “drive,” which essentially uses the outputs of the ultrasonic sensors to control motor speed. The closer to an obstacle a sensor gets, the slower is the speed of the one motor that the sensor helps to control. The next level is “escape.” When both rotation sensors are signaling that the robot is stationary (i.e., stopped by an obstacle detected by both sensors), the robot executes a turn to point itself in a different direction. The next level up is “wall following”: motor speed is manipulated in such a way that the robot has a strong bias to keep closer to a wall on the right than to a wall on the left. The highest level is “feature,” which uses two light sensors to contribute to motor speed in such a way that it approaches areas of brighter light.
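A rough sketch of how these four levels might combine is given below. The sensor encodings, thresholds, and weights are assumptions (the actual LEGO implementation is not reproduced here); the sketch only shows how each level maps sensed values directly onto motor speeds, with no model of the arena in between.

```python
# Schematic sketch of antiSLAM-style layered control (all values are assumptions).
# Sonar readings are assumed normalized: 0.0 = touching an obstacle, 1.0 = clear.

def control(sonar_left, sonar_right, stalled_left, stalled_right,
            light_left, light_right):
    # Level 1, "drive": the nearer an obstacle, the slower the motor that sensor feeds.
    left, right = min(sonar_left, 1.0), min(sonar_right, 1.0)

    # Level 2, "escape": if both rotation sensors report no movement, turn away.
    if stalled_left and stalled_right:
        return -0.5, 0.5                   # spin in place to face a new direction

    # Level 3, "wall following": bias the speeds to keep a wall closer on the right.
    right *= 0.9

    # Level 4, "feature": brighter light on one side speeds up the opposite motor,
    # steering the robot toward illuminated locations.
    left += 0.3 * light_right
    right += 0.3 * light_left

    return left, right
```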
AntiSLAM performs complex, lifelike exploratory behavior when placed in general environments. It follows walls, steers itself around obstacles, explores regions of brighter light, and turns around and escapes when it finds itself stopped in a corner or in front of a large obstacle.
When placed in a reorientation task arena, antiSLAM generates behaviors that give it the illusion of representing geometric and feature cues (Dawson, Dupuis, & Wilson, 2010). It follows walls in a rectangular arena, slowing to a halt when it enters a corner. It then initiates a turning routine to exit the corner and continue exploring. Its light sensors permit it to reliably find a target location that is associated with particular geometric and local features. When local features are removed, it navigates the arena using geometric cues only, and it produces rotational errors. When local features are moved (i.e., an incorrect corner is illuminated), its choice of locations from a variety of starting points mimics the same combination of geometric and feature cues demonstrated in experiments with animals. In short, it produces some of the key features of the reorientation task—however, it does so without creating a cognitive map, and even without representing a goal. Furthermore, observations of antiSLAM’s reorientation task behavior indicated that the path taken by an agent as it moves through the arena is a crucial behavioral measure; such paths are rarely reported in studies of reorientation.
The reorienting robots discussed above are fairly recent descendants of Grey Walter’s (1963) Tortoises, but their more ancient ancestors are the eighteenth-century life-mimicking, clockwork automata (Wood, 2002). These devices brought into sharp focus the philosophical issues concerning the comparison of man and machine that was central to Cartesian philosophy (Grenville, 2001; Wood, 2002). Religious tensions concerning the mechanistic nature of man, and the spiritual nature of clockwork automata, were soothed by dualism: automata and animals were machines. Men too were machines, but unlike automata, they also had souls. It was the appearance of clockwork automata that led to their popularity, as well as to their conflicts with the church. “Until the scientific era, what seemed most alive to people was what most looked like a living being. The vitality accorded to an object was a function primarily of its form” (Grey Walter, 1963, p. 115).
In contrast, Grey Walter’s Tortoises were not attempts to reproduce appearances, but were instead simulations of more general and more abstract abilities central to biological agents,
exploration, curiosity, free-will in the sense of unpredictability, goal-seeking, self-regulation, avoidance of dilemmas, foresight, memory, learning, forgetting, association of ideas, form recognition, and the elements of social accommodation. Such is life. (Grey Walter, 1963, p. 120)
By situating and embodying his machines, Grey Walter invented a new kind of scientific tool that produced behaviors that were creative and unpredictable, governed by nonlinear relationships between internal mechanisms and the surrounding, dynamic world.
Modern machines that mimic lifelike behavior still raise serious questions about what it is to be human. To Wood (2002, p. xxvii) all automata were presumptions “that life can be simulated by art or science or magic. And embodied in each invention is a riddle, a fundamental challenge to our perception of what makes us human.” The challenge is that if the lifelike behaviors of the Tortoises and their descendants are merely feedback loops between simple mechanisms and their environments, then might the same be true of human intelligence?
This challenge is reflected in some of roboticist Rodney Brooks’ remarks in Errol Morris’ 1997 documentary Fast, Cheap & Out of Control. Brooks begins by describing one of his early robots: “To an observer it appears that the robot has intentions and it has goals and it is following people and chasing prey. But it’s just the interaction of lots and lots of much simpler processes.” Brooks then considers extending this view to human cognition: “Maybe that’s all there is. Maybe a lot of what humans are doing could be explained this way.”
But as the segment in the documentary proceeds, Brooks, the pioneer of behavior-based robotics, is reluctant to believe that humans are similar types of devices:
When I think about it, I can almost see myself as being made up of thousands and thousands of little agents doing stuff almost independently. But at the same time I fall back into believing the things about humans that we all believe about humans and living life that way. Otherwise I analyze it too much; life becomes almost meaningless. (Morris, 1997)
Conflicts like those voiced by Brooks are brought to the forefront when embodied cognitive science ventures to study humanoid robots that are designed to exploit social environments and interactions (Breazeal, 2002; Turkle, 2011).
The embodied approach has long recognized that an agent’s environment is much more than a static array of stimuli (Gibson, 1979; Neisser, 1976; Scribner & Tobach, 1997; Vygotsky, 1986). “The richest and most elaborate affordances of the environment are provided by other animals and, for us, other people” (Gibson, 1979, p. 135). A social environment is a rich source of complexity and ranges from dynamic interactions with other agents to cognitive scaffolding provided by cultural conventions. “All higher mental processes are primarily social phenomena, made possible by cognitive tools and characteristic situations that have evolved in the course of history” (Neisser, 1976, p. 134).
In the most basic sense of social, multiple agents in a shared world produce a particularly complex source of feedback between each other’s actions. “What the other animal affords the observer is not only behavior but also social interaction. As one moves so does the other, the one sequence of action being suited to the other in a kind of behavioral loop” (Gibson, 1979, p. 42).
Grey Walter (1963) explored such behavioral loops when he placed two Tortoises in the same room. Mounted lights provided particularly complex stimuli in this case, because robot movements would change the position of the two lights, which in turn altered subsequent robot behaviors. In describing a photographic record of one such interaction, Grey Walter called the social dynamics of his machines,
the formation of a cooperative and a competitive society.... When the two creatures are released at the same time in the dark, each is attracted by the other’s headlight but each in being attracted extinguishes the source of attraction to the other. The result is a stately circulating movement of minuet-like character; whenever the creatures touch they become obstacles and withdraw but are attracted again in rhythmic fashion. (Holland, 2003a, p. 2104)
Similar behavioral loops have been exploited to explain the behavior of larger collections of interdependent agents, such as flocks of flying birds or schools of swimming fish (Nathan & Barbosa, 2008; Reynolds, 1987). Such an aggregate presents itself as another example of a superorganism, because the synchronized movements of flock members give “the strong impression of intentional, centralized control” (Reynolds, 1987, p. 25). However, this impression may be the result of local, stigmergic interactions in which an environment chiefly consists of other flock members in an agent’s immediate vicinity.
In his pioneering work on simulating the flight of a flock of artificial birds, called boids, Reynolds (1987) created lifelike flocking behavior by having each independently flying boid adapt its trajectory according to three simple rules: avoid collision with nearby flock mates, match the velocity of nearby flock mates, and stay close to nearby flock mates. A related model (Couzin et al., 2005) has been successfully used to predict movement of human crowds (Dyer et al., 2008; Dyer et al., 2009; Faria et al., 2010).
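Reynolds' three rules are simple enough to sketch directly. In the fragment below, the neighbourhood radius, the rule weights, and the flock size are illustrative assumptions; the point is only that each boid consults its nearby flock mates and applies the three adjustments, with no centralized controller anywhere in the loop.

```python
# A compact sketch of Reynolds-style flocking (parameter values are assumptions).
import math
import random

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def neighbours(boid, flock, radius=15.0):
    return [b for b in flock
            if b is not boid and math.hypot(b.x - boid.x, b.y - boid.y) < radius]

def step(flock):
    for b in flock:
        near = neighbours(b, flock)
        if not near:
            continue
        n = len(near)
        cx, cy = sum(o.x for o in near) / n, sum(o.y for o in near) / n      # flock centre
        avx, avy = sum(o.vx for o in near) / n, sum(o.vy for o in near) / n  # mean velocity
        sep_x = sum(b.x - o.x for o in near if math.hypot(o.x - b.x, o.y - b.y) < 5)
        sep_y = sum(b.y - o.y for o in near if math.hypot(o.x - b.x, o.y - b.y) < 5)
        # Rule 1: avoid collision; Rule 2: match velocity; Rule 3: stay close.
        b.vx += 0.05 * sep_x + 0.05 * (avx - b.vx) + 0.01 * (cx - b.x)
        b.vy += 0.05 * sep_y + 0.05 * (avy - b.vy) + 0.01 * (cy - b.y)
    for b in flock:
        b.x += b.vx
        b.y += b.vy

flock = [Boid() for _ in range(30)]
for _ in range(100):
    step(flock)
```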
However, many human social interactions are likely more involved than the simple behavioral loops that defined the social interactions amongst Grey Walter’s (1963) Tortoises or the flocking behavior of Reynolds’ (1987) boids. These interactions are possibly still behavioral loops, but they may be loops that involve processing special aspects of the social environment. This is because it appears that the human brain has a great deal of neural circuitry devoted to processing specific kinds of social information.
Social cognition is fundamentally involved with how we understand others (Lieberman, 2007). One key avenue to such understanding is our ability to use and interpret facial expressions (Cole, 1998; Etcoff & Magee, 1992). There is a long history of evidence that indicates that our brains have specialized circuitry for processing faces. Throughout the eighteenth and nineteenth centuries, there were many reports of patients whose brain injuries produced an inability to recognize faces but did not alter the patients’ ability to identify other visual objects. This condition was called prosopagnosia, for “face blindness,” by German neuroscientist Joachim Bodamer in a famous 1947 manuscript (Ellis & Florence, 1990). In the 1980s, recordings from single neurons in the monkey brain revealed cells that appeared to be tailored to respond to specific views of monkey faces (Perrett, Mistlin, & Chitty, 1987; Perrett, Rolls, & Caan, 1982). At that time, though, it was unclear whether analogous neurons for face processing were present in the human brain.
Modern brain imaging techniques now suggest that the human brain has an elaborate hierarchy of co-operating neural systems for processing faces and their expressions (Haxby, Hoffman, & Gobbini, 2000, 2002). Haxby, Hoffman, and Gobbini (2000, 2002) argue for the existence of multiple, bilateral brain regions involved in different face perception functions. Some of these are core systems that are responsible for processing facial invariants, such as relative positions of the eyes, nose, and mouth, which are required for recognizing faces. Others are extended systems that process dynamic aspects of faces in order to interpret, for instance, the meanings of facial expressions. These include subsystems that co-operatively account for lip reading, following gaze direction, and assigning affect to dynamic changes in expression.
Facial expressions are not the only source of social information. Gestures and actions, too, are critical social stimuli. Evidence also suggests that mirror neurons in the human brain (Gallese et al., 1996; Iacoboni, 2008; Rizzolatti & Craighero, 2004; Rizzolatti, Fogassi, & Gallese, 2006) are specialized for both the generation and interpretation of gestures and actions.
Mirror neurons were serendipitously discovered in experiments in which motor neurons in region F5 were recorded when monkeys performed various reaching actions (Di Pellegrino et al., 1992). By accident, it was discovered that many of the neurons that were active when a monkey performed an action also responded when similar actions were observed being performed by another:
After the initial recording experiments, we incidentally observed that some experimenter’s actions, such as picking up the food or placing it inside the testing box, activated a relatively large proportion of F5 neurons in the absence of any overt movement of the monkey. (Di Pellegrino et al., 1992, p. 176)
The chance discovery of mirror neurons has led to an explosion of research into their behavior (Iacoboni, 2008). It has been discovered that when the neurons fire, they do so for the entire duration of the observed action, not just at its onset. They are grasp specific: some respond to actions involving precision grips, while others respond to actions involving larger objects. Some are broadly tuned, in the sense that they will be triggered when a variety of actions are observed, while others are narrowly tuned to specific actions. All seem to be tuned to object-oriented action: a mirror neuron will respond to a particular action on an object, but it will fail to respond to the identical action if no object is present.
While most of the results described above were obtained from studies of the monkey brain, there is a steadily growing literature indicating that the human brain also has a mirror system (Buccino et al., 2001; Iacoboni, 2008).
Mirror neurons are not solely concerned with hand and arm movements. For instance, some monkey mirror neurons respond to mouth movements, such as lip smacking (Ferrari et al., 2003). Similarly, the human brain has a mirror system for the act of touching (Keysers et al., 2004). Likewise, another part of the human brain, the insula, may be a mirror system for emotion (Wicker et al., 2003). For example, it generates activity when a subject experiences disgust, and also when a subject observes the facial expressions of someone else having a similar experience.
Two decades after its discovery, extensive research on the mirror neuron system has led some researchers to claim that it provides the neural substrate for social cognition and imitative learning (Gallese & Goldman, 1998; Gallese, Keysers, & Rizzolatti, 2004; Iacoboni, 2008), and that disruptions of this system may be responsible for autism (Williams et al., 2001). The growing understanding of the mirror system and advances in knowledge about the neuroscience of face perception have heralded a new interdisciplinary research program, called social cognitive neuroscience (Blakemore, Winston, & Frith, 2004; Lieberman, 2007; Ochsner & Lieberman, 2001).
It may once have seemed foolhardy to work out connections between fundamental neurophysiological mechanisms and highly complex social behavior, let alone to decide whether the mechanisms are specific to social processes. However... neuroimaging studies have provided some encouraging examples. (Blakemore, Winston, & Frith, 2004, p. 216)
The existence of social cognitive neuroscience is a consequence of humans evolving, embodied and situated, in a social environment that includes other humans and their facial expressions, gestures, and actions. The modern field of sociable robotics (Breazeal, 2002) attempts to develop humanoid robots that are also socially embodied and situated. One purpose of such robots is to provide a medium for studying human social cognition via forward engineering.
A second, applied purpose of sociable robotics is to design robots to work co-operatively with humans by taking advantage of a shared social environment. Breazeal (2002) argued that because the human brain has evolved to be expert in social interaction, “if a technology behaves in a socially competent manner, we evoke our evolved social machinery to interact with it” (p. 15). This is particularly true if a robot’s socially competent behavior is mediated by its humanoid embodiment, permitting it to gesture or to generate facial expressions. “When a robot holds our gaze, the hardwiring of evolution makes us think that the robot is interested in us. When that happens, we feel a possibility for deeper connection” (Turkle, 2011, p. 110). Sociable robotics exploits the human mechanisms that offer this deeper connection so that humans won’t require expert training in interacting with sociable robots.
A third purpose of sociable robotics is to explore cognitive scaffolding, which in this literature is often called leverage, in order to extend the capabilities of robots. For instance, many of the famous platforms of sociable robotics—including Cog (Brooks et al., 1999; Scassellati, 2002), Kismet (Breazeal, 2002, 2003, 2004), Domo (Edsinger-Gonzales & Weber, 2004), and Leonardo (Breazeal, Gray, & Berlin, 2009)—are humanoid in form and are social learners: their capabilities advance through imitation and through interacting with human partners. Furthermore, the success of the robot’s contribution to the shared social environment leans heavily on the contributions of the human partner. “Edsinger thinks of it as getting Domo to do more ‘by leveraging the people.’ Domo needs the help. It understands very little about any task as a whole” (Turkle, 2011, p. 157).
The leverage exploited by a sociable robot takes advantage of behavioral loops mediated by the expressions and gestures of both robot and human partner. For example, consider the robot Kismet (Breazeal, 2002). Kismet is a sociable robotic “infant,” a dynamic, mechanized head that participates in social interactions. Kismet has auditory and visual perceptual systems that are designed to perceive social cues provided by a human “caregiver.” Kismet can also deliver such social cues by changing its facial expression, directing its gaze to a location in a shared environment, changing its posture, and vocalizing.
When Kismet is communicating with a human, it uses the interaction to fulfill internal drives or needs (Breazeal, 2002). Kismet has three drives: a social drive to be in the presence of and stimulated by people, a stimulation drive to be stimulated by the environment in general (e.g., by colorful toys), and a fatigue drive that causes the robot to “sleep.” Kismet sends social signals to satisfy these drives. It can manipulate its facial expression, vocalization, and posture to communicate six basic emotions: anger, disgust, fear, joy, sorrow, and surprise. These expressions work to meet the drives by manipulating the social environment in such a way that the environment changes to satisfy Kismet’s needs.
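A toy sketch of this drive-based signalling is given below. The drive levels, thresholds, and the mapping from drives to expressive displays are assumptions made purely for illustration; Kismet's actual architecture is far richer. The sketch shows only the basic loop: an unmet drive selects an expression whose purpose is to change the social environment so that the drive becomes satisfied.

```python
# A toy sketch of drive-based expression selection (all values are assumptions).

drives = {"social": 0.2, "stimulation": 0.5, "fatigue": 0.1}   # 0 = unmet, 1 = satisfied

def select_expression(drives):
    """Choose an expressive display intended to recruit the social environment
    into satisfying the most pressing drive."""
    if drives["fatigue"] > 0.8:
        return "sleep"                     # withdraw when the world is overstimulating
    urgent = min(("social", "stimulation"), key=lambda d: drives[d])
    if drives[urgent] < 0.3:
        return "sorrow" if urgent == "social" else "interest"
    return "joy"                           # drives are being met: keep the exchange going

print(select_expression(drives))           # e.g., 'sorrow', which draws the caregiver in
```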
For example, an unfulfilled social drive causes Kismet to express sadness, which initiates social responses from a caregiver. When Kismet perceives the caregiver’s face, it wiggles its ears in greeting, and initiates a playful dialog to engage the caregiver. Kismet will eventually habituate to these interactions and then seek to fulfill a stimulation drive by coaxing the caregiver to present a colourful toy. However, if this presentation is too stimulating—if the toy is presented too closely or moved too quickly—the fatigue drive will produce changes in Kismet’s behaviour that attempt to decrease this stimulation. If the world does not change in the desired way, Kismet will end the interaction by “sleeping.” “But even at its worst, Kismet gives the appearance of trying to relate. At its best, Kismet appears to be in continuous, expressive conversation” (Turkle, 2011, p. 118).
Kismet’s behavior leads to lengthy, dynamic interactions that are realistically social. A young girl interacting with Kismet “becomes increasingly happy and relaxed. Watching girl and robot together, it is easy to see Kismet as increasingly happy and relaxed as well. Child and robot are a happy couple” (Turkle, 2011, p. 121). Similar results occur when adults converse with Kismet. “One moment, Rich plays at a conversation with Kismet, and the next, he is swept up in something that starts to feel real” (p. 154).
Even the designer of a humanoid robot can be “swept up” by their interactions with it. Domo (Edsinger-Gonzales & Weber, 2004) is a limbed humanoid robot that is intended to be a physical helper, by performing such actions as placing objects on shelves. It learns to behave by physically interacting with a human teacher. These physical interactions give even sophisticated users—including its designer, Edsinger—a strong sense that Domo is a social creature. Edsinger finds himself vacillating between viewing Domo as a creature and viewing it as merely a device that he has designed.
For Edsinger, this sequence—experiencing Domo as having desires and then talking himself out of the idea—becomes familiar. For even though he is Domo’s programmer, the robot’s behavior has not become dull or predictable. Working together, Edsinger and Domo appear to be learning from each other. (Turkle, 2011, p. 156)
That sociable robots can generate such strong reactions within humans is potentially concerning. The feeling of the uncanny occurs when the familiar is presented in unfamiliar form (Freud, 1976). The uncanny results when standard categories used to classify the world disappear (Turkle, 2011). Turkle (2011) called one such instance, when a sociable robot is uncritically accepted as a creature, the robotic moment. Edsinger’s reactions to Domo illustrated its occurrence: “And this is where we are in the robotic moment. One of the world’s most sophisticated robot ‘users’ cannot resist the idea that pressure from a robot’s hand implies caring” (p. 160).
At issue in the robotic moment is a radical recasting of the posthuman (Hayles, 1999). “The boundaries between people and things are shifting” (Turkle, 2011, p. 162). The designers of sociable robots scaffold their creations by taking advantage of the expert social abilities of humans. The robotic moment, though, implies a dramatic rethinking of what such human abilities entail. Might human social interactions be reduced to mere sense-act cycles of the sort employed in devices like Kismet? “To the objection that a robot can only seem to care or understand, it has become commonplace to get the reply that people, too, may only seem to care or understand” (p. 151).
In Hayles’ (1999) definition of posthumanism, the body is dispensable, because the essence of humanity is information. But this is an extremely classical view. An alternative, embodied posthumanism is one in which the mind is dispensed with, because what is fundamental to humanity is the body and its engagement with reality. “From its very beginnings, artificial intelligence has worked in this space between a mechanical view of people and a psychological, even spiritual, view of machines” (Turkle, 2011, p. 109). The robotic moment leads Turkle to ask “What will love be? And what will it mean to achieve ever-greater intimacy with our machines? Are we ready to see ourselves in the mirror of the machine and to see love as our performances of love?” (p. 165).
Social interactions involve coordinating the activities of two or more agents. Even something as basic as a conversation between two people is highly coordinated, with voices, gestures, and facial expressions used to orchestrate joint actions (Clark, 1996). Fundamental to coordinating such social interactions is our ability to predict the actions, interest, and emotions of others. Generically, the study of the ability to make such predictions is called the study of theory of mind, because many theorists argue that these predictions are rooted in our assumption that others, like us, have minds or mental states. As a result, researchers call our ability to foretell others’ actions mind reading or mentalizing (Goldman, 2006). “Having a mental state and representing another individual as having such a state are entirely different matters. The latter activity, mentalizing or mind reading, is a second-order activity: It is mind thinking about minds” (p. 3).
There are three general, competing theories about how humans perform mind reading (Goldman, 2006). The first is rationality theory, a version of which was introduced in Chapter 3 in the form of the intentional stance (Dennett, 1987). According to rationality theory, mind reading is accomplished via the ascription of contents to the putative mental states of others. In addition, we assume that other agents are rational. As a result, future behaviors are predicted by inferring what future behaviors follow rationally from the ascribed contents. For instance, if we ascribe to someone the belief that piano playing can only be improved by practicing daily, and we also ascribe to them the desire to improve at piano, then according to rationality theory it would be natural to predict that they would practice piano daily.
A second account of mentalizing is called theory-theory (Goldman, 2006). Theory-theory emerged from studies of the development of theory of mind (Gopnik & Wellman, 1992; Wellman, 1990) as well as from research on cognitive development in general (Gopnik & Meltzoff, 1997; Gopnik, Meltzoff, & Kuhl, 1999). Theory-theory is the position that our understanding of the world, including our understanding of other people in it, is guided by naive theories (Goldman, 2006). These theories are similar in form to the theories employed by scientists, because a naive theory of the world will—eventually—be revised in light of conflicting evidence.
Babies and scientists share the same basic cognitive machinery. They have similar programs, and they reprogram themselves in the same way. They formulate theories, make and test predictions, seek explanations, do experiments, and revise what they know in the light of new evidence. (Gopnik, Meltzoff, & Kuhl, 1999, p. 161)
There is no special role for a principle of rationality in theory-theory, which distinguishes it from rationality theory (Goldman, 2006). However, it is clear that both of these approaches to mentalizing are strikingly classical in nature. This is because both rely on representations. One senses the social environment, then thinks (by applying rationality or by using a naïve theory), and then finally predicts future actions of others. A third theory of mind reading, simulation theory, has emerged as a rival to theory-theory, and some of its versions posit an embodied account of mentalizing.
Simulation theory is the view that people mind read by replicating or emulating the states of others (Goldman, 2006). In simulation theory, “mindreading includes a crucial role for putting oneself in others’ shoes. It may even be part of the brain’s design to generate mental states that match, or resonate with, states of people one is observing” (p. 4).
The modern origins of simulation theory rest in two philosophical papers from the 1980s, one by Gordon (1986) and one by Heal (1986). Gordon (1986) noted that the starting point for explaining how we predict the behavior of others should be investigating our ability to predict our own actions. We can do so with exceedingly high accuracy because “our declarations of immediate intention are causally tied to some actual precursor of behavior: perhaps tapping into the brain’s updated behavioral ‘plans’ or into ‘executive commands’ that are about to guide the relevant motor sequences” (p. 159).
For Gordon (1986), our ability to accurately predict our own behavior was a kind of practical reasoning. He proceeded to argue that such reasoning could also be used in attempts to predict others. We could predict others, or predict our own future behavior in hypothetical situations, by simulating practical reasoning.
To simulate the appropriate practical reasoning I can engage in a kind of pretend-play: pretend that the indicated conditions actually obtain, with all other conditions remaining (so far as is logically possible and physically probable) as they presently stand; then continuing the make-believe try to ’make up my mind’ what to do given these (modified) conditions. (Gordon, 1986, p. 160)
A key element of such “pretend play” is that behavioral output is taken offline.
Gordon’s proposal causes simulation theory to depart from the other two theories of mind reading by reducing its reliance on ascribed mental contents. For Gordon (1986, p. 162), when someone simulates practical reasoning to make predictions about someone else, “they are ‘putting themselves in the other’s shoes’ in one sense of that expression: that is, they project themselves into the other’s situation, but without any attempt to project themselves into, as we say, the other’s ‘mind.’” Heal (1986) proposed a similar approach, which she called replication.
A number of different variations of simulation theory have emerged (Davies & Stone, 1995a, 1995b), making a definitive statement of its fundamental characteristics problematic (Heal, 1996). Some versions of simulation theory remain very classical in nature. For instance, simulation could proceed by setting the values of a number of variables to define a situation of interest. These values could then be provided to a classical reasoning system, which would use these represented values to make plausible predictions.
Suppose I am interested in predicting someone’s action. . . . I place myself in what I take to be his initial state by imagining the world as it would appear from his point of view and I then deliberate, reason and reflect to see what decision emerges. (Heal, 1996, p. 137)
Some critics of simulation theory argue that it is just as Cartesian as other mind reading theories (Gallagher, 2005). For instance, Heal’s (1986) notion of replication exploits shared mental abilities. For her, mind reading requires only the assumption that others “are like me in being thinkers, that they possess the same fundamental cognitive capacities and propensities that I do” (p. 137).
However, other versions of simulation theory are far less Cartesian or classical in nature. Gordon (1986, pp. 17–18) illustrated such a theory with an example from Edgar Allan Poe’s The Purloined Letter:
When I wish to find out how wise, or how stupid, or how good, or how wicked is any one, or what are his thoughts at the moment, I fashion the expression of my face, as accurately as possible, in accordance with the expression of his, and then wait to see what thoughts or sentiments arise in my mind or heart, as if to match or correspond with the expression. (Gordon, 1986, pp. 17–18)
In Poe’s example, mind reading occurs not by using our reasoning mechanisms to take another’s place, but instead by exploiting the fact that we share similar bodies. Songwriter David Byrne (1980) takes a related position in Seen and Not Seen, in which he envisions the implications of people being able to mold their appearance according to some ideal: “they imagined that their personality would be forced to change to fit the new appearance. . . .This is why first impressions are often correct.” Social cognitive neuroscience transforms such views from art into scientific theory.
Ultimately, subjective experience is a biological data format, a highly specific mode of presenting information about the world, and the Ego is merely a complex physical event—an activation pattern in your central nervous system. (Metzinger, 2009, p. 208)
Philosopher Robert Gordon’s version of simulation theory (Gordon, 1986, 1992, 1995, 1999, 2005a, 2005b, 2007, 2008) provides an example of a radically embodied theory of mind reading. Gordon (2008, p. 220) could “see no reason to hold on to the assumption that our psychological competence is chiefly dependent on the application of concepts of mental states.” This is because his simulation theory exploited the body in exactly the same way that Brooks’ (1999) behavior-based robots exploited the world: as a replacement for representation (Gordon, 1999). “One’s own behavior control system is employed as a manipulable model of other such systems. . . . Because one human behavior control system is being used to model others, general information about such systems is unnecessary” (p. 765).
What kind of evidence exists to support a more embodied or less Cartesian simulation theory? Researchers have argued that simulation theory is supported by the discovery of the brain mechanisms of interest to social cognitive neuroscience (Lieberman, 2007). In particular, it has been argued that mirror neurons provide the neural substrate that instantiates simulation theory (Gallese & Goldman, 1998): “[Mirror neuron] activity seems to be nature’s way of getting the observer into the same ‘mental shoes’ as the target—exactly what the conjectured simulation heuristic aims to do” (pp. 497–498).
Importantly, the combination of the mirror system and simulation theory implies that the “mental shoes” involved in mind reading are not symbolic representations. They are instead motor representations; they are actions-on-objects as instantiated by the mirror system. This has huge implications for theories of social interactions, minds, and selves:
Few great social philosophers of the past would have thought that social understanding had anything to do with the pre-motor cortex, and that ‘motor ideas’ would play such a central role in the emergence of social understanding. Who could have expected that shared thought would depend upon shared ‘motor representations’? (Metzinger, 2009, p. 171)
If motor representations are the basis of social interactions, then simulation theory becomes an account of mind reading that stands as a reaction against classical, representational theories. Mirror neuron explanations of simulation theory replace sense-think-act cycles with sense-act reflexes in much the same way as was the case in behavior-based robotics. Such a revolutionary position is becoming commonplace for neuroscientists who study the mirror system (Metzinger, 2009).
Neuroscientist Vittorio Gallese, one of the discoverers of mirror neurons, provides an example of this radical position:
Social cognition is not only social metacognition, that is, explicitly thinking about the contents of someone else’s mind by means of abstract representations. We can certainly explain the behavior of others by using our complex and sophisticated mentalizing ability. My point is that most of the time in our daily social interactions, we do not need to do this. We have a much more direct access to the experiential world of the other. This dimension of social cognition is embodied, in that it mediates between our multimodal experiential knowledge of our own lived body and the way we experience others. (Metzinger, 2009, p. 177)
Cartesian philosophy was based upon an extraordinary act of skepticism (Descartes, 1996). In his search for truth, Descartes believed that he could not rely on his knowledge of the world, or even of his own body, because such knowledge could be illusory.
I shall think that the sky, the air, the earth, colors, shapes, sounds, and all external things are merely the delusions of dreams which he [a malicious demon] has devised to ensnare my judgment. I shall consider myself as not having hands or eyes, or flesh, or blood or senses, but as falsely believing that I have all these things. (Descartes, 1996, p. 23)
The disembodied Cartesian mind is founded on the myth of the external world.
Embodied theories of mind invert Cartesian skepticism. The body and the world are taken as fundamental; it is the mind or the holistic self that has become the myth. However, some have argued that our notion of a holistic internal self is illusory (Clark, 2003; Dennett, 1991, 2005; Metzinger, 2009; Minsky, 1985, 2006; Varela, Thompson, & Rosch, 1991). “We are, in short, in the grip of a seductive but quite untenable illusion: the illusion that the mechanisms of mind and self can ultimately unfold only on some privileged stage marked out by the good old-fashioned skin-bag” (Clark, 2003, p. 27).
Classical cognitive scientists investigate cognitive phenomena at multiple levels (Dawson, 1998; Marr, 1982; Pylyshyn, 1984). Their materialism commits them to exploring issues concerning implementation and architecture. Their view that the mind is a symbol manipulator leads them to seek the algorithms responsible for solving cognitive information problems. Their commitment to logicism and rationality has them deriving formal, mathematical, or logical proofs concerning the capabilities of cognitive systems.
Embodied cognitive science can also be characterized as adopting these same multiple levels of investigation. Of course, this is not to say that there are not also interesting technical differences between the levels of investigation that guide embodied cognitive science and those that characterize classical cognitive science.
By definition, embodied cognitive science is committed to providing implementational accounts. Embodied cognitive science is an explicit reaction against Cartesian dualism and its modern descendant, methodological solipsism. In its emphasis on environments and embodied agents, embodied cognitive science is easily as materialist as the classical approach. Some of the more radical positions in embodied cognitive science, such as the myth of the self (Metzinger, 2009) or the abandonment of representation (Chemero, 2009), imply that implementational accounts may be even more critical for the embodied approach than is the case for classical researchers.
However, even though embodied cognitive science shares the implementational level of analysis with classical cognitive science, this does not mean that it interprets implementational evidence in the same way. For instance, consider single cell recordings from visual neurons. Classical cognitive science, with its emphasis on the creation of internal models of the world, views such data as providing evidence about what kinds of visual features are detected, to be later combined into more complex representations of objects (Livingstone & Hubel, 1988). In contrast, embodied cognitive scientists see visual neurons as being involved not in modelling, but instead in controlling action. As a result, single cell recordings are more likely to be interpreted in the context of ideas such as the affordances of ecological perception (Gibson, 1966, 1979; Noë, 2004). “Our brain does not simply register a chair, a teacup, an apple; it immediately represents the seen object as what I could do with it—as an affordance, a set of possible behaviors” (Metzinger, 2009, p. 167). In short, while embodied and classical cognitive scientists seek implementational evidence, they are likely to interpret it very differently.
The materialism of embodied cognitive science leads naturally to proposals of functional architectures. An architecture is a set of primitives, a physically grounded toolbox of core processes, from which cognitive phenomena emerge. Explicit statements of primitive processes are easily found in embodied cognitive science. For example, it is common to see subsumption architectures explicitly laid out in accounts of behaviour-based robots (Breazeal, 2002; Brooks, 1999, 2002; Kube & Bonabeau, 2000; Scassellati, 2002).
Of course, the primitive components of a typical subsumption architecture are designed to mediate actions on the world, not to aid in the creation of models of it. As a result, the assumptions underlying embodied cognitive science’s primitive sense-act cycles are quite different from those underlying classical cognitive science’s primitive sense-think-act processing.
As well, embodied cognitive science’s emphasis on the fundamental role of an agent’s environment can lead to architectural specifications that can dramatically differ from those found in classical cognitive science. For instance, a core aspect of an architecture is control—the mechanisms that choose which primitive operation or operations to execute at any given time. Typical classical architectures will internalize control; for example, the central executive in models of working memory (Baddeley, 1986). In contrast, in embodied cognitive science an agent’s environment is critical to control; for example, in architectures that exploit stigmergy (Downing & Jeanne, 1988; Holland & Melhuish, 1999; Karsai, 1999; Susi & Ziemke, 2001; Theraulaz & Bonabeau, 1999). This suggests that the notion of the extended mind is really one of an extended architecture; control of processing can reside outside of an agent.
When embodied cognitive scientists posit an architectural role for the environment, as is required in the notion of stigmergic control, this means that an agent’s physical body must also be a critical component of an embodied architecture. One reason for this is that from the embodied perspective, an environment cannot be defined in the absence of an agent’s body, as in proposing affordances (Gibson, 1979). A second reason for this is that if an embodied architecture defines sense-act primitives, then the actions that are available are constrained by the nature of an agent’s embodiment. A third reason for this is that some environments are explicitly defined, at least in part, by bodies. For instance, the social environment for a sociable robot such as Kismet (Breazeal, 2002) includes its moveable ears, eyebrows, lips, eyelids, and head, because it manipulates these bodily components to coordinate its social interactions with others.
That an agent’s body can be part of an embodied architecture does not mean that this architecture is not functional. The key elements of Kismet’s expressive features are shape and movement; the fact that Kismet is not flesh is irrelevant because its facial features are defined in terms of their function.
In the robotic moment, what you are made of—silicon, metal, flesh—pales in comparison with how you behave. In any given circumstance, some people and some robots are competent and some not. Like people, any particular robot needs to be judged on its own merits. (Turkle, 2011, p. 94)
That an agent's body can be part of a functional architecture is an idea that is foreign to classical cognitive science. It also leads to an architectural complication that may be unique to embodied cognitive science. Humans have no trouble relating to, and accepting, sociable robots that are obviously toy creatures, such as Kismet or the robot dog Aibo (Turkle, 2011). In general, as the appearance and behavior of such robots become more lifelike, their acceptance will increase.
However, as robots come to resemble humans more closely, they produce a reaction called the uncanny valley (MacDorman & Ishiguro, 2006; Mori, 1970). The uncanny valley is seen in a graph that plots human acceptance of robots as a function of robot appearance. It is the part of the graph in which acceptance, which has been steadily growing as appearance grows more lifelike, suddenly plummets when a robot's appearance is "almost human"—that is, when it is realistically human but can still be differentiated from biological humans. The uncanny valley is illustrated in the work of roboticist Hiroshi Ishiguro, who,
built androids that reproduced himself, his wife, and his five-year old daughter. The daughter’s first reaction when she saw her android clone was to flee. She refused to go near it and would no longer visit her father’s laboratory. (Turkle, 2011, p. 128)
Producing an adequate architectural component—a body that avoids the uncanny valley—is a distinctive challenge for embodied cognitive scientists who ply their trade using humanoid robots.
In embodied cognitive science, functional architectures lead to algorithmic explorations. We saw that when classical cognitive science conducts such explorations, it uses reverse engineering to attempt to infer the program that an information processor uses to solve an information processing problem. In classical cognitive science, algorithmic investigations almost always involve observing behaviour, often at a fine level of detail. Such behavioral observations are the source of relative complexity evidence, intermediate state evidence, and error evidence, which are used to place constraints on inferred algorithms.
Algorithmic investigations in classical cognitive science are almost exclusively focused on unseen, internal processes. Classical cognitive scientists use behavioral observations to uncover the algorithms hidden within the "black box" of an agent. Embodied cognitive science does not share this exclusive focus, because it attributes some behavioral complexities to environmental influences. Apart from this important difference, though, algorithmic investigations—specifically in the form of behavioral observations—are central to the embodied approach. Descriptions of behavior are the primary product of forward engineering; examples in behavior-based robotics span the literature from time-lapse photographs of Tortoise trajectories (Grey Walter, 1963) to modern reports of how, over time, robots sort or rearrange objects in an enclosure (Holland & Melhuish, 1999; Melhuish et al., 2006; Scholes et al., 2004; Wilson et al., 2004). At the heart of such behavioral accounts is acceptance of Simon's (1969) parable of the ant. The embodied approach cannot understand an architecture by examining its inert components. It must see what emerges when this architecture is embodied in, situated in, and interacting with an environment.
When embodied cognitive science moves beyond behavior-based robotics, it relies on some sorts of behavioral observations that are not employed as frequently in classical cognitive science. For example, many embodied cognitive scientists advocate the phenomenological study of cognition (Gallagher, 2005; Gibbs, 2006; Thompson, 2007; Varela, Thompson, & Rosch, 1991). Phenomenology explores how people experience their world and examines how the world is meaningful to us via our experience (Brentano, 1995; Husserl, 1965; Merleau-Ponty, 1962).
Just as enactive theories of perception (Noë, 2004) can be viewed as being inspired by Gibson’s (1979) ecological account of perception, phenomenological studies within embodied cognitive science (Varela, Thompson, & Rosch, 1991) are inspired by the philosophy of Maurice Merleau-Ponty (1962). Merleau-Ponty rejected the Cartesian separation between world and mind: “Truth does not ‘inhabit’ only ‘the inner man,’ or more accurately, there is no inner man, man is in the world, and only in the world does he know himself” (p. xii). Merleau-Ponty strove to replace this Cartesian view with one that relied upon embodiment. “We shall need to reawaken our experience of the world as it appears to us in so far as we are in the world through our body, and in so far as we perceive the world with our body” (p. 239).
Phenomenology within modern embodied cognitive science is a call to further pursue Merleau-Ponty's embodied approach.
What we are suggesting is a change in the nature of reflection from an abstract, disembodied activity to an embodied (mindful), open-ended reflection. By embodied, we mean reflection in which body and mind have been brought together. (Varela, Thompson, & Rosch, 1991, p. 27)

However, seeking evidence from such reflection is not necessarily straightforward (Gallagher, 2005). For instance, while Gallagher acknowledges that the body is critical in its shaping of cognition, he also notes that many aspects of our bodily interaction with the world are not available to consciousness and are therefore difficult to study phenomenologically.
Embodied cognitive science’s interest in phenomenology is an example of a reaction against the formal, disembodied view of the mind that classical cognitive science has inherited from Descartes (Devlin, 1996). Does this imply, then, that embodied cognitive scientists do not engage in the formal analyses that characterize the computational level of analysis? No. Following the tradition established by cybernetics (Ashby, 1956; Wiener, 1948), which made extensive use of mathematics to describe feedback relations between physical systems and their environments, embodied cognitive scientists too are engaged in computational investigations. Again, though, these investigations deviate from those conducted within classical cognitive science. Classical cognitive science used formal methods to develop proofs about what information processing problem was being solved by a system (Marr, 1982), with the notion of “information processing problem” placed in the context of rule-governed symbol manipulation. Embodied cognitive science operates in a very different context, because it has a different notion of information processing. In this new context, cognition is not modeling or planning, but is instead coordinating action (Clark, 1997).
When cognition is placed in the context of coordinating action, one key element that must be captured by formal analyses is that actions unfold in time. It has been argued that computational analyses conducted by classical researchers fail to incorporate the temporal element (Port & van Gelder, 1995a): “Representations are static structures of discrete symbols. Cognitive operations are transformations from one static symbol structure to the next. These transformations are discrete, effectively instantaneous, and sequential” (p. 1). As such, classical analyses are deemed by some to be inadequate. When embodied cognitive scientists explore the computational level, they do so with a different formalism, called dynamical systems theory (Clark, 1997; Port & van Gelder, 1995b; Shapiro, 2011).
Dynamical systems theory is a mathematical formalism that describes how systems change over time. In this formalism, at any given time a system is described as being in a state. A state is a set of variables to which values are assigned. The variables define all of the components of the system, and the values assigned to these variables describe the characteristics of these components (e.g., their features) at a particular time. At any moment of time, the values of its components provide the position of the system in a state space. That is, any state of a system is a point in a multidimensional space, and the values of the system’s variables provide the coordinates of that point.
The temporal dynamics of a system describe how its characteristics change over time. These changes are captured as a path or trajectory through state space. Dynamical systems theory provides a mathematical description of such trajectories, usually in the form of differential equations. Its utility was illustrated in Randall Beer’s (2003) analysis of an agent that learns to categorize objects, of circuits for associative learning (Phattanasri, Chiel, & Beer, 2007), and of a walking leg controlled by a neural mechanism (Beer, 2010).
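To make the formalism concrete, the following sketch traces a trajectory through a two-dimensional state space by numerically integrating a pair of differential equations. It is a minimal illustration only: the damped-oscillator equations, the parameter values, and the Euler integration step are hypothetical choices for exposition, not taken from Beer's analyses.

```python
# Illustrative sketch: a trajectory through a two-dimensional state space,
# produced by Euler integration of a pair of differential equations.
# The damped-oscillator equations and parameters are hypothetical choices.

def derivatives(state, damping=0.2, stiffness=1.0):
    """Rate of change of each state variable: (dx/dt, dv/dt)."""
    x, v = state                      # the two components of the state
    dx_dt = v
    dv_dt = -stiffness * x - damping * v
    return dx_dt, dv_dt

def trajectory(initial_state, dt=0.05, steps=200):
    """Return the sequence of points (states) visited in state space."""
    path = [initial_state]
    state = initial_state
    for _ in range(steps):
        dx_dt, dv_dt = derivatives(state)
        state = (state[0] + dx_dt * dt, state[1] + dv_dt * dt)
        path.append(state)
    return path

if __name__ == "__main__":
    # Each tuple is a point in state space; the list as a whole is the path
    # whose shape dynamical systems theory characterizes.
    for point in trajectory((1.0, 0.0))[:5]:
        print("x = %.3f, v = %.3f" % point)
```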
While dynamical systems theory provides a medium in which embodied cognitive scientists can conduct computational analyses, it is also intimidating and difficult. "A common criticism of dynamical approaches to cognition is that they are practically intractable except in the simplest cases" (Shapiro, 2011, pp. 127–128). This was exactly the situation that led Ashby (1956, 1960) to study feedback between multiple devices synthetically, by constructing the Homeostat. This does not mean, however, that computational analyses are impossible or fruitless. On the contrary, it is possible that such analyses can co-operate with the synthetic exploration of models in an attempt to advance both formal and behavioral investigations (Dawson, 2004; Dawson, Dupuis, & Wilson, 2010).
In the preceding paragraphs we presented an argument that embodied cognitive scientists study cognition at the same multiple levels of investigation that characterize classical cognitive science. We also acknowledged that embodied cognitive scientists are likely to view each of these levels somewhat differently than their classical counterparts do. That embodied cognitive science explores cognition at these different levels of analysis implies, ultimately, that embodied cognitive scientists are also committed to the notion of validating their theories by seeking strong equivalence. It stands to reason that the validity of a theory created within embodied cognitive science would be best established by showing that the theory is supported at all of the different levels of investigation.
To review, the central claim of classical cognitive science is that cognition is computation, where computation is taken to be the manipulation of internal representations. From this perspective, classical cognitive science construes cognition as an iterative sense-think-act cycle. The “think” part of this cycle is emphasized, because it is responsible for modeling and planning. The “thinking” also stands as a required mentalistic buffer between sensing and acting, producing what is known as the classical sandwich (Hurley, 2001). The classical sandwich represents a modern form of Cartesian dualism, in the sense that the mental (thinking) is distinct from the physical (the world that is sensed, and the body that can act upon it) (Devlin, 1996).
Embodied cognitive science, like connectionist cognitive science, arises from the view that the core logicist assumptions of classical cognitive science are not adequate to explain human cognition (Dreyfus, 1992; Port & van Gelder, 1995b; Winograd & Flores, 1987b).
The lofty goals of artificial intelligence, cognitive science, and mathematical linguistics that were prevalent in the 1950s and 1960s (and even as late as the 1970s) have now given way to the realization that the ‘soft’ world of people and societies is almost certainly not amenable to a precise, predictive, mathematical analysis to anything like the same degree as is the ‘hard’ world of the physical universe. (Devlin, 1996, p. 344)
Because it arises as such a reaction, the key elements of embodied cognitive science can be portrayed as an inversion of elements of the classical approach.
While classical cognitive science abandons Cartesian dualism in one sense, by seeking materialist explanations of cognition, it remains true to it in another sense, through its methodological solipsism (Fodor, 1980). Methodological solipsism attempts to characterize and differentiate mental states without appealing to properties of the body or of the world (Wilson, 2004), consistent with the Cartesian notion of the disembodied mind.
In contrast, embodied cognitive science explicitly rejects methodological solipsism and the disembodied mind. Instead, embodied cognitive science takes to heart the message of Simon’s (1969) parable of the ant by recognizing that crucial contributors to behavioral complexity include an organism’s environment and bodily form. Rather than creating formal theories of disembodied minds, embodied cognitive scientists build embodied and situated agents.
Classical cognitive science adopts the classical sandwich (Hurley, 2001), construing cognition as an iterative sense-think-act cycle. There are no direct links between sensing and acting from this perspective (Brooks, 1991); a planning process involving the manipulation of internal models stands as a necessary intermediary between perceiving and acting.
In contrast, embodied cognitive science strives to replace sense-think-act processing with sense-act cycles that bypass representational processing. Cognition is seen as the control of direct action upon the world rather than the reasoning about possible action. While classical cognitive science draws heavily from the symbol-manipulating examples provided by computer science, embodied cognitive science steps further back in time, taking its inspiration from the accounts of feedback and adaptation provided by cybernetics (Ashby, 1956, 1960; Wiener, 1948).
Shapiro (2011) invoked the theme of conceptualization to characterize embodied cognitive science because it saw cognition as directed action on the world. Conceptualization is the view that the form of an agent's body determines the concepts that it requires to interact with the world. Conceptualization is also a view that draws from embodied and ecological accounts of perception (Gibson, 1966, 1979; Merleau-Ponty, 1962; Neisser, 1976); such theories construed perception as being the result of action and as directing possible actions (affordances) on the world. As such, the perceptual world cannot exist independently of a perceiving agent; umwelten (Uexküll, 2001) are defined in terms of the agent as well.
The relevance of the world to embodied cognitive science leads to another of its characteristics: Shapiro's (2011) notion of replacement. Replacement is the view that an agent's direct actions on the world can replace internal models, because the world can serve as its own best representation. The replacement theme is central to behaviour-based robotics (Breazeal, 2002; Brooks, 1991, 1999, 2002; Edsinger-Gonzales & Weber, 2004; Grey Walter, 1963; Sharkey, 1997), and leads some radical embodied cognitive scientists to argue that the notion of internal representations should be completely abandoned (Chemero, 2009). Replacement also permits theories to include the co-operative interaction between and mutual support of world and agent by exploring notions of cognitive scaffolding and leverage (Clark, 1997; Hutchins, 1995; Scribner & Tobach, 1997).
The themes of conceptualization and replacement emerge from a view of cognition that is radically embodied, in the sense that it cannot construe cognition without considering the rich relationships between mind, body, and world. This also leads to embodied cognitive science being characterized by Shapiro’s (2011) third theme, constitution. This theme, as it appears in embodied cognitive science, is the extended mind hypothesis (Clark, 1997, 1999, 2003, 2008; Clark & Chalmers, 1998; Menary, 2008, 2010; Noë, 2009; Rupert, 2009; Wilson, 2004, 2005). According to the extended mind hypothesis, the world and body are literally constituents of cognitive processing; they are not merely causal contributors to it, as is the case in the classical sandwich.
Clearly embodied cognitive science has a much different view of cognition than is the case for classical cognitive science. This in turn leads to differences in the way that cognition is studied.
Classical cognitive science studies cognition at multiple levels: computational, algorithmic, architectural, and implementational. It typically does so by using a top-down strategy, beginning with the computational and moving “down” towards the architectural and implementational (Marr, 1982). This top-down strategy is intrinsic to the methodology of reverse engineering or functional analysis (Cummins, 1975, 1983). In reverse engineering, the behavior of an intact system is observed and manipulated in an attempt to decompose it into an organized system of primitive components.
We have seen that embodied cognitive science exploits the same multiple levels of investigation that characterize classical cognitive science. However, embodied cognitive science tends to replace reverse engineering with an inverse, bottom-up methodology, as in forward engineering or synthetic psychology (Braitenberg, 1984; Dawson, 2004; Dawson, Dupuis, & Wilson, 2010; Pfeifer & Scheier, 1999). In forward engineering, a set of interesting primitives is assembled into a working system. This system is then placed in an interesting environment in order to see what it can and cannot do. In other words, forward engineering starts with implementational and architectural investigations. Forward engineering is motivated by the realization that an agent’s environment is a crucial contributor to behavioral complexity, and it is an attempt to leverage this possibility. As a result, some have argued that this approach can lead to simpler theories than is the case when reverse engineering is adopted (Braitenberg, 1984).
Shapiro (2011) has noted that it is too early to characterize embodied cognitive science as a unified school of thought. The many different variations of the embodied approach, and the important differences between them, are beyond the scope of the current chapter. A more accurate account of the current state of embodied cognitive science requires exploring an extensive and growing literature, current and historical (Agre, 1997; Arkin, 1998; Bateson, 1972; Breazeal, 2002; Chemero, 2009; Clancey, 1997; Clark, 1997, 2003, 2008; Dawson, Dupuis, & Wilson, 2010; Dourish, 2001; Gallagher, 2005; Gibbs, 2006; Gibson, 1979; Goldman, 2006; Hutchins, 1995; Johnson, 2007; Menary, 2010; Merleau-Ponty, 1962; Neisser, 1976; Noë, 2004, 2009; Pfeifer & Scheier, 1999; Port & van Gelder, 1995b; Robbins & Aydede, 2009; Rupert, 2009; Shapiro, 2011; Varela, Thompson, & Rosch, 1991; Wilson, 2004; Winograd & Flores, 1987b). | textbooks/socialsci/Psychology/Cognitive_Psychology/Mind_Body_World_-_Foundations_of_Cognitive_Science_(Dawson)/05%3A_Elements_of_Embodied_Cognitive_Science/5.12%3A_What_is_Embodied_Cognitive_Science%3F.txt |
Shakey was a 1960s robot that used a variety of sensors and motors to navigate through a controlled indoor environment (Nilsson, 1984). It did so by uploading its sensor readings to a central computer that stored, updated, and manipulated a model of Shakey’s world. This representation was used to develop plans of action to be put into effect, providing the important filling for Shakey’s classical sandwich.
Shakey was impressive in its ability to navigate around obstacles and move objects to desired locations. However, it also demonstrated some key limitations of the classical sandwich. In particular, Shakey was extremely slow, typically requiring several hours to complete a task (Moravec, 1999), because the internal model of its world was computationally expensive to create and update. The problem with the sense-think-act cycle in robots like Shakey is that by the time the (slow) thinking is finished, the resulting plan may fail because the world has changed in the meantime.
The subsumption architecture of behavior-based robotics (Brooks, 1999, 2002) attempted to solve such problems by removing the classical sandwich; it was explicitly anti-representational. The logic of this radical move was that the world was its own best representation (Clark, 1997).
Behavior-based robotics took advantage of Simon’s (1969) parable of the ant, reducing costly and complex internal representations by recognizing that the external world is a critical contributor to behavior. Why expend computational resources on the creation and maintenance of an internal model of the world, when externally the world was already present, open to being sensed and to being acted upon? Classical cognitive science’s emphasis on internal representations and planning was a failure to take this parable to heart.
Interestingly, action was more important to earlier cognitive theories. Take, for example, Piaget’s theory of cognitive development (Inhelder & Piaget, 1958, 1964; Piaget, 1970a, 1970b, 1972; Piaget & Inhelder, 1969). According to this theory, in their early teens children achieve the stage of formal operations. Formal operations describe adult-level cognitive abilities that are classical in the sense that they involve logical operations on symbolic representations. Formal operations involve completely abstract thinking, where relationships between propositions are considered.
However, Piagetian theory departs from classical cognitive science by including actions in the world. The development of formal operations begins with the sensorimotor stage, which involves direct interactions with objects in the world. In the next preoperational stage these objects are internalized as symbols. The preoperational stage is followed by concrete operations. When the child is in the stage of concrete operations, symbols are manipulated, but not in the abstract: concrete operations are applied to “manipulable objects (effective or immediately imaginable manipulations), in contrast to operations bearing on propositions or simple verbal statements (logic of propositions)” (Piaget, 1972, p. 56). In short, Piaget rooted fully representational or symbolic thought (i.e., formal operations) in the child’s physical manipulation of his or her world. “The starting-point for the understanding, even of verbal concepts, is still the actions and operations of the subject” (Inhelder & Piaget, 1964, p. 284).
For example, classification and seriation (i.e., grouping and ordering entities) are operations that can be formally specified using logic or mathematics. One goal of Piagetian theory is to explain the development of such abstract competence. It does so by appealing to basic actions on the world experienced prior to the stage of formal operations, “actions which are quite elementary: putting things in piles, separating piles into lots, making alignments, and so on” (Inhelder & Piaget, 1964, p. 291).
Other theories of cognitive development share the Piagetian emphasis on the role of the world, but elaborate the notion of what aspects of the world are involved (Vygotsky, 1986). Vygotsky (1986), for example, highlighted the role of social systems—a different conceptualization of the external world—in assisting cognitive development. Vygotsky used the term zone of proximal development to define the difference between a child’s ability to solve problems without aid and their ability to solve problems when provided support or assistance. Vygotsky was strongly critical of instructional approaches that did not provide help to children as they solved problems.
Vygotsky (1986) recognized that sources of support for development were not limited to the physical world. He expanded the notion of worldly support to include social and cultural factors: “The true direction of the development of thinking is not from the individual to the social, but from the social to the individual” (p. 36). For example, to Vygotsky language was a tool for supporting cognition:
Real concepts are impossible without words, and thinking in concepts does not exist beyond verbal thinking. That is why the central moment in concept formation, and its generative cause, is a specific use of words as functional ‘tools.’ (Vygotsky, 1986, p. 107)
Clark (1997, p. 45) wrote: “We may often solve problems by ‘piggy-backing’ on reliable environmental properties. This exploitation of external structure is what I mean by the term scaffolding.” Cognitive scaffolding—the use of the world to support or extend thinking—is characteristic of theories in embodied cognitive science. Clark views scaffolding in the broad sense of a world or structure that descends from Vygotsky’s theory:
Advanced cognition depends crucially on our abilities to dissipate reasoning: to diffuse knowledge and practical wisdom through complex social structures, and to reduce the loads on individual brains by locating those brains in complex webs of linguistic, social, political, and institutional constraints. (Clark, 1997, p. 180)
While the developmental theories of Piaget and Vygotsky are departures from typical classical cognitive science in their emphasis on action and scaffolding, they are very traditional in other respects. American psychologist Sylvia Scribner pointed out that these two theorists, along with Newell and Simon, shared Aristotle’s “preoccupation with modes of thought central to theoretical inquiry—with logical operations, scientific concepts, and problem solving in symbolic domains,” maintaining “Aristotle’s high esteem for theoretical thought and disregard for the practical” (Scribner & Tobach, 1997, p. 338).
Scribner's own work (Scribner & Tobach, 1997) was inspired by Vygotskian theory but aimed to extend its scope by examining practical cognition. Scribner described her research as the study of mind in action, because she viewed cognitive processes as being embedded with human action in the world. Scribner's studies analyzed "the characteristics of memory and thought as they function in the larger, purposive activities which cultures organize and in which individuals engage" (p. 384). In other words, the everyday cognition studied by Scribner and her colleagues provided ample evidence of cognitive scaffolding: "Practical problem solving is an open system that includes components lying outside the formal problem—objects and information in the environment and goals and interests of the problem solver" (pp. 334–335).
One example of Scribner’s work on mind in action was the observation of problem-solving strategies exhibited by different types of workers at a dairy (Scribner & Tobach, 1997). It was discovered that a reliable difference between expert and novice dairy workers was that the former were more versatile in finding solutions to problems, largely because expert workers were much more able to exploit environmental resources. “The physical environment did not determine the problem-solving process but . . . was drawn into the process through worker initiative” (p. 377).
For example, one necessary job in the dairy was assembling orders. This involved using a computer printout of a wholesale truck driver’s order for products to deliver the next day, to fetch from different areas in the dairy the required number of cases and partial cases of various products to be loaded onto the driver’s truck. However, while the driver’s order was placed in terms of individual units (e.g., particular numbers of quarts of skim milk, of half-pints of chocolate milk, and so on), the computer printout converted these individual units into “case equivalents.” For example, one driver might require 20 quarts of skim milk. However, one case contains only 16 quarts. The computer printout for this part of the order would be 1 + 4, indicating one full case plus 4 additional units.
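The conversion performed by the computer printout is simple arithmetic, and it can be made explicit in a few lines of code. The sketch below is hypothetical, using the 16-quart case size from the skim-milk example; other products would have different units-per-case values.

```python
# Hypothetical sketch of the "case equivalent" conversion described above.
# The 16-quart case size comes from the skim-milk example.

def case_equivalent(units_ordered, units_per_case=16):
    """Express an order as full cases plus leftover units, e.g. 20 -> (1, 4)."""
    full_cases, extra_units = divmod(units_ordered, units_per_case)
    return full_cases, extra_units

cases, extras = case_equivalent(20)   # the driver's 20 quarts of skim milk
print("%d + %d" % (cases, extras))    # prints "1 + 4", as on the printout
```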
Scribner found differences between novice and expert product assemblers in the way in which these mixed numbers from the computer printout were converted into gathered products. Novice workers would take a purely mental arithmetic approach. As an example, consider the following protocol obtained from a novice worker:
It was one case minus six, so there’s two, four, six, eight, ten, sixteen (determines how many in a case, points finger as she counts). So there should be ten in here. Two, four, six, ten (counts units as she moves them from full to empty). One case minus six would be ten. (Scribner & Tobach, 1997, p. 302)
In contrast, expert workers were much more likely to scaffold this problem solving by working directly from the visual appearance of cases, as illustrated in a very different protocol:
I walked over and I visualized. I knew the case I was looking at had ten out of it, and I only wanted eight, so I just added two to it. I don’t never count when I’m making the order, I do it visual, a visual thing you know. (Scribner & Tobach, 1997, p. 303)
It was also found that expert workers flexibly alternated the distribution of scaffolding and mental arithmetic, but did so in a systematic way: when more mental arithmetic was employed, it was done to decrease the amount of physical exertion required to complete the order. This led to Scribner postulating a law of mental effort: “In product assembly, mental work will be expended to save physical work” (Scribner & Tobach, 1997, p. 348).
The law of mental effort was the result of Scribner’s observation that expert workers in the dairy demonstrated marked diversity and flexibility in their solutions to work-related problems. Intelligent agents may be flexible in the manner in which they allocate resources between sense-act and sense-think-act processing. Both types of processes may be in play simultaneously, but they may be applied in different amounts when the same problem is encountered at different times and under different task demands (Hutchins, 1995).
Such flexible information processing is an example of bricolage (Lévi-Strauss, 1966). A bricoleur is an “odd job man” in France.
The ‘bricoleur’ is adept at performing a large number of diverse tasks; but, unlike the engineer, he does not subordinate each of them to the availability of raw materials and tools conceived and procured for the purpose of the project. His universe of instruments is closed and the rules of his game are always to make do with ‘whatever is at hand.’ (Lévi-Strauss, 1966, p. 17)
Bricolage seems well suited to account for the flexible thinking of the sort described by Scribner. Lévi-Strauss (1966) proposed bricolage as an alternative to formal, theoretical thinking, but cast it in a negative light: “The ‘bricoleur’ is still someone who works with his hands and uses devious means compared to those of a craftsman” (pp. 16–17). Devious means are required because the bricoleur is limited to using only those components or tools that are at hand. “The engineer is always trying to make his way out of and go beyond the constraints imposed by a particular state of civilization while the ‘bricoleur’ by inclination or necessity always remains within them” (p. 19).
Recently, researchers have renewed interest in bricolage and presented it in a more positive light than did Lévi-Strauss (Papert, 1980; Turkle, 1995). To Turkle (1995), bricolage was a sort of intuition, a mental tinkering, a dialogue mediated by a virtual interface, one that became increasingly important with the visual GUIs of modern computing devices.
As the computer culture’s center of gravity has shifted from programming to dealing with screen simulations, the intellectual values of bricolage have become far more important.... Playing with simulation encourages people to develop the skills of the more informal soft mastery because it is so easy to run ‘What if?’ scenarios and tinker with the outcome. (Turkle, 1995, p. 52)
Papert (1980) argued that bricolage demands greater respect because it may serve as “a model for how scientifically legitimate theories are built” (p. 173).
The bricolage observed by Scribner and her colleagues when studying mind in action at the dairy revealed that practical cognition is flexibly and creatively scaffolded by an agent’s environment. However, many of the examples reported by Scribner suggest that this scaffolding involves using the environment as an external representation or memory of a problem. That the environment can be used in this fashion, as an externalized extension of memory, is not surprising. Our entire print culture—the use of handwritten notes, the writing of books—has arisen from a technology that serves as an extension of memory (McLuhan, 1994, p. 189): “Print provided a vast new memory for past writings that made a personal memory inadequate.”
However, the environment can also provide a more intricate kind of scaffolding. In addition to serving as an external store of information, it can also be exploited to manipulate its data. For instance, consider a naval navigation task in which a ship's speed is to be computed by measuring how far the ship has traveled over a recent interval of time (Hutchins, 1995). An internal, representational approach to performing this computation would be to calculate speed based on internalized knowledge of algebra, arithmetic, and conversions between yards and nautical miles. However, an easier external solution is possible. A navigator is much more likely to draw a line on a three-scale representation called a nomogram. The top scale of this tool indicates duration, the middle scale indicates distance, and the bottom scale indicates speed. The user marks the measured time and distance on the first two scales, joins them with a straight line, and reads the speed from the intersection of this line with the bottom scale. Thus the answer to the problem isn't as much computed as it is inspected. "Much of the computation was done by the tool, or by its designer. The person somehow could succeed by doing less because the tool did more" (Hutchins, 1995, p. 151).
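The computation that the nomogram externalizes can also be stated explicitly, as in the sketch below. The conversion factor (roughly 2,025 yards per nautical mile) and the example numbers are supplied here for illustration; Hutchins' point is precisely that the tool makes even this modest amount of internal arithmetic unnecessary.

```python
# The arithmetic that the three-scale nomogram replaces: speed in knots from
# a distance measured in yards and a duration measured in minutes.
# Assumes 1 nautical mile is approximately 2025.4 yards.

YARDS_PER_NAUTICAL_MILE = 2025.4

def speed_in_knots(distance_yards, duration_minutes):
    """Knots are nautical miles per hour."""
    nautical_miles = distance_yards / YARDS_PER_NAUTICAL_MILE
    hours = duration_minutes / 60.0
    return nautical_miles / hours

# Example: 1000 yards covered in 3 minutes is just under 10 knots.
print(round(speed_in_knots(1000, 3), 2))
```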
Classical cognitive science, in its championing of the representational theory of mind, demonstrates a modern persistence of the Cartesian distinction between mind and body. Its reliance on mental representation occurs at the expense of ignoring potential contributions of both an agent’s body and world. Early representational theories were strongly criticized because of their immaterial nature.
For example, consider the work of Edward Tolman (1932, 1948). Tolman appealed to representational concepts to explain behavior, such as his proposal that rats navigate and locate reinforcers by creating and manipulating a cognitive map. The mentalistic nature of Tolman’s theories was a source of harsh criticism:
Signs, in Tolman’s theory, occasion in the rat realization, or cognition, or judgment, or hypotheses, or abstraction, but they do not occasion action. In his concern with what goes on in the rat’s mind, Tolman has neglected to predict what the rat will do. So far as the theory is concerned the rat is left buried in thought; if he gets to the food-box at the end that is his concern, not the concern of the theory. (Guthrie, 1935, p. 172)
The later successes, and current dominance, of cognitive theory make such criticisms appear quaint. But classical theories are nonetheless being rigorously reformulated by embodied cognitive science.
Embodied cognitive scientists argue that classical cognitive science, with its emphasis on the disembodied mind, has failed to capture important aspects of thinking. For example, Hutchins (1995, p. 171) noted that “by failing to understand the source of the computational power in our interactions with simple ‘unintelligent’ physical devices, we position ourselves well to squander opportunities with so-called intelligent computers.” Embodied cognitive science proposes that the modern form of dualism exhibited by classical cognitive science is a mistake. For instance, Scribner hoped that her studies of mind in action conveyed “a conception of mind which is not hostage to the traditional cleavage between the mind and the hand, the mental and the manual” (Scribner & Tobach, 1997, p. 307). | textbooks/socialsci/Psychology/Cognitive_Psychology/Mind_Body_World_-_Foundations_of_Cognitive_Science_(Dawson)/05%3A_Elements_of_Embodied_Cognitive_Science/5.13%3A_Mind_in_Action.txt |
In preceding pages of this chapter, a number of interrelated topics that are central to embodied cognitive science have been introduced: situation and embodiment, feedback between agents and environments, stigmergic control of behavior, affordances and enactive perception, and cognitive scaffolding. These topics show that embodied cognitive science places much more emphasis on body and world, and on sense and action, than do other “flavours” of cognitive science.
This change in emphasis can have profound effects on our definitions of mind or self (Bateson, 1972). For example, consider this famous passage from anthropologist Gregory Bateson:
But what about ‘me’? Suppose I am a blind man, and I use a stick. I go tap, tap, tap. Where do I start? Is my mental system bounded at the handle of the stick? Is it bounded by my skin? (Bateson, 1972, p. 465)
The embodied approach’s emphasis on agents embedded in their environments leads to a radical and controversial answer to Bateson’s questions, in the form of the extended mind (Clark, 1997, 1999, 2003, 2008; Clark & Chalmers, 1998; Menary, 2008, 2010; Noë, 2009; Rupert, 2009; Wilson, 2004, 2005). According to the extended mind hypothesis, the mind and its information processing are not separated from the world by the skull. Instead, the mind interacts with the world in such a way that information processing is both part of the brain and part of the world—the boundary between the mind and the world is blurred, or has disappeared.
Where is the mind located? The traditional view—typified by the classical approach introduced in Chapter 3—is that thinking is inside the individual, and that sensing and acting involve the world outside. However, if cognition is scaffolded, then some thinking has moved from inside the head to outside in the world. “It is the human brain plus these chunks of external scaffolding that finally constitutes the smart, rational inference engine we call mind” (Clark, 1997, p. 180). As a result, Clark (1997) described the mind as a leaky organ, because it has spread from inside our head to include whatever is used as external scaffolding.
The extended mind hypothesis has enormous implications for the cognitive sciences. The debate between classical and connectionist cognitive science does not turn on this issue, because both approaches are essentially representational. That is, both approaches tacitly endorse the classical sandwich; while they have strong disagreements about the nature of representational processes in the filling of the sandwich, neither of these approaches views the mind as being extended. Embodied cognitive scientists who endorse the extended mind hypothesis thus appear to be moving in a direction that strongly separates the embodied approach from the other two. It is small comfort to know that all cognitive scientists might agree that they are in the business of studying the mind, when they can’t agree upon what minds are.
For this reason, the extended mind hypothesis has increasingly been a source of intense philosophical analysis and criticism (Adams & Aizawa, 2008; Menary, 2010; Robbins & Aydede, 2009). Adams and Aizawa (2008) are strongly critical of the extended mind hypothesis because they believe that it makes no serious attempt to define the “mark of the cognitive,” that is, the principled differences between cognitive and non-cognitive processing:
If just any sort of information processing is cognitive processing, then it is not hard to find cognitive processing in notebooks, computers and other tools. The problem is that this theory of the cognitive is wildly implausible and evidently not what cognitive psychologists intend. A wristwatch is an information processor, but not a cognitive agent. What the advocates of extended cognition need, but, we argue, do not have, is a plausible theory of the difference between the cognitive and the non-cognitive that does justice to the subject matter of cognitive psychology. (Adams & Aizawa, 2008, p. 11)
A variety of other critiques can be found in various contributions to Robbins and Aydede’s (2009) Cambridge Handbook of Situated Cognition. Prinz made a pointed argument that the extended mind has nothing to contribute to the study of consciousness. Rupert noted how the notion of innateness poses numerous problems for the extended mind. Warneken and Tomasello examined cultural scaffolding, but they eventually adopted a position where these cultural tools have been internalized by agents. Finally, Bechtel presented a coherent argument from the philosophy of biology that there is good reason for the skull to serve as the boundary between the world and the mind. Clearly, the degree to which extendedness is adopted by situated researchers is far from universal.
In spite of the currently unresolved debate about the plausibility of the extended mind, the extended mind hypothesis is an idea that is growing in popularity in embodied cognitive science. Let us briefly turn to another implication that this hypothesis has for the practice of cognitive science.
The extended mind hypothesis is frequently applied to single cognitive agents. However, this hypothesis also opens the door to co-operative or public cognition in which a group of agents are embedded in a shared environment (Hutchins, 1995). In this situation, more than one cognitive agent can manipulate the world that is being used to support the information processing of other group members.
Hutchins (1995) provided one example of public cognition in his description of how a team of individuals is responsible for navigating a ship. He argued that “organized groups may have cognitive properties that differ from those of the individuals who constitute the group” (p. 228). For instance, in many cases it is very difficult to translate the heuristics used by a solo navigator into a procedure that can be implemented by a navigation team.
Collective intelligence—also called swarm intelligence or co-operative computing—is also of growing importance in robotics. Entomologists used the concept of the superorganism (Wheeler, 1911) to explain how entire colonies could produce more complex results (such as elaborate nests) than one would predict from knowing the capabilities of individual colony members. Swarm intelligence is an interesting evolution of the idea of the superorganism; it involves a collective of agents operating in a shared environment. Importantly, a swarm’s components are only involved in local interactions with each other, resulting in many advantages (Balch & Parker, 2002; Sharkey, 2006).
For instance, a computing swarm is scalable—it may comprise varying numbers of agents, because the same control structure (i.e., local interactions) is used regardless of how many agents are in the swarm. For the same reason, a computing swarm is flexible: agents can be added or removed from the swarm without reorganizing the entire system. The scalability and flexibility of a swarm make it robust, as it can continue to compute when some of its component agents no longer function properly. Notice how these advantages of a swarm of agents are analogous to the advantages of connectionist networks over classical models, as discussed in Chapter 4.
Nonlinearity is also a key ingredient of swarm intelligence. For a swarm to be considered intelligent, the whole must be greater than the sum of its parts. This idea has been used to identify the presence of swarm intelligence by relating the amount of work done by a collective to the number of agents in the collection (Beni & Wang, 1991). If the relationship between work accomplished and number of agents is linear, then the swarm is not considered to be intelligent. However, if the relationship is nonlinear—for instance, exponentially increasing—then swarm intelligence is present. The nonlinear relationship between work and numbers may itself be mediated by other nonlinear relationships. For example, Dawson, Dupuis, and Wilson (2010) found that in collections of simple LEGO robots, the presence of additional robots influenced robot paths in an arena in such a way that a sorting task was accomplished far more efficiently.
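Beni and Wang's criterion amounts to a simple test on measurements: compare the work completed by swarms of different sizes against a linear extrapolation from a single agent. The sketch below shows one form such a test might take; the function name and the work scores are invented for illustration and are not data from any of the cited studies.

```python
# Hypothetical sketch of Beni and Wang's (1991) criterion: a collective counts
# as intelligent if work scales nonlinearly (here, superlinearly) with the
# number of agents. The measurements below are invented for illustration.

def is_superlinear(measurements):
    """measurements: dict mapping number of agents -> work completed."""
    baseline = measurements[1]        # work done by a single agent
    return all(work > n * baseline    # more than n times the single-agent work
               for n, work in measurements.items() if n > 1)

observed = {1: 10, 2: 25, 4: 70, 8: 200}   # invented work scores
print(is_superlinear(observed))            # True: whole exceeds sum of parts
```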
While early studies of robot collectives concerned small groups of homogeneous robots (Gerkey & Mataric, 2004), researchers are now more interested in complex collectives consisting of different types of machines for performing diverse tasks at varying locations or times (Balch & Parker, 2002; Schultz & Parker, 2002). This leads to the problem of coordinating the varying actions of diverse collective members (Gerkey & Mataric, 2002, 2004; Mataric, 1998). One general approach to solving this coordination problem is intentional co-operation (Balch & Parker, 2002; Parker, 1998, 2001), which uses direct communication amongst robots to prevent unnecessary duplication (or competition) between robot actions. However, intentional co-operation comes with its own set of problems. For instance, communication between robots is costly, particularly as more robots are added to a communicating team (Kube & Zhang, 1994). As well, as communication makes the functions carried out by individual team members more specialized, the robustness of the robot collective is jeopardized (Kube & Bonabeau, 2000). Is it possible for a robot collective to coordinate its component activities, and solve interesting problems, in the absence of direct communication?
The embodied approach has generated a plausible answer to this question via stigmergy (Kube & Bonabeau, 2000). Kube and Bonabeau (2000) demonstrated that the actions of a large collective of robots could be stigmergically coordinated so that the collective could push a box to a goal location in an arena. Robots used a variety of sensors to detect (and avoid) other robots, locate the box, and locate the goal location. A subsumption architecture was employed to instantiate a fairly simple set of sense-act reflexes. For instance, if a robot detected that it was in contact with the box and could see the goal, then box-pushing behavior was initiated. If it was in contact with the box but could not see the goal, then other movements were triggered, resulting in the robot finding contact with the box at a different position.
This subsumption architecture caused robots to seek the box, push it towards the goal, and do so co-operatively by avoiding other robots. Furthermore, when robot activities altered the environment, this produced corresponding changes in the behavior of other robots. For instance, a robot pushing the box might lose sight of the goal because of box movement, and it would therefore leave the box and use its other exploratory behaviors to come back to the box and push it from a different location. "Cooperation in some tasks is possible without direct communication" (Kube & Bonabeau, 2000, p. 100). Importantly, the solution to the box-pushing problem required such co-operation, because the box being manipulated was too heavy to be moved by a small number of robots!
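The control scheme described in the two preceding paragraphs can be summarized as a small set of sense-act rules. The following is a simplified, hypothetical rendering of that logic, not Kube and Bonabeau's actual controller; sensors are abstracted into booleans and behaviors into names.

```python
# Simplified, hypothetical sketch of the kind of sense-act rules described for
# Kube and Bonabeau's (2000) box-pushing robots. This is not their controller.

def select_behavior(touching_box, goal_visible, robot_ahead):
    """Choose one behavior for the current moment, checked in order."""
    if touching_box and goal_visible:
        return "push_box"        # push only while the goal can be seen
    if touching_box:
        return "reposition"      # leave and re-contact the box elsewhere
    if robot_ahead:
        return "avoid_robot"     # keep clear of other robots while searching
    return "seek_box"            # otherwise, search for the box

# The same rules run on every robot, with no direct communication:
# the shared box and arena do the coordinating.
print(select_behavior(touching_box=True, goal_visible=False, robot_ahead=False))
```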
The box-pushing research of Kube and Bonabeau (2000) is an example of stigmergic processing that occurs when two or more individuals collaborate on a task using a shared environment. Hutchins (1995) brought attention to less obvious examples of public cognition that exploit specialized environmental tools. Such scaffolding devices cannot be dissociated from culture or history. For example, Hutchins noted that navigation depends upon centuries-old mathematics of chart projections, not to mention millennia-old number systems.
These observations caused Hutchins (1995) to propose an extension of Simon’s (1969) parable of the ant. Hutchins argued that rather than watching an individual ant on the beach, we should arrive at a beach after a storm and watch generations of ants at work. As the ant colony matures, the ants will appear smarter, because their behaviors are more efficient. But this is because,
the environment is not the same. Generations of ants have left their marks on the beach, and now a dumb ant has been made to appear smart through its simple interaction with the residua of the history of its ancestor’s actions. (Hutchins, 1995, p. 169)
Hutchins’ (1995) suggestion mirrored concerns raised by Scribner’s studies of mind in action. She observed that the diversity of problem solutions generated by dairy workers, for example, was due in part to social scaffolding.
We need a greater understanding of the ways in which the institutional setting, norms and values of the work group and, more broadly, cultural understandings of labor contribute to the reorganization of work tasks in a given community. (Scribner & Tobach, 1997, p. 373)
Furthermore, Scribner pointed out that the traditional methods used by classical researchers to study cognition were not suited for increasing this kind of understanding. The extended mind hypothesis leads not only to questions about the nature of mind, but also to questions about the methods used to study mentality.
The most typical methodology to be found in classical cognitive science is reverse engineering. Reverse engineering involves observing the behavior of an intact system in order to infer the nature and organization of the system’s internal processes. Most cognitive theories are produced by using a methodology called functional analysis (Cummins, 1975, 1983), which uses experimental results to iteratively carve a system into a hierarchy of functional components until a basic level of subfunctions, the cognitive architecture, is reached.
A practical problem with functional analysis or reverse engineering is the frame of reference problem (Pfeifer & Scheier, 1999). This problem arises during the distribution of responsibility for the complexity of behavior between the internal processes of an agent and the external influences of its environment. Classical cognitive science, a major practitioner of functional analysis, endorses the classical sandwich; its functional analyses tend to attribute behavioral complexity to the internal processes of an agent, while at the same time ignoring potential contributions of the environment. In other words, the frame of reference problem is to ignore Simon’s (1969) parable of the ant.
Embodied cognitive scientists frequently adopt a different methodology, forward engineering. In forward engineering, a system is constructed from a set of primitive functions of interest. The system is then observed to determine whether it generates surprising or complicated behavior. “Only about 1 in 20 ‘gets it’—that is, the idea of thinking about psychological problems by inventing mechanisms for them and then trying to see what they can and cannot do” (Minsky, personal communication, 1995). This approach has also been called synthetic psychology (Braitenberg, 1984). Reverse engineers collect data to create their models; in contrast, forward engineers build their models first and use them as primary sources of data (Dawson, 2004).
We noted in Chapter 3 that classical cognitive science has descended from the seventeenth-century rationalist philosophy of René Descartes (1960, 1996). It was observed in Chapter 4 that connectionist cognitive science descended from the early eighteenth-century empiricism of John Locke (1977), which was itself a reaction against Cartesian rationalism. The synthetic approach seeks “understanding by building” (Pfeifer & Scheier, 1999), and as such permits us to link embodied cognitive science to another eighteenth-century reaction against Descartes, the philosophy of Giambattista Vico (Vico, 1990, 1988, 2002).
Vico based his philosophy on the analysis of word meanings. He argued that the Latin term for truth, verum, had the same meaning as the Latin term factum, and therefore concluded that “it is reasonable to assume that the ancient sages of Italy entertained the following beliefs about the true: ‘the true is precisely what is made’” (Vico, 1988, p. 46). This conclusion led Vico to his argument that humans could only understand the things that they made, which is why he studied societal artifacts, such as the law.
Vico’s work provides an early motivation for forward engineering: “To know (scire) is to put together the elements of things” (Vico, 1988, p. 46). Vico’s account of the mind was a radical departure from Cartesian disembodiment. To Vico, the Latins “thought every work of the mind was sense; that is, whatever the mind does or undergoes derives from contact with bodies” (p. 95). Indeed, Vico’s verum-factum principle is based upon embodied mentality. Because the mind is “immersed and buried in the body, it naturally inclines to take notice of bodily things” (p. 97).
While the philosophical roots of forward engineering can be traced to Vico’s eighteenth-century philosophy, its actual practice—as far as cognitive science is concerned—did not emerge until cybernetics arose in the 1940s. One of the earliest examples of synthetic psychology was the Homeostat (Ashby, 1956, 1960), which was built by cyberneticist William Ross Ashby in 1948. The Homeostat was a system that changed its internal states to maximize stability amongst the interactions between its internal components and the environment. William Grey Walter (1963, p. 123) noted that it was “like a fireside cat or dog which only stirs when disturbed, and then methodically finds a comfortable position and goes to sleep again.”
Ashby’s (1956, 1960) Homeostat illustrated the promise of synthetic psychology. The feedback that Ashby was interested in could not be analyzed mathematically; it was successfully studied synthetically with Ashby’s device. Remember, too, that when the Homeostat was created, computer simulations of feedback were still in the future.
As well, it was easier to produce interesting behavior in the Homeostat than it was to analyze it. This is because the secret to its success was a large number of potential internal states, which provided many degrees of freedom for producing stability. At the same time, this internal variability was an obstacle to traditional analysis. “Although the machine is man-made, the experimenter cannot tell at any moment exactly what the machine’s circuit is without ‘killing’ it and dissecting out the ‘nervous system’” (Grey Walter, 1963, p. 124).
Concerns about this characteristic of the Homeostat inspired the study of the first autonomous robots, created by cyberneticist William Grey Walter (1950a, 1950b, 1951, 1963). The first two of these machines were constructed in 1948 (de Latil, 1956); comprising surplus war materials, their creation was clearly an act of bricolage. “The first model of this species was furnished with pinions from old clocks and gas meters” (Grey Walter, 1963, p. 244). By 1951, these two had been replaced by six improved machines (Holland, 2003a), two of which are currently displayed in museums.
The robots came to be called Tortoises because of their appearance: they seemed to be toy tractors surrounded by a tortoise-like shell. Grey Walter viewed them as an artificial life form that he classified as Machina speculatrix. Machina speculatrix was a reaction against the internal variability in Ashby’s Homeostat. The goal of Grey Walter’s robotics research was to explore the degree to which one could produce complex behavior from such very simple devices (Boden, 2006). When Grey Walter modeled behavior he “was determined to wield Occam’s razor. That is, he aimed to posit as simple a mechanism as possible to explain apparently complex behavior. And simple, here, meant simple” (Boden, 2006, p. 224). Grey Walter restricted a Tortoise’s internal components to “two functional elements: two miniature radio tubes, two sense organs, one for light and the other for touch, and two effectors or motors, one for crawling and the other for steering” (Grey Walter, 1950b, p. 43).
The interesting behavior of the Tortoises was a product of simple reflexes that used detected light (via a light sensor mounted on the robot’s steering column) and obstacles (via movement of the robot’s shell) to control the actions of the robot’s two motors. Light controlled motor activity as follows. In dim light, the Tortoise’s drive motor would move the robot forward, while the steering motor slowly turned the front wheel. Thus in dim light the Tortoise “explored.” In moderate light, the drive motor continued to run, but the steering motor stopped. Thus in moderate light the Tortoise “approached.” In bright light, the drive motor continued to run, but the steering motor ran at twice the normal speed, causing marked oscillatory movements. Thus in bright light the Tortoise “avoided.”
The motors were affected by the shell’s sense of touch as follows. When the Tortoise’s shell was moved by an obstacle, an oscillating signal was generated that first caused the robot to drive fast while slowly turning, and then to drive slowly while quickly turning. The alternation of these behaviors permitted the Tortoise to escape from obstacles. Interestingly, when movement of the Tortoise shell triggered such behavior, signals from the photoelectric cell were rendered inoperative for a few moments. Thus Grey Walter employed a simple version of what later would be known as Brooks’ (1999) subsumption architecture: a higher layer of touch processing could inhibit a lower layer of light processing.
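The reflex logic just described is simple enough to capture in a few lines of code. The sketch below is a purely illustrative Python rendering of the sensor-to-motor mapping; the function name, the symbolic speed labels, and the representation of the escape oscillation are inventions of convenience, not features of Grey Walter's valve circuitry.

```python
# A minimal sketch of the Tortoise's reflexes as described above. This
# illustrates the control logic only; it is not Grey Walter's actual
# two-tube analogue circuit, and all names and labels are hypothetical.

def tortoise_reflex(light_level, shell_bumped):
    """Map sensor readings to a sequence of (drive, steer) commands.

    light_level: 'dim', 'moderate', or 'bright'
    shell_bumped: True if the shell has been displaced by an obstacle
    """
    # Touch processing sits "above" light processing: when the shell is
    # disturbed, the photocell is ignored for a while and the robot
    # alternates between drive-fast/turn-slow and drive-slow/turn-fast.
    if shell_bumped:
        return [("fast", "slow"), ("slow", "fast")]  # oscillating escape pattern

    # Light-driven reflexes: explore in dim light, approach in moderate
    # light, avoid in bright light (steering at twice normal speed).
    if light_level == "dim":
        return [("normal", "slow")]      # explore: creep forward while scanning
    elif light_level == "moderate":
        return [("normal", "off")]       # approach: head straight for the light
    else:
        return [("normal", "double")]    # avoid: marked oscillatory movements

print(tortoise_reflex("moderate", shell_bumped=False))
print(tortoise_reflex("bright", shell_bumped=True))
```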
In accordance with forward engineering, after Grey Walter constructed his robots, he observed their behavior by recording the paths that they took in a number of simple environments. He preserved a visual record of their movement by using time-lapse photography; because of lights mounted on the robots, their paths were literally traced on each photograph (Holland, 2003b). Like the paths on the beach traced in Simon’s (1969) parable of the ant, the photographs recorded Tortoise behavior that was “remarkably unpredictable” (Grey Walter, 1950b, p. 44).
Grey Walter observed the behaviors of his robots in a number of different environments. For example, in one study the robot was placed in a room where a light was hidden from view by an obstacle. The Tortoise began to explore the room, bumped into the obstacle, and engaged in its avoidance behavior. This in turn permitted the robot to detect the light, which it approached. However, it didn’t collide with the light. Instead the robot circled it cautiously, veering away when it came too close. “Thus the machine can avoid the fate of the moth in the candle” (Grey Walter, 1963, p. 128).
When the environment became more complicated, so too did the behaviors produced by the Tortoise. If the robot was confronted with two stimulus lights instead of one, it would first be attracted to one, which it circled, only to move away and circle the other, demonstrating an ability to choose: it solved the problem “of Buridan’s ass, which starved to death, as some animals acting trophically in fact do, because two exactly equal piles of hay were precisely the same distance away” (Grey Walter, 1963, p. 128).
If a mirror was placed in its environment, the mirror served as an obstacle, but it reflected the light mounted on the robot, which was an attractant. The resulting dynamics produced the so-called “mirror dance” in which the robot,
lingers before a mirror, flickering, twittering and jigging like a clumsy Narcissus. The behavior of a creature thus engaged with its own reflection is quite specific, and on a purely empirical basis, if it were observed in an animal, might be accepted as evidence of some degree of self-awareness. (Grey Walter, 1963, pp. 128–129)
In less controlled or open-ended environments, the behavior that was produced was lifelike in its complexity. The Tortoises produced “the exploratory, speculative behavior that is so characteristic of most animals” (Grey Walter, 1950b, p. 43). Examples of such behavior were recounted by cyberneticist Pierre de Latil (1956):
Elsie moved to and fro just like a real animal. A kind of head at the end of a long neck towered over the shell, like a lighthouse on a promontory and, like a lighthouse, it veered round and round continuously. (de Latil, 1956, p. 209)
The Daily Mail reported that,
the toys possess the senses of sight, hunger, touch, and memory. They can walk about the room avoiding obstacles, stroll round the garden, climb stairs, and feed themselves by automatically recharging six-volt accumulators from the light in the room. And they can dance a jig, go to sleep when tired, and give an electric shock if disturbed when they are not playful. (Holland, 2003a, p. 2090)
Grey Walter released the Tortoises to mingle with the audience at a 1955 meeting of the British Association (Hayward, 2001): “The tortoises, with their in-built attraction towards light, moved towards the pale stockings of the female delegates whilst avoiding the darker legs of the betrousered males” (p. 624).
Grey Walter masterfully promoted his work to the general public (Hayward, 2001; Holland, 2003a). However, he worried that popular reception of his machines would diminish their scientific importance. History has put such concerns to rest; Grey Walter's pioneering research has influenced many modern researchers (Reeve & Webb, 2003). Grey Walter's,
ingenious devices were seriously intended as working models for understanding biology: a ‘mirror for the brain’ that could both generally enrich our understanding of principles of behavior (such as the complex outcome of combining simple tropisms) and be used to test specific hypotheses (such as Hebbian learning). (Reeve & Webb, 2003, p. 2245)
In the previous three chapters I have presented the elements of three different approaches to cognitive science: classical, connectionist, and embodied. In the current chapter I present a review of these elements in the context of a single topic: musical cognition. In general, this is done by developing an analogy: cognitive science is like classical music. This analogy serves to highlight the contrasting characteristics of the three approaches to cognitive science, because each school of thought approaches the study of music cognition in a distinctive way.
These distinctions are made evident by arguing that the analogy between cognitive science and classical music is itself composed of three different relationships: between Austro-German classical music and classical cognitive science, between musical Romanticism and connectionist cognitive science, and between modern music and embodied cognitive science. One goal of the current chapter is to develop each of these more specific analogies, and in so doing we review the core characteristics of each approach within cognitive science.
Each of these more specific analogies is also reflected in how each school of cognitive science studies musical cognition. Classical, connectionist, and embodied cognitive scientists have all been involved in research on musical cognition, and they have, not surprisingly, focused on different themes.
Reviewing the three approaches within cognitive science in the context of music cognition again points to distinctions between the three approaches. However, the fact that all three approaches are involved in the study of music points to possible similarities between them. The current chapter begins to set the stage for a second theme that is fundamental to the remainder of the book: that there is the possibility for a synthesis amongst the three approaches that have been introduced in the earlier chapters. For instance, the current chapter ends by considering the possibility of a hybrid theory of musical cognition, a theory that has characteristics of classical, connectionist, and embodied cognitive science.
6.02: The Classical Nature of Classical Music
There are many striking parallels between the classical mind and classical music, particularly the music composed in the Austro-German tradition of the eighteenth and nineteenth centuries. First, both rely heavily upon formal structures. Second, both emphasize that their formal structures are content laden. Third, both attribute great importance to abstract thought inside an agent (or composer) at the expense of contributions involving the agent’s environment or embodiment. Fourth, both emphasize central control. Fifth, the “classical” traditions of both mind and music have faced strong challenges, and many of the challenges in one domain can be related to analogous challenges in the other.
The purpose of this section is to elaborate the parallels noted above between classical music and classical cognitive science. One reason to do so is to begin to illustrate the analogy that classical cognitive science is like classical music. However, a more important reason is that this analogy, at least tacitly, has a tremendous effect on how researchers approach musical cognition. The methodological implications of this analogy are considered in detail later in this chapter.
To begin, let us consider how the notions of formalism or logicism serve as links between classical cognitive science and classical music. Classical cognitive science takes thinking to be the rule-governed manipulation of mental representations. Rules are sensitive to the form of mental symbols (Haugeland, 1985). That is, a symbol’s form is used to identify it as being a token of a particular type; to be so identified means that only certain rules can be applied. While the rules are sensitive to the formal nature of symbols, they act in such a way as to preserve the meaning of the information that the symbols represent. This property reflects classical cognitive science’s logicism: the laws of thought are equivalent to the formal rules that define a system of logic (Boole, 2003). The goal of characterizing thought purely in the form of logical rules has been called the Boolean dream (Hofstadter, 1995).
It is not implausible that the Boolean dream might also characterize conceptions of music. Music’s formal nature extends far beyond musical symbols on a sheet of staff paper. Since the time of Pythagoras, scholars have understood that music reflects regularities that are intrinsically mathematical (Ferguson, 2008). There is an extensive literature on the mathematical nature of music (Assayag et al., 2002; Benson, 2007; Harkleroad, 2006). For instance, different approaches to tuning instruments reflect the extent to which tunings are deemed mathematically sensible (Isacoff, 2001).
To elaborate, some pairs of tones played simultaneously are pleasing to the ear, such as a pair of notes that are a perfect fifth apart (see Figure 4-10)—they are consonant—while other combinations are not (Krumhansl, 1990). The consonance of notes can be explained by the physics of sound waves (Helmholtz & Ellis, 1954). Such physical relationships are ultimately mathematical, because they concern ratios of frequencies of sine waves. Consonant tone pairs have frequency ratios of 2:1 (octave), 3:2 (perfect fifth), and 4:3 (perfect fourth). The most dissonant pair of tones, the tritone (an augmented fourth), is defined by a ratio that includes an irrational number (√2:1), a fact that was probably known to the Pythagoreans.
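The ratios mentioned above are easy to verify numerically. The following short calculation compares each just ratio with its twelve-tone equal-temperament counterpart; the choice of equal temperament for the comparison, and the semitone sizes assigned to each interval, are standard conventions rather than anything claimed in the passage above.

```python
# A quick numerical illustration of the interval ratios discussed above,
# comparing each just ratio with its 12-tone equal-temperament
# approximation (2 to the power of semitones/12).
import math

just_ratios = {
    "octave (2:1)":         (2.0, 12),          # (just ratio, semitones)
    "perfect fifth (3:2)":  (3 / 2, 7),
    "perfect fourth (4:3)": (4 / 3, 5),
    "tritone (sqrt(2):1)":  (math.sqrt(2), 6),
}

for name, (just, semitones) in just_ratios.items():
    equal_tempered = 2 ** (semitones / 12)   # frequency ratio in equal temperament
    print(f"{name:22s} just = {just:.4f}  equal-tempered = {equal_tempered:.4f}")
```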
The formal nature of music extends far beyond the physics of sound. There are formal descriptions of musical elements, and of entire musical compositions, that are analogous to the syntax of linguistics (Chomsky, 1965). Some researchers have employed generative grammars to express these regularities (Lerdahl & Jackendoff, 1983; Steedman, 1984).
For instance, Lerdahl and Jackendoff (1983) argued that listeners impose a hierarchical structure on music, organizing “the sound signals into units such as motives, themes, phrases, periods, theme-groups, sections and the piece itself” (p. 12). They defined a set of well-formedness rules, which are directly analogous to generative rules in linguistics, to define how this musical organization proceeds and to rule out impossible organizations.
That classical music is expected to have a hierarchically organized, well-formed structure is a long-established view amongst scholars who do not use generative grammars to capture such regularities. Composer Aaron Copland (1939, p. 113) argued that a composition’s structure is “one of the principal things to listen for” because it is “the planned design that binds an entire composition together.”
One important musical structure is the sonata-allegro form (Copland, 1939), which is a hierarchical organization of musical themes or ideas. At the top level of this hierarchy are three different components that are presented in sequence: an initial exposition of melodic structures called musical themes, followed by the free development of these themes, and finishing with their recapitulation. Each of these segments is itself composed of three sub-segments, which are again presented in sequence. This structure is formal in the sense that the relationship between different themes presented in different sub-segments is defined in terms of their key signatures.
For instance, the exposition uses its first sub-segment to introduce an opening theme in the tonic key, that is, the initial key signature of the piece. The exposition’s second sub-segment then presents a second theme in the dominant key, a perfect fifth above the tonic. The final sub-segment of the exposition finishes with a closing theme in the dominant key. The recapitulation has a substructure that is related to that of the exposition; it uses the same three themes in the same order, but all are presented in the tonic key. The development section, which falls between the exposition and the recapitulation, explores the exposition’s themes, but does so using new material written in different keys.
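Because the key plan of sonata-allegro form is itself a small hierarchy, it can be written down directly as a nested data structure. The sketch below is only a schematic of the textbook form described in the preceding two paragraphs; the labels, and the single placeholder entry standing in for the development's sub-segments, are simplifications of convenience.

```python
# A schematic of the sonata-allegro key plan described above. "Tonic" and
# "dominant" are relative key functions; the development entry is a
# placeholder for "the exposition's themes explored in different keys".
sonata_allegro = {
    "exposition": [
        ("opening theme", "tonic"),
        ("second theme",  "dominant"),
        ("closing theme", "dominant"),
    ],
    "development": [
        ("exposition themes, freely developed", "various keys"),
    ],
    "recapitulation": [
        ("opening theme", "tonic"),
        ("second theme",  "tonic"),
        ("closing theme", "tonic"),
    ],
}

for section, parts in sonata_allegro.items():
    print(section)
    for theme, key in parts:
        print(f"  {theme}: {key}")
```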
Sonata-allegro form foreshadowed the modern symphony and produced a market for purely instrumental music (Rosen, 1988). Importantly, it also provided a structure, shared by both composers and their audiences, which permitted instrumental music to be expressive. Rosen notes that the sonata became popular because it,
has an identifiable climax, a point of maximum tension to which the first part of the work leads and which is symmetrically resolved. It is a closed form, without the static frame of ternary form; it has a dynamic closure analogous to the denouement of 18th century drama, in which everything is resolved, all loose ends are tied up, and the work rounded off. (Rosen, 1988, p. 10)
In short, the sonata-allegro form provided a logical structure that permitted the music to be meaningful.
The idea that musical form is essential to communicating musical meaning brings us to the second parallel between classical music and classical cognitive science: both domains presume that their formal structures are content-bearing.
Classical cognitive science explains cognition by invoking the intentional stance (Dennett, 1987), which is equivalent to relying on a cognitive vocabulary (Pylyshyn, 1984). If one assumes that an agent has certain intentional states (e.g., beliefs, desires, goals) and that lawful regularities (such as the principle of rationality) govern relationships between the contents of these states, then one can use the contents to predict future behavior. “This single assumption [rationality], in combination with home truths about our needs, capacities and typical circumstances, generates both an intentional interpretation of us as believers and desirers and actual predictions of behavior in great profusion” (Dennett, 1987, p. 50). Similarly, Pylyshyn (1984, pp. 20–21) noted that “the principle of rationality . . . is indispensable for giving an account of human behavior.”
Is there any sense in which the intentional stance can be applied to classical music? Classical composers are certainly of the opinion that music can express ideas. Copland noted that,
my own belief is that all music has an expressive power, some more and some less, but that all music has a certain meaning behind the notes and that that meaning behind the notes constitutes, after all, what the piece is saying, what the piece is about. (Copland, 1939, p. 12)
John Cage (1961) believed that compositions had intended meanings:
It seemed to me that composers knew what they were doing, and that the experiments that had been made had taken place prior to the finished works, just as sketches are made before paintings and rehearsals precede performances. (John Cage, 1961, p. 7)
Scholars, too, have debated the ability of music to convey meanings. One of the central questions in the philosophy of music is whether music can represent. As late as 1790, the dominant philosophical view of music was that it was incapable of conveying ideas, but by the time that E. T. A. Hoffmann reviewed Beethoven’s Fifth Symphony in 1810, this view had largely been rejected (Bonds, 2006), although the autonomist school of musical aesthetics—which rejected musical representation—was active in the late nineteenth century (Hanslick, 1957). Nowadays most philosophers of music agree that music is representational, and they focus their attention on how musical representations are possible (Kivy, 1991; Meyer, 1956; Robinson, 1994, 1997; Sparshott, 1994; Walton, 1994).
How might composers communicate intended meanings with their music? One answer is by exploiting particular musical forms. Conventions such as sonata-allegro form provide a structure that generates expectations, expectations that are often presumed to be shared by the audience. Copland (1939) used his book about listening to music to educate audiences about musical forms so that they could better understand his compositions as well as those of others: “In helping others to hear music more intelligently, [the composer] is working toward the spread of a musical culture, which in the end will affect the understanding of his own creations” (p. vi).
The extent to which the audience’s expectations are toyed with, and ultimately fulfilled, can manipulate its interpretation of a musical performance. Some scholars have argued that these manipulations can be described completely in terms of the structure of musical elements (Meyer, 1956). The formalist’s motto of classical cognitive science (Haugeland, 1985) can plausibly be applied to classical music.
A third parallel between classical cognitive science and classical music, one which likely follows directly from the assumption that formal structures can represent content, is an emphasis on Cartesian disembodiment. Let us now consider this characteristic in more detail.
Classical cognitive science attempts to explain cognitive phenomena by appealing to a sense-think-act cycle (Pfeifer & Scheier, 1999). In this cycle, sensing mechanisms provide information about the world, and acting mechanisms produce behaviors that might change it. Thinking, considered as the manipulation of mental representations, is the interface between sensing and acting (Wilson, 2004). However, this interface, internal thinking, receives the most emphasis in a classical theory, with an accompanying underemphasis on sensing and acting (Clark, 1997).
One can easily find evidence for the classical emphasis on representations. Autonomous robots that were developed following classical ideas devote most of their computational resources to using internal representations of the external world (Brooks, 2002; Moravec, 1999; Nilsson, 1984). Most survey books on cognitive psychology (Anderson, 1985; Best, 1995; Haberlandt, 1994; Robinson-Riegler & Robinson-Riegler, 2003; Solso, 1995; Sternberg, 1996) have multiple chapters on representational topics such as memory and reasoning and rarely mention embodiment, sensing, or acting. Classical cognitive science’s sensitivity to the multiple realization argument (Fodor, 1968b, 1975), with its accompanying focus on functional (not physical) accounts of cognition (Cummins, 1983), underlines its view of thinking as a disembodied process. It was argued in Chapter 3 that the classical notion of the disembodied mind was a consequence of its being inspired by Cartesian philosophy.
Interestingly, a composer of classical music is also characterized as being similarly engaged in a process that is abstract, rational, and disembodied. Does not a composer first think of a theme or a melody and then translate this mental representation into a musical score? Mozart “carried his compositions around in his head for days before setting them down on paper” (Hildesheimer, 1983). Benson (2007, p. 25) noted that “Stravinsky speaks of a musical work as being ‘the fruit of study, reasoning, and calculation that imply exactly the converse of improvisation.’” In short, abstract thinking seems to be a prerequisite for composing.
Reactions against Austro-German classical music (Nyman, 1999) were reactions against its severe rationality. John Cage pioneered this reaction (Griffiths, 1994); beginning in the 1950s, Cage increasingly used chance mechanisms to determine musical events. He advocated “that music should no longer be conceived of as rational discourse” (Nyman, 1999, p. 32). He explicitly attacked the logicism of traditional music (Ross, 2007), declaring that “any composing strategy which is wholly ‘rational’ is irrational in the extreme” (p. 371).
Despite opposition such as Cage’s, the disembodied rationality of classical music was one of its key features. Indeed, the cognitive scaffolding of composing is frowned upon. There is a general prejudice against composers who rely on external aids (Rosen, 2002). Copland (1939, p. 22) observed that “a current idea exists that there is something shameful about writing a piece of music at the piano.” Rosen traces this idea to Giovanni Maria Artusi’s criticism of composers such as Monteverdi, in 1600: “It is one thing to search with voices and instruments for something pertaining to the harmonic faculty, another to arrive at the exact truth by means of reasons seconded by the ear” (p. 17). The expectation (then and now) is that composing a piece involves “mentally planning it by logic, rules, and traditional reason” (Rosen, 2002, p. 17). This expectation is completely consistent with the disembodied, classical view of thinking, which assumes that the primary purpose of cognition is not acting, but is instead planning.
Planning has been described as solving the problem of what to do next (Dawson, 1998; Stillings, 1995). A solution to this problem involves providing an account of the control system of a planning agent; such accounts are critical components of classical cognitive science. “An adequate theory of human cognitive processes must include a description of the control system—the mechanism that determines the sequence in which operations will be performed” (Simon, 1979, p. 370). In classical cognitive science, such control is typically central. The notion of central control is also characteristic of classical music, providing the fourth parallel between classical cognitive science and classical music.
Within the Austro-German musical tradition, a composition is a formal structure intended to express ideas. A composer uses musical notation to signify the musical events which, when realized, accomplish this expressive goal. An orchestra’s purpose is to bring the score to life, in order for the performance to deliver the intended message to the audience:
We tend to see both the score and the performance primarily as vehicles for preserving what the composer has created. We assume that musical scores provide a permanent record or embodiment in signs; in effect, a score serves to ‘fix’ or objectify a musical work. (Benson, 2003, p. 9)
However, a musical score is vague; it cannot determine every minute detail of a performance (Benson, 2003; Copland, 1939). As a result, during a performance the score must be interpreted in such a way that the missing details can be filled in without distorting the composer’s desired effect. In the Austro-German tradition of music, an orchestra’s conductor takes the role of interpreter and controls the orchestra in order to deliver the composer’s message (Green & Malko, 1975, p. 7): “The conductor acts as a guide, a solver of problems, a decision maker. His guidance chart is the composer’s score; his job, to animate the score, to make it come alive, to bring it into audible being.”
The conductor provides another link between classical music and classical cognitive science, because the conductor is the orchestra’s central control system. The individual players are expected to submit to the conductor’s control.
Our conception of the role of a classical musician is far closer to that of self-effacing servant who faithfully serves the score of the composer. Admittedly, performers are given a certain degree of leeway; but the unwritten rules of the game are such that this leeway is relatively small and must be kept in careful check. (Benson, 2003, p. 5)
It has been suggested—not necessarily validly—that professional, classically trained musicians are incapable of improvisation (Bailey, 1992)!
The conductor is not the only controller of a performance. While it is unavoidably vague, the musical score also serves to control the musical events generated by an orchestra. If the score is a content-bearing formal expression, then it is reasonable to assume that it designates the contents that the score is literally about. Benson (2003) described this aspect of a score as follows:
The idea of being ‘treu’—which can be translated as true or as faithful—implies faithfulness to someone or something. Werktreue, then, is directly a kind of faithfulness to the Werk (work) and, indirectly, a faithfulness to the composer. Given the centrality of musical notation in the discourse of classical music, a parallel notion is that of Texttreue: fidelity to the written score. (Benson, 2003, p. 5)
Note Benson’s emphasis on the formal notation of the score. It highlights the idea that the written score is analogous to a logical expression, and that converting it into the musical events that the score is about (in Brentano’s sense) is not only desirable, but also rational. This logicism of classical music perfectly parallels the logicism found in classical cognitive science.
The role of the score as a source of control provides a link back to another issue discussed earlier, disembodiment. We saw in Chapter 3 that the disembodiment of modern classical cognitive science is reflected in its methodological solipsism. In methodological solipsism, representational states are individuated from one another only in terms of their relations to other representational states. Relations of the states to the external world—the agent’s environment—are not considered.
It is methodological solipsism that links a score’s control back to disembodiment, providing another link in the analogy between the classical mind and classical music. When a piece is performed, it is brought to life with the intent of delivering a particular message to the audience. Ultimately, then, the audience is a fundamental component of a composition’s environment. To what extent does this environment affect or determine the composition itself?
In traditional classical music, the audience is presumed to have absolutely no effect on the composition. Composer Arnold Schoenberg believed that the audience was “merely an acoustic necessity—and an annoying one at that” (Benson, 2003, p. 14). Composer Virgil Thomson defined the ideal listener as “a person who applauds vigorously” (Copland, 1939, p. 252). In short, the purpose of the audience is to passively receive the intended message. It too is under the control of the score:
The intelligent listener must be prepared to increase his awareness of the musical material and what happens to it. He must hear the melodies, the rhythms, the harmonies, the tone colors in a more conscious fashion. But above all he must, in order to follow the line of the composer’s thought, know something of the principles of musical form. (Copland, 1939, p. 17)
To see that this is analogous to methodological solipsism, consider how we differentiate compositions from one another. Traditionally, this is done by referring to a composition’s score (Benson, 2003). That is, compositions are identified in terms of a particular set of symbols, a particular formal structure. The identification of a composition does not depend upon identifying which audience has heard it. A composition can exist, and be identified, in the absence of its audience-as-environment.
Another parallel between the classical mind and classical music is that there have been significant modern reactions against the Austro-German musical tradition (Griffiths, 1994, 1995). Interestingly, these reactions parallel many of the reactions of embodied cognitive science against the classical approach. In later sections of this chapter we consider some of these reactions, and explore the idea that they make plausible the claim that “non-cognitive” processes are applicable to classical music. However, before we do so, let us first turn to consider how the parallels considered above are reflected in how classical cognitive scientists study musical cognition.
6.03: The Classical Approach to Musical Cognition
In Chapter 8 on seeing and visualizing, we see that classical theories take the purpose of visual perception to be the construction of mental models of the external, visual world. To do so, these theories must deal with the problem of underdetermination. Information in the world is not sufficient, on its own, to completely determine visual experience.
Classical solutions to the problem of underdetermination (Bruner, 1973; Gregory, 1970, 1978; Rock, 1983) propose that knowledge of the world—the contents of mental representations—is also used to determine visual experience. In other words, classical theories of perception describe visual experience as arising from the interaction of stimulus information with internal representations. Seeing is a kind of thinking.
Auditory perception has also been the subject of classical theorization. Classical theories of auditory perception parallel classical theories of visual perception in two general respects. First, since the earliest psychophysical studies of audition (Helmholtz & Ellis, 1954), hearing has been viewed as a process for building internal representations of the external world.
We have to investigate the various modes in which the nerves themselves are excited, giving rise to their various sensations, and finally the laws according to which these sensations result in mental images of determinate external objects, that is, in perceptions. (Helmholtz & Ellis, 1954, p. 4)
Second, in classical theories of hearing, physical stimulation does not by itself determine the nature of auditory percepts. Auditory stimuli are actively organized, being grouped into distinct auditory streams, according to psychological principles of organization (Bregman, 1990). “When listeners create a mental representation of auditory input, they too must employ rules about what goes with what” (p. 11).
The existence of classical theories of auditory perception, combined with the links between classical music and classical cognitive science discussed in the previous section, should make it quite unsurprising that classical theories of music perception and cognition are well represented in the literature (Deutsch, 1999; Francès, 1988; Howell, Cross, & West, 1985; Krumhansl, 1990; Lerdahl, 2001; Lerdahl & Jackendoff, 1983; Sloboda, 1985; Snyder, 2000; Temperley, 2001). This section provides some brief examples of the classical approach to musical cognition. These examples illustrate that the previously described links between classical music and cognitive science are reflected in the manner in which musical cognition is studied.
The classical approach to musical cognition assumes that listeners construct mental representations of music. Sloboda (1985) argued that,
a person may understand the music he hears without being moved by it. If he is moved by it then he must have passed through the cognitive stage, which involves forming an abstract or symbolic internal representation of the music. (Sloboda, 1985, p. 3)
Similarly, “a piece of music is a mentally constructed entity, of which scores and performances are partial representations by which the piece is transmitted” (Lerdahl & Jackendoff, 1983, p. 2). A classical theory must provide an account of such mentally constructed entities. How are they represented? What processes are required to create and manipulate them?
There is a long history of attempting to use geometric relations to map the relationships between musical pitches, so that similar pitches are nearer to one another in the map (Krumhansl, 2005). Krumhansl (1990) has shown how simple judgments about tones can be used to derive a spatial, cognitive representation of musical elements.
Krumhansl’s general paradigm is called the probe tone method (Krumhansl & Shepard, 1979). In this paradigm, a musical context is established, for instance by playing a partial scale or a chord. A probe note is then played, and subjects rate how well this probe note fits into the context. For instance, subjects might rate how well the probe note serves to complete a partial scale. The relatedness between pairs of tones within a musical context can also be measured using variations of this paradigm.
Extensive use of the probe tone method has revealed a hierarchical organization of musical notes. Within a given musical context—a particular musical key—the most stable tone is the tonic, the root of the key. For example, in the musical key of C major, the note C is the most stable. The next most stable tones are those in either the third or fifth positions of the key’s scale. In the key of C major, these are the notes E or G. Less stable than these two notes are any of the set of remaining notes that belong to the context’s scale. In the context of C major, these are the notes D, F, A, and B. Finally, the least stable tones are the set of five notes that do not belong to the context’s scale. For C major, these are the notes C#, D#, F#, G#, and A#.
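The ordinal hierarchy just described can be made explicit with a small lookup structure. In the sketch below the numeric levels are arbitrary ordinal labels of our own choosing (4 for most stable), not Krumhansl's actual probe-tone ratings, and the helper function is an invention for illustration.

```python
# The ordinal stability hierarchy for a C-major context, as described
# above. The numeric "levels" are ordinal labels (4 = most stable), not
# Krumhansl's probe-tone ratings.
stability_levels = {
    4: ["C"],                           # tonic
    3: ["E", "G"],                      # third and fifth of the scale
    2: ["D", "F", "A", "B"],            # remaining scale tones
    1: ["C#", "D#", "F#", "G#", "A#"],  # non-scale tones
}

def stability(note):
    """Return the ordinal stability level of a note in a C-major context."""
    for level, notes in stability_levels.items():
        if note in notes:
            return level
    raise ValueError(f"unknown note: {note}")

print(stability("G"))   # 3
print(stability("F#"))  # 1
```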
This hierarchical pattern of stabilities is revealed using different kinds of contexts (e.g., partial scales, chords), and is found in subjects with widely varying degrees of musical expertise (Krumhansl, 1990). It can also be used to account for judgments about the consonance or dissonance of tones, which is one of the oldest topics in the psychology of music (Helmholtz & Ellis, 1954).
Hierarchical tonal stability relationships can also be used to quantify relationships between different musical keys. If two different keys are similar to one another, then their tonal hierarchies should be similar as well. The correlations between tonal hierarchies were calculated for every possible pair of the 12 different major and 12 different minor musical keys, and then multidimensional scaling was performed on the resulting similarity data (Krumhansl & Kessler, 1982). A four-dimensional solution was found to provide the best fit for the data. This solution arranged the tonic notes along a spiral that wrapped itself around a toroidal surface. The spiral represents two circles of fifths, one for the 12 major scales and the other for the 12 minor scales.
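The correlational step of this analysis is easy to illustrate. In the sketch below, each major key's hierarchy is treated as a 12-element profile obtained by rotating a C-major profile, and every major key is then correlated with C major. The profile values are the ordinal levels from the previous sketch rather than the published probe-tone ratings, minor keys are omitted, and the multidimensional scaling step is not shown.

```python
# A sketch of the correlational step described above: rotate a C-major
# profile to obtain profiles for the other major keys, then correlate
# pairs of profiles. Uses statistics.correlation (Python 3.10+).
from statistics import correlation

# Ordinal C-major profile over the chromatic scale C, C#, D, ..., B.
c_major = [4, 1, 2, 1, 3, 2, 1, 3, 1, 2, 1, 2]
keys = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def profile(tonic_index):
    """Rotate the C-major profile so that the tonic sits at tonic_index."""
    return [c_major[(i - tonic_index) % 12] for i in range(12)]

# Correlation between C major and every other major key.
for k, name in enumerate(keys):
    r = correlation(profile(0), profile(k))
    print(f"C major vs {name} major: r = {r:+.2f}")
```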
The spiral arrangement of notes around the torus reflects elegant spatial relationships among tonic notes (Krumhansl, 1990; Krumhansl & Kessler, 1982). For any key, the nearest neighbours moving around from the inside to the outside of the torus are the neighbouring keys in the circle of fifths. For instance, the nearest neighbours to C in this direction are the notes F and G, which are on either side of C in the circle of fifths.
In addition, the nearest neighbour to a note in the direction along the torus (i.e., orthogonal to the direction that captures the circles of fifths) reflects relationships between major and minor keys. Every major key has a complementary minor key, and vice versa; complementary keys have the same key signature, and are musically very similar. Complementary keys are close together on the torus. For example, the key of C major has the key of A minor as its complement; the tonic notes for these two scales are also close together on the toroidal map.
Krumhansl’s (1990) tonal hierarchy is a classical representation in two senses. First, the toroidal map derived from tonal hierarchies provides one of the many examples of spatial representations that have been used to model regularities in perception (Shepard, 1984a), reasoning (Sternberg, 1977), and language (Tourangeau & Sternberg, 1981, 1982). Second, a tonal hierarchy is not a musical property per se, but instead is a psychologically imposed organization of musical elements. “The experience of music goes beyond registering the acoustic parameters of tone frequency, amplitude, duration, and timbre. Presumably, these are recoded, organized, and stored in memory in a form different from sensory codes” (Krumhansl, 1990, p. 281). The tonal hierarchy is one such mental organization of musical tones.
In music, tones are not the only elements that appear to be organized by psychological hierarchies. “When hearing a piece, the listener naturally organizes the sound signals into units such as motives, themes, phrases, periods, theme-groups, and the piece itself” (Lerdahl & Jackendoff, 1983, p. 12). In their classic work A Generative Theory of Tonal Music, Lerdahl and Jackendoff (1983) developed a classical model of how such a hierarchical organization is derived.
Lerdahl and Jackendoff’s (1983) research program was inspired by Leonard Bernstein’s (1976) Charles Eliot Norton lectures at Harvard, in which Bernstein called for the methods of Chomskyan linguistics to be applied to music. “All musical thinkers agree that there is such a thing as a musical syntax, comparable to a descriptive grammar of speech” (p. 56). There are indeed important parallels between language and music that support developing a generative grammar of music (Jackendoff, 2009). In particular, systems for both language and music must be capable of dealing with novel stimuli, which classical researchers argue requires the use of recursive rules. However, there are important differences too. Most notable for Jackendoff (2009) is that language conveys propositional thought, while music does not. This means that while a linguistic analysis can ultimately be evaluated as being true or false, the same cannot be said for a musical analysis, which has important implications for a grammatical model of music.
Lerdahl and Jackendoff’s (1983) generative theory of tonal music correspondingly has components that are closely analogous to a generative grammar for language and other components that are not. The linguistic analogs assign structural descriptions to a musical piece. These structural descriptions involve four different, but interrelated, hierarchies.
The first is grouping structure, which hierarchically organizes a piece into motives, phrases, and sections. The second is metrical structure, which relates the events of a piece to hierarchically organized alternations of strong and weak beats. The third is time-span reduction, which assigns pitches to a hierarchy of structural importance that is related to grouping and metrical structures. The fourth is prolongational reduction, which is a hierarchy that “expresses harmonic and melodic tension and relaxation, continuity and progression” (Lerdahl & Jackendoff, 1983, p. 9). Prolongational reduction was inspired by Schenkerian musical analysis (Schenker, 1979), and is represented in a fashion that is very similar to a phrase marker. As a result, it is the component of the generative theory of tonal music that is most closely related to a generative syntax of language (Jackendoff, 2009).
Each of the four hierarchies is associated with a set of well-formedness rules (Lerdahl & Jackendoff, 1983). These rules describe how the different hierarchies are constructed, and they also impose constraints that prevent certain structures from being created. Importantly, the well-formedness rules provide psychological principles for organizing musical stimuli, as one would expect in a classical theory. The rules “define a class of grouping structures that can be associated with a sequence of pitch-events, but which are not specified in any direct way by the physical signal (as pitches and durations are)” (p. 39). Lerdahl and Jackendoff take care to express these rules in plain English so as not to obscure their theory. However, they presume that the well-formedness rules could be translated into a more formal notation, and indeed computer implementations of their theory are possible (Hamanaka, Hirata, & Tojo, 2006).
Lerdahl and Jackendoff’s (1983) well-formedness rules are not sufficient to deliver a unique “parsing” of a musical piece. One reason for this is because, unlike language, a musical parsing cannot be deemed to be correct; it can only be described as having a certain degree of coherence or preferredness. Lerdahl and Jackendoff supplement their well-formedness rules with a set of preference rules. For instance, one preference rule for grouping structure indicates that symmetric groups are to be preferred over asymmetric ones. Once again there is a different set of preference rules for each of the four hierarchies of musical structure.
The hierarchical structures defined by the generative theory of tonal music (Lerdahl & Jackendoff, 1983) describe the properties of a particular musical event. In contrast, the hierarchical arrangement of musical tones (Krumhansl, 1990) is a general organizational principle that applies to musical pitches in general, not to an event. Interestingly, the two types of hierarchies are not mutually exclusive. The generative theory of tonal music has been extended (Lerdahl, 2001) to include tonal pitch spaces, which are spatial representations of tones and chords in which the distance between two entities in the space reflects the cognitive distance between them. Lerdahl has shown that the properties of tonal pitch space can be used to aid in the construction of the time-span reduction and the prolongational reduction, increasing the power of the original generative theory. The theory can be used to predict listeners’ judgments about the attraction and tension between tones in a musical selection (Lerdahl & Krumhansl, 2007).
Lerdahl and Jackendoff’s (1983) generative theory of tonal music shares another characteristic with the linguistic theories that inspired it: it provides an account of musical competence, and it is less concerned with algorithmic accounts of music perception. The goal of their theory is to provide a “formal description of the musical intuitions of a listener who is experienced in a musical idiom” (p. 1). Musical intuition is the largely unconscious knowledge that a listener uses to organize, identify, and comprehend musical stimuli. Because characterizing such knowledge is the goal of the theory, other processing is ignored.
Instead of describing the listener’s real-time mental processes, we will be concerned only with the final state of his understanding. In our view it would be fruitless to theorize about mental processing before understanding the organization to which the processing leads. (Lerdahl & Jackendoff, 1983, pp. 3–4)
One consequence of ignoring mental processing is that the generative theory of tonal music is generally not applied to psychologically plausible representations. For instance, in spite of being a theory about an experienced listener, the various incarnations of the theory are not applied to auditory stimuli, but are instead applied to musical scores (Hamanaka, Hirata, & Tojo, 2006; Lerdahl, 2001; Lerdahl & Jackendoff, 1983).
Of course, this is not a principled limitation of the generative theory of tonal music. This theory has inspired researchers to develop models that have a more algorithmic emphasis and operate on representations that take steps towards psychological plausibility (Temperley, 2001).
Temperley’s (2001) theory can be described as a variant of the original generative theory of tonal music (Lerdahl & Jackendoff, 1983). One key difference between the two is the input representation. Temperley employs a piano-roll representation, which can be described as being a two-dimensional graph of musical input. The vertical axis, or pitch axis, is a discrete representation of different musical notes. That is, each row in the vertical axis can be associated with its own piano key. The horizontal axis is a continuous representation of time. When a note is played, a horizontal line is drawn on the piano-roll representation; the height of the line indicates which note is being played. The beginning of the line represents the note’s onset, the length of the line represents the note’s duration, and the end of the line represents the note’s offset. Temperley assumes the psychological reality of the piano-roll representation, although he admits that the evidence for this strong assumption is inconclusive.
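A piano-roll representation is straightforward to mock up. The sketch below encodes notes as (pitch, onset, offset) events and provides a helper for querying which pitches sound at a given moment; the MIDI-style pitch numbers and the helper function are illustrative choices of our own, not Temperley's implementation.

```python
# A minimal sketch of the piano-roll idea described above: each note is a
# horizontal line with a discrete pitch (here, a MIDI-style integer) and
# continuous onset/offset times in seconds.
from collections import namedtuple

Note = namedtuple("Note", ["pitch", "onset", "offset"])

piano_roll = [
    Note(pitch=60, onset=0.0, offset=1.0),   # C4
    Note(pitch=64, onset=0.0, offset=1.0),   # E4 sounding with it
    Note(pitch=67, onset=1.0, offset=1.5),   # G4
    Note(pitch=72, onset=1.5, offset=2.0),   # C5
]

def sounding_at(roll, t):
    """Return the pitches whose lines cover time t."""
    return [n.pitch for n in roll if n.onset <= t < n.offset]

print(sounding_at(piano_roll, 0.5))   # [60, 64]
print(sounding_at(piano_roll, 1.25))  # [67]
```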
Temperley’s (2001) model applies a variety of preference rules to accomplish the hierarchical organization of different aspects of a musical piece presented as a piano-roll representation. He provides different preference rule systems for assigning metrical structure, melodic phrase structure, contrapuntal structure, pitch class representation, harmonic structure, and key structure. In many respects, these preference rule systems represent an evolution of the well-formedness and preference rules in Lerdahl and Jackendoff’s (1983) theory.
For example, one of Temperley’s (2001) preference rule systems assigns metrical structure (i.e., hierarchically organized sets of beats) to a musical piece. Lerdahl and Jackendoff (1983) accomplished this by applying four different well-formedness rules and ten different preference rules. Temperley accepts two of Lerdahl and Jackendoff’s well-formedness rules for meter (albeit in revised form, as preference rules) and rejects two others because they do not apply to the more realistic representation that he adopts. Temperley adds three other preference rules. This system of five preference rules derives metrical structure with a high degree of accuracy (i.e., it corresponds to Temperley’s own metrical intuitions at a level of 86 percent or better).
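The general flavor of a preference-rule system can be conveyed with a toy example: enumerate candidate analyses, score each one against a preference rule, and keep the best-scoring candidate. The single rule used below (prefer a beat grid on which more note onsets fall) is a stand-in of our own devising, not one of Temperley's actual rules.

```python
# A toy illustration of the preference-rule idea: generate candidate
# metrical grids (a beat spacing and phase), score each by how many note
# onsets coincide with its beats, and prefer the highest-scoring grid.
# This single rule stands in for a whole preference-rule system.

onsets = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 3.5, 4.0]   # onset times in seconds

def score(spacing, phase, onsets, tol=1e-6):
    """Count onsets that coincide with beats at the given spacing/phase."""
    hits = 0
    for t in onsets:
        k = round((t - phase) / spacing)
        if abs((phase + k * spacing) - t) < tol:
            hits += 1
    return hits

candidates = [(s, p) for s in (0.5, 1.0, 1.5) for p in (0.0, 0.25)]
for spacing, phase in candidates:
    print(f"spacing={spacing}, phase={phase}: score={score(spacing, phase, onsets)}")

best = max(candidates, key=lambda c: score(c[0], c[1], onsets))
print("preferred grid:", best)
```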
One further difference between Temperley’s (2001) algorithmic emphasis and Lerdahl and Jackendoff’s (1983) emphasis on competence is reflected in how the theory is refined. Because Temperley’s model is realized as a working computer model, he could easily examine its performance on a variety of input pieces and therefore identify its potential weaknesses. He took advantage of this ability, for example, to propose an additional set of four preference rules for meter, extending the applicability of his algorithm to a broader range of input materials.
To this point, the brief examples provided in this section have been used to illustrate two of the key assumptions made by classical researchers of musical cognition. First, mental representations are used to impose an organization on music that is not physically present in musical stimuli. Second, these representations are classical in nature: they involve different kinds of rules (e.g., preference rules, well-formedness rules) that can be applied to symbolic media that have musical contents (e.g., spatial maps, musical scores, piano-roll representations). A third characteristic is also frequently present in classical theories of musical cognition: the notion that the musical knowledge reflected in these representations is acquired, or can be modified, by experience.
The plasticity of musical knowledge is neither a new idea nor a concept that is exclusively classical. We saw earlier that composers wished to inform their audience about compositional conventions so the latter could better appreciate performances (Copland, 1939). More modern examples of this approach argue that ear training, specialized to deal with some of the complexities of modern music to be introduced later in this chapter, can help to bridge the gaps between composers, performers, and audiences (Friedmann, 1990). Individual differences in musical ability were thought to be a combination of innate and learned information long before the cognitive revolution occurred (Seashore, 1967): “The ear, like the eye, is an instrument, and mental development in music consists in the acquisition of skills and the enrichment of experience through this channel” (p. 3).
The classical approach views the acquisition of musical skills in terms of changes in mental representations. “We learn the structures that we use to represent music” (Sloboda, 1985, p. 6). Krumhansl (1990, p. 286) noted that the robust hierarchies of tonal stability revealed in her research reflect stylistic regularities in Western tonal music. From this she suggests that “it seems probable, then, that abstract tonal and harmonic relations are learned through internalizing distributional properties characteristic of the style.” This view is analogous to those classical theories of perception that propose that the structure of internal representations imposes constraints on visual transformations that mirror the constraints imposed by the physics of the external world (Shepard, 1984b).
Krumhansl’s (1990) internalization hypothesis is one of many classical accounts that have descended from Leonard Meyer’s account of musical meaning arising from emotions manipulated by expectation (Meyer, 1956). “Styles in music are basically complex systems of probability relationships” (p. 54). Indeed, a tremendous variety of musical characteristics can be captured by applying Bayesian models, including rhythm and metre, pitch and melody, and musical style (Temperley, 2007). A great deal of evidence also suggests that expectations about what is to come next are critical determinants of human music perception (Huron, 2006). Temperley argues that classical models of music perception (Lerdahl, 2001; Lerdahl & Jackendoff, 1983; Temperley, 2001) make explicit these probabilistic relationships. “Listeners’ generative models are tuned to reflect the statistical properties of the music that they encounter” (Temperley, 2007, p. 207).
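A minimal Bayesian sketch conveys this idea of internalized probabilities: treat each major key as a hypothesis, give each key a toy note distribution (here, the rotated ordinal profile from the earlier sketches, normalized), and update the posterior over keys as notes are heard. The numbers and the uniform prior are illustrative assumptions; this is not Temperley's (2007) actual model.

```python
# A toy Bayesian update over major-key hypotheses, using the rotated
# ordinal C-major profile (normalized) as a stand-in note distribution.
keys = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
c_major = [4, 1, 2, 1, 3, 2, 1, 3, 1, 2, 1, 2]

def likelihoods(tonic_index):
    """P(note | key): the rotated profile, normalized to sum to 1."""
    prof = [c_major[(i - tonic_index) % 12] for i in range(12)]
    total = sum(prof)
    return [v / total for v in prof]

def posterior(heard_notes):
    """Posterior over the 12 major keys given a list of chromatic indices."""
    post = [1 / 12] * 12                      # uniform prior over keys
    for note in heard_notes:
        post = [p * likelihoods(k)[note] for k, p in enumerate(post)]
        total = sum(post)
        post = [p / total for p in post]      # renormalize after each note
    return post

heard = [0, 4, 7, 0]                          # C, E, G, C as chromatic indices
for name, p in sorted(zip(keys, posterior(heard)), key=lambda x: -x[1])[:3]:
    print(f"{name} major: {p:.2f}")
```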
It was earlier argued that there are distinct parallels between Austro-German classical music and the classical approach to cognitive science. One of the most compelling is that both appeal to abstract, formal structures. It would appear that the classical approach to musical cognition takes this parallel very literally. That is, the representational systems proposed by classical researchers of musical cognition internalize the formal properties of music, and in turn they impose this formal structure on sounds during the perception of music.
The eighteenth-century Industrial Revolution produced profound changes in the nature of European life, transferring power and wealth from the nobility to the commercial class (Plantinga, 1984). Tremendous discontentment with the existing social order, culminating in the French revolution, had a profound influence on political, intellectual, and artistic pursuits. It led to a movement called Romanticism (Claudon, 1980), which roughly spanned the period from the years leading up to the 1789 French revolution through to the end of the nineteenth century.
A precise definition of Romanticism is impossible, for it developed at different times in different countries, and in different arts—first poetry, then painting, and finally music (Einstein, 1947). Romanticism was a reaction against the reason and rationality that characterized the Enlightenment period that preceded it. Romanticism emphasized the individual, the irrational, and the imaginative. Arguably music provided Romanticism’s greatest expression (Einstein, 1947; Plantinga, 1984), because music expressed mystical and imaginative ideas that could not be captured by language.
It is impossible to provide a clear characterization of Romantic music (Einstein, 1947; Longyear, 1988; Plantinga, 1984; Whittall, 1987). “We seek in vain an unequivocal idea of the nature of ‘musical Romanticism’” (Einstein, 1947, p. 4). However, there is general agreement that Romantic music exhibits,
a preference for the original rather than the normative, a pursuit of unique effects and extremes of expressiveness, the mobilization to that end of an enriched harmonic vocabulary, striking new figurations, textures, and tone colors. (Plantinga, 1984, p. 21)
The list of composers who were musical Romanticism’s greatest practitioners begins with Beethoven, and includes Schubert, Mendelssohn, Schumann, Chopin, Berlioz, Liszt, Wagner, and Brahms.
Romantic music can be used to further develop the analogy between classical music and cognitive science. In particular, there are several parallels that exist between musical Romanticism and connectionist cognitive science. The most general similarity between the two is that both are reactions against the Cartesian view of the mind that dominated the Enlightenment.
Romantic composers wished to replace the calculated, rational form of music such as Bach’s contrapuntal fugues (Gaines, 2005; Hofstadter, 1979) with music that expressed intensity of feeling and communicated the sublime. “It was a retrogression to the primitive relationship that man had had to music—to the mysterious, the exciting, the magical” (Einstein, 1947, p. 8). As a result, musical Romanticism championed purely instrumental music: music that was not paired with words. The instrumental music of the Romantics “became the choicest means of saying what could not be said, of expressing something deeper than the word had been able to express” (p. 32). In a famous 1813 passage, music critic E. T. A. Hoffmann proclaimed instrumental music to be “the most romantic of all the arts—one might almost say, the only genuinely romantic one—for its sole subject is the infinite” (Strunk, 1950, p. 775).
Connectionist cognitive science too is a reaction against the rationalism and logicism of Cartesian philosophy. And one form of this reaction parallels Romantic music’s move away from the word: many connectionists interpreted the ability of networks to accomplish classical tasks as evidence that cognitive science need not appeal to explicit rules or symbols (Bechtel & Abrahamsen, 1991; Horgan & Tienson, 1996; Ramsey, Stich, & Rumelhart, 1991; Rumelhart & McClelland, 1986a).
A second aspect of musical Romanticism’s reaction against reason was its emphasis on the imaginary and the sublime. In general, the Romantic arts provided escape by longingly looking back at “unspoiled,” preindustrial existences and by using settings that were wild and fanciful. Nature was a common inspiration. The untamed mountains and chasms of the Alps stood in opposition to the Enlightenment’s view that the world was ordered and structured.
For example, in the novel Frankenstein (Shelley, 1985), after the death of Justine, Victor Frankenstein seeks solace in a mountain journey. The beauty of a valley through which he traveled “was augmented and rendered sublime by the mighty Alps, whose white and shining pyramids and domes towered above all, as belonging to another earth, the habitations of another race of beings” (p. 97). To be sublime was to reflect a greatness that could not be completely understood. “The immense mountains and precipices that overhung me on every side—the sound of the river raging among the rocks, and the dashing of the waterfalls around, spoke of a power mighty as Omnipotence” (p. 97).
Sublime Nature appeared frequently in musical Romanticism. Longyear’s (1988, p. 12) examples include “the forest paintings in Weber’s Der Freischütz or Wagner’s; the landscapes and seascapes of Mendelssohn and Gade; the Alpine pictures in Schumann’s or Tchaikovsky’s Manfred” to name but a few.
Musical Romanticism also took great pains to convey the imaginary or the indescribable (Whittall, 1987). In some striking instances, Romantic composers followed the advice in John Keats’ 1819 Ode on a Grecian Urn, “Heard melodies are sweet, but those unheard / Are sweeter.” Consider Schumann’s piano work Humoreske (Rosen, 1995). It uses three staves: one for the right hand, one for the left, and a third—containing the melody!—which is not to be played at all. Though inaudible, the melody “is embodied in the upper and lower parts as a kind of after resonance—out of phase, delicate, and shadowy” (p. 8). The effects of the melody emerge from playing the other parts.
In certain respects, connectionist cognitive science is sympathetic to musical Romanticism’s emphasis on nature, the sublime, and the imaginary. Cartesian philosophy, and the classical cognitive science that was later inspired by it, view the mind as disembodied, being separate from the natural world. In seeking theories that are biologically plausible and neuronally inspired (McClelland & Rumelhart, 1986; Rumelhart & McClelland, 1986c), connectionists took a small step towards embodiment. Whereas Descartes completely separated the mind from the world, connectionists assume that brains cause minds (Searle, 1984).
Furthermore, connectionists recognize that the mental properties caused by brains may be very difficult to articulate using a rigid set of rules and symbols. One reason that artificial neural networks are used to study music is because they may capture regularities that cannot be rationally expressed (Bharucha, 1999; Rowe, 2001; Todd & Loy, 1991). These regularities emerge from the nonlinear interactions amongst network components (Dawson, 2004; Hillis, 1988). And the difficulty in explaining such interactions suggests that networks are sublime. Artificial neural networks seem to provide “the possibility of constructing intelligence without first understanding it” (Hillis, 1988, p. 176).
Musical Romanticism also celebrated something on a scale less grand than sublime Nature: the individual. Romantic composers broke away from the established system of musical patronage. They began to write music for its own (or for the composer’s own) sake, instead of writing on commission (Einstein, 1947). Beethoven’s piano sonatas were so brilliant and difficult that they were often beyond the capabilities of amateur performers who had mastered Haydn and Mozart. His symphonies were intended to speak “to a humanity that the creative artist had raised to his own level” (p. 38). The subjectivity and individualism of musical Romanticism is one reason that there is no typical symphony, art-song, piano piece or composer from this era (Longyear, 1988).
Individualism was also reflected in the popularity of musical virtuosos, for whom the Romantic period was a golden age (Claudon, 1980). These included the violinists Paganini and Baillot, and the pianists Liszt, Chopin, and Schumann. They were famous not only for their musical prowess, but also for a commercialization of their character that exploited Romanticist ideals (Plantinga, 1984). Paganini and Liszt were “transformed by the Romantic imagination into a particular sort of hero: mysterious, sickly, and bearing the faint marks of dark associations with another world” (Plantinga, 1984, p. 185).
Individualism is also a fundamental characteristic of connectionism. It is not a characteristic of connectionist researchers themselves (but see below), but is instead a characteristic of the networks that they describe. When connectionist simulations are reported, the results are almost invariably provided for individual networks. This was demonstrated in Chapter 4; the interpretations of internal structure presented there are always of individual networks. This is because there are many sources of variation between networks as a result of the manner in which they are randomly initialized (Dawson, 2005). Thus it is unlikely that one network will be identical to another, even though both have learned the same task. Rather than exploring “typical” network properties, it is more expedient to investigate the interesting characteristics that can be found in one of the networks that were successfully trained.
There are famous individual networks that are analogous to musical virtuosos. These include the Jets-Sharks network used to illustrate the interactive activation with competition (IAC) architecture (McClelland & Rumelhart, 1988); a pattern associator that converted English verbs from present to past tense (Pinker & Prince, 1988; Rumelhart & McClelland, 1986a); and the NETtalk system that learned to read aloud (Sejnowski & Rosenberg, 1988).
Individualism revealed itself in another way in musical Romanticism. When Romantic composers wrote music for its own sake, they assumed that its audience would be found later (Einstein, 1947). Unfortunately, “few artists gained recognition without long, difficult struggles” (Riedel, 1969, p. 6). The isolation of the composer from the audience was an example of another Romantic invention: the composer was the misunderstood genius who idealistically pursued art for art’s sake. “The Romantic musician . . . was proud of his isolation. In earlier centuries the idea of misunderstood genius was not only unknown; it was inconceivable” (Einstein, 1947, p. 16).
The isolated genius is a recurring character in modern histories of connectionism, one of which is presented as a fairy tale (Papert, 1988), providing an interesting illustration of the link between Romanticism and connectionism. According to the prevailing view of connectionist history (Anderson & Rosenfeld, 1998; Hecht-Nielsen, 1987; Medler, 1998; Olazaran, 1996), the isolation of the neural net researcher began with a crusade by Minsky and Papert, prior to the publication of Perceptrons (Minsky & Papert, 1969), against research funding for perceptron-like systems.
Minsky and Papert’s campaign achieved its purpose. The common wisdom that neural networks were a research dead-end became firmly established. Artificial intelligence researchers got all of the neural network research money and more. The world had been reordered. And neurocomputing had to go underground. (Hecht-Nielsen, 1987, p. 17)
Going underground, at least in North America, meant that connectionist research was conducted sparingly, disguised by labels such as “adaptive pattern recognition” and “biological modelling,” during the “quiet years” from 1967 until 1982 (Hecht-Nielsen, 1987). A handful of neural network researchers “struggled through the entire span of quiet years in obscurity.” While it did not completely disappear, “neural-net activity decreased significantly and was displaced to areas outside AI (it was considered ‘deviant’ within AI)” (Olazaran, 1996, p. 642). Like the Romantic composers they resemble, these isolated connectionist researchers conducted science for science’s sake, with little funding, waiting for an audience to catch up—which occurred with the 1980s rise of New Connectionism.
Even though Romanticism can be thought of as a musical revolution, it did not abandon the old forms completely. Instead, Romanticist composers adapted them, and explored them, for their own purposes. For example, consider the history of the symphony. In the early seventeenth century, the symphony was merely a short overture played before the raising of the curtains at an opera (Lee, 1916). Later, the more interesting of these compositions came to be performed to their own audiences outside the theatre. The modern symphony, which typically consists of four movements (each with an expected form and tempo), begins to be seen in the eighteenth-century compositions of Carl Philipp Emanuel Bach. Experiments with this structure were conducted in the later eighteenth century by Haydn and Mozart. When Beethoven wrote his symphonies in the early nineteenth century, the modern symphonic form was established—and likely perfected. “No less a person than Richard Wagner affirmed that the right of composing symphonies was abolished by Beethoven’s Ninth” (p. 172).
Beethoven is often taken to be the first Romantic composer because he also proved that the symphony had enormous expressive power. The Romantic composers who followed in his footsteps did not introduce dramatic changes in musical form; rather they explored variations within this form in attempts to heighten its emotional expressiveness. “Strictly speaking, no doubt, musical Romanticism is more style than language” (Whittall, 1987, p. 17). Romantic composers developed radically new approaches to instrumentation, producing new tone colours (Ratner, 1992). The amount of sound was manipulated as an expressive tool; Romanticists increased “the compass, dynamic range, and timbral intensity of virtually all instruments” (Ratner, 1992, p. 9). New harmonic progressions were invented. But all of these expressive innovations involved relaxing, rather than replacing, classical conventions. “There can be little doubt that ‘romantic’ musical styles emanate from and comingle with ‘classic’ ones. There is no isolable time and place where one leaves off and the other begins” (Plantinga, 1984, p. 22).
Connectionist cognitive science has been portrayed as a revolution (Hanson & Olson, 1991) and as a paradigm shift (Schneider, 1987). However, it is important to remember that it, like musical Romanticism, also shares many of the characteristics of the classical school that it reacted against.
For instance, connectionists don’t abandon the notion of information processing; they argue that the brain is just a different kind of information processor than is a digital computer (Churchland, Koch, & Sejnowski, 1990). Connectionists don’t discard the need for representations; they instead offer different kinds, such as distributed representations (Hinton, McClelland, & Rumelhart, 1986). Connectionists don’t dispose of symbolic accounts; they propose instead that symbolic accounts are approximations to subsymbolic regularities (Smolensky, 1988).
Furthermore, it was argued earlier in this book that connectionist cognitive science cannot be distinguished from classical cognitive science on many other dimensions, including the adoption of functionalism (Douglas & Martin, 1991) and the classical sandwich (Calvo & Gomila, 2008; Clark, 1997). When these two approaches are compared in the context of the multiple levels of investigation discussed in Chapter 2, there are many similarities between them:
Indeed, the fact that the two can be compared in this way at all indicates a commitment to a common paradigm—an endorsement of the foundational assumption of cognitive science: cognition is information processing. (Dawson, 1998, p. 298)
Copland (1952, pp. 69–70) argued that the drama of European music was defined by two polar forces: “the pull of tradition as against the attraction of innovation.” These competing forces certainly contributed to the contradictory variety found in musical Romanticism (Einstein, 1947); perhaps they too have shaped modern connectionist cognitive science. This issue can be explored by considering connectionist approaches to musical cognition and comparing them to the classical research on musical cognition that was described earlier in the current chapter.
Connectionist research on musical cognition is perhaps not as established as classical research, but it has nonetheless produced a substantial and growing literature (Bharucha, 1999; Fiske, 2004; Griffith & Todd, 1999; Todd & Loy, 1991). The purpose of this section is to provide a very brief orientation to this research. As the section develops, the relationship of connectionist musical cognition to certain aspects of musical Romanticism is illustrated.
By the late 1980s, New Connectionism had begun to influence research on musical cognition. The effects of this spreading influence have been documented in two collections of research papers (Griffith & Todd, 1999; Todd & Loy, 1991). Connectionist musical cognition has been studied with a wide variety of network architectures, and covers a broad range of topics, most notably classifying pitch and tonality, assigning rhythm and metre, classifying and completing melodic structure, and composing new musical pieces (Griffith & Todd, 1999).
Why use neural networks to study musical cognition? Bharucha (1999) provided five reasons. First, artificial neural networks can account for the learning of musical patterns via environmental exposure. Second, the type of learning that they describe is biologically plausible. Third, they provide a natural and biologically plausible account of contextual effects and pattern completion during perception. Fourth, they are particularly well suited to modeling similarity-based regularities that are important in theories of musical cognition. Fifth, they can discover regularities (e.g., in musical styles) that can elude more formal analyses.
To begin our survey of connectionist musical cognition, let us consider the artificial neural network classifications of pitch, tonality, and harmony (Griffith & Todd, 1999; Purwins et al., 2008). A wide variety of such tasks have been successfully explored: artificial neural networks have been trained to classify chords (Laden & Keefe, 1989; Yaremchuk & Dawson, 2005; Yaremchuk & Dawson, 2008), assign notes to tonal schema similar to the structures proposed by Krumhansl (1990) (Leman, 1991; Scarborough, Miller, & Jones, 1989), model the effects of expectation on pitch perception and other aspects of musical perception (Bharucha, 1987; Bharucha & Todd, 1989), add harmony to melodies (Shibata, 1991), determine the musical key of a melody (Griffith, 1995), and detect the chord patterns in a composition (Gjerdingen, 1992).
Artificial neural networks are well suited for this wide range of pitch-related tasks because of their ability to exploit contextual information, which in turn permits them to deal with noisy inputs. For example, networks are capable of pattern completion: restoring information that is missing from imperfect input patterns. In musical cognition, one example of pattern completion is virtual pitch (Terhardt, Stoll, & Seewann, 1982a, 1982b), the perception of a pitch whose fundamental frequency is physically absent from the sound.
Consider a complex musical tone whose lowest-frequency component is a sine wave of frequency \(f\), called the fundamental. When we hear such a sound, its pitch (i.e., its tonal height, or the note that we experience) is typically associated with this fundamental frequency (Helmholtz & Ellis, 1954; Seashore, 1967). The harmonics of the tone are other sine waves whose frequencies are integer multiples of \(f\) (i.e., \(2f\), \(3f\), \(4f\), and so on). The timbre of the sound (whether we can identify a tone as coming from, for example, a piano versus a clarinet) is a function of the relative amplitudes of the various harmonics that are also audible (Seashore, 1967).
Interestingly, when a complex sound is filtered so that its fundamental frequency is removed, our perception of its pitch is not affected (Fletcher, 1924). It is as if the presence of the other harmonics provides enough information for the auditory system to fill in the missing fundamental, so that the correct pitch is heard, an effect reminiscent of the unplayed melody that emerges in Schumann’s Humoreske. Co-operative interactions amongst neurons that detect the remaining harmonics are likely responsible for this effect (Cedolin & Delgutte, 2010; Smith et al., 1978; Zatorre, 2005).
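The acoustics at issue can be illustrated with a short sketch; the frequency, sampling rate, and number of harmonics below are arbitrary values chosen for illustration. A tone built only from harmonics \(2f\) through \(5f\) contains no energy at \(f\), yet its waveform still repeats at the period of the missing fundamental, which is one reason the auditory system has enough information available to restore it.

```python
import numpy as np

# Synthesize one second of a tone whose fundamental (200 Hz) is absent:
# only harmonics 2f through 5f are present. All values are illustrative.
f = 200.0
sample_rate = 48000
t = np.arange(0, 1.0, 1.0 / sample_rate)
tone = sum(np.sin(2 * np.pi * k * f * t) / k for k in range(2, 6))

# The waveform still repeats every 1/f seconds, the period of the missing
# fundamental, even though no 200 Hz component was included.
period_samples = int(sample_rate / f)   # 240 samples at 48 kHz
print(np.allclose(tone[:-period_samples], tone[period_samples:], atol=1e-6))
```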
Artificial neural networks can easily model such co-operative processing and complete the missing fundamental. For instance, one important connectionist system is called a Hopfield network (Hopfield, 1982, 1984). It is an autoassociative network that has only one set of processing units, which are all interconnected. When a pattern of activity is presented to this type of network, signals spread rapidly to all of the processors, producing dynamic interactions that cause the network’s units to turn on or off over time. Eventually the network will stabilize in a least-energy state; dynamic changes in processor activities will come to a halt.
Hopfield networks can be used to model virtual pitch, because they complete the missing fundamental (Benuskova, 1994). In this network, each processor represents a sine wave of a particular frequency; a processor that is on represents the presence of that sine wave. If a subset of processors is activated to represent a stimulus that is a set of harmonics with a missing fundamental, then when the network stabilizes, the processor representing the missing fundamental will also be activated. Other kinds of self-organizing networks are also capable of completing the missing fundamental (Sano & Jenkins, 1989).
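A minimal sketch conveys how such a network completes the missing fundamental. The eight “harmonic detector” units, the single stored pattern, and the simple Hebbian training below are simplifications assumed for illustration; Benuskova’s (1994) model is more elaborate.

```python
import numpy as np

# A tiny Hopfield-style autoassociator over eight "harmonic detector" units.
def train(patterns):
    # Hebbian learning: units that are active together become mutually excitatory.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        v = 2 * p - 1                    # recode 0/1 activity as -1/+1
        W += np.outer(v, v)
    np.fill_diagonal(W, 0)               # no self-connections
    return W

def settle(W, pattern, sweeps=20):
    state = 2 * pattern - 1
    for _ in range(sweeps):              # asynchronous updates until stable
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return (state + 1) // 2

# Stored memory: a complex tone with components at f, 2f, 3f, and 4f.
harmonic_tone = np.array([1, 1, 1, 1, 0, 0, 0, 0])
W = train(np.array([harmonic_tone]))

# Probe: the same tone with its fundamental (the first unit) filtered out.
probe = np.array([0, 1, 1, 1, 0, 0, 0, 0])
print(settle(W, probe))                  # prints [1 1 1 1 0 0 0 0]
```

As the network settles into its stored least-energy state, the co-operative excitation from the remaining harmonic detectors switches the fundamental’s unit back on, which is the sense in which the network “hears” the missing pitch.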
An artificial neural network’s ability to deal with noisy inputs allows it to cope with other domains of musical cognition as well, such as assigning rhythm and metre (Desain & Honing, 1989; Griffith & Todd, 1999). Classical models of this type of processing hierarchically assign a structure of beats to different levels of a piece, employing rules that take advantage of the fact that musical rhythm and metre are associated with integer values (e.g., as defined by time signatures, or in the definition of note durations such as whole notes, quarter notes, and so on) (Lerdahl & Jackendoff, 1983; Temperley, 2001). However, in the actual performance of a piece, beats will be noisy or imperfect, such that perfect integer ratios of beats will not occur (Gasser, Eck, & Port, 1999). Connectionist models can correct for this problem, much as networks can restore absent information such as the missing fundamental.
For example, one network for assigning rhythm and metre uses a system of oscillating processors, units that fire at a set frequency (Large & Kolen, 1994). One can imagine having available a large number of such oscillators, each representing a different frequency. While an oscillator’s frequency of activity is constant, its phase of activity can be shifted (e.g., to permit an oscillator to align itself with external beats of the same frequency). If the phases of these processors can also be affected by co-operative and competitive interactions between the processors themselves, then the phases of the various components of the system can become entrained. This permits the network to represent the metrical structure of a musical input, even if the actual input is noisy or imperfect. This notion can be elaborated in a self-organizing network that permits preferences for, or expectancies of, certain rhythmic patterns to determine the final representation that the network converges to (Gasser, Eck, & Port, 1999).
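The entrainment idea can be sketched very simply. In the toy example below, each oscillator has a fixed period but an adjustable phase, and each onset it hears nudges its phase toward that onset; the periods, coupling strength, and noise level are assumptions for illustration rather than parameters of the Large and Kolen (1994) model.

```python
import numpy as np

rng = np.random.default_rng(2)

# A noisy stream of onsets roughly every 0.5 s (about 120 beats per minute).
onsets = np.arange(0.0, 10.0, 0.5) + rng.normal(0.0, 0.02, 20)

# Three oscillators with fixed periods (in seconds) but adjustable phases.
periods = np.array([1.0, 0.5, 0.33])
phases = np.zeros(3)
coupling = 0.3
total_error = np.zeros(3)

for t in onsets:
    # Signed distance from each oscillator's nearest predicted beat.
    dev = (t - phases + periods / 2) % periods - periods / 2
    phases = phases + coupling * dev      # shift phase toward the heard onset
    total_error += np.abs(dev)

# The oscillator whose period matches the beat ends up tightly entrained.
print(np.round(total_error / len(onsets), 3))
```

In this toy run the 0.5 s oscillator accumulates the smallest deviations, which is the sense in which the metrical level of a noisy input comes to be represented by the unit that entrains to it.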
The artificial neural network examples provided above illustrate another of Bharucha’s (1999) advantages of such models: biological plausibility. Many neural network models are attempts to simulate some aspects of neural accounts of auditory and musical perception. For instance, place theory is the proposal that musical pitch is represented by places of activity along the basilar membrane in the cochlea (Helmholtz & Ellis, 1954; von Bekesy, 1928). The implications of place theory can be explored by using it to inspire spatial representations of musical inputs to connectionist networks (Sano & Jenkins, 1989).
The link between connectionist accounts and biological accounts of musical cognition is not accidental, because both reflect reactions against common criticisms. Classical cognitive scientist Steven Pinker is a noted critic of connectionist cognitive science (Pinker, 2002; Pinker & Prince, 1988). Pinker (1997) has also been a leading proponent of massive modularity, which ascribes neural modules to most cognitive faculties—except for music. Pinker excluded music because he could not see any adaptive value for its natural selection: “As far as biological cause and effect are concerned, music is useless. It shows no signs of design for attaining a goal such as long life, grandchildren, or accurate perception and prediction of the world” (p. 528). The rise of modern research in the cognitive neuroscience of music (Cedolin & Delgutte, 2010; Peretz & Coltheart, 2003; Peretz & Zatorre, 2003; Purwins et al., 2008; Stewart et al., 2006; Warren, 2008) is a reaction against this classical position, and finds a natural ally in musical connectionism.
In the analogy laid out in the previous section, connectionism’s appeal to the brain was presented as an example of its Romanticism. Connectionist research on musical cognition reveals other Romanticist parallels. Like musical Romanticism, connectionism is positioned to capture regularities that are difficult to express in language or by using formal rules (Loy, 1991).
For example, human subjects can accurately classify short musical selections into different genres or styles in a remarkably short period of time, within a quarter of a second (Gjerdingen & Perrott, 2008). But it is difficult to see how a classical model could account for this ability, because genres and styles resist precise formal definition. “It is not likely that musical styles can be isolated successfully by simple heuristics and introspection, nor can they be readily modeled as a rule-solving problem” (Loy, 1991, p. 31).
However, many different artificial neural networks have been developed to classify music using categories that seem to defy precise, formal definitions. These include networks that can classify musical patterns as belonging to the early works of Mozart (Gjerdingen, 1990); classify selections as belonging to different genres of Western music (Mostafa & Billor, 2009); detect patterns of movement between notes in segments of music (Gjerdingen, 1994) in a fashion similar to a model of apparent motion perception (Grossberg & Rudd, 1989, 1992); evaluate the affective aesthetics of a melody (Coutinho & Cangelosi, 2009; Katz, 1995); and even predict the possibility that a particular song has “hit potential” (Monterola et al., 2009).
Categories such as genre or hit potential are obviously vague. However, even identifying a stimulus as being a particular song or melody may also be difficult to define formally. This is because a melody can be transposed into different keys, performed by different instruments or voices, or even embellished by adding improvisational flourishes.
Again, melody recognition can be accomplished by artificial neural networks that map, for instance, transposed versions of the same musical segment onto a single output representation (Benuskova, 1995; Bharucha & Todd, 1989; Page, 1994; Stevens & Latimer, 1992). Neural network melody recognition has implications for other aspects of musical cognition, such as the representational format for musical memories. For instance, self-organizing networks can represent the hierarchical structure of a musical piece in an abstract enough fashion so that only the “gist” is encoded, permitting the same memory to be linked to multiple auditory variations (Large, Palmer, & Pollack, 1995). Auditory processing organizes information into separate streams (Bregman, 1990); neural networks can accomplish this for musical inputs by processing relationships amongst pitches (Grossberg, 1999).
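One simple illustration of why a network can map transposed versions of a melody onto the same representation is to encode the melody by its successive pitch intervals rather than by its absolute pitches. Interval coding is a hypothetical choice made here for illustration, not necessarily the input representation used in the cited models.

```python
# A melody and its transposition share the same sequence of pitch intervals,
# so an interval-based code presents them identically to a network.
def intervals(midi_notes):
    return [b - a for a, b in zip(midi_notes, midi_notes[1:])]

motif_in_c = [60, 62, 64, 60, 60, 62, 64, 60]    # a simple motif in C major
motif_in_g = [n + 7 for n in motif_in_c]         # the same motif transposed up a fifth

print(intervals(motif_in_c) == intervals(motif_in_g))   # True
```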
The insights into musical representation that are being provided by artificial neural networks have important implications beyond musical cognition. There is now wide availability of music and multimedia materials in digital format. How can such material be classified and searched? Artificial neural networks are proving to be useful in addressing this problem, as well as for providing adaptive systems for selecting music, or generating musical playlists, based on a user’s mood or past preferences (Bugatti, Flammini, & Migliorati, 2002; Jun, Rho, & Hwang, 2010; Liu, Hsieh, & Tsai, 2010; Muñoz-Expósito et al., 2007).
Musical styles, or individual musical pieces, are difficult to precisely define, and therefore are problematic to incorporate into classical theories. “The fact that even mature theories of music are informal is strong evidence that the performer, the listener, and the composer do not operate principally as rule-based problem solvers” (Loy, 1991, p. 31). That artificial neural networks are capable of classifying music in terms of such vague categories indicates that “perhaps connectionism can show the way to techniques that do not have the liabilities of strictly formal systems” (p. 31). In other words, the flexibility and informality of connectionist systems allows them to cope with situations that may be beyond the capacity of classical models. Might not this advantage also apply to another aspect of musical cognition, composition?
Composition has in fact been one of the most successful applications of musical connectionism. A wide variety of composing networks have been developed: networks that compose single-voiced melodies on the basis of learned musical structure (Mozer, 1991; Todd, 1989); networks that compose harmonized melodies or multiple-voice pieces (Adiloglu & Alpaslan, 2007; Bellgard & Tsang, 1994; Hoover & Stanley, 2009; Mozer, 1994); networks that learn jazz melodies and harmonies, and then use this information to generate new melodies when presented with novel harmonies (Franklin, 2006); and networks that improvise by composing variations on learned melodies (Nagashima & Kawashima, 1997). The logic of network composition is that the relationship between successive notes in a melody, or between different notes played at the same time in a harmonized or multiple-voice piece, is not random, but is instead governed by stylistic, melodic, and acoustic constraints (Kohonen et al., 1991; Lewis, 1991; Mozer, 1991, 1994). Networks are capable of learning such constraints and using them to predict, for example, what the next note should be in a new composition.
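The flavour of this approach can be conveyed with a deliberately small sketch: a single-layer network learns note-to-note transition constraints from a toy melody with the delta rule, and then proposes a continuation by sampling from its learned expectations. The melody, alphabet size, and learning parameters are invented for illustration; none of the cited composing networks is this simple.

```python
import numpy as np

rng = np.random.default_rng(0)

melody = [0, 2, 4, 5, 4, 2, 0, 2, 4, 7, 5, 4, 2, 0]   # toy training melody (scale degrees)
n_notes = 8                                           # size of the pitch alphabet

def one_hot(i):
    v = np.zeros(n_notes)
    v[i] = 1.0
    return v

# Single-layer predictor: current note in, expected next note out.
W = np.zeros((n_notes, n_notes))
for _ in range(200):
    for cur, nxt in zip(melody[:-1], melody[1:]):
        x, target = one_hot(cur), one_hot(nxt)
        W += 0.1 * np.outer(target - W @ x, x)        # delta-rule weight update

# Compose: start on the tonic and repeatedly sample from the learned transitions.
note, composed = 0, [0]
for _ in range(12):
    expectation = np.clip(W @ one_hot(note), 1e-6, None)
    note = int(rng.choice(n_notes, p=expectation / expectation.sum()))
    composed.append(note)
print(composed)
```

Because the weights encode which notes tend to follow which, the sampled continuation respects the melodic constraints of the training material without those constraints ever being stated as explicit rules.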
In keeping with musical Romanticism, however, composing networks are presumed to have internalized constraints that are difficult to formalize or to express in ordinary language. “Nonconnectionist algorithmic approaches in the computer arts have often met with the difficulty that ‘laws’ of art are characteristically fuzzy and ill-suited for algorithmic description” (Lewis, 1991, p. 212). Furthermore, these “laws” are unlikely to be gleaned from analyzing the internal structure of a network, “since the hidden units typically compute some complicated, often uninterpretable function of their inputs” (Todd, 1989, p. 31). It is too early to label a composing network as an isolated genius, but it would appear that these networks are exploiting regularities that are in some sense sublime!
This particular parallel between musical Romanticism and connectionism, that both capture regularities that cannot be formalized, is apparent in another interesting characteristic of musical connectionism. The most popular algorithm for training artificial neural networks is the generalized delta rule (i.e., error backpropagation) (Chauvin & Rumelhart, 1995; Widrow & Lehr, 1990), and networks trained with this kind of supervised learning rule are the most likely to be found in the cognitive science literature. While self-organizing networks are present in this literature and have made important contributions to it (Amit, 1989; Carpenter & Grossberg, 1992; Grossberg, 1988; Kohonen, 1984, 2001), they are much less popular. However, this does not seem to be the case in musical connectionism.
For example, in the two collections that document advances in artificial neural network applications to musical cognition (Griffith & Todd, 1999; Todd & Loy, 1991), 23 papers describe new neural networks. Of these contributions, 9 involve supervised learning, while 14 describe unsupervised, self-organizing networks. This indicates a marked preference for unsupervised networks in this particular connectionist literature.
This preference is likely due to the view that supervised learning is not practical for musical cognition, either because many musical regularities can be acquired without feedback or supervision (Bharucha, 1991) or because for higher-level musical tasks the definition of the required feedback is impossible to formalize (Gjerdingen, 1989). “One wonders, for example, if anyone would be comfortable in claiming that one interpretation of a musical phrase is only 69 percent [as] true as another” (p. 67). This suggests that the musical Romanticism of connectionism is even reflected in its choice of network architectures. | textbooks/socialsci/Psychology/Cognitive_Psychology/Mind_Body_World_-_Foundations_of_Cognitive_Science_(Dawson)/06%3A_Classical_Music_and_Cognitive_Science/6.05%3A_The_Connectionist_Approach_to_Musical_Cognition.txt |
European classical music is innovation constrained by tradition (Copland, 1952). By the end of the nineteenth century, composers had invented a market for instrumental music by refining established musical conventions (Rosen, 1988). “The European musician is forced into the position of acting as caretaker and preserver of other men’s music, whether he likes it or no” (Copland, 1952, p. 69).
What are the general characteristics of European classical music? Consider the sonata-allegro form, which is based upon particular musical themes or melodies that are associated with a specific tonality. That is, they are written in a particular musical key. This tonality dictates harmonic structure; within a musical key, certain notes or chords will be consonant, while others will not be played because of their dissonance. The sonata-allegro form also dictates an expected order in which themes and musical keys are explored and a definite time signature to be used throughout.
The key feature among these is tonality, the use of particular musical keys to establish an expected harmonic structure. “Harmony is Western music’s uniquely distinguishing element” (Pleasants, 1955, p. 97). It was a reaction against this distinguishing characteristic that led to what is known as modern music (Griffiths, 1994, 1995; Ross, 2007). This section further explores the analogy between classical music and cognitive science via parallels between modern music and embodied cognitive science.
In the early twentieth century, classical music found itself in a crisis of harmony (Pleasants, 1955). Composers began to abandon most of the characteristics of traditional European classical music in an attempt to create a new music that better reflected modern times. “‘Is it not our duty,’ [Debussy] asked, ‘to find a symphonic means to express our time, one that evokes the progress, the daring and the victories of modern days? The century of the aeroplane deserves its music’” (Griffiths, 1994, p. 98).
Modern music is said to have begun with the Prélude à L’après-midi d’un faune composed by Claude Debussy between 1892 and 1894 (Griffiths, 1994). The Prélude breaks away from the harmonic relationships defined by strict tonality. It fails to logically develop themes. It employs fluctuating tempos and irregular rhythms. It depends critically on instrumentation for expression. Debussy “had little time for the thorough, continuous, symphonic manner of the Austro-German tradition, the ‘logical’ development of ideas which gives music the effect of a narrative” (p. 9).
Debussy had opened the paths of modern music—the abandonment of traditional tonality, the development of new rhythmic complexity, the recognition of color as an essential, the creation of a quite new form for each work, the exploration of deeper mental processes. (Griffiths, 1994, p. 12)
In the twentieth century, composers experimented with new methods that further pursued these paths and exploited notions related to emergence, embodiment, and stigmergy.
To begin, let us consider how modern music addressed the crisis of harmony by composing deliberately atonal music. The possibility of atonality in music emerges from the definition of musical tonality. In Western music there are 12 notes available within each octave. If all of these notes are played in order from lowest to highest, with each successive note a semitone higher than the last, the result is a chromatic scale.
Different kinds of scales are created by invoking constraints that prevent some notes from being played, as addressed in the Chapter 4 discussion of jazz progressions. A major scale is produced when a particular set of 7 notes is played, and the remaining 5 notes are not played. Because a major scale does not include all of the notes in a chromatic scale, it has a distinctive sound—its tonality. A composition with a tonal center of A major includes only those notes that belong to the A-major scale.
This implies that producing atonal music requires including all of the notes of the chromatic scale. If all twelve notes are used, then it is impossible to associate the music with a single tonal center. One method of ensuring atonality is the “twelve-tone technique,” or dodecaphony, invented by Arnold Schoenberg.
When the twelve-tone technique is employed, a composer starts by listing all twelve possible notes in some desired order, called the tone row. The tone row is the basis for a melody: the composer begins to write the melody by using the first note in the tone row, for a desired duration, possibly with repetition. However, this note cannot be reused in the melody until the remaining notes have also been used in the order specified by the tone row. This ensures that the melody is atonal, because all of the notes that make up a chromatic scale have been included. Once all twelve notes have been used, the tone row is used to create the next section of the melody. At this time, it can be systematically manipulated (e.g., transposed, inverted, or played in reverse) to produce musical variation.
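The procedure is mechanical enough to express as a short sketch. The particular row below is generated at random, and the validity check ignores the immediate repetitions that the technique permits; both are simplifying assumptions made for illustration.

```python
import random

random.seed(1923)   # the year of Schoenberg's Suite for Piano; any seed will do

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# A tone row: all twelve pitch classes, each used exactly once, in a chosen order.
row = list(range(12))
random.shuffle(row)

def retrograde(row):
    return row[::-1]                    # the row played backwards

def inversion(row):
    first = row[0]                      # mirror each interval around the first note
    return [(2 * first - p) % 12 for p in row]

def is_complete_statement(pitches):
    # Atonality is guaranteed because every pitch class of the chromatic scale appears.
    return sorted(pitches) == list(range(12))

print([NOTE_NAMES[p] for p in row])
print([NOTE_NAMES[p] for p in inversion(row)])
print(is_complete_statement(row), is_complete_statement(retrograde(row)))
```

Running the sketch prints the row, its inversion, and confirmation that both statements use all twelve pitch classes, the guarantee on which the technique rests.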
The first dodecaphonic composition was Schoenberg’s 1923 Suite for Piano, Op. 25. Schoenberg and his students Alban Berg and Anton Webern composed extensively using the twelve-note technique. A later movement, known as total or integral serialism, used similar systems to determine other parameters of a score, such as note durations and dynamics. It was explored by Olivier Messiaen and his followers, notably Pierre Boulez and Karlheinz Stockhausen (Griffiths, 1995).
Dodecaphony provided an alternative to the traditional forms of classical music. However, it still adhered to the Austro-German tradition’s need for structure. Schoenberg invented dodecaphony because he needed a system to compose larger-scale atonal works; prior to its invention he was “troubled by the lack of system, the absence of harmonic bearings on which large forms might be directed. Serialism at last offered a new means of achieving order” (Griffiths, 1994, p. 81).
A new generation of American composers recognized that dodecaphony and serialism were still strongly tied to musical tradition: “To me, it was music of the past, passing itself off as music of the present” (Glass, 1987, p. 13). Critics accused serialist compositions of being mathematical or mechanical (Griffiths, 1994), and serialism did in fact make computer composition possible: in 1964 Gottfried Koenig created Project 1, which was a computer program that composed serial music (Koenig, 1999).
Serialism also shared the traditional approach’s disdain for the audience. American composer Steve Reich (1974, p. 10) noted that “in serial music, the series itself is seldom audible,” which appears to be a serial composer’s intent (Griffiths, 1994). Bernstein (1976, p. 273) wrote that Schoenberg “produced a music that was extremely difficult for the listener to follow, in either form or content.” This music’s opacity, and its decidedly different or modern sound, frequently led to hostile receptions. One notable example is The Agony of Modern Music:
The vein for which three hundred years offered a seemingly inexhaustible yield of beautiful music has run out. What we know as modern music is the noise made by deluded speculators picking through the slag pile. (Pleasants, 1955, p. 3)
That serial music was derived from a new kind of formalism also fuelled its critics.
Faced with complex and lengthy analyses, baffling terminology and a total rejection of common paradigms of musical expression, many critics—not all conservative—found ample ammunition to back up their claims that serial music was a mere intellectual exercise which could not seriously be regarded as music at all. (Grant, 2001, p. 3)
Serialism revealed that European composers had difficulty breaking free of the old forms even when they recognized a need for new music (Griffiths, 1994). Schoenberg wrote, “I am at least as conservative as Edison and Ford have been. But I am, unfortunately, not quite as progressive as they were in their own fields” (Griffiths, 1995, p. 50).
American composers rejected the new atonal structures (Bernstein, 1976). Philip Glass described his feelings about serialism so: “A wasteland, dominated by these maniacs, these creeps, who were trying to make everyone write this crazy creepy music” (Schwarz, 1996). When Glass attended concerts, the only “breaths of fresh air” that he experienced were when works from modern American composers such as John Cage were on the program (Glass, 1987). Leonard Bernstein (1976, p. 273) wrote that “free atonality was in itself a point of no return. It seemed to fulfill the conditions for musical progress. . . . But then: a dead end. Where did one go from here?” The new American music was more progressive than its European counterpart because its composers were far less shackled by musical traditions.
For instance, American composers were willing to relinquish the central control of the musical score, recognizing the improvisational elements of classical composition (Benson, 2003). Some were even willing to surrender the composer’s control over the piece (Cage, 1961), recognizing that many musical effects depended upon the audience’s perceptual processes (Potter, 2000; Schwarz, 1996). It was therefore not atonality itself but instead the American reaction to it that led to a classical music with clear links to embodied cognitive science.
Consider, for instance, the implications of relinquishing centralized control in modern music. John Cage was largely motivated by his desire to free musical compositions from the composer’s will. He wrote that “when silence, generally speaking, is not in evidence, the will of the composer is. Inherent silence is equivalent to denial of the will” (Cage, 1961, p. 53). Cage’s most famous example of relinquishing control is in his “silent piece,” 4’33”, first performed by pianist David Tudor in 1952 (Nyman, 1999). It consists of three parts; the entire score for each part reads “TACET,” which instructs the performer to remain silent. Tudor signaled the start of each part by closing the keyboard lid, and opened the lid when the part was over.
4’33” places tremendous compositional responsibility upon its audience. Cage is quoted on this subject as saying:
Most people think that when they hear a piece of music, they’re not doing anything but something is being done to them. Now this is not true, and we must arrange our music, we must arrange our art, we must arrange everything, I believe, so that people realize that they themselves are doing it. (Nyman, 1999, p. 24)
This is contrary to the traditional disembodiment of classical music that treats audiences as being passive and unimportant.
Cage pioneered other innovations as he decentralized control in his compositions. From the early 1950s onwards, he made extended use of chance operations when he composed. Cage consulted the I Ching, by tossing coins, to determine the order of sounds in his 1951 piano piece Music of Changes (Ross, 2007). The stochastic nature of Cage’s compositional practices did not produce music that sounded random. This is because Cage put tremendous effort into choosing interesting sound elements. “In the Music of Changes the effect of the chance operations on the structure (making very apparent its anachronistic character) was balanced by a control of the materials” (Cage, 1961, p. 26). Cage relaxed his influence on control—that is, upon which element to perform next—with the expectation that this, coupled with his careful choice of the elements available to chance, would produce surprising and interesting musical results. Cage intended novel results to emerge from his compositions.
The combination of well-considered building blocks to produce emergent behaviours that surprise and inform is characteristic of embodied cognitive science (Braitenberg, 1984; Brooks, 1999; Dawson, 2004; Dawson, Dupuis, & Wilson, 2010; Pfeifer & Scheier, 1999; Webb & Consi, 2001).
Advances in synthetic psychology come about by taking a set of components, by letting them interact, and by observing surprising emergent phenomena. However, the role of theory and prior knowledge in this endeavor is still fundamentally important, because it guides decisions about what components to select, and about the possible dynamics of their interaction. In the words of Cervantes, diligence is the mother of good luck. (Dawson, 2004, p. 22)
An emphasis on active audiences and emergent effects is also found in the works of other composers inspired by Cage (Schwarz, 1996). For instance, compositions that incorporated sounds recorded on magnetic tape were prominent in early minimalist music. Minimalist pioneer Terry Riley began working with tape technology in 1960 (Potter, 2000). He recorded a variety of sounds and made tape loops from them. A tape loop permitted a sound segment to be repeated over and over. He then mixed these tapes using a device called an echoplex that permitted the sounds “to be repeated in an ever-accumulating counterpoint against itself” (p. 98). Further complexities of sound were produced by either gradually or suddenly changing the speed of the tape to distort the tape loop’s frequency. Riley’s tape loop experiments led him to explore the effects of repetition, which was to become a centrally important feature of minimalist music.
Riley’s work strongly influenced other minimalist composers. One of the most famous minimalist tape compositions is Steve Reich’s 1965 It’s Gonna Rain. Reich recorded a sermon of a famous street preacher, Brother Walter, who made frequent Sunday appearances in San Francisco’s Union Square. From this recording, Reich made a tape loop of a segment of the sermon that contained the title phrase. Reich (2002) played two copies of this tape loop simultaneously on different tape machines, and made a profound discovery:
In the process of trying to line up two identical tape loops in some particular relationship, I discovered that the most interesting music of all was made by simply lining the loops up in unison, and letting them slowly shift out of phase with each other. (Reich, 2002, p. 20)
He recorded the result of phase-shifting the loops, and composed his piece by phase-shifting a loop of this recording. Composer Brian Eno describes Reich’s It’s Gonna Rain thus:
The piece is very, very interesting because it’s tremendously simple. It’s a piece of music that anybody could have made. But the results, sonically, are very complex. . . . What you become aware of is that you are getting a huge amount of material and experience from a very, very simple starting point. (Eno, 1996)
The complexities of It’s Gonna Rain emerge from the dynamic combination of simple components, and thus are easily linked to the surrender of control that was begun by John Cage. However, they also depend to a large extent upon the perceptual processes of a listener when confronted with the continuous repetition of sound fragments. “The mind is mesmerized by repetition, put into such a state that small motifs can leap out of the music with a distinctness quite unrelated to their acoustic dominance” (Griffiths, 1994, p. 167). From a perceptual point of view, it is impossible to maintain a constant perception of a repeated sound segment. During the course of listening, the perceptual system will habituate to some aspects of it, and as a result—as if by chance—new regularities will emerge. “The listening experience itself can become aleatory in music[,] subject to ‘aural illusions’” (p. 166).
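The mechanics of the phase-shifting process itself are easy to simulate; the loop duration and the slight speed difference below are invented values, not measurements of Reich’s tapes.

```python
# Two copies of the same loop, one running half a percent slow, drift steadily
# out of phase; the printout tracks the growing lag between the two machines.
loop_len = 0.9                  # loop duration in seconds (assumed)
rates = (1.000, 0.995)          # the second playback machine runs slightly slow

for cycle in range(0, 40, 8):
    t = cycle * loop_len
    offsets = [(t * r) % loop_len for r in rates]   # position within the loop
    d = abs(offsets[0] - offsets[1])
    lag = min(d, loop_len - d)                      # circular phase difference
    print(f"after {t:5.1f} s the loops are {lag * 1000:5.1f} ms apart")
```

Listeners do not hear these numbers, of course; they hear the echo thickening into canon as the lag accumulates, and the perceptual system contributes the rest.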
Minimalism took advantage of the active role of the listener and exploited repetition to deliberately produce aural illusions. The ultimate effect of a minimalist composition is not a message created by the composer and delivered to a (passive) audience, but is instead a collaborative effort between musician and listener. Again, this mirrors the interactive view of world and agent that characterizes embodied cognitive science and stands opposed to the disembodied stance taken by both Austro-German music and classical cognitive science.
Minimalism became lastingly important when its composers discovered how their techniques, such as decentralized control, repetition, and phase shifting, could be communicated using a medium that was more traditional than tape loops. This was accomplished when Terry Riley realized that the traditional musical score could be reinvented to create minimalist music. Riley’s 1964 composition In C is 53 bars of music written in the key of C major, indicating a return to tonal music. Each bar is extremely simple; the entire score fits onto a single page. Performers play each bar in sequence. However, they repeat a bar as many times as they like before moving on to the next. When they reach the final bar, they repeat it until all of the other performers have reached it. At that time, the performance is concluded.
Riley’s In C can be thought of as a tape loop experiment realized as a musical score. Each performer is analogous to one of the tape loops, and the effect of the music arises from their interactions with one another. The difference, of course, is that each “tape loop” is not identical to the others, because each performer controls the number of times that they repeat each bar. Performers listen and react to In C as they perform it.
There are two compelling properties that underlie a performance of In C. First, each musician is an independent agent who is carrying out a simple act. At any given moment each musician is performing one of the bars of music. Second, what each musician does at the next moment is affected by the musical environment that the ensemble of musicians is creating. A musician’s decision to move from one bar to the next depends upon what they are hearing. In other words, the musical environment being created is literally responsible for controlling the activities of the agents who are performing In C. This is a musical example of a concept that we discussed earlier as central to embodied cognitive science: stigmergy.
In stigmergy, the behaviours of agents are controlled by an environment in which they are situated, and which they also can affect. The performance of a piece like In C illustrates stigmergy in the sense that musicians decide what to play next on the basis of what they are hearing right now. Of course, what they decide to play will form part of the environment, and will help guide the playing decisions of other performers.
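A toy simulation makes this control loop concrete: each simulated performer repeats its current bar and decides whether to advance partly on the basis of where the rest of the ensemble is. The listening rule and ensemble size are crude stand-ins invented for illustration, not a model of any actual performance of In C.

```python
import random

random.seed(53)                     # any seed will do

N_BARS = 53                         # In C consists of 53 short bars
positions = [1] * 6                 # a hypothetical ensemble of six performers

def step(positions):
    """Each performer repeats or advances, holding back if too far ahead of
    the ensemble (a crude stand-in for listening to the shared sound)."""
    mean_pos = sum(positions) / len(positions)
    advanced = []
    for pos in positions:
        if pos >= N_BARS:
            advanced.append(N_BARS)             # repeat the final bar until all arrive
        elif pos > mean_pos + 2:
            advanced.append(pos)                # too far ahead: keep repeating
        else:
            advanced.append(pos + (random.random() < 0.5))
    return advanced

steps = 0
while not all(pos == N_BARS for pos in positions):
    positions = step(positions)
    steps += 1
print(f"the performance converged after {steps} listening steps")
```

Even this crude rule produces a staggered, overlapping progression through the bars, an ensemble-level pattern that no individual performer specifies in advance.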
The stigmergic nature of minimalism contrasts with the classical ideal of a composer transcribing mental contents. One cannot predict what In C will sound like by examining its score. Only an actual performance will reveal what In C’s score represents. Reich (1974, p. 9) wrote: “Though I may have the pleasure of discovering musical processes and composing the musical material to run through them, once the process is set up and loaded it runs by itself.”
Reich’s idea of a musical process running by itself is reminiscent of synthetic psychology, which begins by defining a set of primitive abilities for an agent. Typically there are nonlinear interactions between these building blocks, and between the building blocks and the environment. As a result, complex and interesting behaviours emerge—results that far exceed behavioural predictions based on knowing the agent’s makeup (Braitenberg, 1984). Human intelligence is arguably the emergent product of simple, interacting mental agents (Minsky, 1985). The minimalists have tacitly adopted this view and created a mode of composition that reflects it.
The continual evolution of modern technology has had a tremendous impact on music. Some of this technology has created situations in which musical stigmergy is front and centre. For example, consider a computer program called Swarm Music (Blackwell, 2003). In Swarm Music, there are one or more swarms of “particles.” Each particle is a musical event: it exists in a musical space where the coordinates of the space define musical parameters such as pitch, duration, and loudness, and the particle’s position defines a particular combination of these parameters. A swarm of particles is dynamic, and it is drawn to attractors that are placed in the space. The swarm can thus be converted into music. “The swarming behavior of these particles leads to melodies that are not structured according to familiar musical rules, but are nevertheless neither random nor unpleasant” (Blackwell & Young, 2004).
Swarm Music is made dynamic by coupling it with human performers in an improvised and stigmergic performance. The sounds created by the human performers are used to revise the positions of the attractors for the swarms, causing the music generated by the computer system to change in response to the other performers. The human musicians then change their performance in response to the computer.
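A sketch of the underlying idea follows; the three-dimensional musical space, the attractor update, and the particle dynamics are loose illustrations rather than Blackwell’s actual equations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ten particles in a "musical space" whose axes are pitch, duration, and loudness.
swarm = rng.uniform(0.0, 1.0, size=(10, 3))
velocity = np.zeros_like(swarm)
attractor = np.array([0.5, 0.5, 0.5])            # starts in the middle of the space

def to_note(p):
    pitch = 48 + int(p[0] * 24)                   # two-octave MIDI range
    duration = round(0.1 + p[1] * 0.9, 2)         # seconds
    loudness = int(40 + p[2] * 80)                # MIDI velocity
    return pitch, duration, loudness

for step in range(50):
    # Each particle accelerates toward the attractor, with a little jitter.
    velocity = 0.9 * velocity + 0.05 * (attractor - swarm) \
               + 0.01 * rng.normal(size=swarm.shape)
    swarm = np.clip(swarm + velocity, 0.0, 1.0)
    if step == 25:
        # A performer plays a high, loud phrase; the attractor moves toward it,
        # and the swarm (and hence the generated music) follows.
        attractor = np.array([0.9, 0.3, 0.8])

print([to_note(p) for p in swarm[:3]])
```

Each particle’s position is read off as a note, so moving the attractor audibly redirects the music without any individual note being specified in advance.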
Performers who have improvised with Swarm Music are affected by its stigmergic nature. Jazz singer Kathleen Willison,
was surprised to find in the first improvisation that Swarm Music seemed to be imitating her: ‘(the swarm) hit the same note at the same time—the harmonies worked.’ However, there was some tension; ‘at times I would have liked it to slow down . . . it has a mind of its own . . . give it some space.’ Her solution to the ‘forward motion’ of the swarms was to ‘wait and allow the music to catch up’. (Blackwell, 2003, p. 47)
Another new technology in which musical stigmergy is evident is the reacTable (Jordà et al., 2007; Kaltenbrunner et al., 2007). The reacTable is an electronic synthesizer that permits several different performers to play it at the same time. The reacTable is a circular, translucent table upon which objects can be placed. Some objects generate waveforms, some perform algorithmic transformations of their inputs, and some control others that are nearby. Rotating an object, and using a fingertip to manipulate a visual interface that surrounds it, modulates a musical process (e.g., changing the frequency and amplitude of a sine wave). Visual signals displayed on the reacTable—and visible to all performers—indicate the properties of the musical event produced by each object as well as the flow of signals from one object to another.
The reacTable is an example of musical stigmergy because when multiple performers use it simultaneously, they are reacting to the existing musical events. These events are represented as physical locations of objects on the reacTable itself, the visual signals emanating from these objects, and the aural events that the reacTable is producing. By co-operatively moving, adding, or removing objects, the musicians collectively improvise a musical performance. The reacTable is an interface intended to provide a “combination of intimate and sensitive control, with a more macro-structural and higher level control which is intermittently shared, transferred and recovered between the performer(s) and the machine” (Jordà et al., 2007, p. 145). That is, the reacTable—along with the music it produces—provides control analogous to that provided by the nest-in-progress of an insect colony.
From the preceding discussion, we see that modern music shares many characteristics with the embodied reaction to classical cognitive science. With its decentralization of control, responsibility for the composition has “leaked” from the composer's mind. Its definition also requires contributions from both the performers and the audience, and not merely a score. This has implications for providing accounts of musical meaning, or of the goals of musical compositions. The classical notion of music communicating intended meanings to audiences is not easily applied to modern music.
Classical cognitive science’s view of communication is rooted in cybernetics (Shannon, 1948; Wiener, 1948), because classical cognitive science arose from exploring key cybernetic ideas in a cognitivist context (Conrad, 1964b; Leibovic, 1969; Lindsay & Norman, 1972; MacKay, 1969; Selfridge, 1956; Singh, 1966). As a result, the cybernetic notion of communication—transfer of information from one location to another—is easily found in the classical approach.
The classical notion of communication is dominated by the conduit metaphor (Reddy, 1979). According to the conduit metaphor, language provides containers (e.g., sentences, words) that are packed with meanings and delivered to receivers, who unpack them to receive the intended message. Reddy provides a large number of examples of the conduit metaphor, including: “You still haven’t given me any idea of what you mean”; “You have to put each concept into words very carefully”; and “The sentence was filled with emotion.”
The conduit metaphor also applies to the traditional view of classical music, which construes this music as a “hot medium” to which the listener contributes little (McLuhan, 1994): the composer places some intended meaning into a score, the orchestra brings the score to life exactly as instructed by the score, and the (passive) audience unpacks the delivered music to get the composer’s message.
We thus hear people say that music can only have meaning if it is seen to be a type of language, with elements akin to words, phrases and sentences, and with elements that refer beyond themselves to extramusical things, events, or ideas. (Johnson, 2007, p. 207)
In other words, the classical view of musical meaning is very similar to the view of meaning espoused by classical cognitive science: music is a symbolic, intentional medium.
The view of music as a symbolic medium that conveys intended meaning has generated a long history of resistance. The autonomist school of aesthetics (see Hanslick, 1957) argued against the symbolic theories of musical meaning, as well as against theories that music communicated emotion. Hanslick’s (1957) position was that music was a medium whose elements were pure and nonrepresentational. Hanslick famously argued that “the essence of music is sound and motion” (p. 48). Modern positions that treat musical meaning in an embodied fashion are related to Hanslick’s (Johnson, 2007; Leman, 2008).
Embodied alternatives to musical meaning become attractive because the conduit metaphor breaks down in modern music. If control is taken away from the score and the conductor, if the musicians become active contributors to the composition (Benson, 2003), if the audience is actively involved in completing the composition as well, and if music is actually a “cool medium,” then what is the intended message of the piece?
Modern embodied theories of music answer this question by taking a position that follows naturally from Hanslick’s (1957) musical aesthetics. They propose that the sound and motion of music literally have bodily effects that are meaningful. For instance, Johnson (2007) noted that,
to hear music is just to be moved and to feel in the precise way that is defined by the patterns of musical motion. Those feelings are meaningful in the same way that any pattern of emotional flow is meaningful to us at a pre-reflective level of awareness. (Johnson, 2007, p. 239)
Similarly, Leman (2008, p. 17) suggested that “moving sonic forms do something with our bodies, and therefore have a signification through body action rather than through thinking.” Some implications of this position are considered in the next section.
Minimalist composers themselves adopt a McLuhanesque view of the meaning of their compositions: the music doesn’t deliver a message, but is itself the message. After being schooled in the techniques of serialism, which deliberately hid the underlying musical structures from the audience’s perception, the minimalists desired to create a different kind of composition. When presented with minimalist compositions, the audience would hear the musical processes upon which the pieces were built. Reich (2002, p. 34) said he was “interested in perceptible processes. I want to be able to hear the process happening throughout the sounding music.”
Reich made processes perceptible by making them gradual. But this didn’t make his compositions less musical.
Even when all the cards are on the table and everyone hears what is gradually happening in a musical process, there are still enough mysteries to satisfy all. These mysteries are the impersonal, unintended, psychoacoustic by-products of the intended process. (Reich, 2002, p. 35)
Reich’s recognition that the listener contributes to the composition—that classical music is a cool medium, not a hot one—is fundamental to minimalist music. Philip Glass (1987) was surprised to find that he had different experiences of different performances of Samuel Beckett’s Play, for which Glass composed music. He realized that “Beckett’s Play doesn’t exist separately from its relationship to the viewer, who is included as part of the play’s content” (p. 36). Audiences of Glass’ Einstein on the Beach had similar experiences. “The point about Einstein was clearly not what it ‘meant’ but that it was meaningful as generally experienced by the people who saw it” (p. 33).
Modern music has many parallels to embodied cognitive science, and has many characteristics that distinguish it from other traditions of classical music. Alternative views of composition, the role of the audience, and the control of a performance are clearly analogous to embodied concepts such as emergence, embodiment, and stigmergy. They also lead to a very different notion of the purpose of music, in its transition from “hot” to “cool.” Not surprisingly, the radical differences between classical and modern music are reflected in differences between classical and embodied cognitive science’s study of musical cognition, as is discussed in the next section. | textbooks/socialsci/Psychology/Cognitive_Psychology/Mind_Body_World_-_Foundations_of_Cognitive_Science_(Dawson)/06%3A_Classical_Music_and_Cognitive_Science/6.06%3A_The_Embodied_Nature_of_Modern_Music.txt |
A well-established modern view of classical music is that it has meaning, and that its purpose is to convey this meaning in a fashion that is consistent with Reddy’s (1979) conduit metaphor.
Composers and performers of all cultures, theorists of diverse schools and styles, aestheticians and critics of many different persuasions are all agreed that music has meaning and that this meaning is somehow communicated to both participants and listeners. (Meyer, 1956, p. 1)
Furthermore, there is a general consensus that the meaning that is communicated is affective, and not propositional, in nature. However, the means by which musical meaning is communicated is subject to a tremendous amount of debate (Meyer, 1956; Robinson, 1997).
One view of musical communication, consistent with classical cognitive science, is that music is a symbol system. For example, the semiotic view of music is that it is a system of signs that provides a narrative or a discourse (Agawu, 1991, 2009; Austerlitz, 1983; Lidov, 2005; Monelle, 2000; Pekkilä, Neumeyer, & Littlefield, 2006; Tarasti, 1995; Turino, 1999). From this perspective, musical signs are intentional: they are about the tensions or emotions they produce or release in listeners. This approach naturally leads to an exploration of the parallels between music and language (Austerlitz, 1983; Jackendoff, 2009; Lidov, 2005), as well as to the proposal of generative grammars of musical structure (Lerdahl, 2001; Lerdahl & Jackendoff, 1983; Sundberg & Lindblom, 1976). Potential parallels between language and music have led some researchers to describe brain areas for syntax and semantics that are responsible for processing both music and language (Koelsch et al., 2004; Patel, 2003).
A related view of musical communication, but one more consistent with connectionist than classical cognitive science, is that music communicates emotion but does so in a way that cannot be captured by a set of formal rules or laws (Lewis, 1991; Loy, 1991; Minsky, 1981; Todd, 1989). Instead, musical meanings are presumed to be entwined in a complex set of interactions between past experiences and current stimulation, interactions that may be best captured by the types of learning exhibited by artificial neural networks. “Many musical problems that resist formal solutions may turn out to be tractable anyway, in future simulations that grow artificial musical semantic networks” (Minsky, 1981, p. 35).
Both views of musical meaning described above are consistent with the conduit metaphor, in that they agree that (1) music is intentional and content-bearing (although they disagree about formalizing this content) and (2) that the purpose of music is to communicate this content to audiences. A third approach to musical meaning, most consistent with embodied cognitive science, distinguishes itself from the other two by rejecting the conduit metaphor.
According to the embodied view (Clarke, 2005; Johnson, 2007; Leman, 2008), the purpose of music is not to acquire abstract or affective content, but instead to experience music directly, interactively, and physically. “People try to be involved with music because this involvement permits an experience of behavioral resonance with physical energy” (Leman, 2008, p. 4).
The emphasis on direct contact that characterizes the embodied view of music is a natural progression from the autonomist school of musical aesthetics that arose in the nineteenth century (Hanslick, 1957). Music critic Eduard Hanslick (1957) opposed the view that music was representative and that its purpose was to communicate content or affect. For Hanslick, a scientific aesthetics of music was made impossible by sentimental appeals to emotion: “The greatest obstacle to a scientific development of musical aesthetics has been the undue prominence given to the action of music on our feelings” (p. 89).
As noted previously, Hanslick (1957, p. 48) argued instead that “the essence of music is sound and motion.” The modern embodied approach to music echoes and amplifies this perspective. Johnson (2007) agreed with Hanslick that music is not typically representative or intentional. Instead, Johnson argued that the dynamic nature of music—its motion, in Hanslick’s sense—presents “the flow of human experience, feeling, and thinking in concrete, embodied forms” (p. 236). The motion of music is not communicative, it is causal. “To hear the music is just to be moved and to feel in the precise way that is defined by the patterns of the musical motion” (p. 239). The motion intrinsic to the structure of music is motion that we directly and bodily experience when it is presented to us. Johnson argues that this is why metaphors involving motion are so central to our conceptualization of music.
“Many people try to get into direct contact with music. Why do they do so? Why do people make great efforts to attend a concert? Why do they invest so much time in learning to play a musical instrument?” (Leman, 2008, p. 3). If the meaning of music is the felt movement that it causes, then the need for direct experience of music is completely understandable. This is also reflected in an abandonment of the conduit metaphor. The embodied view of music does not accept the notion that music is a conduit for the transmission of propositional or affective contents. Indeed, it hypothesizes that the rational assessment of music might interfere with how it should best be experienced.
Activities such as reasoning, interpretation, and evaluation may disturb the feeling of being directly involved because the mind gets involved in a representation of the state of the environment, which distracts the focus and, as a result, may break the ‘magic spell’ of being entrained. (Leman, 2008, p. 5)
Clearly embodied researchers have a very different view of music than do classical or connectionist researchers. This in turn leads to very different kinds of research on musical cognition than the examples that have been introduced earlier in this chapter.
To begin, let us consider the implication of the view that listeners should be directly involved with music (Leman, 2008). From this view, it follows that the full appreciation of music requires far more than the cognitive interpretation of auditory stimulation. “It is a matter of corporeal immersion in sound energy, which is a direct way of feeling musical reality. It is less concerned with cognitive reflection, evaluation, interpretation, and description” (Leman, 2008, p. 4). This suggests that cross-modal interactions may be critical determinants of musical experience.
Some research on musical cognition is beginning to explore this possibility. In one study (Vines et al., 2006) subjects were presented with performances by two clarinetists. Some subjects only heard, some subjects only saw, and some subjects both heard and saw the performances. Compared to the first two groups of subjects, those who both heard and saw the performances had very different experiences. The visual information altered the experience of tension at different points, and the movements of the performers provided additional information that affected the experienced phrasing as well as expectations about emotional content. “The auditory and visual channels mutually enhance one another to convey content, and . . . an emergent quality exists when a musician is both seen and heard” (p. 108).
In a more recent study, Vines et al. (2011) used a similar methodology, but they also manipulated the expressive style with which the stimulus (a solo clarinet piece composed by Stravinsky) was performed. Subjects were presented with the piece in restrained, standard, or exaggerated fashion. These manipulations of expressive style only affected the subjects who could see the performance. Again, interactions were evident when performances were both seen and heard. For instance, subjects in this condition had significantly higher ratings of “happiness” in comparison to other subjects.
The visual component of musical performance makes a unique contribution to the communication of emotion from performer to audience. Seeing a musician can augment, complement, and interact with the sound to modify the overall experience of music. (Vines et al., 2011, p. 168)
Of course, the embodied approach to music makes much stronger claims than that there are interactions between hearing and seeing; it views cognition not as a medium for planning, but instead as a medium for acting. It is not surprising, then, to discover that embodied musical cognition has studied the relationships between music and actions, gestures, and motion in a variety of ways (Gritten & King, 2011).
One of the most prominent of these relationships involves the exploration of new kinds of musical instruments, called digital musical instruments. A digital musical instrument is a musical instrument that involves a computer and in which the generation of sound is separate from the control interface that chooses sound (Marshall et al., 2009). This distinction is important, because as Marshall et al. (2009) pointed out, there are many available sensors that can register a human agent’s movements, actions, or gestures. These include force sensitive resistors, video cameras, accelerometers, potentiometers, and bend sensors, not to mention buttons and microphones.
The availability of digital sensors permits movements, actions, and gestures to be measured and used to control the sounds generated by a digital musical instrument. This requires that a mapping be defined from a measured action to a computer-generated sound (Verfaille, Wanderley, & Depalle, 2006). Of course, completely novel relationships between gesture and sound become possible within this framework (Sapir, 2002). This permits the invention of musical instruments that can be played by individuals with no training on an instrument, because they can interact with a digital musical instrument using everyday gestures and actions (Paradiso, 1999).
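To make the separation between control interface and sound generation concrete, here is a minimal Python sketch of a gesture-to-sound mapping. It is an invented illustration, not any published instrument: the feature names (tilt and motion energy) and the pitch and loudness mappings are assumptions chosen only to show how measured actions can be converted into synthesis parameters.

```python
import math

def gesture_to_sound(tilt_degrees, motion_energy):
    """Map two measured gesture features onto synthesis parameters.

    tilt_degrees: orientation of a hand-held sensor (-90..90)
    motion_energy: summed accelerometer magnitude over a short time window
    Returns (frequency_hz, amplitude), the control message sent to a synthesizer.
    """
    # Tilt is mapped onto a two-octave pitch range centred on A4 (440 Hz).
    semitones = (tilt_degrees / 90.0) * 12.0
    frequency_hz = 440.0 * (2.0 ** (semitones / 12.0))
    # Motion energy is compressed into a 0..1 amplitude so small movements still sound.
    amplitude = 1.0 - math.exp(-motion_energy)
    return frequency_hz, amplitude

# Example: a gentle upward tilt with little movement yields a quiet, slightly higher tone.
print(gesture_to_sound(15.0, 0.3))
```

Any other mapping could be substituted without touching the sound-generation side, which is precisely the design freedom (and design burden) that digital musical instruments introduce.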
The development of digital musical instruments has resulted in the need to study a variety of topics quite different from those examined by classical and connectionist researchers. One important topic involves determining how to use measured actions to control sound production (Verfaille, Wanderley, & Depalle, 2006). However, an equally important topic concerns the nature of the gestures and actions themselves. In particular, researchers of digital musical instruments are concerned with exploring issues related to principles of good design (Dourish, 2001; Norman, 2002, 2004) in order to identify and evaluate possible interfaces between actions and instruments (Magnusson, 2010; O’Modhrain, 2011; Ungvary & Vertegaal, 2000; Wanderley & Orio, 2002). Another issue is to choose a set of actions that can be varied, so that a performer of a digital musical instrument can manipulate its expressiveness (Arfib, Couturier, & Kessous, 2005).
The development of digital musical instruments has also led to a reevaluation of the roles of composers, performers, and audience. In the acoustic paradigm (Bown, Eldridge, & McCormack, 2009), which adheres to the traditional view of classical music outlined earlier in this chapter, these three components have distinct and separable roles. Digital musical instruments result in the acoustic paradigm being disrupted. Bown, Eldridge, and McCormack (2009) argued that the software components should not be viewed as instruments, but instead as behavioral objects. A behavioral object is “an entity that can act as a medium for interaction between people through its dissemination and evolution, can develop interactively with individuals in processes of creative musical development, and can interact with other behavioral objects to produce musical output” (p. 193); it is behavioral in the sense that it can act and interact, but it is an object in the sense that it is a material thing that can be seen and touched.
In their role as behavioral objects, digital musical instruments blur the sharp distinctions between the roles defined by the acoustic paradigm (Bown, Eldridge, & McCormack, 2009). This is because their software components dramatically alter the interactions between composer, performer, and listener.
Interaction does not involve the sharing simply of passive ideas or content, but of potentially active machines that can be employed for musical tasks. Whereas musical ideas may once have developed and circulated far more rapidly than the inanimate physical objects that define traditional musical instruments, software objects can now evolve and move around at just as fast a pace. (Bown, Eldridge, & McCormack, 2009, p. 192)
The new interactions discussed by Bown, Eldridge, and McCormack (2009) suggested that digital musical instruments can affect musical thought. It has been argued that these new instruments actually scaffold musical cognition, and therefore they extend the musical mind (Magnusson, 2009). According to Magnusson, traditional acoustic instruments have been created in bricoleur fashion by exploring combinations of existing materials, and learning to play such an instrument involves exploring its affordances. “The physics of wood, strings and vibrating membranes were there to be explored and not invented” (p. 174). In contrast, the software of digital musical instruments permits many aspects of musical cognition to be extended into the instrument itself. Digital musical instruments,
typically contain automation of musical patterns (whether blind or intelligent) that allow the performer to delegate musical actions to the instrument itself, such as playing arpeggios, generating rhythms, expressing spatial dimensions as scales (as opposed to pitches), and so on. (Magnusson, 2009, p. 168)
The embodied approach is not limited to the study of digital musical instruments. Actions are required to play traditional musical instruments, and such actions have been investigated. For instance, researchers have examined the fingering choices made by pianists as they sight read (Sloboda et al., 1998) and developed ergonomic models of piano fingering (Parncutt et al., 1997). Bowing and fingering movements for string instruments have also been the subject of numerous investigations (Baader, Kazennikov, & Wiesendanger, 2005; Kazennikov & Wiesendanger, 2009; Konczak, van der Velden, & Jaeger, 2009; Maestre et al., 2010; Rasamimanana & Bevilacqua, 2008; Turner-Stokes & Reid, 1999). This research has included the development of the MusicJacket, a worn device that analyzes the movement of a violin player and provides vibrotactile feedback to teach proper bowing (van der Linden et al., 2011). The relationship between alternative flute fingerings and their effect on produced tones has also been examined (Botros, Smith, & Wolfe, 2006; Verfaille, Depalle, & Wanderley, 2010).
The embodied approach is also actively exploring the possibility that gestural or other kinds of interactions can be used to retrieve digitized music (Casey et al., 2008; Leman, 2008). Personal music collections are becoming vast, and traditional methods of discovering music (i.e., record stores and radio stations) are being replaced by social networking sites and the World Wide Web. As a result, there is a growing need for these large digital collections of music to be searchable. However, the most common approach for cataloguing and searching these collections is to use textual metadata that provides an indirect description of the stored music, such as the name of the composer, the title of the song, or the genre of the music (Leman, 2008).
The embodied approach is interested in the possibility of using more direct aspects of music to guide such retrieval (Leman, 2008). Is it possible to access music on the basis of one’s personal experience of music? Leman hypothesizes that human action can serve as the basis of a corporeal-based querying system for retrieving music. His idea is to use the body to convert a musical idea (e.g., a desire to retrieve a particular type of music) into musical physical energy that can be mapped onto the profiles of digitized music, permitting content-based retrieval. For instance, one could query a musical database by singing or playing a melody (De Mulder et al., 2006), by manipulating a spatial representation that maps the similarity of stored music (Cooper et al., 2006; Pampalk, Dixon, & Widmer, 2004), or even by making gestures (Ko & Byun, 2002).
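One way to picture such corporeal querying is a contour-matching sketch of query-by-singing. The following Python fragment is a toy illustration, not any of the cited systems: the two-item collection and the scoring rule are invented, and real retrieval systems use far richer melodic and audio features.

```python
def contour(pitches):
    """Reduce a pitch sequence (MIDI numbers) to its up/down/same contour."""
    return ["U" if b > a else "D" if b < a else "S" for a, b in zip(pitches, pitches[1:])]

def retrieve(sung_pitches, database):
    """Return stored melodies ranked by how well their contour matches the query."""
    query = contour(sung_pitches)
    def score(item):
        target = contour(item["pitches"])
        matches = sum(q == t for q, t in zip(query, target))
        return matches / max(len(query), len(target))
    return sorted(database, key=score, reverse=True)

# A toy two-item collection; real systems index thousands of melodies.
collection = [
    {"title": "Melody A", "pitches": [60, 62, 64, 62, 60]},
    {"title": "Melody B", "pitches": [60, 59, 57, 59, 60]},
]
print(retrieve([62, 64, 66, 64, 62], collection)[0]["title"])  # contour matches Melody A
```

The point of the sketch is that the query is a bodily production (a sung melody) rather than a textual description, which is the shift in emphasis that the embodied approach recommends.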
Compared to the other two approaches described in this chapter, the embodied approach to musical cognition is fairly new, and it is not as established. “The hypothesis that musical communication is based on the encoding, transmission, and decoding of intended actions is, I believe, an attractive one. However, at this moment it is more a working hypothesis than an established fact” (Leman, 2008, p. 237). This “working hypothesis,” though, has launched an interesting literature on the study of the relationship between music and action that is easily distinguished from the classical and connectionist research on musical cognition.
In the preceding sections of this chapter we have explored the analogy that cognitive science is like classical music. This analogy was developed by comparing the characteristics of three different types of classical music to the three different schools of cognitive science: Austro-German classical music to classical cognitive science, musical Romanticism to connectionist cognitive science, and modern music to embodied cognitive science.
We also briefly reviewed how each of the three different schools has studied topics in the cognition of music. One purpose of this review was to show that each school of cognitive science has already made important contributions to this research domain. Another purpose was to show that the topics in musical cognition studied by each school reflected different, tacit views of the nature of music. For instance, the emphasis on formalism in traditional classical music is reflected in classical cognitive science’s attempt to create generative grammars of musical structure (Lerdahl & Jackendoff, 1983). Musical romanticism’s affection for the sublime is reflected in connectionist cognitive science’s use of unsupervised networks to capture regularities that cannot be formalized (Bharucha, 1999). Modern music’s rejection of the classic distinctions between composer, performer, and audience is reflected in embodied cognitive science’s exploration of how digital musical instruments can serve as behavioral objects to extend the musical mind (Bown, Eldridge, & McCormack, 2009; Magnusson, 2009).
The perspectives summarized above reflect a fragmentation of how cognitive science studies musical cognition. Different schools of cognitive science view music in dissimilar ways, and therefore they explore alternative topics using diverse methodologies. The purpose of this final section is to speculate on a different relationship between cognitive science and musical cognition, one in which the distinctions between the three different schools of thought become less important, and in which a hybrid approach to the cognition of music becomes possible.
One approach to drawing the different approaches to musical cognition together is to return to the analogy between cognitive science and classical music and to attempt to see whether the analogy itself provides room for co-operation between approaches. One of the themes of the analogy was that important differences between Austro-German music, Romantic music, and modern music existed, and that these differences paralleled those between the different schools of cognitive science. However, there are also similarities between these different types of music, and these similarities can be used to motivate commonalities between the various cognitive sciences of musical cognition. It was earlier noted that similarities existed between Austro-German classical music and musical Romanticism because the latter maintained some of the structures and traditions of the former. So let us turn instead to bridging a gap that seems much wider, the gap between Austro-German and modern music.
The differences between Austro-German classical music and modern music seem quite clear. The former is characterized by centralized control and formal structures; it is a hot medium (McLuhan, 1994) that creates marked distinctions between composer, performer, and a passive audience (Bown, Eldridge, & McCormack, 2009), and it applies the conduit metaphor (Reddy, 1979) to view the purpose of music as conveying content from composer to listener. In contrast, modern music seems to invert all of these properties. It abandons centralized control and formal structures; it is a cool medium that blurs the distinction between composer, performer, and an active audience; and it rejects the conduit metaphor and the intentional nature of music (Hanslick, 1957; Johnson, 2007).
Such dramatic differences between types of classical music suggest that it would not be surprising if very different theories were required to explain their cognitive underpinnings. For instance, consider the task of explaining the process of musical composition. A classical theory might suffice for an account of composing Austro-German music, while a very different approach, such as embodied cognitive science, may be required to explain the composition of modern music.
One reason for considering the possibility of theoretical diversity is that in the cool medium of modern music, where control of the composition is far more decentralized, a modern piece seems more like an improvisation than a traditional composition. “A performance is essentially an interpretation of something that already exists, whereas improvisation presents us with something that only comes into being in the moment of its presentation” (Benson, 2003, p. 25). Jazz guitarist Derek Bailey (1992) noted that the ability of an audience to affect a composition is expected in improvisation: “Improvisation’s responsiveness to its environment puts the performance in a position to be directly influenced by the audience” (p. 44). Such effects, and more generally improvisation itself, are presumed to be absent from the Austro-German musical tradition: “The larger part of classical composition is closed to improvisation and, as its antithesis, it is likely that it will always remain closed” (p. 59).
However, there is a problem with this kind of dismissal. One of the shocks delivered by modern music is that many of its characteristics also apply to traditional classical music.
For instance, Austro-German music has a long tradition of improvisation, particularly in church music (Bailey, 1992). A famous example of such improvisation occurred when Johann Sebastian Bach was summoned to the court of German Emperor Frederick the Great in 1747 (Gaines, 2005). The Emperor played a theme for Bach on the piano and asked Bach to create a three-part fugue from it. The theme was a trap, probably composed by Bach’s son Carl Philipp Emanuel (employed by the Emperor), and was designed to resist the counterpoint techniques required to create a fugue. “Still, Bach managed, with almost unimaginable ingenuity, to do it, even alluding to the king’s taste by setting off his intricate counterpoint with a few gallant flourishes” (Gaines, 2005, p. 9). This was pure improvisation, as Bach composed and performed the fugue on the spot.
Benson (2003) argued that much of traditional music is actually improvisational, though perhaps less evidently than in the example above. Austro-German music was composed within the context of particular musical and cultural traditions. This provided composers with a constraining set of elements to be incorporated into new pieces, while being transformed or extended at the same time.
Composers are dependent on the ‘languages’ available to them and usually those languages are relatively well defined. What we call ‘innovation’ comes either from pushing the boundaries or from mixing elements of one language with another. (Benson, 2003, p. 43)
Benson argued that improvisation provides a better account of how traditional music is composed than do alternatives such as “creation” or “discovery,” and then showed that improvisation also applies to the performance and the reception of pre-modern works.
The example of improvisation suggests that the differences between the different traditions of classical music are quantitative, not qualitative. That is, it is not the case that Austro-German music is (for example) formal while modern music is not; instead, it may be more appropriate to claim that the former is more formal (or more centrally controlled, or less improvised, or hotter) than the latter. The possibility of quantitative distinctions raises the possibility that different types of theories can be applied to the same kind of music, and it also suggests that one approach to musical cognition may benefit by paying attention to the concerns of another.
The likelihood that one approach to musical cognition can benefit by heeding the concerns of another is easily demonstrated. For instance, it was earlier argued that musical Romanticism was reflected in connectionism’s assumption that artificial neural networks could capture regularities that cannot be formalized. One consequence of this assumption was shown to be a strong preference for the use of unsupervised networks.
However, unsupervised networks impose their own tacit restrictions upon what connectionist models can accomplish. One popular architecture used to study musical cognition is the Kohonen network (Kohonen, 1984, 2001), which assigns input patterns to winning (most-active) output units, and which in essence arranges these output units (by modifying weights) such that units that capture similar regularities are near one another in a two-dimensional map. One study that presented such a network with 115 different chords found that its output units arranged tonal centres in a pattern that reflected a noisy version of the circle of fifths (Leman, 1991).
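The core of the Kohonen algorithm—finding the winning output unit for each input and nudging that unit and its neighbours toward the input—can be sketched briefly. The code below is a generic self-organizing map in Python, not Leman's (1991) model; the random "chord" vectors stand in for whatever pitch-class encoding a real simulation would use.

```python
import numpy as np

def train_som(inputs, grid=(10, 10), epochs=50, lr=0.3, radius=3.0):
    """Train a 2-D Kohonen map: find the winning unit for each input and
    pull it (and its neighbours on the grid) toward that input."""
    rng = np.random.default_rng(0)
    dim = inputs.shape[1]
    weights = rng.random((grid[0], grid[1], dim))
    coords = np.stack(np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                                  indexing="ij"), axis=-1)
    for epoch in range(epochs):
        decay = 1.0 - epoch / epochs
        for x in inputs:
            dist = np.linalg.norm(weights - x, axis=2)        # distance of every unit to the input
            winner = np.unravel_index(np.argmin(dist), grid)  # most active (closest) output unit
            grid_dist = np.linalg.norm(coords - np.array(winner), axis=2)
            influence = np.exp(-(grid_dist ** 2) / (2 * (radius * decay + 1e-9) ** 2))
            weights += (lr * decay) * influence[..., None] * (x - weights)
    return weights

# E.g., the inputs could be 12-element pitch-class vectors coding chords, one row per chord.
chords = np.random.default_rng(1).random((115, 12))
map_weights = train_som(chords)
```

After training, inputs that share regularities win nearby units, which is how circle-of-fifths-like orderings can emerge on the two-dimensional surface.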
A limitation of this kind of research is revealed by relating it to classical work on tonal organization (Krumhansl, 1990). As we saw earlier, Krumhansl found two circles of fifths (one for major keys, the other for minor keys) represented in a spiral representation wrapped around a toroidal surface. In order to capture this elegant representation, four dimensions were required (Krumhansl & Kessler, 1982). By restricting networks to representations of smaller dimensionality (such as a two-dimensional Kohonen feature map), one prevents them from detecting or representing higher-dimensional regularities. In this case, knowledge gleaned from classical research could be used to explore more sophisticated network architectures (e.g., higher-dimensional self-organized maps).
Of course, connectionist research can also be used to inform classical models, particularly if one abandons “gee whiz” connectionism and interprets the internal structure of musical networks (Dawson, 2009). When supervised networks are trained on tasks involving the recognition of musical chords (Yaremchuk & Dawson, 2005; Yaremchuk & Dawson, 2008), they organize notes into hierarchies that capture circles of major seconds and circles of major thirds, as we saw in the network analyses presented in Chapter 4. As noted previously, these so-called strange circles are rarely mentioned in accounts of music theory. However, once discovered, they are just as formal and as powerful as more traditional representations such as the circle of fifths. In other words, if one ignores the sublime nature of networks and seeks to interpret their internal structures, one can discover new kinds of formal representations that could easily become part of a classical theory.
Other, more direct integrations can be made between connectionist and classical approaches to music. For example, NetNeg is a hybrid artificial intelligence system for composing two-voice counterpoint pieces (Goldman et al., 1999). It assumes that some aspects of musical knowledge are subsymbolic and difficult to formalize, while other aspects are symbolic and easily described in terms of formal rules. NetNeg incorporates both types of processes to guide composition. It includes a network component that learns to reproduce melodies experienced during a training phase and uses this knowledge to generate new melodies. It also includes two rule-based agents, each of which is responsible for composing one of the voices that make up the counterpoint and for enforcing the formal rules that govern this kind of composition.
There is a loose coupling between the connectionist and the rule-based agents in NetNeg (Goldman et al., 1999), so that both co-operate, and both place constraints, on the melodies that are composed. The network suggests the next note in the melody, for either voice, and passes this information on to a rule-based agent. This suggestion, combined with interactions between the two rule-based agents (e.g., to reach an agreement on the next note to meet some aesthetic rule, such as moving the melody in opposite directions), results in each rule-based agent choosing the next note. This selection is then passed back to the connectionist part of the system to generate the next melodic prediction as the process iterates.
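The flavour of this loose coupling can be conveyed with a small sketch. The Python code below is not NetNeg itself: the toy network and the single contrary-motion rule are invented stand-ins, meant only to show the alternation between a connectionist proposal and a rule-based selection that is fed back to the network.

```python
def compose(network_suggest, rule_agents, length=8, start=(60, 72)):
    """Alternate between a connectionist suggestion and a rule-based selection.

    network_suggest(voice_history) -> list of candidate next notes
    rule_agents: one function per voice; each takes (candidates, own_history,
    other_history) and returns the note it accepts.
    """
    voices = [[start[0]], [start[1]]]
    for _ in range(length - 1):
        for i, agent in enumerate(rule_agents):
            candidates = network_suggest(voices[i])               # connectionist proposal
            chosen = agent(candidates, voices[i], voices[1 - i])  # symbolic constraints
            voices[i].append(chosen)                              # fed back for the next prediction
    return voices

# Placeholder components, purely for illustration:
def toy_network(history):
    return [history[-1] + step for step in (-2, -1, 1, 2)]

def contrary_motion_agent(candidates, own, other):
    # Prefer a note that moves opposite to the other voice's last melodic step.
    other_step = other[-1] - other[-2] if len(other) > 1 else 0
    preferred = [c for c in candidates if (c - own[-1]) * other_step < 0]
    return (preferred or candidates)[0]

print(compose(toy_network, [contrary_motion_agent, contrary_motion_agent]))
```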
Integration is also possible between connectionist and embodied approaches to music. For example, for a string instrument, each note in a composition can be played by pressing different strings in different locations, and each location can be pressed by a different finger (Sayegh, 1989). The choice of string, location, and fingering is usually not specified in the composition; a performer must explore a variety of possible fingerings for playing a particular piece. Sayegh has developed a connectionist system that places various constraints on fingering so the network can suggest the optimal fingering to use. A humorous—yet strangely plausible— account of linking connectionist networks with actions was provided in Garrison Cottrell’s (1989) proposal of the “connectionist air guitar.”
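Sayegh's system was a connectionist constraint network, but the underlying "optimum path" idea can be illustrated with a simple dynamic-programming sketch: score every finger assignment by accumulated transition costs and keep the cheapest sequence. The cost function below is invented for illustration and is far cruder than any real model of hand biomechanics.

```python
FINGERS = [1, 2, 3, 4]  # index..little finger on a single string, kept deliberately simple

def transition_cost(prev, curr):
    """Invented cost: penalize re-using a finger, large shifts, and unnatural hand shapes."""
    prev_finger, prev_fret = prev
    curr_finger, curr_fret = curr
    cost = abs(curr_fret - prev_fret)               # hand movement along the neck
    cost += 3 if curr_finger == prev_finger else 0  # avoid re-using the same finger
    cost += abs((curr_finger - prev_finger) - (curr_fret - prev_fret))  # keep the hand shape natural
    return cost

def best_fingering(frets):
    """Viterbi-style search for the cheapest finger assignment for a fret sequence."""
    best = {(f, frets[0]): (0, [f]) for f in FINGERS}
    for fret in frets[1:]:
        new_best = {}
        for finger in FINGERS:
            curr = (finger, fret)
            cost, path = min(
                ((c + transition_cost(state, curr), p + [finger])
                 for state, (c, p) in best.items()),
                key=lambda t: t[0],
            )
            new_best[curr] = (cost, path)
        best = new_best
    return min(best.values(), key=lambda t: t[0])[1]

print(best_fingering([1, 3, 5, 3]))  # prints one low-cost finger sequence for these frets
```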
Links also exist between classical and embodied approaches to musical cognition, although these are more tenuous because such research is in its infancy. For example, while Leman (2008) concentrated on the direct nature of musical experience that characterizes the embodied approach, he recognized that indirect accounts—such as verbal descriptions of music—are both common and important. The most promising links are appearing in work on the cognitive neuroscience of music, which is beginning to explore the relationship between music perception and action.
Interactions between perception of music and action have already been established. For instance, when classical music is heard, the emotion associated with it can affect perceptions of whole-body movements directed towards objects (Van den Stock et al., 2009). The cognitive neuroscience of music has revealed a great deal of evidence for the interaction between auditory and motor neural systems (Zatorre, Chen, & Penhune, 2007).
Such evidence brings to mind the notion of simulation and the role of mirror neurons, topics that were raised in Chapter 5’s discussion of embodied cognitive science. Is it possible that direct experience of musical performances engages the mirror system? Some researchers are considering this possibility (D’Ausilio, 2009; Lahav, Saltzman, & Schlaug, 2007). Lahav, Saltzman, and Schlaug (2007) trained non-musicians to play a piece of music. They then monitored their subjects’ brain activity as the subjects listened to the newly learned piece without performing any movements. It was discovered that motor-related areas of the brain were activated during the listening. Less activity in these areas was noted if subjects heard the same notes that were learned, but presented in a different order (i.e., as a different melody).
The mirror system has also been shown to be involved in the observation and imitation of guitar chording (Buccino et al., 2004; Vogt et al., 2007); and musical expertise, at least for professional piano players, is reflected in more specific mirror neuron processing (Haslinger et al., 2005). It has even been suggested that the mirror system is responsible for listeners misattributing anger to John Coltrane’s style of playing saxophone (Gridley & Hoff, 2006)!
A completely hybrid approach to musical cognition that includes aspects of all three schools of cognitive science is currently only a possibility. The closest realization of this possibility might be an evolutionary composing system (Todd & Werner, 1991). This system is an example of a genetic algorithm (Holland, 1992; Mitchell, 1996), which evolves a solution to a problem by evaluating the fitness of each member of a population, preserves the most fit, and then generates a new to-be-evaluated generation by combining attributes of the preserved individuals. Todd and Werner (1991) noted that such a system permits fitness to be evaluated by a number of potentially quite different critics; their model considers contributions of human, rule-based, and network critics.
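The logic of such an evolutionary composer—evaluate a population with a set of critics, keep the fittest, recombine and mutate—can be sketched compactly. The following Python fragment is a generic genetic algorithm with invented rule-based critics, not Todd and Werner's (1991) system; a network critic or a human rating function could be dropped into the same list.

```python
import random

def evolve(critics, pop_size=20, length=16, generations=100):
    """Evolve melodies (lists of MIDI pitches) judged by a set of critic functions."""
    rng = random.Random(0)
    population = [[rng.randint(55, 79) for _ in range(length)] for _ in range(pop_size)]
    def fitness(melody):
        return sum(critic(melody) for critic in critics)  # critics can be rules, networks, or people
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]              # keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]                      # crossover
            if rng.random() < 0.2:
                child[rng.randrange(length)] = rng.randint(55, 79)  # mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Two illustrative rule-based critics; other kinds of critics could be added to the list.
def stepwise_critic(melody):
    return sum(1 for a, b in zip(melody, melody[1:]) if abs(a - b) <= 2)

def range_critic(melody):
    return -abs(max(melody) - min(melody))

print(evolve([stepwise_critic, range_critic]))
```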
Music is a complicated topic that has been considered at multiple levels of investigation, including computational or mathematical (Assayag et al., 2002; Benson, 2007; Harkleroad, 2006; Lerdahl & Jackendoff, 1983), algorithmic or behavioural (Bailey, 1992; Deutsch, 1999; Krumhansl, 1990; Seashore, 1967; Snyder, 2000), and implementational or biological (Jourdain, 1997; Levitin, 2006; Peretz & Zatorre, 2003). Music clearly is a domain that is perfectly suited to cognitive science. In this chapter, the analogy between classical music and cognitive science has been developed to highlight the very different contributions of classical, connectionist, and embodied cognitive science to the study of musical cognition. It raised the possibility of a more unified approach to musical cognition that combines elements of all three different schools of thought.
In the previous chapter, the characteristics of the three approaches to cognitive science were reviewed, highlighting important distinctions between the classical, connectionist, and embodied approaches. This was done by exploring the analogy between cognitive science and classical music. It was argued that each of the three approaches within cognitive science was analogous to one of three quite different traditions within classical music, and that these differences were apparent in how each approach studied music cognition. However, at the end of the chapter the possibility of hybrid theories of music cognition was raised.
The possibility of hybrid theories of music cognition raises the further possibility that the differences between the three approaches within cognitive science might not be as dramatic as could be imagined. The purpose of the current chapter is to explore this further possibility. It asks the question: are there marks of the classical? That is, is there a set of necessary and sufficient properties that distinguish classical theories from connectionist and embodied theories?
The literature suggests that there should be a large number of marks of the classical. It would be expected that classical theories appeal to centralized control, serial processing, local and internal representations, explicit rules, and a cognitive vocabulary that appeals to the contents of mental representations. It would also be expected that both connectionist and embodied theories reject many, if not all, of these properties.
In the current chapter we examine each of these properties in turn and make the argument that they do not serve as marks of the classical. First, an examination of the properties of classical theories, as well as a reflection on the properties of the computing devices that inspired them, suggests that none of these properties are necessary classical components. Second, it would also appear that many of these properties are shared by other kinds of theories, and therefore do not serve to distinguish classical cognitive science from either the connectionist or the embodied approaches.
The chapter ends by considering the implications of this conclusion. I argue that the differences between the approaches within cognitive science reflect variances in emphasis, and not qualitative differences in kind, amongst the three kinds of theory. This sets the stage for the possibility of hybrid theories of the type examined in Chapter 8.
7.02: Symbols and Situations
As new problems are encountered in a scientific discipline, one approach to dealing with them is to explore alternative paradigms (Kuhn, 1970). One consequence of adopting this approach is to produce a clash of cultures, as the new paradigms compete against the old.
The social structure of science is such that individual scientists will justify the claims for a new approach by emphasizing the flaws of the old, as well as the virtues and goodness of the new. Similarly, other scientists will justify the continuation of the traditional method by minimizing its current difficulties and by discounting the powers or even the novelty of the new. (Norman, 1993, p. 3)
In cognitive science, one example of this clash of cultures is illustrated in the rise of connectionism. Prior to the discovery of learning rules for multilayered networks, there was a growing dissatisfaction with the progress of the classical approach (Dreyfus, 1972). When trained multilayered networks appeared in the literature, there was an explosion of interest in connectionism, and its merits—and the potential for solving the problems of classical cognitive science—were described in widely cited publications (McClelland & Rumelhart, 1986, 1988; Rumelhart & McClelland, 1986c; Schneider, 1987; Smolensky, 1988). In response, defenders of classical cognitive science argued against the novelty and computational power of the new connectionist models (Fodor & McLaughlin, 1990; Fodor & Pylyshyn, 1988; Minsky & Papert, 1988; Pinker & Prince, 1988).
A similar clash of cultures, concerning the debate that arose as part of embodied cognitive science’s reaction to the classical tradition, is explored in more detail in this section. One context for this clash is provided by the research of eminent AI researcher Terry Winograd. Winograd’s PhD dissertation involved programming a computer to understand natural language, the SHRDLU system that operated in a restricted blocks world (Winograd, 1972a, 1972b). SHRDLU would begin with a representation of different shaped and coloured blocks arranged in a scene. A user would type in a natural language command to which the program would respond, either by answering a query about the scene or performing an action that changed the scene. For instance, if instructed “Pick up a big red block,” SHRDLU would comprehend this instruction, execute it, and respond with “OK.” If then told “Find a block that is taller than the one you are holding and put it in the box,” then SHRDLU had to comprehend the words one and it; it would respond “By it I assume you mean the block which is taller than the one I am holding.”
Winograd’s (1972a) program was a prototypical classical system (Harnish, 2002). It parsed input strings into grammatical representations, and then it took advantage of the constraints of the specialized blocks world to map these grammatical structures onto a semantic interpretation of the scene. SHRDLU showed “that if the database was narrow enough the program could be made deep enough to display human-like interactions” (p. 121).
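The classical strategy that SHRDLU embodies—parse the command, match it against an internal representation of the scene, then act—can be caricatured in a few lines. The sketch below is a toy, not Winograd's program: the scene description, the attribute vocabulary, and the single "pick up" action are all invented for illustration.

```python
SCENE = [
    {"id": 1, "colour": "red", "size": "big", "clear": True},
    {"id": 2, "colour": "red", "size": "small", "clear": True},
    {"id": 3, "colour": "green", "size": "big", "clear": False},  # something is stacked on it
]

def handle(command, scene):
    """Match attribute words in the command against the scene and act on one block."""
    words = command.lower().rstrip(".").split()
    constraints = {
        "colour": next((w for w in words if w in ("red", "green", "blue")), None),
        "size": next((w for w in words if w in ("big", "small")), None),
    }
    candidates = [b for b in scene
                  if all(v is None or b[k] == v for k, v in constraints.items())
                  and b["clear"]]
    if not candidates:
        return "I can't find such a block."
    candidates[0]["held"] = True   # "pick up" is the only action this toy understands
    return "OK."

print(handle("Pick up a big red block.", SCENE))  # -> OK. (block 1 is grasped)
```

The interesting work in the real system, of course, lay in the parsing and the semantic mapping that this toy simply assumes away; the sketch only shows why a narrow, fully represented world makes the classical strategy tractable.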
Winograd’s later research on language continued within the classical tradition. He wrote what served as a bible to those interested in programming computers to understand language, Language As a Cognitive Process, Volume 1: Syntax (Winograd, 1983). This book introduced and reviewed theories of language and syntax, and described how those theories had been incorporated into working computer programs. As the title suggests, a second volume on semantics was planned by Winograd. However, this second volume never appeared.
Instead, Winograd’s next groundbreaking book, Understanding Computers and Cognition, was one of the pioneering works in embodied cognitive science and launched a reaction against the classical approach (Winograd & Flores, 1987b). This book explained why Winograd did not continue with a text on the classical approach to semantics, because he had arrived at the opinion that classical accounts of language understanding would never be achieved. “Our position, in accord with the preceding chapters, is that computers cannot understand language” (p. 107).
The reason that Winograd and Flores (1987b) adopted this position was their view that computers are restricted to a rationalist notion of meaning that, in accordance with methodological solipsism (Fodor, 1980), must interpret terms independently of external situations or contexts. Winograd and Flores argued instead for an embodied, radically non-rational account of meaning: “Meaning always derives from an interpretation that is rooted in a situation” (Winograd & Flores, 1987b, p. 111). They took their philosophical inspiration from Heidegger instead of from Descartes.
Winograd and Flores’ (1987b) book was impactful and divisive. For example, the journal Artificial Intelligence published a set of four widely divergent reviews of the book (Clancey, 1987; Stefik & Bobrow, 1987; Suchman, 1987; Vellino, 1987), prefaced by an introduction noting that “when new books appear to be controversial, we try to present multiple perspectives on them.” Winograd and Flores (1987a) also published a response to the four reviews. In spite of its contentious reception, the book paved the way for research in situated cognition (Clancey, 1997), and it is one of the earliest examples of what is now well-established embodied cognitive science.
The rise of the embodied reaction is the first part of the clash of cultures in Norman’s (1993) sociology of cognitive science. A second part is the response of classical cognitive science to the embodied movement, a response that typically involves questioning the adequacy and the novelty of the new paradigm. An excellent example of this aspect of the culture clash is provided in a series of papers published in the journal Cognitive Science in 1993.
This series began with a paper entitled “Situated action: A symbolic interpretation” (Vera & Simon, 1993), which provided a detailed classical response to theories of situated action (SA) or situated cognition, approaches that belong to embodied cognitive science. This response was motivated by Vera and Simon’s (1993) observation that SA theories reject central assumptions of classical cognitive science: situated action research “denies that intelligent systems are correctly characterized as physical symbol systems, and especially denies that symbolic processing lies at the heart of intelligence” (pp. 7–8). Vera and Simon argued in favor of a much different conclusion: that situated action research is essentially classical in nature. “We find that there is no such antithesis: SA systems are symbolic systems, and some past and present symbolic systems are SA systems” (p. 8).
Vera and Simon (1993) began their argument by characterizing the important characteristics of the two positions that they aimed to integrate. Their view of classical cognitive science is best exemplified by the general properties of physical symbol systems (Newell, 1980) that were discussed in Chapter 3, with prototypical examples being early varieties of production systems (Anderson, 1983; Newell, 1973, 1990; Newell & Simon, 1972).
Vera and Simon (1993) noted three key characteristics of physical symbol systems: perceptual processes are used to establish the presence of various symbols or symbolic structures in memory; reasoning processes are used to manipulate internal symbol strings; and finally, the resulting symbol structures control motor actions on the external world. In other words, sense-think-act processing was explicitly articulated. “Sequences of actions can be executed with constant interchange among (a) receipt of information about the current state of the environment (perception), (b) internal processing of information (thinking), and (c) response to the environment (motor activity)” (p. 10).
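This three-stage cycle can be written out as a bare skeleton. The Python sketch below is an abstraction, not any particular production system: the perceive, match_rules, and execute functions are placeholders supplied by the caller, and the trivial counting environment exists only so that the loop runs.

```python
def run_agent(perceive, match_rules, execute, steps=10):
    """A bare sense-think-act cycle: encode the world, manipulate symbols, then act.

    perceive()        -> a symbolic description of the current environment
    match_rules(wm)   -> an action chosen by manipulating the contents of working memory
    execute(action)   -> changes the external world
    """
    working_memory = []
    for _ in range(steps):
        working_memory.append(perceive())          # sense: assert symbols about the world
        action = match_rules(working_memory)       # think: internal symbol manipulation
        if action is not None:
            execute(action)                        # act: motor response to the environment

# A trivial environment: counting up to a target.
state = {"count": 0}
run_agent(
    perceive=lambda: ("count", state["count"]),
    match_rules=lambda wm: "increment" if wm[-1][1] < 5 else None,
    execute=lambda action: state.update(count=state["count"] + 1),
    steps=10,
)
print(state)  # {'count': 5}
```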
Critical to Vera and Simon’s (1993) attempt to cast situated action in a classical context was their notion of “symbol.” First, symbols were taken to be some sort of pattern, so that pattern recognition processes could assert that some pattern is a token of a particular symbolic type (i.e., symbol recognition). Second, such patterns were defined as true symbols when,
they can designate or denote. An information system can take a symbol token as input and use it to gain access to a referenced object in order to affect it or be affected by it in some way. Symbols may designate other symbols, but they may also designate patterns of sensory stimuli, and they may designate motor actions. (Vera & Simon, 1993, p. 9)
Vera and Simon (1993) noted that situated action or embodied theories are highly variable and therefore difficult to characterize. As a result, they provided a very general account of the core properties of such theories by focusing on a small number, including Winograd and Flores (1987b). Vera and Simon observed that situated action theories require accounts of behaviour to consider situations or contexts, particularly those involving an agent’s environment. Agents must be able to adapt to ill-posed (i.e., difficult to formalize) situations, and do so via direct and continuously changing interactions with the environment.
Vera and Simon (1993) went on to emphasize six main claims that in their view characterized most of the situated action literature:
1. situated action requires no internal representations
2. it operates directly with the environment (sense-act rather than sense-think-act)
3. it involves direct access to affordances
4. it does not use productions
5. it exploits a socially defined, not physically defined, environment
6. it makes no use of symbols.
With this position, Vera and Simon were situated to critique the claim that the embodied approach is qualitatively different from classical cognitive science. They did so by either arguing against the import of some embodied arguments, or by in essence arguing for the formal equivalence of classical and SA theories. Both of these approaches are in accord with Norman’s (1993) portrayal of a culture clash.
As an example of the first strategy, consider Vera and Simon’s (1993) treatment of the notion of readiness-to-hand. This idea is related to Heidegger’s (1962) concept of Dasein, or being-in-the-world, which is an agent’s sense of being engaged with its world. Part of this engagement involves using “entities,” which Heidegger called equipment, and which are experienced in terms of what cognitive scientists would describe as affordances or potential actions (Gibson, 1979). “Equipment is essentially ‘something-in-order-to’” (Heidegger, 1962, p. 97).
Heidegger’s (1962) position was that when agents experience the affordances of equipment, other properties—such as the physical nature of equipment—disappear. This is readiness-to-hand. “That with which our everyday dealings proximally dwell is not the tools themselves. On the contrary, that with which we concern ourselves primarily is the work” (p. 99). Another example of readiness-to-hand is the blind person’s cane, which is not experienced as such when it is being used to navigate, but is instead experienced as an extension of the person themselves (Bateson, 1972, p. 465): “The stick is a pathway along which transforms of difference are being transmitted.”
Heidegger’s philosophy played a dominant role in the embodied theory proposed by Winograd and Flores (1987b). They took readiness-to-hand as evidence of direct engagement with the world; we only become aware of equipment itself when the structural coupling between world, equipment, and agent breaks down. Winograd and Flores took the goal of designing equipment, such as human-computer interfaces, to be creating artifacts that are invisible to us when they are used. “A successful word processing device lets a person operate on the words and paragraphs displayed on the screen, without being aware of formulating and giving commands” (Winograd & Flores, 1987b, p. 164). The invisibility of artifacts—the readiness-to-hand of equipment—is frequently characterized as being evidence of good design (Dourish, 2001; Norman, 1998, 2002, 2004).
Importantly, readiness-to-hand was also used by Winograd and Flores (1987b) as evidence for rejecting the need for classical representations, and to counter the claim that tool use is mediated by symbolic thinking or planning (Miller, Galanter, & Pribram, 1960). From the classical perspective, it might be expected that an agent is consciously aware of his or her plans; the absence of such awareness, or readiness-to-hand, must therefore indicate the absence of planning. Thus readiness-to-hand reflects direct, non-symbolic links between sensing and acting.
If we focus on concernful activity instead of on detached contemplation, the status of this representation is called into question. In driving a nail with a hammer (as opposed to thinking about a hammer), I need not make use of any explicit representation of the hammer. (Winograd & Flores, 1987b, p. 33)
Vera and Simon (1993, p. 19) correctly noted, though, that our conscious awareness of entities is mute with respect to either the nature or the existence of representational formats: “Awareness has nothing to do with whether something is represented symbolically, or in some other way, or not at all.” That is, consciousness of contents is not a defining feature of physical symbol systems. This position is a deft dismissal of using readiness-to-hand to support an anti-representational position.
After dealing with the implications of readiness-to-hand, Vera and Simon (1993) considered alternate formulations of the critiques raised by situated action researchers. Perhaps the prime concern of embodied cognitive science is that the classical approach emphasizes internal, symbolic processing to the near total exclusion of sensing and acting. We saw in Chapter 3 that production system pioneers admitted that their earlier efforts ignored sensing and acting (Newell, 1990). (We also saw an attempt to rectify this in more recent production system architectures [Meyer et al., 2001; Meyer & Kieras, 1997a, 1997b]).
Vera and Simon (1993) pointed out that the classical tradition has never disagreed with the claim that theories of cognition cannot succeed by merely providing accounts of internal processing. Action and environment are key elements of pioneering classical accounts (Miller, Galanter, & Pribram, 1960; Simon, 1969). Vera and Simon stress this by quoting the implications of Simon’s (1969) own parable of the ant:
The proper study of mankind has been said to be man. But . . . man—or at least the intellective component of man—may be relatively simple; . . . most of the complexity of his behavior may be drawn from his environment, from his search for good designs. (Simon, 1969, p. 83)
Modern critics of the embodied notion of the extended mind (Adams & Aizawa, 2008) continue to echo this response: “The orthodox view in cognitive science maintains that minds do interact with their bodies and their environments” (pp. 1–2).
Vera and Simon (1993) emphasized the interactive nature of classical models by briefly discussing various production systems designed to interact with the world. These included the Phoenix project, a system that simulates the fighting of forest fires in Yellowstone National Park (Cohen et al., 1989), as well as the Navlab system for navigating an autonomous robotic vehicle (Pomerleau, 1991; Thorpe, 1990). Vera and Simon also described a production system for solving the Towers of Hanoi problem, but it was highly scaffolded. That is, its memory for intermediate states of the problem was in the external towers and discs themselves; the production system had neither an internal representation of the problem nor a goal stack to plan its solution. Instead, it solved the problem perceptually, with its productions driven by the changing appearance of the problem over time.
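The spirit of such a scaffolded solver can be illustrated with the well-known iterative strategy for the puzzle, written as two condition-action rules keyed to the externally visible discs. This is not the production system Vera and Simon describe, only a sketch: the peg lists stand in for the external scaffold, and the sole piece of internal state is whether the previous move involved the smallest disc (tracked here by the parity of the move count).

```python
def legal(pegs, src, dst):
    return pegs[src] and (not pegs[dst] or pegs[src][-1] < pegs[dst][-1])

def solve_hanoi(n):
    # The peg lists stand in for the externally visible towers and discs:
    # they are the scaffold, not an internal plan or goal stack.
    pegs = {"A": list(range(n, 0, -1)), "B": [], "C": []}
    cycle = ["A", "B", "C"] if n % 2 == 0 else ["A", "C", "B"]
    moves = []
    while len(pegs["C"]) < n:
        if len(moves) % 2 == 0:
            # Production 1: move the smallest disc one peg along its cycle.
            src = next(p for p in pegs if pegs[p] and pegs[p][-1] == 1)
            dst = cycle[(cycle.index(src) + 1) % 3]
        else:
            # Production 2: make the only legal move that leaves the smallest disc alone.
            src, dst = next((s, d) for s in pegs for d in pegs
                            if s != d and pegs[s] and pegs[s][-1] != 1 and legal(pegs, s, d))
        pegs[dst].append(pegs[src].pop())
        moves.append((src, dst))
    return moves

print(len(solve_hanoi(5)))  # 31 moves, i.e., 2**5 - 1
```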
The above examples were used to argue that at least some production systems are situated action models. Vera and Simon (1993) completed their argument by making the parallel argument that some notable situated action theories are symbolic because they are instances of production systems. One embodied theory that received this treatment was Rodney Brooks’ behaviour-based robotics (Brooks, 1991, 1989, 1999, 2002), which was introduced in Chapter 5. To the extent that they agreed that Brooks’ robots do not employ representations, Vera and Simon suggested that this limits their capabilities. “It is consequently unclear whether Brooks and his Creatures are on the right track towards fully autonomous systems that can function in a wider variety of environments” (Vera & Simon, 1993, p. 35).
However, Vera and Simon (1993) went on to suggest that even systems such as Brooks’ robots could be cast in a symbolic mold. If a system has a state that is in some way indexed to a property or entity in the world, then that state should be properly called a symbol. As a result, a basic sense-act relationship that was part of the most simplistic subsumption architecture would be an example of a production for Vera and Simon.
Furthermore, Vera and Simon (1993) argued that even if a basic sense-act relationship is wired in, and therefore there is no need to view it as symbolized, it is symbolic nonetheless:
On the condition end, the neural impulse aroused by the encoded incoming stimuli denotes the affordances that produced these stimuli, while the signals to efferent nerves denote the functions of the actions. There is every reason to regard these impulses and signals as symbols: A symbol can as readily consist of the activation of a neuron as it can of the creation of a tiny magnetic field. (Vera & Simon, 1993, p. 42)
Thus any situated action model can be described in a neutral, symbolic language— as a production system—including even the most reflexive, anti-representational instances of such models.
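Vera and Simon's re-description can be made concrete by writing the same reflex twice. The sketch below is invented: a subsumption-style sense-act link wired directly from a sonar reading to a motor command, and the identical behaviour expressed as a condition-action production whose matched state is what they would call a symbol.

```python
class Motors:
    """Stand-in actuator so the sketch runs; a real robot would drive hardware here."""
    def turn_left(self):
        print("turning left")

# A wired-in, subsumption-style reflex: sensing is coupled directly to acting.
def avoid_reflex(sonar_cm, motors):
    if sonar_cm < 20:
        motors.turn_left()

# The same behaviour re-described in production-system terms: a condition over a
# state that designates something in the world, paired with a motor action.
PRODUCTIONS = [
    {"condition": lambda state: state["sonar_cm"] < 20,
     "action": lambda motors: motors.turn_left()},
]

def production_cycle(state, motors):
    for rule in PRODUCTIONS:
        if rule["condition"](state):
            rule["action"](motors)
            break

avoid_reflex(12, Motors())                       # reflex version
production_cycle({"sonar_cm": 12}, Motors())     # production version, same behaviour
```

Whether this re-description adds anything, or merely relabels a reflex, is exactly what the critics discussed below dispute.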
The gist of Vera and Simon’s (1993) argument, then, was that there is no principled difference between classical and embodied theories, because embodied models that interact with the environment are in essence production systems. Not surprisingly, this position attracted a variety of criticisms.
For example, Cognitive Science published a number of articles in response to the original paper by Vera and Simon (Norman, 1993). One theme apparent in some of these papers was that Vera and Simon’s definition of symbol was too vague to be useful (Agre, 1993; Clancey, 1993). Agre, for instance, accused Vera and Simon not of defending a well-articulated theory, but instead of exploiting an indistinct worldview. He argued that they “routinely claim vindication through some ‘symbolic’ gloss of whatever phenomenon is under discussion. The problem is that just about anything can seem ‘symbolic’ if you look at it right” (Agre, 1993, p. 62).
One example of such vagueness was Vera and Simon’s (1993) definition of a symbol as a “designating pattern.” What do they mean by designate? Designation has occurred if “an information system can take a symbol token as input and use it to gain access to a referenced object in order to affect it or to be affected by it in some way” (Vera & Simon, 1993, p. 9). In other words the mere establishment of a deictic or indexing relationship (Pylyshyn, 1994, 2000, 2001) between the world and some state of an agent is sufficient for Vera and Simon to deem that state “symbolic.”
This very liberal definition of symbolic leads to some very glib characterizations of certain embodied positions. Consider Vera and Simon’s (1993) treatment of affordances as defined in the ecological theory of perception (Gibson, 1979). In Gibson’s theory, affordances—opportunities for action offered by entities in the world—are perceived directly; no intervening symbols or representations are presumed. “When I assert that perception of the environment is direct, I mean that it is not mediated by retinal pictures, neural pictures, or mental pictures” (p. 147). Vera and Simon (1993, p. 20) denied direct perception: “the thing that corresponds to an affordance is a symbol stored in central memory denoting the encoding in functional terms of a complex visual display, the latter produced, in turn, by the actual physical scene that is being viewed.”
Vera and Simon (1993) adopted this representational interpretation of affordances because, by their definition, an affordance designates some worldly state of affairs and must therefore be symbolic. As a result, Vera and Simon redefined the sense-act links of direct perception as indirect sense-think-act processing. To them, affordances were symbols informed by senses, and actions were the consequence of the presence of motor representations. Similar accounts of affordances have been proposed in the more recent literature (Sahin et al., 2007).
While Vera and Simon’s (1993) use of designation to provide a liberal definition of symbol permits a representational account of anti-representational theories, it does so at the expense of neglecting core assumptions of classical models. In particular, other leading classical cognitive scientists adopt a much more stringent definition of symbol that prevents, for instance, direct perception to be viewed as a classical theory. Pylyshyn has argued that cognitive scientists must adopt a cognitive vocabulary in their theories (Pylyshyn, 1984). Such a vocabulary captures regularities by appealing to the contents of representational states, as illustrated in adopting the intentional stance (Dennett, 1987) or in employing theory-theory (Gopnik & Meltzoff, 1997; Gopnik & Wellman, 1992).
Importantly, for Pylyshyn mere designation is not sufficient to define the content of symbols, and therefore is not sufficient to support a classical or cognitive theory. As discussed in detail in Chapter 8, Pylyshyn has developed a theory of vision that requires indexing or designation as a primitive operation (Pylyshyn, 2003c, 2007). However, this theory recognizes that designation occurs without representing the features of indexed entities, and therefore does not establish cognitive content. As a result, indexing is a critical component of Pylyshyn’s theory—but it is also a component that he explicitly labels as being non-representational and non-cognitive.
Vera and Simon’s (1993) vagueness in defining the symbolic has been a central concern in other critiques of their position. It has been claimed that Vera and Simon omit one crucial characteristic in their definition of symbol system: the capability of being a universal computing device (Wells, 1996). Wells (1996) noted in one example that devices such as Brooks’ behavior-based robots are not capable of universal computation, one of the defining properties of a physical symbol system (Newell & Simon, 1976). Wells argues that if a situated action model is not universal, then it cannot be a physical symbol system, and therefore cannot be an instance of the class of classical or symbolic theories.
The trajectory from Winograd’s (1972a) early classical research to his pioneering articulation of the embodied approach (Winograd & Flores, 1987b) and the route from Winograd and Flores’ book to Vera and Simon’s (1993) classical account of situated action to the various responses that this account provoked raise a number of issues.
First, this sequence of publications nicely illustrates Norman’s (1993) description of culture clashes in cognitive science. Dissatisfied with the perceived limits of the classical approach, Winograd and Flores highlighted its flaws and detailed the potential advances of the embodied approach. In reply, Vera and Simon (1993) discounted the differences between classical and embodied theories, and even pointed out how connectionist networks could be cast in the light of production systems.
Second, the various positions described above highlight a variety of perspectives concerning the relationships between different schools of thought in cognitive science. At one extreme, all of these different schools of thought are considered to be classical in nature, because all are symbolic and all fall under a production system umbrella (Vera & Simon, 1993). At the opposite extreme, there are irreconcilable differences between the three approaches, and supporters of one approach argue for its adoption and for the dismissal of the others (Chemero, 2009; Fodor & Pylyshyn, 1988; Smolensky, 1988; Winograd & Flores, 1987b).
In between these poles, one can find compromise positions in which hybrid models that call upon multiple schools of thought are endorsed. These include proposals in which different kinds of theories are invoked to solve different sorts of problems, possibly at different stages of processing (Clark, 1997; Pylyshyn, 2003c). These also include proposals in which different kinds of theories are invoked simultaneously to co-operatively achieve a full account of some phenomenon (McNeill, 2005).
Third, the debate between the extreme poles appears to hinge on core definitions used to distinguish one position from another. Is situated cognition classical? As we saw earlier, this depends on the definition of symbolic, which is a key classical idea, but it has not been as clearly defined as might be expected (Searle, 1992). It is this third point that is the focus of the remainder of this chapter. What are the key concepts that are presumed to distinguish classical cognitive science from its putative competitors? When one examines these concepts in detail, do they truly distinguish between the positions? Or do they instead reveal potential compatibilities between the different approaches to cognitive science?
In previous chapters, the elements of classical, of connectionist, and of embodied cognitive science have been presented. We have proceeded in a fashion that accentuated potential differences between these three schools of thought. However, now that the elements of all three approaches have been presented, we are in a position to explore how real and extensive these differences are. Is there one cognitive science, or many? One approach to answering this question is to consider whether the distinctions between the elements of the cognitive sciences are truly differences in kind.
The position of the current chapter is that there are strong relations amongst the three schools of thought in cognitive science; differences between these schools are more matters of degree than qualitative differences of kind. Let us set a context for this discussion by providing an argument similar in structure to the one framed by Adams and Aizawa (2008) against the notion of the extended mind.
One important critique of embodied cognitive science’s proposal of the extended mind is based on an analysis of the mark of the cognitive (Adams & Aizawa, 2008). The mark of the cognitive is a set of necessary and sufficient features that distinguish cognitive phenomena from other phenomena. Adams and Aizawa’s central argument against the extended mind is that it fails to provide the required features.
If one thinks that cognitive processing is simply any sort of dynamical system process, then—so understood—cognitive processing is again likely to be found spanning the brain, body and environment. But, so understood, cognitive processing will also be found in the swinging of a pendulum of a grandfather clock or the oscillations of the atoms of a hydrogen molecule. Being a dynamical system is pretty clearly insufficient for cognition or even a cognitive system. (Adams & Aizawa, 2008, p. 23)
Connectionist and embodied approaches can easily be characterized as explicit reactions against the classical viewpoint. That is, they view certain characteristics of classical cognitive science as being incorrect, and they propose theories in which these characteristics have been removed. For instance, consider Rodney Brooks’ reaction against classical AI and robotics:
During my earlier years as a postdoc at MIT, and as a junior faculty member at Stanford, I had developed a heuristic in carrying out research. I would look at how everyone else was tackling a certain problem and find the core central thing that they all agreed on so much that they never even talked about it. I would negate the central implicit belief and see where it led. This often turned out to be quite useful. (Brooks, 2002, p. 37)
This reactive approach suggests a context for the current chapter: that there should be a mark of the classical, a set of necessary and sufficient features that distinguish the theories of classical cognitive science from the theories of either connectionist or of embodied cognitive science. Given the material presented in earlier chapters, a candidate set of such features can easily be produced: central control, serial processing, internal representations, explicit rules, the disembodied mind, and so on. Alternative approaches to cognitive science can be characterized as taking a subset of these features and inverting them in accordance with Brooks’ heuristic.
In the sections that follow we examine candidate features that define the mark of the classical. It is shown that none of these features provide a necessary and sufficient distinction between classical and non-classical theories. For instance, central control is not a required property of a classical system, but was incorporated as an engineering convenience. Furthermore, central control is easily found in non-classical systems such as connectionist networks.
If there is no mark of the classical, then this indicates that there are not many cognitive sciences, but only one. Later chapters support this position by illustrating theories of cognitive science that incorporate elements of all three approaches. | textbooks/socialsci/Psychology/Cognitive_Psychology/Mind_Body_World_-_Foundations_of_Cognitive_Science_(Dawson)/07%3A_Marks_of_the_Classical/7.03%3A_Marks_of_the_Classical.txt |
Two of the key elements of a classical theory of cognitive science are a set of primitive symbols and a set of primitive processes for symbol manipulation. However, these two necessary components are not by themselves sufficient to completely define a working classical model. A third element is also required: a mechanism of control.
Control is required to determine “what to do next,” to choose which primitive operation is to be applied at any given moment.
Beyond the capability to execute the basic operations singly, a computing machine must be able to perform them according to the sequence—or rather, the logical pattern—in which they generate the solution of the mathematical problem that is the actual purpose of the calculation in hand. (von Neumann, 1958, p. 11)
The purpose of this section is to explore the notion of control from the perspective of the three schools of thought in cognitive science. This is done by considering cognitive control in the context of the history of the automatic control of computing devices. It is argued that while the different approaches in cognitive science may claim to have very different accounts of cognitive control, there are in fact no qualitative differences amongst these accounts.
One of the earliest examples of automatic control was Jacquard’s punched card mechanism for, in essence, programming a loom to weave a particular pattern into silk fabric (Essinger, 2004), as discussed in Chapter 3. One punched card controlled the appearance of one thread row in the fabric. Holes punched in the card permitted rods to move, which raised specified threads to make them visible at this point in the fabric. The cards that defined a pattern were linked together as a belt that advanced one card at a time during weaving. A typical pattern to be woven was defined by around 2,000 to 4,000 different punched cards; very complex patterns required using many more cards. For instance, Jacquard’s self-portrait in silk was defined by 24,000 different punched cards.
Jacquard patented his loom in 1804 (Essinger, 2004). By the end of the nineteenth century, punched cards inspired by his invention had a central place in the processing of information. However, their role was to represent this information, not to control how it was manipulated.
After Herman Hollerith graduated from Columbia School of Mines in 1879, he was employed to work on the 1880 United States Census, which was the first census not only to collect population data but also to address economic issues (Essinger, 2004). Hollerith's census experience revealed a marked need to automate the processing of the huge amount of information that had been collected.
While engaged in work upon the tenth census, the writer’s attention was called to the methods employed in the tabulation of population statistics and the enormous expense involved. These methods were at the time described as ‘barbarous[;] some machine ought to be devised for the purpose of facilitating such tabulations’. (Hollerith, 1889, p. 239)
Hollerith’s response was to represent census information using punched cards (Austrian, 1982; Comrie, 1933; Hollerith, 1889). A standard punched card, called a tabulating card, measured 18.7 cm by 8.3 cm, and its upper left hand corner was beveled to prevent the card from being incorrectly oriented. A blank tabulating card consisted of 80 vertical columns, with 12 different positions in each column through which a hole could be punched. The card itself acted as an electrical insulator and was passed through a wire brush and a brass roller. The brush and roller came in contact wherever a hole had been punched, completing an electrical circuit and permitting specific information to be read from a card and acted upon (Eckert, 1940).
Hollerith invented a set of different devices for manipulating tabulating cards. These included a card punch for entering data by punching holes in cards, a verifier for checking for data entry errors, a counting sorter for sorting cards into different groups according to the information punched in any column of interest, a tabulator or accounting machine for adding numbers punched into a set of cards, and a multiplier for taking two different numbers punched on a card, computing their product, and punching the product onto the same card. Hollerith's devices were employed during the 1890 census. They saved more than two years of work and $5 million, and permitted complicated tables involving relationships between different variables to be easily created (Essinger, 2004).
In Hollerith’s system, punched cards represented information, and the various specialized devices that he invented served as the primitive processes available for manipulating information. Control, however, was not mechanized—it was provided by a human operator of the various tabulating machines in a room. “The calculating process was done by passing decks of cards from one machine to the next, with each machine contributing something to the process” (Williams, 1997, p. 253). This approach was very powerful. In what has been described as the first book about computer programming, Punched Card Methods in Scientific Computation (Eckert, 1940), astronomer Wallace Eckert described how a set of Hollerith’s machines—a punched card installation—could be employed for harmonic analysis, for solving differential equations, for computing planetary perturbations, and for performing many other complex calculations.
The human controller of a punched card installation was in a position analogous to a weaver in Lyon prior to the invention of Jacquard’s loom. That is, both were human operators—or more precisely, human controllers—of machines responsible for producing complicated products. Jacquard revolutionized the silk industry by automating the control of looms. Modern computing devices arose from an analogous innovation, automating the control of Hollerith’s tabulators (Ceruzzi, 1997, p. 8): The entire room comprising a punched card installation “including the people in it—and not the individual machines is what the electronic computer eventually replaced.”
The first phase of the history of replacing punched card installations with automatically controlled computing devices involved the creation of calculating devices that employed mechanical, electromechanical, or relay technology (Williams, 1997). This phase began in the 1930s with the creation of the German calculators invented by Konrad Zuse (Zuse, 1993), the Bell relay computers developed by George Stibitz (Irvine, 2001; Stibitz & Loveday, 1967a, 1967b), and the Harvard machines designed by Howard Aiken (Aiken & Hopper, 1946).
The internal components of any one of these calculators performed operations analogous to those performed by the different Hollerith machines in a punched card installation. In addition, the actions of these internal components were automatically controlled. Completing the parallel with the Jacquard loom, this control was accomplished using punched tape or cards. The various Stibitz and Aiken machines read spools of punched paper tape; Zuse’s machines were controlled by holes punched in discarded 35 mm movie film (Williams, 1997). The calculators developed during this era by IBM, a company that had been founded in part from Hollerith’s Computer Tabulating Recording Company, were controlled by decks of punched cards (Williams, 1997).
In the 1940s, electromechanical or relay technology was replaced with much faster electronic components, leading to the next generation of computer devices. Vacuum tubes were key elements of both the Atanasoff-Berry computer (ABC), created by John Atanasoff and Clifford Berry (Burks & Burks, 1988; Mollenhoff, 1988; Smiley, 2010), and the ENIAC (Electronic Numerical Integrator and Computer) engineered by Presper Eckert and John Mauchly (Burks, 2002; Neukom, 2006).
The increase in speed of the internal components of electronic computers caused problems with paper tape or punched card control. The issue was that the electronic machines were 500 times faster than relay-based devices (Pelaez, 1999), which meant that traditional forms of control were far too slow.
This control problem was solved for Eckert and Mauchly’s ENIAC by using a master controller that itself was an electronic device. It was a set of ten electronic switches that could each be set to six different values; each switch was associated with a counter that could be used to advance a switch to a new setting when a predefined value was reached (Williams, 1997). The switches would route incoming signals to particular components of ENIAC, where computations were performed; a change in a switch’s state would send information to a different component of ENIAC. The control of this information flow was accomplished by using a plug board to physically wire the connections between switches and computer components. This permitted control to match the speed of computation, but at a cost:
ENIAC was a fast but relatively inflexible machine. It was best suited for use in long and repetitious calculations. Once it was wired up for a particular program, it was in fact a special purpose machine. Adapting it to another purpose (a different problem) required manual intervention to reconfigure the electrical circuits. (Pelaez, 1999, p. 361)
Typically two full days of rewiring the plug board were required to convert ENIAC from one special purpose machine to another.
Thus the development of electronic computers led to a crisis of control. Punched tape provided flexible, easily changed control. However, punched tape readers were too slow to take practical advantage of the speed of the new machines. Plug boards provided control that matched the speed of the new componentry but was inflexible and time-consuming to change. This crisis of control inspired another innovation, the stored program computer (Aspray, 1982; Ceruzzi, 1997; Pelaez, 1999).
The notion of the stored program computer was first laid out in 1945 by John von Neumann in a draft memo that described the properties of the EDVAC (Electronic Discrete Variable Automatic Computer), the computer that directly descended from the ENIAC (Godfrey & Hendry, 1993; von Neumann, 1993). One of the innovations of this design was the inclusion of a central controller. In essence, the instructions that ordinarily would be represented as a sequence on a punched tape would instead be represented internally in EDVAC’s memory. The central controller had the task of fetching, interpreting, and executing an instruction from memory and then repeating this process after proceeding to the next instruction in the sequence.
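The fetch-decode-execute cycle performed by such a central controller can be illustrated with a minimal sketch. The three-instruction machine below, its opcodes, and its memory layout are invented for illustration and are not the EDVAC's; the point is only that instructions and data live in the same memory, and that a single controller works through the instructions in sequence.

```python
# A minimal sketch of a stored program machine: instructions and data share
# one memory, and a central controller repeatedly fetches, decodes, and
# executes the instruction at the current address.
memory = [
    ("LOAD", 7),     # address 0: copy the value at address 7 into the accumulator
    ("ADD", 8),      # address 1: add the value stored at address 8
    ("STORE", 9),    # address 2: write the accumulator to address 9
    ("HALT", None),  # address 3: stop
    None, None, None,
    5,               # address 7: first operand
    3,               # address 8: second operand
    0,               # address 9: result
]

accumulator = 0
program_counter = 0

while True:
    opcode, operand = memory[program_counter]   # fetch
    program_counter += 1                        # advance to the next instruction
    if opcode == "LOAD":                        # decode and execute
        accumulator = memory[operand]
    elif opcode == "ADD":
        accumulator += memory[operand]
    elif opcode == "STORE":
        memory[operand] = accumulator
    elif opcode == "HALT":
        break

print(memory[9])  # prints 8: the program lived in the same memory as its data
```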
There is no clear agreement about which particular device was the first stored program computer; several candidate machines were created in the same era. These include the EDVAC (created 1945–1950) (Reitwiesner, 1997; von Neumann, 1993; Williams, 1993), Princeton’s IAS computer (created 1946–1951) (Burks, 2002; Cohen, 1999), and the Manchester machine (running in 1948) (Copeland, 2011; Lavington, 1980). Later work on the ENIAC also explored its use of stored programs (Neukom, 2006). Regardless of “firsts,” all of these machines were functionally equivalent in the sense that they replaced external control—as by a punched tape—with internalizing tape instructions into memory.
The invention of the stored program computer led directly to computer science’s version of the classical sandwich (Hurley, 2001). “Sensing” involves loading the computer’s internal memory with both the program and the data to be processed. “Thinking” involves executing the program and performing the desired calculations upon the stored data. “Acting” involves providing the results of the calculations to the computer’s operator, for instance by punching an output tape or a set of punched cards.
The classical sandwich is one of the defining characteristics of classical cognitive science (Hurley, 2001), and the proposal of a sense-act cycle to replace the sandwich’s sense-think-act processing (Brooks, 1999, 2002; Clark, 1997, 2008; Pfeifer & Scheier, 1999) is one of the characteristic reactions of embodied cognitive science against the classical tradition (Shapiro, 2011). Classical cognitive science’s adoption of the classical sandwich was a natural consequence of being inspired by computer science’s approach to information processing, which, at the time that classical cognitive science was born, had culminated in the invention of the stored program computer.
However, we have seen from the history leading up to its invention that the stored program computer—and hence the classical sandwich—was not an in-principle requirement for information processing. It was instead the result of a practical need to match the speed of control with the speed of electronic components. Indeed, the control mechanisms of a variety of information processing models that are central to classical cognitive science are quite consistent with embodied cognitive science.
For example, the universal Turing machine is critically important to classical cognitive science, not only in its role of defining the core elements of symbol manipulation, but also in its function of defining the limits of computation (Dawson, 1998). However, in most respects a universal Turing machine is a device that highlights some of the key characteristics of the embodied approach.
For instance, the universal Turing machine is certainly not a stored program computer (Wells, 2002). If one were to actually build such a device—the original was only used as a theoretical model (Turing, 1936)—then the only internal memory that would be required would be for holding the machine table and the machine head's internal state. (That is, if any internal memory was required at all. Turing's notion of machine state was inspired by the different states of a typewriter's keys [Hodges, 1983], and thus a machine state may not be remembered or represented, but rather merely adopted. Similarly, the machine table would presumably be built from physical circuitry, and again would be neither represented nor remembered.) The program executed by a universal Turing machine, and the data manipulations that result, are completely scaffolded. The machine's memory is literally an external notebook analogous to the one used by Otto in the famous argument for extending the mind (Clark & Chalmers, 1998). That is, the data and program for a universal Turing machine are both stored externally, on the machine's ticker tape.
Indeed, the interactions between a universal Turing machine’s machine head and its ticker tape are decidedly of the sense-act, and not of the sense-think-act, variety. Every possible operation in the machine table performs an action (either writing something on the ticker tape or moving the tape one cell to the right or to the left) immediately after sensing the current symbol on the tape and the current state of the machine head. No other internal, intermediary processing (i.e., thinking) is required.
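This sense-act character is easy to exhibit. The machine table below is an invented toy (a unary incrementer), not one of Turing's examples; each step simply maps the sensed pair of machine state and tape symbol onto an action, with no intervening stage of processing.

```python
# Each step senses (current state, current symbol) and immediately acts:
# write a symbol, move the head, adopt a new state. No intermediate
# "thinking" stage intervenes between sensing and acting.
# Toy machine table for a unary incrementer: scan right past the 1s,
# write an extra 1 over the first blank, then halt.
table = {
    ("scan", "1"): ("1", +1, "scan"),
    ("scan", "_"): ("1", +1, "halt"),
}

tape = list("111____")   # the external scaffold: all data live on the tape
head, state = 0, "scan"

while state != "halt":
    symbol = tape[head]                          # sense
    write, move, state = table[(state, symbol)]
    tape[head] = write                           # act: write
    head += move                                 # act: move

print("".join(tape))  # 1111___
```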
Similarly, external scaffolding was characteristic of later-generation relay computers developed at Bell labs, such as the Mark III. These machines employed more than one tape reader, permitting external tapes to be used to store tables of precomputed values. This resulted in the CADET architecture (“Can’t Add, Doesn’t Even Try”) that worked by looking up answers to addition and other problems instead of computing the result (Williams, 1997). This was possible because of a “hunting circuit” that permitted the computer to move to any desired location on a punched tape (Stibitz & Loveday, 1967b). ENIAC employed scaffolding as well, obtaining standard function values by reading them from cards (Williams, 1997).
From an engineering perspective, the difference between externally controlled and stored program computers was quantitative (e.g., speed of processing) and not qualitative (e.g., type of processing). In other words, to a computer engineer there may be no principled difference between a sense-act device such as a universal Turing machine and a sense-think-act computer such as the EDVAC. In the context of cognitive control, then, there may be no qualitative element that distinguishes the classical and embodied approaches.
Perhaps a different perspective on control may reveal sharp distinctions between classical and embodied cognitive science. For instance, a key element in the 1945 description of the EDVAC was the component called the central control unit (Godfrey & Hendry, 1993; von Neumann, 1993). It was argued by von Neumann that the most efficient way to control a stored program computer was to have a physical component of the device devoted to control (i.e., to the fetching, decoding, and executing of program steps). Von Neumann called this the “central control organ.” Perhaps it is the notion that control is centralized to a particular location or organ of a classical device that serves as the division between classical and embodied models. For instance, behaviour-based roboticists often strive to decentralize control (Brooks, 1999). In Brooks’ early six-legged walking robots like Attila, each leg of the robot was responsible for its own control, and no central control organ was included in the design (Brooks, 2002).
However, it appears that the need for a central control organ was tied again to pragmatic engineering rather than to a principled requirement for defining information processing. The adoption of a central controller reflected adherence to engineering’s principle of modular design (Marr, 1976). According to this principle, “any large computation should be split up and implemented as a collection of small sub-parts that are as nearly independent of one another as the overall task allows” (p. 485). Failure to devise a functional component or process according to the principle of modular design typically means,
that the process as a whole becomes extremely difficult to debug or to improve, whether by a human designer or in the course of natural evolution, because a small change to improve one part has to be accompanied by many simultaneous compensating changes elsewhere. (Marr, 1976, p. 485)
Digital computers were explicitly designed according to the principle of modular design, which von Neumann (1958) called “the principle of only one organ for each basic operation” (p. 13). Not only was this good engineering practice, but von Neumann also argued that this principle distinguished digital computers from their analog ancestors such as the differential analyzer (Bush, 1931).
The principle of modular design is also reflected in the architecture of the universal Turing machine. The central control organ of this device is its machine table (see Figure 3-8), which is separate and independent from the other elements of the device, such as the mechanisms for reading and writing the tape, the machine state, and so on. Recall that the machine table is a set of instructions; each instruction is associated with a specific input symbol and a particular machine state. When a Turing machine in physical state x reads symbol y from the tape, it proceeds to execute the instruction at coordinates (x, y) in its machine table.
Importantly, taking von Neumann's (1958) principle of only one organ for each basic operation to the extreme yields a Turing machine with completely decentralized control. Rather than taking the entire machine table as a central control organ, one could plausibly design an uber-modular system in which each instruction was associated with its own organ. For example, one could replace the machine table with a production system in which each production was responsible for one of the machine table's entries. The conditions for each production would be a particular machine state and a particular input symbol, and the production's action would be the required manipulation of the ticker tape. In this case, the production system version of the Turing machine would behave identically to the original version. However, it would no longer have a centralized control organ.
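The thought experiment can be sketched directly. The toy machine table below (again an invented unary incrementer) is replaced by a set of independent productions, one per table entry, and the resulting machine behaves exactly as before while lacking any single control organ.

```python
# The same toy incrementer, but the machine table is replaced by a set of
# independent productions. Each production owns one entry: its condition is a
# particular (state, symbol) pair, its action the required tape manipulation.
# No single table acts as a central control organ; whichever production's
# condition matches the current situation fires.
def make_production(state, symbol, write, move, next_state):
    def condition(current_state, current_symbol):
        return (current_state, current_symbol) == (state, symbol)
    def action(tape, head):
        tape[head] = write
        return head + move, next_state
    return condition, action

productions = [
    make_production("scan", "1", "1", +1, "scan"),
    make_production("scan", "_", "1", +1, "halt"),
]

tape, head, state = list("111____"), 0, "scan"
while state != "halt":
    for condition, action in productions:
        if condition(state, tape[head]):
            head, state = action(tape, head)
            break

print("".join(tape))  # identical behaviour, but no centralized machine table
```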
In short, central control is not a necessary characteristic of classical information processing, and therefore does not distinguish between classical and embodied theories. Another way of making this point is to remember the Chapter 3 observation that production systems are prototypical examples of classical architectures (Anderson et al., 2004; Newell, 1973), but they, like many embodied models (Dawson, Dupuis, & Wilson, 2010; Holland & Melhuish, 1999; Susi & Ziemke, 2001; Theraulaz & Bonabeau, 1999), are controlled stigmergically. “Traditional production system control is internally stigmergic, because the contents of working memory determine which production will act at any given time” (Dawson, Dupuis, & Wilson, 2010, p. 76).
The discussion to this point has used the history of the automatic control of computers to argue that characteristics of control cannot be used to provide a principled distinction between classical and embodied cognitive science. Let us now examine connectionist cognitive science in the context of cognitive control.
Connectionists have argued that the nature of cognitive control provides a principled distinction between network models and models that belong to the classical tradition (Rumelhart & McClelland, 1986b). In particular, connectionist cognitive scientists claim that control in their networks is completely decentralized, and that this property is advantageous because it is biologically plausible. “There is one final aspect of our models which is vaguely derived from our understanding of brain functioning. This is the notion that there is no central executive overseeing the general flow of processing” (Rumelhart & McClelland, 1986b, p. 134).
However, the claim that connectionist networks are not under central control is easily refuted; Dawson and Schopflocher (1992a) considered a very simple connectionist system, the distributed memory or standard pattern associator described in Chapter 4 (see Figure 4.2.1). They noted that connectionist researchers typically describe such models as being autonomous, suggesting that the key operations of such a memory (namely learning and recall) are explicitly defined in its architecture, that is, in the connection weights and processors, as depicted in Figure 4.2.1.
However, Dawson and Schopflocher (1992a) proceeded to show that even in such a simple memory system, whether the network learns or recalls information depends upon instructions provided by an external controller: the programmer demonstrating the behaviour of the network. When instructed to learn, the components of the standard pattern associator behave one way. However, when instructed to recall, these same components behave in a very different fashion. The nature of the network’s processing depends critically upon signals provided by a controller that is not part of the network architecture.
For example, during learning the output units in a standard pattern associator serve as a second bank of input units, but during recall they record the network’s response to signals sent from the other input units. How the output units behave is determined by whether the network is involved in either a learning phase or a recall phase, which is signaled by the network’s user, not by any of its architectural components. Similarly, during the learning phase connection weights are modified according to a learning rule, but the weights are not modified during the recall phase. How the weights behave is under the user’s control. Indeed, the learning rule is defined outside the architecture of the network that is visible in Figure 4.2.1.
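The role of the external controller can be made concrete with a minimal sketch of a standard pattern associator (a simple Hebbian version is assumed here). Nothing in the network's processors or connection weights decides whether it is learning or recalling; that decision arrives as a mode signal from outside the architecture.

```python
import numpy as np

# A minimal standard pattern associator. Whether the second bank of units acts
# as a set of inputs (learning) or reports a response (recall) is decided by
# the mode argument, a signal supplied by the user rather than by anything in
# the network's own architecture of processors and connection weights.
class PatternAssociator:
    def __init__(self, n_in, n_out):
        self.weights = np.zeros((n_out, n_in))

    def step(self, mode, cue, target=None, rate=1.0):
        if mode == "learn":      # output units serve as a second bank of inputs
            self.weights += rate * np.outer(target, cue)   # Hebbian weight change
            return None
        if mode == "recall":     # the same units now report the network's response
            return self.weights @ cue

net = PatternAssociator(4, 4)
cue = np.array([1.0, -1.0, 1.0, -1.0]) / 2
memory = np.array([1.0, 1.0, -1.0, -1.0]) / 2

net.step("learn", cue, target=memory)   # the external controller says: learn
print(net.step("recall", cue))          # the external controller says: recall
```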
Dawson and Schopflocher (1992a) concluded that,
current PDP networks are not autonomous because their learning principles are not in fact directly realized in the network architecture. That is, networks governed by these principles require explicit signals from some external controller to determine when they will learn or when they will perform a learned task. (Dawson & Schopflocher, 1992a, pp. 200–201)
This is not a principled limitation, for Dawson and Schopflocher presented a much more elaborate architecture that permits a standard pattern associator to learn and recall autonomously, that is, without the need for a user’s intervention. However, this architecture is not typical; standard pattern associators like the one in Figure 4.2.1 demand executive control.
The need for such control is not limited to simple distributed memories. The same is true for a variety of popular and more powerful multilayered network architectures, including multilayered perceptrons and self-organizing networks (Roy, 2008). “There is clearly a central executive that oversees the operation of the back-propagation algorithm” (p. 1436). Roy (2008) proceeded to argue that such control is itself required by brain-like systems, and therefore biologically plausible networks demand not only an explicit account of data transformation, but also a biological theory of executive control.
In summary, connectionist networks generally require the same kind of control that is a typical component of a classical model. Furthermore, it was argued earlier that there does not appear to be any principled distinction between this kind of control and the type that is presumed in an embodied account of cognition. Control is a key characteristic of a cognitive theory, and different schools of thought in cognitive science are united in appealing to the same type of control mechanisms. In short, central control is not a mark of the classical.
Classical cognitive science was inspired by the characteristics of digital computers; few would deny that the classical approach exploits the digital computer metaphor (Pylyshyn, 1979a). Computers are existence proofs that physical machines are capable of manipulating, with infinite flexibility, semantically interpretable expressions (Haugeland, 1985; Newell, 1980; Newell & Simon, 1976). Computers illustrate how logicism can be grounded in physical mechanisms.
The connectionist and the embodied reactions to classical cognitive science typically hold that the digital computer metaphor is not appropriate for theories of cognition. It has been argued that the operations of traditional electronic computers are qualitatively different from those of human cognition, and as a result the classical models they inspire are doomed to fail, as are attempts to produce artificial intelligence in such machines (Churchland & Sejnowski, 1992; Dreyfus, 1972, 1992; Searle, 1980).
In concert with rejecting the digital computer metaphor, connectionist and embodied cognitive scientists turn to qualitatively different notions in an attempt to distinguish their approaches from the classical theories that preceded them. However, their attempt to define the mark of the classical, and to show how this mark does not apply to their theories, is not always successful.
For example, it was argued in the previous section that when scholars abandoned the notion of centralized control, they were in fact reacting against a concept that was not a necessary condition of classical theory, but was instead an engineering convenience. Furthermore, mechanisms of control in connectionist and embodied theories were shown not to be radically different from those of classical models. The current section provides another such example.
One of the defining characteristics of classical theory is serial processing, the notion that only one operation can be executed at a time. Opponents of classical cognitive science have argued that this means classical models are simply too slow to be executed by the sluggish hardware that makes up the brain (Feldman & Ballard, 1982). They suggest that what is instead required is parallel processing, in which many operations are carried out simultaneously. Below it is argued that characterizing digital computers or classical theories as being serial in nature is not completely accurate. Furthermore, characterizing alternative schools of thought in cognitive science as champions of parallel processing is also problematic. In short, the difference between serial and parallel processing may not provide a clear distinction between different approaches to cognitive science.
It cannot be denied that serial processing has played an important role in the history of modern computing devices. Turing’s (1936) original account of computation was purely serial: a Turing machine processed only a single symbol at a time, and did so by only executing a single operation at a time. However, the purpose of Turing’s proposal was to provide an uncontroversial notion of “definite method”; serial processing made Turing’s notion of computation easy to understand, but was not a necessary characteristic.
A decade later, the pioneering stored program computer EDVAC was also a serial device in two different ways (Ceruzzi, 1997; von Neumann, 1993). First, it only executed one command at a time. Second, even though it used 44 bits to represent a number as a “word,” it processed these words serially, operating on them one bit at a time. Again, though, this design was motivated by a desire for simplicity—in this case, simplicity of engineering. “The device should be as simple as possible, that is, contain as few elements as possible. This can be achieved by never performing two operations simultaneously, if this would cause a significant increase in the number of elements required” (von Neumann, 1993, p. 8).
Furthermore, the serial nature of EDVAC was also dictated by engineering constraints on the early stored program machines. The existence of such devices depended upon the invention of new kinds of memory components (Williams, 1997). EDVAC used a delay line memory system, which worked by delaying a series of pulses (which represented a binary number) for a few milliseconds, and then by feeding these pulses back into the delay line so that they persisted in memory. Crucially, delay line memories only permitted stored information to be accessed in serial, one bit at a time.
EDVAC’s simple, serial design reflected an explicit decision against parallel processing that von Neumann (1993) called telescoping processes.
It is also worth emphasizing that up to now all thinking about high speed digital computing devices has tended in the opposite direction: Towards acceleration by telescoping processes at the price of multiplying the number of elements required. It would therefore seem to be more instructive to try to think out as completely as possible the opposite viewpoint. (von Neumann, 1993, p. 8)
EDVAC’s opposite viewpoint was only practical because of the high speed of its vacuum tube components.
Serial processing was an attractive design decision because it simplified the architecture of EDVAC. However, it was not a necessary design decision. The telescoping of processes was a common design decision in older computing devices that used slower components. Von Neumann was well aware that many of EDVAC’s ancestors employed various degrees of parallel processing.
In all existing devices where the element is not a vacuum tube the reaction time of the element is sufficiently long to make a certain telescoping of the steps involved in addition, subtraction, and still more in multiplication and division, desirable. (von Neumann, 1993, p. 6)
For example, the Zuse computers performed arithmetic operations in parallel, with one component manipulating the exponent and another manipulating the mantissa of a represented number (Zuse, 1993). Aiken's Mark II computer at Harvard also had multiple arithmetic units that could be activated in parallel, though this was not common practice because coordination of its parallel operations was difficult to control (Williams, 1997). ENIAC used 20 accumulators as mathematical operators, and these could be run simultaneously; it was a parallel machine (Neukom, 2006).
In spite of von Neumann’s (1993) championing of serial processing, advances in computer memory permitted him to adopt a partially parallel architecture in the machine he later developed at Princeton (Burks, Goldstine, & Von Neumann, 1989). Cathode ray tube memories (Williams & Kilburn, 1949) allowed all of the bits of a word in memory to be accessed in parallel, though operations on this retrieved information were still conducted in serial.
To get a word from the memory in this scheme requires, then, one switching mechanism to which all 40 tubes are connected in parallel. Such a switching scheme seems to us to be simpler than the technique needed in the serial system and is, of course, 40 times faster. We accordingly adopt the parallel procedure and thus are led to consider a so-called parallel machine, as contrasted with the serial principles being considered for the EDVAC. (Burks, Goldstine & von Neumann, 1989, p. 44)
Interestingly, the extreme serial design in EDVAC resurfaced in the pocket calculators of the 1970s, permitting them to be simple and small (Ceruzzi, 1997).
The brief historical review provided above indicates that while some of the early computing devices were serial processors, many others relied upon a certain degree of parallel processing. The same is true of some prototypical architectures proposed by classical cognitive science. For example, production systems (Newell, 1973, 1990; Newell & Simon, 1972) are serial in the sense that only one production manipulates working memory at a time. However, all of the productions in such a system scan the working memory in parallel when determining whether the condition that launches their action is present.
An alternative approach to making the case that the serial processing is not a mark of the classical is to note that serial processing also appears in non-classical architectures. The serial versus parallel distinction is typically argued to be one of the key differences between connectionist and classical theories. For instance, parallel processing is required to explain how the brain is capable of performing complex calculations in spite of the slowness of neurons in comparison to electronic components (Feldman & Ballard, 1982; McClelland, Rumelhart, & Hinton, 1986; von Neumann, 1958). In comparing brains to digital computers, von Neumann (1958, p. 50) noted that “the natural componentry favors automata with more, but slower, organs, while the artificial one favors the reverse arrangement of fewer, but faster organs.”
It is certainly the case that connectionist architectures have a high degree of parallelism. For instance, all of the processing units in the same layer of a multilayered perceptron are presumed to operate simultaneously. Nevertheless, even prototypical parallel distributed processing models reveal the presence of serial processing.
One reason that the distributed memory or the standard pattern associator requires external, central control (Dawson & Schopflocher, 1992a) is that this kind of model is not capable of simultaneous learning and recalling. This is because one of its banks of processors is used as a set of input units during learning, but is used completely differently, as output units, during recall. External control is used to determine how these units are employed and therefore determines whether the machine is learning or recalling. External control also imposes seriality, in the sense that during learning the input patterns are presented in sequence, and during recall the cues are likewise presented one at a time. Dawson and Schopflocher (1992a) demonstrated how true parallel processing could be accomplished in such a network, but only after substantially elaborating the primitive components of the connectionist architecture.
A degree of serial processing is also present in multilayered networks. First, while all processors in one layer can be described as operating in parallel, the flow of information from one layer to the next is serial. Second, the operations of an individual processor are intrinsically serial. A signal cannot be output until internal activation has been computed, and internal activation cannot be computed until the net input has been determined.
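A sketch of a forward pass makes both points visible. The network's size and its random weights are invented for illustration; what matters is that the layers are visited one after another, and that within each layer the net input must be computed before the activation it determines.

```python
import numpy as np

# A forward pass through a small multilayered network. Units within a layer
# are computed together (the parallel part, vectorized here), but the layers
# must be visited in sequence, and within each layer the net input comes
# before the activation that gets passed on.
def sigmoid(net_input):
    return 1.0 / (1.0 + np.exp(-net_input))

def forward(layers, stimulus):
    signal = stimulus
    for weights, biases in layers:              # serial flow from layer to layer
        net_input = weights @ signal + biases   # first the net input...
        signal = sigmoid(net_input)             # ...then the activation it determines
    return signal

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(3, 2)), np.zeros(3)),   # hidden layer
          (rng.normal(size=(1, 3)), np.zeros(1))]   # output layer
print(forward(layers, np.array([1.0, 0.0])))
```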
Parallel processing is not generally proposed as a characteristic that distinguishes embodied from classical models. However, some researchers have noted the advantages of decentralized computation in behaviour-based robots (Brooks, 1999).
Again, though, embodied theories seem to exploit a mixture of parallel and serial processing. Consider the early insect-like walking robots of Rodney Brooks (1989, 1999, 2002). Each leg in the six-legged robot Genghis is a parallel processor, in the sense that each leg operates autonomously. However, the operations of each leg can be described as a finite state automaton (see the appendix on Genghis in Brooks, 2002), which is an intrinsically serial device.
The stigmergic control of the swarm intelligence that emerges from a collection of robots or social insects (Beni, 2005; Bonabeau & Meyer, 2001; Hinchey, Sterritt, & Rouff, 2007; Sharkey, 2006; Tarasewich & McMullen, 2002) also appears to be a mixture of parallel and serial operations. A collective operates in parallel in the sense that each member of the collective is an autonomous agent. However, the behaviour of each agent is often best characterized in serial: first the agent does one thing, and then it does another, and so on. For instance, in a swarm capable of creating a nest by blind bulldozing (Parker, Zhang, & Kube, 2003), agents operate in parallel. However, each agent moves in serial from one state (e.g., plowing, colliding, finishing) to another.
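A minimal sketch conveys this mixture. The state names follow the description above, but the transition rules and probabilities are invented for illustration and are not Parker, Zhang, and Kube's algorithm: the collective steps in parallel, while each agent walks serially through its states.

```python
import random

# Each agent is a small finite state machine: the collective runs in parallel
# (every agent takes a step on every tick), but an individual agent's
# behaviour is a serial walk through its states. The transition rules and
# probabilities here are invented for illustration.
def agent_step(state):
    if state == "plowing":                     # plow until something is hit
        return "colliding" if random.random() < 0.3 else "plowing"
    if state == "colliding":                   # after a collision, quit or resume
        return "finishing" if random.random() < 0.2 else "plowing"
    return "finishing"                         # finishing is an absorbing state

random.seed(1)
swarm = ["plowing"] * 5                        # five agents acting in parallel
for tick in range(25):
    swarm = [agent_step(state) for state in swarm]
print(swarm)
```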
In summary, serial processing has been stressed more in classical models, while parallel processing has received more emphasis in connectionist and embodied approaches. However, serial processing cannot be said to be a mark of the classical.
First, serial processing in classical information processing systems was adopted as an engineering convenience, and many digital computers included a certain degree of parallel processing. Second, with careful examination serial processing can also be found mixed in with the parallel processing of connectionist networks or of collective intelligences.
Classical and connectionist cognitive scientists agree that theories of cognition must appeal to internal representations (Fodor & Pylyshyn, 1988). However, they appear to have strong disagreements about the nature of such representations. In particular, connectionist cognitive scientists propose that their networks exploit distributed representations, which provide many advantages over the local representations that they argue characterize the classical approach (Bowers, 2009). That is, distributed representations are often taken to be a mark of the connectionist, and local representations are taken to be a mark of the classical.
There is general, intuitive agreement about the differences between distributed and local representations. In a connectionist distributed representation, “knowledge is coded as a pattern of activation across many processing units, with each unit contributing to multiple, different representations. As a consequence, there is no one unit devoted to coding a given word, object, or person” (Bowers, 2009, p. 220). In contrast, in a classical local representation, “individual words, objects, simple concepts, and the like are coded distinctly, with their own dedicated representation” (p. 22).
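The intuitive contrast can be made concrete with a small sketch; the concepts, the number of units, and the particular patterns below are invented for illustration.

```python
import numpy as np

# Local versus distributed coding of four concepts over eight units. In the
# local scheme each concept has its own dedicated unit; in the distributed
# scheme each concept is a pattern over many units, and each unit takes part
# in coding more than one concept.
concepts = ["cat", "dog", "car", "bus"]

local_codes = {name: np.eye(8)[i] for i, name in enumerate(concepts)}

distributed_codes = {
    "cat": np.array([1, 1, 0, 1, 0, 0, 1, 0], dtype=float),
    "dog": np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=float),
    "car": np.array([0, 1, 1, 0, 1, 0, 0, 1], dtype=float),
    "bus": np.array([0, 0, 1, 0, 1, 1, 1, 1], dtype=float),
}

print(local_codes["cat"])        # a single dedicated unit is active
print(distributed_codes["cat"])  # many units share in the representation
# How many concepts does each unit help to represent in the distributed code?
print(sum(distributed_codes.values()))   # every unit contributes to two or more
```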
However, when the definition of distributed representation is examined more carefully (van Gelder, 1991), two facts become clear. First, this term is used by different connectionists in different ways. Second, some of the uses of this term do not appear to differentiate connectionist from classical representations.
Van Gelder (1991) noted, for instance, that one common sense of distributed representation is that it is extended: a distributed representation uses many units to represent each item, while local representations do not. “To claim that a node is distributed is presumably to claim that its states of activation correspond to patterns of neural activity—to aggregates of neural ‘units’—rather than to activations of single neurons” (Fodor & Pylyshyn, 1988, p. 19). It is this sense of an extended or distributed representation that produces connectionist advantages such as damage resistance, because the loss of one of the many processors used to represent a concept will not produce catastrophic loss of represented information.
However, the use of extended to define distributed does not segregate connectionist representations from their classical counterparts. For example, the mental image is an important example of a classical representation (Kosslyn, 1980; Kosslyn, Thompson, & Ganis, 2006; Paivio, 1971, 1986). It would be odd to think of a mental image as being distributed, particularly in the context of the connectionist use of this term. However, proponents of mental imagery would argue that mental images are extended: functionally, in terms of being extended over space, and physically, in terms of being extended over aggregates of neurons in topographically organized areas of the cortex (Kosslyn, 1994; Kosslyn, Ganis, & Thompson, 2003; Kosslyn et al., 1995). "There is good evidence that the brain depicts representations literally, using space on the cortex to represent space in the world" (Kosslyn, Thompson, & Ganis, 2006, p. 15).
Another notion of distributed representation considered by van Gelder (1991) was the coarse code (Feldman & Ballard, 1982; Hinton, McClelland, & Rumelhart, 1986). Again, a coarse code is typically presented as distinguishing connectionist networks from classical models. A coarse code is extended in the sense that multiple processors are required to do the representing. These processors have two properties. First, their receptive fields are wide—that is, they are very broadly tuned, so that a variety of circumstances will lead to activation in a processor. Second, the receptive fields of different processors overlap. In this kind of representation, a high degree of accuracy is possible by pooling the responses of a number of broadly tuned (i.e., coarse) processors (Dawson, Boechler, & Orsten, 2005; Dawson, Boechler, & Valsangkar-Smyth, 2000).
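A minimal sketch of coarse coding illustrates both properties; the number of units, their spacing, and their tuning width below are invented for illustration.

```python
import numpy as np

# Coarse coding of a location on a line: each unit has a wide Gaussian
# receptive field, the fields overlap, and the location is read out far more
# precisely than any single unit's tuning by pooling all of the responses.
centres = np.linspace(0.0, 10.0, 6)    # preferred locations of six coarse units
width = 2.5                            # broad tuning, so the fields overlap heavily

def encode(location):
    return np.exp(-((location - centres) ** 2) / (2 * width ** 2))

def decode(activity):
    # a simple centre-of-gravity readout pooled over the coarsely tuned units
    return float(np.sum(activity * centres) / np.sum(activity))

activity = encode(3.7)
print(np.round(activity, 2))       # every unit responds somewhat: coarse, overlapping
print(round(decode(activity), 2))  # the pooled estimate of the encoded location
```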
While coarse coding is an important kind of representation in the connectionist literature, once again it is possible to find examples of coarse coding in classical models as well. For example, one way that coarse coding of spatial location is presented by connectionists (Hinton, McClelland, & Rumelhart, 1986) can easily be recast in terms of Venn diagrams. That is, each non-empty set represents the coarse location of a target in a broad spatial area; the intersection of overlapping non-empty sets provides more accurate target localization.
However, classical models of syllogistic reasoning can be cast in a similar fashion, using Euler circles and Venn diagrams (Johnson-Laird, 1983). Indeed, Johnson-Laird's (1983) more modern notion of mental models can itself be viewed as an extension of these approaches: syllogistic statements are represented as a tableau of different instances; the syllogism is solved by combining (i.e., intersecting) tableaus for different statements and examining the relevant instances that result. In other words, mental models can be considered to represent a classical example of coarse coding, suggesting that this concept does not necessarily distinguish connectionist from classical theories.
After his more detailed analysis of the concept, van Gelder (1991) argued that a stronger notion of distributed is required, and that this can be accomplished by invoking the concept of superposition. Two different concepts are superposed if the same resources are used to provide their representations. “Thus in connectionist networks we can have different items stored as patterns of activity over the same set of units, or multiple different associations encoded in one set of weights” (p. 43).
Van Gelder (1991) pointed out that one issue with superposition is that it must be defined in degrees. For instance, it may be the case that not all resources are used simultaneously to represent all contents. Furthermore, operationalizing the notion of superposition depends upon how resources are defined and measured. Finally, different degrees of superposition may be reflected in the number of different contents that a given resource can represent. For example, it is well known that one kind of artificial neural network, the Hopfield network (Hopfield, 1982), has limited capacity: if the network is composed of N processors, it will only be able to represent on the order of 0.18N distinct memories (Abu-Mostafa & St. Jacques, 1985; McEliece et al., 1987).
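Superposed storage of this kind is easy to sketch. A Hopfield-style network with a Hebbian outer-product rule is assumed here, and the network size and number of stored patterns are invented, chosen to stay well under the capacity limit just mentioned.

```python
import numpy as np

# Superposition: several memories stored over one and the same set of
# resources. Three bipolar patterns are all stored in a single weight matrix
# by summing their outer products; no individual weight is dedicated to any
# one memory.
rng = np.random.default_rng(3)
patterns = rng.choice([-1.0, 1.0], size=(3, 32))

weights = np.zeros((32, 32))
for p in patterns:
    weights += np.outer(p, p)          # every pattern is added into the same weights
np.fill_diagonal(weights, 0.0)

# Recall: start from a corrupted cue and repeatedly threshold the net input.
state = patterns[0].copy()
state[:4] *= -1                        # flip a few elements of the cue
for _ in range(5):
    state = np.sign(weights @ state)

print(np.array_equal(state, patterns[0]))   # the stored memory is typically recovered
```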
Nonetheless, van Gelder (1991) expressed confidence that the notion of superposition provides an appropriate characteristic for defining a distributed representation. “It is strong enough that very many kinds of representations do not count as superposed, yet it manages to subsume virtually all paradigm cases of distribution, whether these are drawn from the brain, connectionism, psychology, or optics” (p. 54).
Even if van Gelder’s (1991) definition is correct, it is still the case that the concept of superposition does not universally distinguish connectionist representations from classical ones. One example of this is when concepts are represented as collections of features or microfeatures. For instance, in an influential PDP model called an interactive activation and competition network (McClelland & Rumelhart, 1988), most of the processing units represent the presence of a variety of features. Higherorder concepts are defined as sets of such features. This is an instance of superposition, because the same feature can be involved in the representation of multiple networks. However, the identical type of representation—that is, superposition of featural elements—is also true of many prototypical classical representations, including semantic networks (Collins & Quillian, 1969, 1970a, 1970b) and feature set representations (Rips, Shoben, & Smith, 1973; Tversky, 1977; Tversky & Gati, 1982).
The discussion up to this point has considered a handful of different notions of distributed representation, and has argued that these different definitions do not appear to uniquely separate connectionist and classical concepts of representation. To wrap up this discussion, let us take a different approach, and consider why in some senses connectionist researchers may still need to appeal to local representations.
One problem of considerable interest within cognitive neuroscience is the issue of assigning specific behavioural functions to specific brain regions; that is, the localization of function. To aid in this endeavour, cognitive neuroscientists find it useful to distinguish between two qualitatively different types of behavioural deficits. A single dissociation consists of a patient performing one task extremely poorly while performing a second task at a normal level, or at least very much better than the first. In contrast, a double dissociation occurs when one patient performs the first task significantly more poorly than the second, and another patient (with a different brain injury) performs the second task significantly more poorly than the first (Shallice, 1988).
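The logic of these two patterns can be stated compactly. The sketch below is a toy classifier for the idealized case: each patient is summarized by scores on the two tasks, a task counts as impaired when its score falls well below normal, and the pattern across the two patients is then labelled. The score format and the impairment threshold are illustrative assumptions, not part of the neuropsychological method itself.

```python
def impaired(score, normal=1.0, margin=0.5):
    """A task counts as impaired if performance falls well below normal.
    The margin of 0.5 is an arbitrary illustrative criterion."""
    return score < normal - margin

def classify_dissociation(patient1, patient2):
    """Each patient is a (task_A_score, task_B_score) tuple."""
    p1_a, p1_b = impaired(patient1[0]), impaired(patient1[1])
    p2_a, p2_b = impaired(patient2[0]), impaired(patient2[1])

    # Double dissociation: one patient impaired on A but not B, while the
    # other patient shows the opposite pattern.
    if (p1_a and not p1_b and p2_b and not p2_a) or \
       (p1_b and not p1_a and p2_a and not p2_b):
        return "double dissociation"
    # Single dissociation: some patient is impaired on exactly one task.
    if (p1_a != p1_b) or (p2_a != p2_b):
        return "single dissociation"
    return "no dissociation"

print(classify_dissociation((0.2, 0.9), (0.95, 0.1)))  # -> double dissociation
print(classify_dissociation((0.2, 0.9), (0.9, 0.95)))  # -> single dissociation
```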
Cognitive neuroscientists have argued that double dissociations reflect damage to localized functions (Caramazza, 1986; Shallice, 1988). The view that dissociation data reveals internal structures that are local in nature has been named the locality assumption (Farah, 1994).
However, Farah (1994) hypothesized that the locality assumption may be unwarranted for two reasons. First, its validity depends upon the additional assumption that the brain is organized into a set of functionally distinct modules (Fodor, 1983). Farah argued that the modularity of the brain is an unresolved empirical issue. Second, Farah noted that it is possible for nonlocal or distributed architectures, such as parallel distributed processing (PDP) networks, to produce single or double dissociations when lesioned. As the interactive nature of PDP networks is "directly incompatible with the locality assumption" (p. 46), the locality assumption may not be an indispensable tool for cognitive neuroscientists.
Farah (1994) reviewed three areas in which neuropsychological dissociations had been used previously to make inferences about the underlying local structure. For each she provided an alternative architecture—a PDP network. Each of these networks, when locally damaged, produced (local) behavioural deficits analogous to the neuropsychological dissociations of interest. These results led Farah to conclude that one cannot infer that a specific behavioural deficit is associated with the loss of a local function, because the prevailing view is that PDP networks are, by definition, distributed and therefore nonlocal in structure.
However, one study challenged Farah’s (1994) argument both logically and empirically (Medler, Dawson, & Kingstone, 2005). Medler, Dawson, and Kingstone (2005) noted that Farah’s whole argument was based on the assumption that connectionist networks exhibit universally distributed internal structure. However, this assumption needs to be empirically supported; Medler and colleagues argued that this could only be done by interpreting the internal structure of a network and by relating behavioural deficits to interpretations of ablated components. They noted that it was perfectly possible for PDP networks to adopt internal representations that were more local in nature, and that single and double dissociations in lesioned networks may be the result of damaging local representations.
Medler, Dawson, and Kingstone (2005) supported their position by training a network on a logic problem and interpreting the internal structure of the network, acquiring evidence about how local or how nonlocal the function of each hidden unit was. They then created different versions of the network by lesioning one of its 16 hidden units, assessing behavioural deficits in each lesioned network. They found that the more local a hidden unit was, the more profound and specific was the behavioural deficit that resulted when the unit was lesioned. "For a double dissociation to occur within a computational model, the model must have some form of functional localization" (p. 149).
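The qualitative pattern reported by Medler, Dawson, and Kingstone (2005) can be illustrated with a deliberately hand-built toy network; this is not their network or their logic task, and the weights, tasks, and degrees of locality below are constructed purely for the demonstration. One hidden unit carries task A on its own (a local representation), while three hidden units jointly carry task B (a more distributed representation). A lesion is simulated by severing a hidden unit's outgoing connections.

```python
import numpy as np

# 2 inputs -> 4 linear hidden units -> 2 linear outputs.
# Hidden unit 0 is "local": it alone supports task A (output 0).
# Hidden units 1-3 are "distributed": together they support task B (output 1).
W_in = np.array([[1.0, 0.0],     # h0 copies input 0
                 [0.0, 1.0],     # h1 copies input 1
                 [0.0, 1.0],     # h2 copies input 1
                 [0.0, 1.0]])    # h3 copies input 1
W_out = np.array([[1.0, 0.0, 0.0, 0.0],      # output A reads only h0
                  [0.0, 1/3, 1/3, 1/3]])     # output B averages h1-h3

inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = inputs.copy()     # task A: report input 0; task B: report input 1

def task_errors(w_out):
    hidden = inputs @ W_in.T
    outputs = hidden @ w_out.T
    return ((outputs - targets) ** 2).mean(axis=0)   # per-task mean squared error

def lesion(unit):
    w = W_out.copy()
    w[:, unit] = 0.0        # zero the unit's outgoing weights
    return w

print("intact             :", task_errors(W_out))      # no deficit on either task
print("lesion h0 (local)  :", task_errors(lesion(0)))  # severe, task-A-specific deficit
print("lesion h1 (distrib):", task_errors(lesion(1)))  # mild, task-B-specific deficit
```

Removing the local unit abolishes task A while leaving task B untouched; removing one of the units that share task B produces only a graceful, partial degradation. That is the sense in which profound, specific deficits point to functional localization even inside a connectionist network.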
We saw earlier that one of the key goals of connectionist cognitive science was to develop models that were biologically plausible. Clearly one aspect of this is to produce networks that are capable of reflecting appropriate deficits in behaviour when damaged, such as single or double dissociations. Medler, Dawson, and Kingstone (2005) have shown that the ability to do so, even in PDP networks, requires local representations. This provides another line of evidence against the claim that distributed representations can be used to distinguish connectionist from classical models. In other words, local representations do not appear to be a mark of the classical.
One of the key properties of classical cognitive science is its emphasis on sense-think-act processing. Classical cognitive scientists view the purpose of cognition as planning action on the basis of input information. This planning typically involves the creation and manipulation of internal models of the external world. Is the classical sandwich (Hurley, 2001) a mark of the classical?
Sense-think-act processing does not distinguish classical models from connectionist networks. The distributed representations within most modern networks mediate all relationships between input units (sensing) and output units (responding). This results in what has been described as the connectionist sandwich (Calvo & Gomila, 2008). Sense-think-act processing is a mark of both the classical and the connectionist.
While sense-think-act processing does not distinguish classical cognitive science from connectionism, it may very well differentiate it from embodied cognitive science. Embodied cognitive scientists have argued in favor of sense-act processing that abandons using internal models of the world (Pfeifer & Scheier, 1999). The purpose of cognition might not be to plan, but instead to control action on the world (Clark, 1997). Behaviour-based robots arose as an anti-representational reaction to classical research in artificial intelligence (Brooks, 1991). The direct link between perception and action—a link often described as circumventing internal representation—that characterized the ecological approach to perception (Gibson, 1979; Turvey et al., 1981) has been a cornerstone of embodied theory (Chemero, 2009; Chemero & Turvey, 2007; Neisser, 1976; Noë, 2004; Winograd & Flores, 1987a).
The distinction between sense-think-act processing and sense-act processing is a putative differentiator between classical and embodied approaches. However, it is neither a necessary nor sufficient one. This is because in both classical and embodied approaches, mixtures of both types of processing can readily be found.
For example, it was earlier shown that the stored program computer—a digital computer explicitly designed to manipulate internal representations—emerged from technical convenience, and did not arise because classical information processing demanded internal representations. Prototypical classical machines, such as the Turing machine, can easily be described as pure sense-act processors (Wells, 1996). Also, earlier electromechanical computers often used external memories to scaffold processing because of the slow speed of their componentry.
Furthermore, prototypical classical architectures in cognitive science appeal to processes that are central to the embodied approach. For example, modern production systems have been extended to include sensing and acting, and have used these extensions to model (or impose) constraints on behaviour, such as our inability to use one hand to do two tasks at the same time (Kieras & Meyer, 1997; Meyer et al., 2001; Meyer & Kieras, 1997a, 1997b, 1999; Meyer et al., 1995). A production system for solving the Towers of Hanoi problem also has been formulated that uses the external towers and discs as the external representation of the problem (Vera & Simon, 1993). Some have argued that the classical emphasis on internal thinking, at the expense of external sense-acting, simply reflects the historical development of the classical approach and does not reflect its intrinsic nature (Newell, 1990).
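The flavour of a production system that treats the external world as its working memory can be conveyed with a small sketch; this is an illustration of the idea only, not a reconstruction of Vera and Simon's (1993) model. The Tower of Hanoi state lives entirely in a single external world structure, and two condition-action rules (the standard iterative solution) only inspect and modify that structure.

```python
def solve_hanoi(n, pegs=("A", "B", "C")):
    # The "external representation": discs sitting on pegs in the world.
    world = {pegs[0]: list(range(n, 0, -1)), pegs[1]: [], pegs[2]: []}

    # The cycle the smallest disc follows depends on the parity of n.
    cycle = [pegs[0], pegs[1], pegs[2]] if n % 2 == 0 else [pegs[0], pegs[2], pegs[1]]

    def move(src, dst):
        world[dst].append(world[src].pop())
        print(f"move disc {world[dst][-1]}: {src} -> {dst}")

    def peg_with_smallest():
        return next(p for p in world if world[p] and world[p][-1] == 1)

    for step in range(1, 2 ** n):          # 2**n - 1 moves in total
        if step % 2 == 1:
            # Production 1: on odd-numbered steps, move the smallest disc
            # one peg along its fixed cycle.
            src = peg_with_smallest()
            move(src, cycle[(cycle.index(src) + 1) % 3])
        else:
            # Production 2: on even-numbered steps, make the only legal move
            # that does not involve the smallest disc.
            small = peg_with_smallest()
            a, b = [p for p in world if p != small]
            if world[a] and (not world[b] or world[a][-1] < world[b][-1]):
                move(a, b)
            else:
                move(b, a)
    return world

print(solve_hanoi(3))   # all discs end up on peg C
```

Nothing in the rules is an internal model of the puzzle; the only "memory" the productions consult is the state of the towers and discs themselves.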
Approaching this issue from the opposite direction, many embodied cognitive scientists are open to the possibility that the representational stance of classical cognitive science may be required to provide accounts of some cognitive phenomena. For instance, Winograd and Flores (1987a) made strong arguments for embodied accounts of cognition. They provided detailed arguments of how classical views of cognition are dependent upon the disembodied view of the mind that has descended from Descartes. They noted that “detached contemplation can be illuminating, but it also obscures the phenomena themselves by isolating and categorizing them” (pp. 32–33). However, in making this kind of observation, they admitted the existence of a kind of reasoning called detached contemplation. Their approach offers an alternative to representational theories, but does not necessarily completely abandon the possibility of internal representations.
Similarly, classical cognitive scientists who appeal exclusively to internal representations and embodied cognitive scientists who completely deny internal representations might be staking out extreme and radical positions to highlight the differences between their approaches (Norman, 1993). Some embodied cognitive scientists have argued against this radical polarization of cognitive science, such as Clark (1997):
Such radicalism, I believe, is both unwarranted and somewhat counterproductive. It invites competition where progress demands cooperation. In most cases, at least, the emerging emphasis on the roles of body and world can be seen as complementary to the search for computational and representational understandings. (Clark, 1997, p. 149)
Clark (1997) adopted this position because he realized that representations may be critical to cognition, provided that appeals to representation do not exclude appeals to other critical, embodied elements: “We should not be too quick to reject the more traditional explanatory apparatuses of computation and representation. Minds may be essentially embodied and embedded and still depend crucially on brains which compute and represent” (p. 143).
The reason that an embodied cognitive scientist such as Clark may be reluctant to eliminate representations completely is because one can easily consider situations in which internal representations perform an essential function. Clark (1997) suggested that some problems might be representation hungry, in the sense that the very nature of these problems requires their solutions to employ internal representations. A problem might be representation hungry because it involves features that are not reliably present in the environment, as in reasoning about absent states, or in counterfactual reasoning. A problem might also be representation hungry if it involves reasoning about classes of objects that are extremely abstract, because there is a wide variety of different physical realizations of class instances (for instance, reasoning about “computers”!).
The existence of representation-hungry problems leaves Clark (1997) open to representational theories in cognitive science, but these theories must be placed in the context of body and world. Clark didn’t want to throw either the representational or embodied babies out with the bathwater (Hayes, Ford, & Agnew, 1994). Instead, he viewed a co-operative system in which internal representations can be used when needed, but the body and the world can also be used to reduce internal cognitive demands by exploiting external scaffolds. “We will not discover the right computational and representational stories unless we give due weight to the role of body and local environment—a role that includes both problem definition and, on occasion, problem solution” (Clark, 1997, p. 154).
It would seem, then, that internal representations are not a mark of the classical, and some cognitive scientists are open to the possibility of hybrid accounts of cognition. That is, classical researchers are extending their representational theories by paying more attention to actions on the world, while embodied researchers are open to preserving at least some internal representations in their theories. An example hybrid theory that appeals to representations, networks, and actions (Pylyshyn, 2003c, 2007) is presented in detail in Chapter 8.
Connectionists have argued that one mark of the classical is its reliance on explicit rules (McClelland, Rumelhart, & Hinton, 1986). For example, it has been claimed that all classical work on knowledge acquisition “shares the assumption that the goal of learning is to formulate explicit rules (proposition, productions, etc.) which capture powerful generalizations in a succinct way” (p. 32).
Explicit rules may serve as a mark of the classical because it has also been argued that they are not characteristic of other approaches in cognitive science, particularly connectionism. Many researchers assume that PDP networks acquire implicit knowledge. For instance, consider this claim about a network that learns to convert verbs from present to past tense:
The model learns to behave in accordance with the rule, not by explicitly noting that most words take -ed in the past tense in English and storing this rule away explicitly, but simply by building up a set of connections in a pattern associator through a long series of simple learning experiences. (McClelland, Rumelhart, & Hinton, 1986, p. 40)
One problem that immediately arises in using explicit rules as a mark of the classical is that the notions of explicit rules and implicit knowledge are only vaguely defined or understood (Kirsh, 1992). For instance, Kirsh (1992) noted that the distinction between explicit rules and implicit knowledge is often proposed to be similar to the distinction between local and distributed representations. However, this definition poses problems for using explicit rules as a mark of the classical. This is because, as we have already seen in an earlier section of this chapter, the distinction between local and distributed representations does not serve well to separate classical cognitive science from other approaches.
Furthermore, defining explicit rules in terms of locality does not eliminate connectionism’s need for them (Hadley, 1993). Hadley (1993) argued that there is solid evidence of the human ability to instantaneously learn and apply rules.
Some rule-like behavior cannot be the product of ‘neurally-wired’ rules whose structure is embedded in particular networks, for the simple reason that humans can often apply rules (with considerable accuracy) as soon as they are told the rules. (Hadley, 1993, p. 185)
Hadley proceeded to argue that connectionist architectures need to exhibit such (explicit) rule learning. “The foregoing conclusions present the connectionist with a formidable scientific challenge, which is, to show how general purpose rule following mechanisms may be implemented in a connectionist architecture” (p. 199).
Why is it that, on more careful consideration, it seems that explicit rules are not a mark of the classical? It is likely that the assumption that PDP networks acquire implicit knowledge is an example of what has been called gee whiz connectionism (Dawson, 2009). That is, connectionists assume that the internal structure of their networks is neither local nor rule-like, and they rarely test this assumption by conducting detailed interpretations of network representations. When such interpretations are conducted, they can reveal some striking surprises. For instance, the internal structures of networks have revealed classical rules of logic (Berkeley et al., 1995) and classical production rules (Dawson et al., 2000).
The discussion in the preceding paragraphs raises the possibility that connectionist networks can acquire explicit rules. A complementary point can also be made to question explicit rules as a mark of the classical: classical models may not themselves require explicit rules. For instance, classical cognitive scientists view an explicit rule as an encoded representation that is part of the algorithmic level. Furthermore, the reason that it is explicitly represented is that it is not part of the architecture (Fodor & Pylyshyn, 1988). In short, classical theories posit a combination of explicit (algorithmic, or stored program) and implicit (architectural) determinants of cognition. As a result, classical debates about the cognitive architecture can be construed as debates about the implicitness or explicitness of knowledge:
Not only is there no reason why Classical models are required to be rule-explicit but—as a matter of fact—arguments over which, if any, rules are explicitly mentally represented have raged for decades within the Classicist camp. (Fodor & Pylyshyn, 1988, p. 60)
To this point, the current section has tacitly assumed a context in which the distinction between explicit rules and implicit knowledge parallels the distinction between local and distributed representations. However, other contexts are also plausible. For example, classical models may be characterized as employing explicit rules in the sense that they employ a structure/process distinction. That is, classical systems characteristically separate their symbol-holding memories from the rules that modify stored contents.
For instance, the Turing machine explicitly distinguishes its ticker tape memory structure from the rules that are executed by its machine head (Turing, 1936). Similarly, production systems (Anderson, 1983; Newell, 1973) separate their symbolic structures stored in working memory from the set of productions that scan and manipulate expressions. The von Neumann (1958, 1993) architecture by definition separates its memory organ from the other organs that act on stored contents, such as its logical or arithmetical units.
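The structure/process separation can be made concrete with a few lines of code: the tape is a passive data structure, and the rules consulted by the machine head live in a completely separate table. The particular machine below, one that appends a stroke to a unary numeral and so computes the successor function, is an arbitrary illustrative choice.

```python
# The STRUCTURE: a passive ticker tape, here a dictionary from cell
# positions to symbols ('_' marks a blank cell).
tape = {0: "1", 1: "1", 2: "1"}          # the unary numeral "three"

# The PROCESS: a separate table of rules for the machine head.
# (current state, symbol read) -> (symbol to write, head movement, next state)
rules = {
    ("scan", "1"): ("1", +1, "scan"),    # skip over the numeral
    ("scan", "_"): ("1", +1, "halt"),    # append one stroke: the successor
}

def run(tape, rules, state="scan", head=0):
    while state != "halt":
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write               # the rules act on the structure...
        head += move                     # ...but are stored apart from it
    return tape

result = run(dict(tape), rules)
print("".join(result[i] for i in sorted(result)))   # prints 1111
```

Erasing the rule table leaves the tape's contents intact, and rewriting the tape leaves the rules untouched; this is exactly the separation of symbolic structures from the processes that manipulate them described above.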
To further establish this alternative context, some researchers have claimed that PDP networks or other connectionist architectures do not exhibit the structure/process distinction. For instance, a network can be considered to be an active data structure that not only stores information, but at the same time manipulates it (Hillis, 1985). From this perspective, the network is both structure and process.
However, it is still the case that the structure/process distinction fails to provide a mark of the classical. The reason for this was detailed in this chapter’s earlier discussion of control processes. That is, almost all PDP networks are controlled by external processes—in particular, learning rules (Dawson & Schopflocher, 1992a; Roy, 2008). This external control takes the form of rules that are as explicit as any to be found in a classical model.
To bring this discussion to a close, I argue that a third context is possible for distinguishing explicit rules from implicit knowledge. This context is the difference between digital and analog processes. Classical rules may be explicit in the sense that they are digital: consistent with the neural all-or-none law (Levitan & Kaczmarek, 1991; McCulloch & Pitts, 1943), as the rule either executes or does not. In contrast, the continuous values of the activation functions used in connectionist networks permit knowledge to be applied to varying degrees. From this perspective, networks are analog, and are not digital.
Again, however, this context also does not successfully provide a mark of the classical. First, one consequence of Church’s thesis and the universal machine is that digital and analogical devices are functionally equivalent, in the sense that one kind of computer can simulate the other (Rubel, 1989). Second, connectionist models themselves can be interpreted as being either digital or analog in nature, depending upon task demands. For instance, when a network is trained to either respond or not, as in pattern classification (Lippmann, 1989) or in the simulation of animal learning (Dawson, 2008), output unit activation is treated as being digital. However, when one is interested in solving a problem in which continuous values are required, as in function approximation (Hornik, Stinchcombe, & White, 1989; Kremer, 1995; Medler & Dawson, 1994) or in probability matching (Dawson et al., 2009), the same output unit activation function is treated as being analog in nature.
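The dual reading of one and the same output unit can be shown directly: a single logistic activation is read digitally by thresholding it into a yes/no response, and read as analog by using the value itself, here as the probability of responding on each trial. The weights, bias, and stimulus are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

# One output unit with arbitrary illustrative connection weights.
weights = np.array([1.5, -2.0, 0.75])
bias = -0.25
stimulus = np.array([1.0, 0.0, 1.0])

activation = logistic(np.dot(weights, stimulus) + bias)   # a value in (0, 1)

# Digital reading: the unit either responds or it does not.
digital_response = int(activation > 0.5)

# Analog reading: the activation is itself the response, e.g. treated as a
# response probability, as in probability matching.
trials = rng.random(1000) < activation

print(f"activation                      = {activation:.3f}")
print(f"digital (thresholded) response  = {digital_response}")
print(f"proportion of 'yes' over trials = {trials.mean():.3f}")
```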
In conclusion, though the notion of explicit rules has been proposed to distinguish classical models from other kinds of architectures, more careful consideration suggests that the use of explicit rules is not a reliable mark of the classical. Regardless of how the notion of explicit rules is defined, it appears that classical architectures do not use such rules exclusively, and it also appears that such rules need to be part of connectionist models of cognition.
The goal of cognitive science is to explain cognitive phenomena. One approach to such explanation is to generate a set of laws or principles that capture the regularities that are exhibited by members that belong to a particular class. Once it is determined that some new system belongs to a class, then it is expected that the principles known to govern that class will also apply to the new system. In this sense, the laws governing a class capture generalizations (Pylyshyn, 1984).
The problem that faced cognitive science in its infancy was that the classes of interest, and the laws that captured generalizations about their members, depended upon which level of analysis was adopted (Marr, 1982). For instance, at a physical level of investigation, electromechanical and digital computers do not belong to the same class. However, at a more abstract level of investigation (e.g., at the architectural level described in Chapter 2), these two very different types of physical devices belong to the same class, because their components are functionally equivalent: “Many of the electronic circuits which performed the basic arithmetic operations [in ENIAC] were simply electronic analogs of the same units used in mechanical calculators and the commercial accounting machines of the day” (Williams, 1997, p. 272).
The realization that cognitive systems must be examined from multiple levels of analysis motivated Marr’s (1982) tri-level hypothesis. According to this hypothesis, cognitive systems must be explained at three different levels of analysis: physical, algorithmic, and computational.
It is not enough to be able to predict locally the responses of single cells, nor is it enough to be able to predict locally the results of psychophysical experiments. Nor is it enough to be able to write computer programs that perform approximately in the desired way. One has to do all these things at once and also be very aware of the additional level of explanation that I have called the level of computational theory. (Marr, 1982, pp. 329–330)
The tri-level hypothesis provides a foundation for cognitive science and accounts for its interdisciplinary nature (Dawson, 1998). This is because each level of analysis uses a qualitatively different vocabulary to ask questions about cognitive systems and uses very different methods to provide the answers to these questions. That is, each level of analysis appeals to the different languages and techniques of distinct scientific disciplines. The need to explain cognitive systems at different levels of analysis forces cognitive scientists to be interdisciplinary.
Marr’s (1982) tri-level hypothesis can also be used to compare the different approaches to cognitive science. Is the tri-level hypothesis equally applicable to the three different schools of thought? Provided that the three levels are interpreted at a moderately coarse level, it would appear that this question could be answered affirmatively.
At Marr’s (1982) implementational level, cognitive scientists ask how information processes are physically realized. For a cognitive science of biological agents, answers to implementational-level questions are phrased in a vocabulary that describes biological mechanisms. It would appear that all three approaches to cognitive science are materialist and as a result are interested in conducting implementational-level analyses. Differences between the three schools of thought at this level might only be reflected in the scope of biological mechanisms that are of interest. In particular, classical and connectionist cognitive scientists will emphasize neural mechanisms, while embodied cognitive scientists are likely to be interested not only in the brain but also in other parts of the body that interact with the external world.
At Marr’s (1982) algorithmic level, cognitive scientists are interested in specifying the procedures that are used to solve particular information processing problems. At this level, there are substantial technical differences amongst the three schools of thought. For example, classical and connectionist cognitive scientists would appeal to very different kinds of representations in their algorithmic accounts (Broadbent, 1985; Rumelhart & McClelland, 1985). Similarly, an algorithmic account of internal planning would be quite different from an embodied account of controlled action, or of scaffolded, cognition. In spite of such technical differences, though, it would be difficult to claim that one approach to cognitive science provides procedural accounts, while another does not. All three approaches to cognitive science are motivated to investigate at the algorithmic level.
At Marr’s (1982) computational level, cognitive scientists wish to determine the nature of the information processing problems being solved by agents. Answering these questions usually requires developing proofs in some formal language. Again, all three approaches to cognitive science are well versed in posing computationallevel questions. The differences between them are reflected in the formal language used to explore answers to these questions. Classical cognitive science often appeals to some form of propositional logic (Chomsky, 1959a; McCawley, 1981; Wexler & Culicover, 1980), the behaviour of connectionist networks lends itself to being described in terms of statistical mechanics (Amit, 1989; Grossberg, 1988; Smolensky, 1988; Smolensky & Legendre, 2006), and embodied cognitive scientists have a preference for dynamical systems theory (Clark, 1997; Port & van Gelder, 1995b; Shapiro, 2011).
Marr’s (1982) tri-level hypothesis is only one example of exploring cognition at multiple levels. Precursors of Marr’s approach can be found in core writings that appeared fairly early in cognitive science’s modern history. For instance, philosopher Jerry Fodor (1968b) noted that one cannot establish any kind of equivalence between the behaviour of an organism and the behaviour of a simulation without first specifying a level of description that places the comparison in a particular context.
Marr (1982) himself noted that an even stronger parallel exists between the tri-level hypothesis and Chomsky’s (1965) approach to language. To begin with, Chomsky’s notion of an innate and universal grammar, as well as his idea of a “language organ” or a “faculty of language,” reflect a materialist view of language. Chomsky clearly expects that language can be investigated at the implementational level. The language faculty is due “to millions of years of evolution or to principles of neural organization that may be even more deeply grounded in physical law” (p. 59). Similarly, “the study of innate mechanisms leads us to universal grammar, but also, of course, to investigation of the biologically determined principles that underlie language use” (Chomsky, 1980, p. 206).
Marr’s (1982) algorithmic level is mirrored by Chomsky’s (1965) concept of linguistic performance. Linguistic performance is algorithmic in the sense that a performance theory should account for “the actual use of language in concrete situations” (Chomsky, 1965, p. 4). The psychology of language can be construed as being primarily concerned with providing theories of performance (Chomsky, 1980). That is, psychology’s “concern is the processes of production, interpretation, and the like, which make use of the knowledge attained, and the processes by which transition takes place from the initial to the final state, that is, language acquisition” (pp. 201– 202). An account of the processes that underlie performance requires an investigation at the algorithmic level.
Finally, Marr (1982) noted that Chomsky’s notion of linguistic competence parallels the computational level of analysis. A theory of linguistic competence specifies an ideal speaker-listener’s knowledge of language (Chomsky, 1965). A grammar is a theory of competence; it provides an account of the nature of language that “is unaffected by such grammatically irrelevant conditions as memory limitations, distractions, shifts of attention and interest, and errors (random or characteristic) in applying . . . knowledge of the language in actual performance” (p. 3). As a computational-level theory, a grammar accounts for what in principle could be said or understood; in contrast, a performance theory accounts for language behaviours that actually occurred (Fodor, 1968b). Marr (1982) argued that influential criticisms of Chomsky’s theory (Winograd, 1972a) mistakenly viewed transformational grammar as an algorithmic, and not a computational, account. “Chomsky’s theory of transformational grammar is a true computational theory . . . concerned solely with specifying what the syntactic decomposition of an English sentence should be, and not at all with how that decomposition should be achieved” (Marr, 1982, p. 28).
The notion of the cognitive vocabulary arises by taking a different approach to linking Marr’s (1982) theory of vision to Chomsky’s (1965) theory of language. In addition to proposing the tri-level hypothesis, Marr detailed a sequence of different types of representations of visual information. In the early stages of visual processing, information was represented in the primal sketch, which provided a spatial representation of visual primitives such as boundaries between surfaces. Operations on the primal sketch produced the 2½-D sketch, which represents the properties, including depth, of all visible surfaces. Finally, operations on the 2½-D sketch produce the 3-D model, which represents the three-dimensional properties of objects (including surfaces not directly visible) in a fashion that is independent of view.
Chomsky’s (1965) approach to language also posits different kinds of representations (Jackendoff, 1987). These include representations of phonological structure, representations of syntax, and representations of semantic or conceptual structures. Jackendoff argued that Marr’s (1982) theory of vision could be directly linked to Chomsky’s theory of language by a mapping between 3-D models and conceptual structures. This link permits the output of visual processing to play a critical role in fixing the semantic content of linguistic representations (Jackendoff, 1983, 1990).
One key element of Jackendoff’s (1987) proposal is the distinction that he imposed between syntax and semantics. This type of separation is characteristic of classical cognitive science, which strives to separate the formal properties of symbols from their content-bearing properties (Haugeland, 1985).
For instance, classical theorists define symbols as physical patterns that bear meaning because they denote or designate circumstances in the real world (Vera & Simon, 1993). The physical pattern part of this definition permits symbols to be manipulated in terms of their shape or form: all that is required is that the physical nature of a pattern be sufficient to identify it as a token of some symbolic type. The designation aspect of this definition concerns the meaning or semantic content of the symbol and is completely separate from its formal or syntactic nature.
To put it dramatically, interpreted formal tokens lead two lives: SYNTACTICAL LIVES, in which they are meaningless markers, moved according to the rules of some self-contained game; and SEMANTIC LIVES, in which they have meanings and symbolic relations to the outside world. (Haugeland, 1985, p. 100)
In other words, when cognitive systems are viewed representationally (e.g., as in Jackendoff, 1987), they can be described at different levels, but these levels are not identical to those of Marr’s (1982) tri-level hypothesis. Representationally, one level is physical, involving the physical properties of symbols. A second level is formal, concerning the logical properties of symbols. A third level is semantic, regarding the meanings designated by symbols. Again, each of these levels involves using a particular vocabulary to capture its particular regularities.
This second sense of levels of description leads to a position that some researchers have used to distinguish classical cognitive science from other approaches. In particular, it is first proposed that a cognitive vocabulary is used to capture regularities at the semantic level of description. It is then argued that the cognitive vocabulary is a mark of the classical, because it is a vocabulary that is used by classical cognitive scientists, but which is not employed by their connectionist or embodied counterparts.
The cognitive vocabulary is used to capture regularities at the cognitive level that cannot be captured at the physical or symbolic levels (Pylyshyn, 1984). “But what sort of regularities can these be? The answer has already been given: precisely the regularities that tie goals, beliefs, and actions together in a rational manner” (p. 132). In other words, the cognitive vocabulary captures regularities by describing meaningful (i.e., rational) relations between the contents of mental representations. It is the vocabulary used when one adopts the intentional stance (Dennett, 1987) to predict future behaviour or when one explains an agent at the knowledge level (Newell, 1982, 1993).
To treat a system at the knowledge level is to treat it as having some knowledge and some goals, and believing it will do whatever is within its power to attain its goals, in so far as its knowledge indicates. (Newell, 1982, p. 98)
The power of the cognitive vocabulary is that it uses meaningful relations between mental contents to explain intelligent behaviour (Fodor & Pylyshyn, 1988). For instance, meaningful, complex tokens are possible because the semantics of such expressions are defined in terms of the contents of their constituent symbols as well as the structural relationships that hold between these constituents. The cognitive vocabulary’s exploitation of constituent structure leads to the systematicity of classical theories: if one can process some expressions, then it is guaranteed that other expressions can also be processed because of the nature of constituent structures. This in turn permits classical theories to be productive, capable of generating an infinite variety of expressions from finite resources.
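A toy interpreter makes the compositional point concrete: expressions are nested structures, and a single recursive evaluation rule fixes the meaning of any complex token from the meanings of its constituents plus their structural arrangement. The miniature "world" and vocabulary are invented for the example. Because evaluation is driven by constituent structure, the sketch is automatically systematic (anything that can evaluate loves(john, mary) can evaluate loves(mary, john)) and productive (the same finite rules handle arbitrarily deep embeddings).

```python
# A miniature "world": which atomic relations actually hold.
world = {
    ("loves", "john", "mary"): True,
    ("loves", "mary", "john"): False,
    ("tall", "john"): True,
    ("tall", "mary"): True,
}

def meaning(expr):
    """The meaning (truth value) of a complex token is composed from the
    meanings of its constituents and their structural arrangement."""
    head = expr[0]
    if head == "not":
        return not meaning(expr[1])
    if head == "and":
        return meaning(expr[1]) and meaning(expr[2])
    if head == "or":
        return meaning(expr[1]) or meaning(expr[2])
    return world.get(expr, False)          # atomic proposition

# Systematicity: the same rules handle both arrangements of the constituents.
print(meaning(("loves", "john", "mary")))                      # True
print(meaning(("loves", "mary", "john")))                      # False

# Productivity: finite rules, unboundedly many evaluable expressions.
print(meaning(("and", ("tall", "john"),
                      ("not", ("loves", "mary", "john")))))    # True
```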
Some classical theorists have argued that other approaches in cognitive science do not posit the structural relations between mental contents that are captured by the cognitive vocabulary (Fodor & Pylyshyn, 1988). For instance, Fodor and Pylyshyn (1988) claimed that even though connectionist theories are representational, they are not cognitive because they exploit a very limited kind of relationship between represented contents.
Classical theories disagree with Connectionist theories about what primitive relations hold among these content-bearing entities. Connectionist theories acknowledge only causal connectedness as a principled relation among nodes; when you know how activation and inhibition flow among them, you know everything there is to know about how the nodes in a network are related. (Fodor and Pylyshyn, 1988, p. 12)
As a result, Fodor and Pylyshyn argued, connectionist models are not componential, nor systematic, nor even productive. In fact, because they do not use a cognitive vocabulary (in the full classical sense), connectionism is not cognitive.
Related arguments can be made against positions that have played a central role in embodied cognitive science, such as the ecological approach to perception advocated by Gibson (1979). Fodor and Pylyshyn (1981) have argued against the notion of direct perception, which attempts to construe perception as involving the direct pick-up of information about the layout of a scene; that is, acquiring this information without the use of inferences from cognitive contents: “The fundamental difficulty for Gibson is that ‘about’ (as in ‘information about the layout in the light’) is a semantic relation, and Gibson has no account at all of what it is to recognize a semantic relation” (p. 168). Fodor and Pylyshyn argued that Gibson’s only notion of information involves the correlation between states of affairs, and that this notion is insufficient because it is not as powerful as the classical notion of structural relations among cognitive contents. “The semantic notion of information that Gibson needs depends, so far as anyone knows, on precisely the mental representation construct that he deplores” (p. 168).
It is clear from the discussion above that Pylyshyn used the cognitive vocabulary to distinguish classical models from connectionist and embodied theories. This does not mean that he believed that non-classical approaches have no contributions to make. For instance, in Chapter 8 we consider in detail his theory of seeing and visualizing (Pylyshyn, 2003c, 2007); it is argued that this is a hybrid theory, because it incorporates elements from all three schools of thought in cognitive science.
However, one of the key elements of Pylyshyn’s theory is that vision is quite distinct from cognition; he has made an extended argument for this position. When he appealed to connectionist networks or embodied access to the world, he did so in his account of visual, and not cognitive, processes. His view has been that such processes can only be involved in vision, because they do not appeal to the cognitive vocabulary and therefore cannot be viewed as cognitive processes. In short, the cognitive vocabulary is viewed by Pylyshyn as a mark of the classical.
Is the cognitive vocabulary a mark of the classical? It could be—provided that the semantic level of explanation captures regularities that cannot be expressed at either the physical or symbolic levels. Pylyshyn (1984) argued that this is indeed the case, and that the three different levels are independent:
The reason we need to postulate representational content for functional states is to explain the existence of certain distinctions, constraints, and regularities in the behavior of at least human cognitive systems, which, in turn, appear to be expressible only in terms of the semantic content of the functional states of these systems. Chief among the constraints is some principle of rationality. (Pylyshyn, 1984, p. 38)
However, it is not at all clear that in the practice of classical cognitive science—particularly the development of computer simulation models—the cognitive level is distinct from the symbolic level. Instead, classical researchers adhere to what is known as the formalist’s motto (Haugeland, 1985). That is, the semantic regularities of a classical model emerge from the truth-preserving, but syntactic, regularities at the symbolic level.
If the formal (syntactical) rules specify the relevant texts and if the (semantic) interpretation must make sense of all those texts, then simply playing by the rules is itself a surefire way to make sense. Obey the formal rules of arithmetic, for instance, and your answers are sure to be true. (Haugeland, 1985, p. 106)
If this relation holds between syntax and semantics, then the cognitive vocabulary is not capturing regularities that cannot be captured at the symbolic level.
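Haugeland's arithmetic example can be miniaturized to show what the formalist's motto amounts to: below, numerals are just strings of stroke marks, and "addition" is a purely syntactic operation (concatenating marks) that makes no reference to quantity. Because the formal rule respects the interpretation, playing by the rule is guaranteed to yield semantically correct sums. The stroke notation is an illustrative choice.

```python
def encode(n):
    """Semantic -> syntactic: represent the number n as a string of strokes."""
    return "|" * n

def decode(marks):
    """Syntactic -> semantic: interpret a string of strokes as a number."""
    return len(marks)

def formal_add(marks_a, marks_b):
    """A purely formal rule: manipulate the marks by shape alone
    (concatenation), with no reference to what they mean."""
    return marks_a + marks_b

# Playing by the formal rule is a "surefire way to make sense":
for a, b in [(2, 3), (7, 0), (4, 4)]:
    result = formal_add(encode(a), encode(b))
    assert decode(result) == a + b           # the syntax respects the semantics
    print(f"{a} + {b} -> {result!r} -> {decode(result)}")
```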
The formalist’s motto is a consequence of the physical symbol system hypothesis (Newell, 1980; Newell & Simon, 1976) that permitted classical cognitive science to replace Cartesian dualism with materialism. Fodor and Pylyshyn (1988, p. 13) adopt the physical symbol system hypothesis, and tacitly accept the formalist’s motto: “Because Classical mental representations have combinatorial structure, it is possible for Classical mental operations to apply to them by reference to their form.” Note that in this quote, operations are concerned with formal and not semantic properties; semantics is preserved provided that there is a special relationship between constraints on symbol manipulations and constraints on symbolic content.
To summarize this section: The interdisciplinary nature of cognitive science arises because cognitive systems require explanations at multiple levels. Two multiple level approaches are commonly found in the cognitive science literature. The first is Marr’s (1982) tri-level hypothesis, which requires cognitive systems to be explained at the implementational, algorithmic, and computational levels. It is argued above that all three schools of thought in cognitive science adhere to the tri-level hypothesis. Though at each level there are technical differences to be found between classical, connectionist, and embodied cognitive science, all three approaches seem consistent with Marr’s approach. The tri-level hypothesis cannot be used to distinguish one cognitive science from another.
The second is a tri-level approach that emerges from the physical symbol system hypothesis. It argues that information processing requires explanation at three independent levels: the physical, the symbolic, and the semantic (Dennett, 1987; Newell, 1982; Pylyshyn, 1984). The physical and symbolic levels in this approach bear a fairly strong relationship to Marr's (1982) implementational and algorithmic levels. The semantic level, though, differs from Marr's computational level in calling for a cognitive vocabulary that captures regularities by appealing to the contents of mental representations. This cognitive vocabulary has been proposed as a mark of the classical that distinguishes classical theories from those proposed by connectionist and embodied researchers. However, it has been suggested that this view may not hold, because the formalist's motto makes the proposal of an independent cognitive vocabulary difficult to defend.
Vera and Simon’s (1993) analysis of situated action theories defines one extreme pole of a continuum for relating different approaches in cognitive science. At this end of the continuum, all theories in cognitive science—including situated action theories and connectionist theories—are classical or symbolic in nature. “It follows that there is no need, contrary to what followers of SA seem sometimes to claim, for cognitive psychology to adopt a whole new language and research agenda, breaking completely from traditional (symbolic) cognitive theories” (p. 46).
The position defined by Vera and Simon’s (1993) analysis unites classical, connectionist, and cognitive science under a classical banner. However, it does so because key terms, such as symbolic, are defined so vaguely that their value becomes questionable. Critics of their perspective have argued that anything can be viewed as symbolic given Vera and Simon’s liberal definition of what symbols are (Agre, 1993; Clancey, 1993).
The opposite pole of the continuum for relating different approaches in cognitive science is defined by theories that propose sharp differences between different schools of thought, and which argue in favor of adopting one while abandoning others (Chemero, 2009; Fodor & Pylyshyn, 1988; Smolensky, 1988; Winograd & Flores, 1987b).
One problem with this end of the continuum, an issue that is the central theme of the current chapter, is that it is very difficult to define marks of the classical, features that uniquely distinguish classical cognitive science from competing approaches. Our examination of the modern computing devices that inspired classical cognitive science revealed that many of these machines lacked some of the properties that are often considered marks of the classical. That is, it is not clear that properties such as central control, serial processing, local and internal representations, explicit rules, and the cognitive vocabulary are characteristics that distinguish classical theories from other kinds of models.
The failure to find clear marks of the classical may suggest that a more profitable perspective rests somewhere along the middle of the continuum for relating different approaches to cognitive science, for a couple of reasons. For one, the extent to which a particular theory is classical (or connectionist, or embodied) may be a matter of degrees. That is, any theory in cognitive science may adopt features such as local vs. distributed representations, internal vs. external memories, serial vs. parallel processes, and so on, to varying degrees. Second, differences between approaches may be important in the middle of the continuum, but may not be so extreme or distinctive that alternative perspectives cannot be co-operatively coordinated to account for cognitive phenomena.
To say this differently, rather than seeking marks of the classical, perhaps we should find arcs that provide links between different theoretical perspectives. One phenomenon might not nicely lend itself to an explanation from one school of thought, but be more easily accounted for by applying more than one school of thought at the same time. This is because the differing emphases of the simultaneously applied models may be able to capture different kinds of regularities. Cognitive science might be unified to the extent that it permits different theoretical approaches to be combined in hybrid models.
A hybrid model is one in which two or more approaches are applied simultaneously to provide a complete account of a whole phenomenon. The approaches might be unable to each capture the entirety of the phenomenon, but—in a fashion analogous to coarse coding—provide a complete theory when the different aspects that they capture are combined. One example of such a theory is provided in David McNeill's (2005) Gesture and Thought.
McNeill (2005) noted that the focus of modern linguistic traditions on competence instead of performance (Chomsky, 1965) emphasizes the study of static linguistic structures. That is, such traditions treat language as a thing, not as a process. In contrast to this approach, other researchers have emphasized the dynamic nature of language (Vygotsky, 1986), treating it as a process, not as a thing. One example of a dynamic aspect of language of particular interest to McNeill (2005) is gesture, which in McNeill’s view is a form of imagery. Gestures that accompany language are dynamic because they are extended through time with identifiable beginnings, middles, and ends. McNeill’s proposal was that a complete account of language requires the simultaneous consideration of its static and dynamic elements.
McNeill (2005) argued that the static and dynamic elements of language are linked by a dialectic. A dialectic involves some form of opposition or conflict that is resolved through change; it is this necessary change that makes dialectic dynamic. The dialectic of language results because speech and gesture provide very different formats for encoding meaning. For instance,
in speech, ideas are separated and arranged sequentially; in gesture, they are instantaneous in the sense that the meaning of the gesture is not parceled out over time (even though the gesture may take time to occur, its full meaning is immediately present). (McNeill, 2005, p. 93)
As well, speech involves analytic meaning (i.e., based on parts), pre-specified pairings between form and meaning, and the use of forms defined by conventions. In contrast, gestures involve global meaning, imagery, and idiosyncratic forms that are created on the fly.
McNeill (2005) noted that the dialectic of language arises because there is a great deal of evidence suggesting that speech and gesture are synchronous. That is, gestures do not occur during pauses in speech to fill in meanings that are difficult to utter; both occur at the same time. As a result, two very different kinds of meaning are presented simultaneously. “Speech puts different semiotic modes together at the same moment of the speaker’s cognitive experience. This is the key to the dialectic” (p. 94).
According to McNeill (2005), the initial co-occurrence of speech and gesture produces a growth point, which is an unstable condition defined by the dialectic. This growth point is unpacked in an attempt to resolve the conflict between dynamic and static aspects of meaning. This unpacking is a move from the unstable to the stable. This is accomplished by creating a static, grammatical structure. “Change seeks repose. A grammatically complete sentence (or its approximation) is a state of repose par excellence, a natural stopping point, intrinsically static and reachable from instability” (p. 95). Importantly, the particular grammatical structure that is arrived at when stability is achieved depends upon what dynamic or gestural information was present during speech.
McNeill’s (2005) theory is intriguing because it exploits two different kinds of theories simultaneously: a classical theory of linguistic competence and an embodied theory of gestured meaning. Both the static/classical and dynamic/ embodied parts of McNeill’s theory are involved with conveying meaning. They occur at the same time and are therefore co-expressive, but they are not redundant: “gesture and speech express the same underlying idea unit but express it in their own ways—their own aspects of it, and when they express overlapping aspects they do so in distinctive ways” (p. 33). By exploiting two very different approaches in cognitive science, McNeill is clearly providing a hybrid model.
A hybrid model different in nature from McNeill's (2005) is one in which multiple theoretical approaches are applied in succession rather than simultaneously. For example, theories of perception often involve different stages of processing (e.g., visual detection, visual cognition, object recognition [Treisman, 1988]). Perhaps one stage of such processing is best described by one kind of theory (e.g., a connectionist theory of visual detection) while a later stage is best described by a different kind of theory (e.g., a symbolic model of object recognition). One such theory of seeing and visualizing favoured by Pylyshyn (2003c, 2007) is discussed in detail as an example of a hybrid cognitive science in Chapter 8.
Zenon Pylyshyn is one of the leading figures in the study of the foundations of cognitive science. His own training was highly interdisciplinary; he earned degrees in engineering-physics, control systems, and experimental psychology. In 1994, he joined Rutgers University as Board of Governors Professor of Cognitive Science and Director of the Rutgers Center for Cognitive Science. Prior to his arrival at Rutgers he was Professor of Psychology, Professor of Computer Science, Director of the University of Western Ontario Center for Cognitive Science, and an honorary professor in the departments of Philosophy and Electrical Engineering at Western. I myself had the privilege of having Pylyshyn as my PhD supervisor when I was a graduate student at Western.
Pylyshyn is one of the key proponents of classical cognitive science (Dedrick & Trick, 2009). One of the most important contributions to classical cognitive science has been his analysis of its foundations, presented in his classic work Computation and Cognition (Pylyshyn, 1984). Pylyshyn’s (1984) book serves as a manifesto for classical cognitive science, in which cognition is computation: the manipulation of formal symbols. It stands as one of the pioneering appeals for using the multiple levels of investigation within cognitive science. It provides an extremely cogent argument for the need to use a cognitive vocabulary to capture explanatory generalizations in the study of cognition. In it, Pylyshyn also argued for establishing the strong equivalence of a cognitive theory by determining the characteristics of the cognitive architecture.
As a champion of classical cognitive science, it should not be surprising that Pylyshyn has published key criticisms of other approaches to cognitive science. Fodor and Pylyshyn’s (1988) Cognition article “Connectionism and cognitive architecture” is one of the most cited critiques of connectionist cognitive science that has ever appeared. Fodor and Pylyshyn (1981) have also provided one of the major critiques of direct perception (Gibson, 1979). This places Pylyshyn securely in the camp against embodied cognitive science; direct perception in its modern form of active perception (Noë, 2004) has played a major role in defining the embodied approach. Given the strong anti-classical, anti-representational perspective of radical embodied cognitive science (Chemero, 2009), it is far from surprising to be able to cite Pylyshyn’s work in opposition to it.
In addition to pioneering classical cognitive science, Pylyshyn has been a crucial contributor to the literature on mental imagery and visual cognition. He is well known as a proponent of the propositional account of mental imagery, and he has published key articles critiquing its opponent, the depictive view (Pylyshyn, 1973, 1979b, 1981a, 2003b). His 1973 article “What the mind’s eye tells the mind’s brain: A critique of mental imagery” is a science citation classic that is responsible for launching the imagery debate in cognitive science. In concert with his analysis of mental imagery, Pylyshyn has developed a theory of visual cognition that may serve as an account of how cognition connects to the world (Pylyshyn, 1989, 1999, 2000, 2001, 2003c, 2007; Pylyshyn & Storm, 1988). The most extensive treatments of this theory can be found in his 2003 book Seeing and Visualizing—which inspired the title of the current chapter—and in his 2007 book Things and Places.
The purpose of the current chapter is to provide a brief introduction to Pylyshyn's theory of visual cognition, in part because this theory provides a wonderful example of the interdisciplinary scope of modern cognitive science. A second, more crucial reason is that, as argued in this chapter, this theory contains fundamental aspects of all three approaches—in spite of Pylyshyn's position as a proponent of classical cognitive science and as a critic of both connectionist and embodied cognitive science. Thus Pylyshyn's account of visual cognition provides an example of the type of hybrid theory that was alluded to in the previous two chapters: a theory that requires classical, connectionist, and embodied elements.
Some researchers are concerned that many perceptual theorists tacitly assume a snapshot conception of experience (Noë, 2002) or a video camera theory of vision (Frisby, 1980). Such tacit assumptions are rooted in our phenomenal experience of an enormously high-quality visual world that seems to be delivered to us effortlessly. “You open your eyes and—presto!—you enjoy a richly detailed picture-like experience of the world, one that represents the world in sharp focus, uniform detail and high resolution from the centre out to the periphery” (Noë, 2002, p. 2).
Indeed, our visual experience suggests that perception puts us in direct contact with reality. Perception is transparent; when we attempt to attend to perceptual processing, we miss the processing itself and instead experience the world around us (Gendler & Hawthorne, 2006). Rather than experiencing the world as picture-like (Noë, 2002), it is as if we simply experience the world (Chalmers, 2006; Merleau-Ponty, 1962). Merleau-Ponty (1962, p. 77) noted that "our perception ends in objects, and the object[,] once constituted, appears as the reason for all the experiences of it which we have had or could have." Chalmers (2006) asserts that,
in the Garden of Eden, we had unmediated contact with the world. We were directly acquainted with objects in the world and with their properties. Objects were presented to us without causal mediation, and properties were revealed to us in their true intrinsic glory. (Chalmers, 2006, p. 49)
To say that visual processing is transparent is to say that we are only aware of the contents that visual processes deliver. This was a central assumption of the so-called New Look theory of perception. For instance, Bruner (1957, p. 124) presumed that “all perceptual experience is necessarily the end product of a categorization process.” Ecological perception (Gibson, 1979), a theory that stands in strong opposition in almost every respect to the New Look, also agrees that perceptual processes are transparent. “What one becomes aware of by holding still, closing one eye, and observing a frozen scene are not visual sensations but only the surfaces of the world that are viewed now from here” (p. 286, italics original).
That visual processing is transparent is not a position endorsed by all. For instance, eighteenth-century philosopher George Berkeley and nineteenth-century art critic John Ruskin both argued that it was possible to recover the “innocence of the eye” (Gombrich, 1960). According to this view, it is assumed that at birth humans have no concepts, and therefore cannot experience the world in terms of objects or categories; “what we really see is only a medley of colored patches such as Turner paints” (p. 296). Seeing the world of objects requires learning about the required categories. It was assumed that an artist could return to the “innocent eye”: “the painter must clear his mind of all he knows about the object he sees, wipe the slate clean, and make nature write her own story” (p. 297).
Most modern theories of visual perception take the middle ground between the New Look and the innocent eye by proposing that our experience of visual categories is supported by, or composed of, sensed information (Mach, 1959). Mach (1959) proclaimed that,
thus, perceptions, presentations, volitions, and emotions, in short the whole inner and outer world, are put together, in combinations of varying evanescence and permanence, out of a small number of homogeneous elements. Usually, these elements are called sensations. (Mach, 1959, p. 22)
From this perspective, a key issue facing any theory of seeing or visualizing is determining where sensation ends and where perception begins.
Unfortunately, the demarcation between sensation and perception is not easily determined by introspection. Subjective experience can easily lead us to the intentional fallacy in which a property of the content of a mental representation is mistakenly attributed to the representation itself (Pylyshyn, 2003c). We see in the next section that the transparency of visual processing hides from our awareness a controversial set of processes that must cope with tremendously complex information processing problems.
Some researchers have noted a striking tension between experience and science (Varela, Thompson, & Rosch, 1991). On the one hand, our everyday experience provides a compelling and anchoring sense of self-consciousness. On the other hand, cognitive science assumes a fundamental self-fragmentation, because much of thought is putatively mediated by mechanisms that are modular, independent, and completely incapable of becoming part of conscious experience. “Thus cognitivism challenges our conviction that consciousness and the mind either amount to the same thing or [that] there is an essential or necessary connection between them” (p. 49).
The tension between experience and science is abundantly evident in vision research. It is certainly true that the scientific study of visual perception relies heavily on the analysis of visual experience (Pylyshyn, 2003c). However, researchers are convinced that this analysis must be performed with caution and be supplemented by additional methodologies. This is because visual experience is not complete, in the sense that it does not provide direct access to or experience of visual processing. Pylyshyn (2003b) wrote,
what we do [experience] is misleading because it is always the world as it appears to us that we see, not the real work that is being done by the mind in going from the proximal stimuli, generally optical patterns on the retina, to the familiar experience of seeing (or imagining) the world. (Pylyshyn, 2003b, p. xii)
Vision researchers have long been aware that the machinery of vision is not a part of our visual experience. Helmholtz noted that “it might seem that nothing could be easier than to be conscious of one’s own sensations; and yet experience shows that for the discovery of subjective sensations some special talent is needed” (Helmholtz & Southall, 1962b, p. 6). Cognitive psychologist Roger Shepard observed that,
we do not first experience a two-dimensional image and then consciously calculate or infer the three-dimensional scene that is most likely, given that image. The first thing we experience is the three-dimensional world—as our visual system has already inferred it for us on the basis of the two-dimensional input. (Shepard, 1990, p. 168)
In the nineteenth century, Hermann von Helmholtz argued that our visual experience results from the work of unconscious mechanisms. “The psychic activities that lead us to infer that there in front of us at a certain place there is a certain object of a certain character, are generally not conscious activities, but unconscious ones” (Helmholtz & Southall, 1962b, p. 4). However, the extent and nature of this unconscious processing was only revealed when researchers attempted to program computers to see. It was then discovered that visual processes face a difficult problem that also spurred advances in modern linguistic theory: the poverty of the stimulus.
Generative linguistics distinguished between those theories of language that were descriptively adequate and those that were explanatorily adequate (Chomsky, 1965). A descriptively adequate theory of language provided a grammar that was capable of describing the structure of any possible grammatical sentence in a language and incapable of describing the structure of any sentence that did not belong to this language. A more powerful explanatorily adequate theory was descriptively adequate but also provided an account of how that grammar was learned. “To the extent that a linguistic theory succeeds in selecting a descriptively adequate grammar on the basis of primary linguistic data, we can say that it meets the condition of explanatory adequacy” (p. 25).
Why did Chomsky use the ability to account for language learning as a defining characteristic of explanatory adequacy? It was because Chomsky realized that language learning faced the poverty of the stimulus. The poverty-of-the-stimulus argument is the claim that primary linguistic data—that is, the linguistic utterances heard by a child—do not contain enough information to uniquely specify the grammar used to produce them.
It seems that a child must have the ability to ‘invent’ a generative grammar that defines well-formedness and assigns interpretations to sentences even though the primary linguistic data that he uses as a basis for this act of theory construction may, from the point of view of the theory he constructs, be deficient in various respects. (Chomsky, 1965, p. 201)
The poverty of the stimulus is responsible for formal proofs that text learning of a language is not possible if the language is defined by a complex grammar (Gold, 1967; Pinker, 1979; Wexler & Culicover, 1980).
Language acquisition can be described as solving the projection problem: determining the mapping from primary linguistic data to the acquired grammar (Baker, 1979; Peters, 1972). When language learning is so construed, the poverty of the stimulus becomes a problem of underdetermination. That is, the projection from data to grammar is not unique, but is instead one-to-many: one set of primary linguistic data is consistent with many potential grammars.
For sighted individuals, our visual experience makes us take visual perception for granted. We have the sense that we simply look at the world and see it. Indeed, the phenomenology of vision led artificial intelligence pioneers to expect that building vision into computers would be a straightforward problem. For instance, Marvin Minsky assigned one student, as a summer project, the task of programming a computer to see (Horgan, 1993). However, failures to develop computer vision made it apparent that the human visual system was effortlessly solving, in real time, enormously complicated information processing problems. Like language learning, vision is dramatically underdetermined. That is, if one views vision as the projection from primary visual data (the proximal stimulus on the retina) to the internal interpretation or representation of the distal scene, this projection is one-to-many. A single proximal stimulus is consistent with an infinite number of different interpretations (Gregory, 1970; Marr, 1982; Pylyshyn, 2003c; Rock, 1983; Shepard, 1990).
One reason that vision is underdetermined is that the distal world is arranged in three dimensions of space, but the primary source of visual information we have about it comes from patterns of light projected onto an essentially two-dimensional surface, the retina. “According to a fundamental theorem of topology, the relations between objects in a space of three dimensions cannot all be preserved in a two-dimensional projection” (Shepard, 1990, pp. 173–175).
This source of underdetermination is illustrated in Figure 8-1, which shows a view from the top of an eye observing a point in the distal world as it moves from position X1 to position Y1 over a given interval of time.
Figure 8-1. Underdetermination of projected movement.
The primary visual data caused by this movement is the motion, from point A to point B, of a point projected onto the back of the retina. The projection from the world to the back of the eye is uniquely defined by the laws of optics and of projective geometry.
However, the projection in the other direction, from the retina to the distal world, is not unique. If one attempts to use the retinal information alone to identify the distal conditions that caused it, then infinitely many possibilities are available. Any of the different paths of motion in the world (occurring over the same duration) that are illustrated in Figure 8-1 are consistent with the proximal information projected onto the eye. Indeed, movement from any position along the dashed line through the X-labelled points to any position along the other dashed line is a potential cause of the proximal stimulus.
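The one-to-many character of this inverse projection can be made concrete with a small computational sketch. The code below is only a hypothetical illustration: it assumes a simple pinhole model of projection, and its focal length and sample depths are invented values rather than anything drawn from the sources cited in this chapter. It shows that every distal point lying on the same line of sight projects to identical retinal coordinates, so the retinal data alone cannot recover the distal cause.

```python
# A minimal pinhole-projection sketch: many distal points, one retinal point.
# The focal length and the sample depths are arbitrary illustrative values.

def project(x, y, z, f=1.0):
    """Perspective projection of a distal point (x, y, z) onto the retina.
    Depth is lost: only the ratios x/z and y/z survive the projection."""
    return (f * x / z, f * y / z)

# Distal points at different depths, all lying on the same line of sight.
distal_points = [(0.5 * z, 0.2 * z, z) for z in (1.0, 2.0, 5.0, 10.0)]
for point in distal_points:
    print(point, "->", project(*point))
# Every distal point maps to the same retinal coordinates (0.5, 0.2), so the
# retinal datum alone is consistent with infinitely many distal causes.
```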
One reason for the poverty of the visual stimulus, as illustrated in Figure 8-1, is that information is necessarily lost when an image from a three-dimensional space is projected onto a two-dimensional surface.
We are so familiar with seeing, that it takes a leap of imagination to realize that there are problems to be solved. But consider it. We are given tiny distorted upsidedown images in the eyes, and we see separate solid objects in surrounding space. From the patterns of stimulation on the retinas we perceive the world of objects, and this is nothing short of a miracle. (Gregory, 1978, p. 9)
A second reason for the poverty of the visual stimulus arises because the neural circuitry that mediates visual perception is subject to the limited order constraint (Minsky & Papert, 1969). There is no single receptor that takes in the entire visual stimulus in a glance. Instead, each receptor processes only a small part of the primary visual data. This produces deficiencies in visual information. For example, consider the aperture problem that arises in motion perception (Hildreth, 1983), illustrated in Figure 8-2.
Figure 8-2. The aperture problem in motion perception.
In this situation, a motion detector’s task is to detect the movement of a contour, shown in grey. However, the motion detector is of limited order: its window on the moving contour is the circular aperture in the figure, an aperture that is much smaller than the contour it observes.
Because of its small aperture, the motion detector in Figure 8-2 can only be sensitive to the component of the contour’s motion that is perpendicular to the edge of the contour, vector A. It is completely blind to any motion parallel to the contour, the dashed vector B. This is because movement in this direction will not change the appearance of anything within the aperture. As a result, the motion detector is unable to detect the true movement of the contour, vector T.
The limited order constraint leads to a further source of visual underdetermination. If visual detectors are of limited order, then our interpretation of the proximal stimulus must be the result of combining many different (and deficient) local measurements together. However, many different global interpretations exist that are consistent with a single set of such measurements. The local measurements by themselves cannot uniquely determine the global perception that we experience.
Consider the aperture problem of Figure 8-2 again. Imagine one, or many, local motion detectors that deliver vector A at many points along that contour. How many true motions of the contour could produce this situation? In principle, one can create an infinite number of different possible vector Ts by choosing any desired length of vector B—to which any of the detectors are completely blind—and adding it to the motion that is actually detected, i.e., vector A.
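This argument amounts to a short vector calculation, sketched below. The decomposition and the numbers are invented for illustration and do not model any particular motion detector: the true motion T is split into a component along the contour's unit normal and a component along its tangent, and because only the normal component alters what is visible within the aperture, every T that shares that component produces exactly the same measurement.

```python
import numpy as np

# Unit normal and tangent of the contour (a 45-degree edge, chosen arbitrarily).
normal = np.array([1.0, 1.0]) / np.sqrt(2.0)
tangent = np.array([1.0, -1.0]) / np.sqrt(2.0)

def visible_component(true_motion):
    """What a limited-aperture detector measures: the projection of the true
    motion onto the contour normal (vector A in Figure 8-2)."""
    return np.dot(true_motion, normal) * normal

# A family of different true motions T = A + b * tangent, for arbitrary b.
A = 1.0 * normal
for b in (0.0, 0.5, 2.0, -3.0):
    T = A + b * tangent
    print(T, "->", visible_component(T))
# Every one of these distinct true motions yields the same measured vector A,
# so the detector's output underdetermines the contour's actual movement.
```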
Pylyshyn (2003b, 2007) provided many arguments against the theory that vision constructs a representation of the world, which is depictive in nature. However, the theory that Pylyshyn opposed is deeply entrenched in accounts of visual processing.
For years the common view has been that a large-scope inner image is built up by superimposing information from individual glances at the appropriate coordinates of the master image: as the eye moves over a scene, the information on the retina is transmitted to the perceptual system, which then projects it onto an inner screen in the appropriate location, thus painting the larger scene for the mind side to observe. (Pylyshyn, 2003b, pp. 16–17)
Proponents of this view face another source of the poverty of the visual stimulus. It is analogous to the limited order constraint, in the sense that it arises because vision proceeds by accessing small amounts of information in a sequence of fragmentary glimpses.
Although we experience our visual world as a rich, stable panorama that is present in its entirety, this experience is illusory (Dennett, 1991; Pylyshyn, 2003c, 2007). Evidence suggests that we only experience fragments of the distal world a glance at a time. For instance, we are prone to change blindness, where we fail to notice a substantial visual change even though it occurs in plain sight (O’Regan et al., 2000). A related phenomenon is inattentional blindness, in which visual information that should be obvious is not noticed because attention is not directed to it (even though the gaze is!). In one famous experiment (Simons & Chabris, 1999), subjects watched a video of a basketball game and were instructed to count the number of times that the teams changed possession of the ball. In the midst of the game a person dressed in a gorilla suit walked out onto the court and danced a jig. Amazingly, most subjects failed to notice this highly visible event because they were paying attention to the ball.
If the visual system collects fragments of visual information a glance at a time, then our visual experience further suggests that these different fragments are “stitched together” to create a stable panorama. In order for this to occur, the fragments have to be inserted in the correct place, presumably by identifying components of the fragment (in terms of visible properties) in such a way that it can be asserted that “object x in one location in a glimpse collected at time t + 1 is the same thing as object y in a different location in a glimpse collected at an earlier time t.” This involves computing correspondence, or tracking the identities of objects over time or space, a problem central to the study of binocular vision (Marr, Palm, & Poggio, 1978; Marr & Poggio, 1979) and motion perception (Dawson, 1991; Dawson & Pylyshyn, 1988; Ullman, 1978, 1979).
However, the computing of correspondence is a classic problem of underdetermination. If there are N different elements in two different views of a scene, then there are at least N! ways to match the identities of elements across the views. This problem cannot be solved by image matching—basing the matches on the appearance or description of elements in the different views—because the dynamic nature of the world, coupled with the loss of information about it when it is projected onto the eyes, means that there are usually radical changes to an object’s proximal stimulus over even brief periods of time.
How do we know which description uniquely applies to a particular individual and, what’s more important, how do we know which description will be unique at some time in the future when we will need to find the representation of that particular token again in order to add some newly noticed information to it? (Pylyshyn, 2007, p. 12)
To summarize, visual perception is intrinsically underdetermined because of the poverty of the visual stimulus. If the goal of vision is to construct representations of the distal world, then proximal stimuli do not themselves contain enough information to accomplish this goal. In principle, an infinite number of distal scenes could be the cause of a single proximal stimulus. “And yet we do not perceive a range of possible alternative worlds when we look out at a scene. We invariably see a single unique layout. Somehow the visual system manages to select one of the myriad logical possibilities” (Pylyshyn, 2003b, p. 94). Furthermore, the interpretation selected by the visual system seems—from our success in interacting with the world—to almost always be correct. “What is remarkable is that we err so seldom” (Shepard, 1990, p. 175).
How does the visual system compensate for the poverty of the stimulus as well as generate unique and accurate solutions to problems of underdetermination? In the following sections we consider two very different answers to this question, both of which are central to Pylyshyn’s theory of visual cognition. The first of these, which can be traced back to Helmholtz (Helmholtz & Southall, 1962b) and which became entrenched with the popularity of the New Look in the 1950s (Bruner, 1957, 1992), is that visual perception is full-fledged cognitive processing. “Given the slenderest clues to the nature of surrounding objects we identify them and act not so much according to what is directly sensed, but to what is believed” (Gregory, 1970, p. 11).
Hermann von Helmholtz was not aware of problems of visual underdetermination of the form illustrated in Figures 8-1 and 8-2. However, he was aware that visual sensors could be seriously misled. One example that he considered at length (Helmholtz & Southall, 1962a, 1962b) was the mechanical stimulation of the eye (e.g., slight pressure on the eyeball made by a blunt point), which produced a sensation of light (a pressure-image or phosphene) even though a light stimulus was not present. From this he proposed a general rule for determining the “ideas of vision”:
Such objects are always imagined as being present in the field of vision as would have to be there in order to produce the same impression on the nervous mechanism, the eyes being used under ordinary normal conditions. (Helmholtz & Southall, 1962b, p. 2)
Helmholtz’s studies of such phenomena forced him to explain the processes by which such a rule could be realized. He first noted that the visual system does not have direct access to the distal world, but instead that primary visual data was retinal activity. He concluded that inference must be involved to transform retinal activity into visual experience. “It is obvious that we can never emerge from the world of our sensations to the apperception of an external world, except by inferring from the changing sensation that external objects are the causes of this change” (Helmholtz & Southall, 1962b, p. 33). This theory allowed Helmholtz to explain visual illusions as the result of mistaken reasoning rather than as the product of malfunctions in the visual apparatus: “It is rather simply an illusion in the judgment of the material presented to the senses, resulting in a false idea of it” (p. 4).
Helmholtz argued that the accuracy of visual inferences is due to an agent’s constant exploration and experimentation with the world, determining how actions in the world such as changing viewpoints alter visual experience.
Spontaneously and by our own power, we vary some of the conditions under which the object has been perceived. We know that the changes thus produced in the way that objects look depend solely on the movements we have executed. Thus we obtain a different series of apperceptions of the same object, by which we can be convinced with experimental certainty that they are simply apperceptions and that it is the common cause of them all. (Helmholtz & Southall, 1962b, p. 31)
Helmholtz argued that the only difference between visual inference and logical reasoning was that the former was unconscious while the latter was not, describing “the psychic acts of ordinary perception as unconscious conclusions” (Helmholtz & Southall, 1962b, p. 4). Consciousness aside, seeing and reasoning were processes of the same kind: “There can be no doubt as to the similarity between the results of such unconscious conclusions and those of conscious conclusions” (p. 4).
A century after Helmholtz, researchers were well aware of the problem of underdetermination with respect to vision. Their view of this problem was that it was based in the fact that certain information is missing from the proximal stimulus, and that additional processing is required to supply the missing information. With the rise of cognitivism in the 1950s, researchers proposed a top-down, or theory-driven, account of perception in which general knowledge of the world was used to disambiguate the proximal stimulus (Bruner, 1957, 1992; Bruner, Postman, & Rodrigues, 1951; Gregory, 1970, 1978; Rock, 1983). This approach directly descended from Helmholtz’s discussion of unconscious conclusions because it equated visual perception with cognition.
One of the principal characteristics of perceiving [categorization] is a characteristic of cognition generally. There is no reason to assume that the laws governing inferences of this kind are discontinuous as one moves from perceptual to more conceptual activities. (Bruner, 1957, p. 124)
The cognitive account of perception that Jerome Bruner originated in the 1950s came to be known as the New Look. According to the New Look, higher-order cognitive processes could permit beliefs, expectations, and general knowledge of the world to provide additional information for disambiguation of the underdetermining proximal stimulus. “We not only believe what we see: to some extent we see what we believe” (Gregory, 1970, p. 15). Hundreds of studies provided experimental evidence that perceptual experience was determined in large part by a perceiver’s beliefs or expectations. (For one review of this literature see Pylyshyn, 2003b.) Given the central role of cognitivism since the inception of the New Look, it is not surprising that this type of theory has dominated the modern literature.
The belief that perception is thoroughly contaminated by such cognitive factors as expectations, judgments, beliefs, and so on, became the received wisdom in much of psychology, with virtually all contemporary elementary texts in human information processing and vision taking that point of view for granted. (Pylyshyn, 2003b, p. 56)
To illustrate the New Look, consider a situation in which I see a small, black and white, irregularly shaped, moving object. This visual information is not sufficient to uniquely specify what in the world I am observing. To deal with this problem, I use general reasoning processes to disambiguate the situation. Imagine that I am inside my home. I know that I own a black and white cat, I believe that the cat is indoors, and I expect that I will see this cat in the house. Thus I experience this visual stimulus as “seeing my cat Phoebe.” In a different context, different expectations exist. For instance, if I am outside the house on the street, then the same proximal stimulus will be disambiguated with different expectations; “I see my neighbour’s black and white dog Shadow.” If I am down walking in the forest by the creek, then I may use different beliefs to “see a skunk.”
It would seem that a higher agency of the mind, call it the executive agency, has available to it the proximal input, which it can scan, and it then behaves in a manner very like a thinking organism in selecting this or that aspect of the stimulus as representing the outer object or event in the world. (Rock, 1983, p. 39)
The New Look in perception is a prototypical example of classical cognitive science. If visual perception is another type of cognitive processing, then it is governed by the same laws as are reasoning and problem solving. In short, a crucial consequence of the New Look is that visual perception is rational, in the sense that vision’s success is measured in terms of the truth value of the representations it produces.
For instance, Richard Gregory (1970, p. 29, italics added) remarked that “it is surely remarkable that out of the infinity of possibilities the perceptual brain generally hits on just about the best one.” Gregory (1978, p. 13, italics added) also equated visual perception to problem solving, describing it as “a dynamic searching for the best interpretation of the available data.” The cognitive nature of perceptual processing allows,
past experience and anticipation of the future to play a large part in augmenting sensory information, so that we do not perceive the world merely from the sensory information available at any given time, but rather we use this information to test hypotheses of what lies before us. Perception becomes a matter of suggesting and testing hypotheses. (Gregory, 1978, p. 221)
In all of these examples, perception is described as a process that delivers representational contents that are most (semantically) consistent with visual sensations and other intentional contents, such as beliefs and desires.
The problem with the New Look is this rational view of perception. Because of its emphasis on top-down influences, the New Look lacks an account of links between the world and vision that are causal and independent of beliefs. If all of our perceptual experience was belief dependent, then we would never see anything that we did not expect to see. This would not contribute to our survival, which often depends upon noticing and reacting to surprising circumstances in the environment.
Pylyshyn’s (2003b, 2007) hybrid theory of visual cognition rests upon the assumption that there exists a cognitively impenetrable visual architecture that is separate from general cognition. This architecture is data-driven in nature, governed by causal influences from the visual world and insulated from beliefs and expectations. Such systems can solve problems of underdetermination without requiring assumptions of rationality, as discussed in the next section.
Some researchers would argue that perception is a form of cognition, because it uses inferential reasoning or problem solving processing to go beyond the information given. However, this kind of account is not the only viable approach for dealing with the poverty of the visual stimulus. Rock (1983, p. 3) wrote: “A phenomenon may appear to be intelligent, but the mechanism underlying it may have no common ground with the mechanisms underlying reasoning, logical thought, or problem solving.” The natural computation approach to vision (Ballard, 1997; Marr, 1982; Richards, 1988) illustrates the wisdom of Rock’s quote, because it attempts to solve problems of underdetermination by using bottom-up devices that apply built-in constraints to filter out incorrect interpretations of an ambiguous proximal stimulus.
The central idea underlying natural computation is constraint propagation. Imagine a set of locations to which labels can be assigned, where each label is a possible property that is present at a location. Underdetermination exists when more than one label is possible at various locations. However, constraints can be applied to remove these ambiguities. Imagine that if some label x is assigned to one location then this prevents some other label y from being assigned to a neighbouring location. Say that there is good evidence to assign label x to the first location. Once this is done, a constraint can propagate outwards from this location to its neighbours, removing label y as a possibility for them and therefore reducing ambiguity.
Constraint propagation is part of the science underlying the popular Sudoku puzzles (Delahaye, 2006). A Sudoku puzzle is a 9 × 9 grid of cells, as illustrated in Figure 8-3. The grid is further divided into a 3 × 3 array of smaller 3 × 3 grids called cages. In Figure 8-3, the cages are outlined by the thicker lines. When the puzzle is solved, each cell will contain a digit from the range 1 to 9, subject to three constraints. First, a digit can occur only once in each row of 9 cells across the grid. Second, a digit can only occur once in each column of 9 cells along the grid. Third, a digit can only occur once in each cage in the grid. The puzzle begins with certain numbers already assigned to their cells, as illustrated in Figure 8-3. The task is to fill in the remaining digits in such a way that none of the three constraining rules are violated.
Figure 8-3. An example Sudoku puzzle.
A Sudoku puzzle can be considered as a problem to be solved by relaxation labelling. In relaxation labelling, sets of possible labels are available at different locations. For instance, at the start of the puzzle given in Figure 8-3 the possible labels at every blank cell are 1, 2, 3, 4, 5, 6, 7, 8, and 9. There is only one possible label (given in the figure) that has already been assigned to each of the remaining cells. The task of relaxation labelling is to iteratively eliminate extra labels at the ambiguous locations, so that at the end of processing only one label remains.
Figure 8-4. The “there can be only one” constraint propagating from the cell labelled 5.
In the context of relaxation labelling, Sudoku puzzles can be solved by propagating different constraints through the grid; this causes potential labels to be removed from ambiguous cells. One key constraint, called “there can be only one,” emerges from the primary definition of a Sudoku puzzle. In the example problem given in Figure 8-3, the digit 5 has been assigned at the start to a particular location, which is also shown in Figure 8-4. According to the rules of Sudoku, this means that this digit cannot appear anywhere else in the column, row, or cage that contains this location. The affected locations are shaded dark grey in Figure 8-4. One can propagate the “there can be only one” constraint through these locations, removing the digit 5 as a possible label for any of them.
This constraint can be propagated iteratively through the puzzle. During one iteration, any cell with a unique label can be used to eliminate that label from all of the other cells that it controls (e.g., as in Figure 8-4). When this constraint is applied in this way, the result may be that some new cells have unique labels. In this case the constraint can be applied again, from these newly unique cells, to further disambiguate the Sudoku puzzle.
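A minimal sketch of this mechanical process is given below. It represents each cell's possible labels as a set and repeatedly deletes any uniquely assigned digit from the candidate sets of that cell's peers in the same row, column, and cage. The representation and the helper names are invented for illustration; this is not the spreadsheet model described later in this section.

```python
# "There can be only one": whenever a cell has a single candidate, delete
# that digit from the candidate sets of every peer (same row, column, cage).

def peers(cell):
    """All cells sharing a row, column, or 3 x 3 cage with the given cell."""
    r, c = cell
    same_row = {(r, j) for j in range(9)}
    same_col = {(i, c) for i in range(9)}
    cage = {(3 * (r // 3) + i, 3 * (c // 3) + j)
            for i in range(3) for j in range(3)}
    return (same_row | same_col | cage) - {cell}

def propagate_unique(candidates):
    """Apply the constraint repeatedly until no candidate can be removed."""
    changed = True
    while changed:
        changed = False
        for cell, labels in candidates.items():
            if len(labels) == 1:
                digit = next(iter(labels))
                for peer in peers(cell):
                    if digit in candidates[peer] and len(candidates[peer]) > 1:
                        candidates[peer].discard(digit)
                        changed = True
    return candidates

# A toy starting state: one cell assigned the digit 5, all others ambiguous.
candidates = {(r, c): set(range(1, 10)) for r in range(9) for c in range(9)}
candidates[(0, 0)] = {5}
propagate_unique(candidates)
print(candidates[(0, 5)])   # the 5 has been removed (same row as the fixed cell)
```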
The “there can be only one” constraint is important, but it is not powerful enough on its own to solve any but the easiest Sudoku problems. This means that other constraints must be employed as well. Another constraint is called “last available label,” and is illustrated in Figure 8-5.
Figure 8-5A illustrates one of the cages of the Figure 8-3 Sudoku problem partway through being solved (i.e., after some iterations of “there can be only one”). The cells containing a single number have been uniquely labelled. The other cells still have more than one possible label, shown as multiple digits within the cell. Note the one cell at the bottom shaded in grey. It has the possible labels 1, 3, 5, and 9. However, this cell has the “last available label” of 9—the label 9 is not available in any other cell in the cage. Because a 9 is required to be in this cage, this means that this label must be assigned here and the cell’s other three possible labels can be removed. Note that when this is done, the “last available label” constraint applies to a second cell (shown in grey in Figure 8-5B), meaning that it can be uniquely assigned the label 1 by applying this constraint a second time.
Figure 8-5. The “last available label” constraint.
After two applications of the “last available label” constraint, the cage illustrated in Figure 8-5A becomes the cage shown at the top of Figure 8-6. Note that this cage has only two ambiguous cells, each with the possible labels 3 and 5. These two cells define what Sudoku solvers call a naked pair, which can be used to define a third rule called the “naked pair constraint.”
Figure 8-6. The “naked pair constraint.”
In the naked pair pointed out by the two arrows in Figure 8-6, it is impossible for one cell to receive the label 3 and for the other cell not to receive the label 5. This is because these two cells have only two remaining possible labels, and both sets of labels are identical. However, this also implies that the labels 3 and 5 cannot exist elsewhere in the part of the puzzle over which the two cells containing the naked pair have control. Thus one can use this as a constraint to remove the possible labels 3 and 5 from the other cells in the same column as the naked pair, i.e., the cells shaded in grey in the lower part of Figure 8-6.
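The mechanical character of this rule can also be sketched in a few lines of code. The function below is illustrative only, and is not the working spreadsheet model described next: it scans the candidate sets of a single column, finds two cells whose candidates form the same pair, and removes that pair from every other cell in the column.

```python
from itertools import combinations

def apply_naked_pairs(column):
    """Remove a naked pair's two digits from the other cells of a column.
    `column` is a list of candidate sets, one per cell, ordered top to bottom."""
    for i, j in combinations(range(len(column)), 2):
        pair = column[i]
        if len(pair) == 2 and column[j] == pair:
            # The two digits must occupy cells i and j, so no other cell in
            # this column can take either of them.
            for k, labels in enumerate(column):
                if k not in (i, j):
                    labels -= pair
    return column

# Candidate sets loosely modelled on the column discussed in Figure 8-6.
column = [{3, 5}, {3, 5}, {1, 3, 5, 7}, {2, 3, 9}, {8}, {4, 5, 6},
          {7, 9}, {2, 6}, {1, 4}]
print(apply_naked_pairs(column))   # 3 and 5 survive only in the naked pair
```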
The three constraints described above have been implemented as a working model in an Excel spreadsheet. This model has confirmed that by applying only these three constraints one can solve a variety of Sudoku problems of easy and medium difficulty, and can make substantial progress on difficult problems. (These three constraints are not sufficient to solve the difficult Figure 8-3 problem.) In order to develop a more successful Sudoku solver in this framework, one would have to identify additional constraints that can be used. A search of the Internet for “Sudoku tips” reveals a number of advanced strategies that can be described as constraints, and which could be added to a relaxation labelling model.
For our purposes, though, the above Sudoku example illustrates how constraints can be propagated to solve problems of underdetermination. Furthermore, it shows that such solutions can be fairly mechanical in nature, not requiring higher-order reasoning or problem solving. For instance, the “there can be only one” constraint could be instantiated as a simple set of interconnected switches: turning the 5 on in Figure 8-4 would send a signal that would turn the 5 off at all of the other grey-shaded locations.
The natural computation approach to vision assumes that problems of visual underdetermination are also solved by non-cognitive processes that use constraint propagation. However, the constraints of interest to such researchers are not formal rules of a game. Instead, they adopt naïve realism, and they assume that the external world is structured and that some aspects of this structure must be true of nearly every visual scene. Because the visual system has evolved to function in this structured environment, it has internalized those properties that permit it to solve problems of underdetermination. “The perceptual system has internalized the most pervasive and enduring regularities of the world” (Shepard, 1990, p. 181).
The regularities of interest to researchers who endorse natural computation are called natural constraints. A natural constraint is a property that is almost invariably true of any location in a visual scene. For instance, many visual properties of three-dimensional scenes, such as depth, colour, texture, and motion, vary smoothly. This means that two locations in the three-dimensional scene that are very close together are likely to have very similar values for any of these properties, while this will not be the case for locations that are further apart. Smoothness can therefore be used to constrain interpretations of a proximal stimulus: an interpretation whose properties vary smoothly is much more likely to be true of the world than interpretations in which property smoothness is not maintained.
Natural constraints are used to solve visual problems of underdetermination by imposing additional restrictions on scene interpretations. In addition to being consistent with the proximal stimulus, the interpretation of visual input must also be consistent with the natural constraints. With appropriate natural constraints, only a single interpretation will meet both of these criteria (for many examples, see Marr, 1982). A major research goal for those who endorse the natural computation approach to vision is identifying natural constraints that single out the correct interpretation from all the other (incorrect) possibilities.
For example, consider the motion correspondence problem (Ullman, 1979), which is central to Pylyshyn’s (2003b, 2007) hybrid theory of visual cognition. In the motion correspondence problem, a set of elements is seen at one time, and another set of elements is seen at a later time. In order for the visual system to associate a sense of movement to these elements, their identities must be tracked over time. The assertion that some element x, seen at time t, is the “same thing” as some other element y, seen at time t + 1, is called a motion correspondence match. However, the assignment of motion correspondence matches is underdetermined. This is illustrated in Figure 8-7 as a simple apparent motion stimulus in which two squares (dashed outlines) are presented at one time, and then later presented in different locations (solid outlines). For this display there are two logical sets of motion correspondence matches that can be assigned, shown in B and C of the figure. Both sets of matches are consistent with the display, but they represent radically different interpretations of the identities of the elements over time. Human observers of this display will invariably experience it as Figure 8-7B, and never as Figure 8-7C. Why is this interpretation preferred over the other one, which seems just as logically plausible?
The natural computation approach answers this question by claiming that the interpretation illustrated in Figure 8-7B is consistent with additional natural constraints, while the interpretation in Figure 8-7C is not. A number of different natural constraints on the motion correspondence problem have been identified and then incorporated into computer simulations of motion perception (Dawson, 1987, 1991; Dawson, Nevin-Meadows, & Wright, 1994; Dawson & Pylyshyn, 1988; Dawson & Wright, 1989, 1994; Ullman, 1979).
Figure 8-7. The motion correspondence problem.
One such constraint is called the nearest neighbour principle. The visual system prefers to assign correspondence matches that represent short element displacements (Burt & Sperling, 1981; Ullman, 1979). For example, the two motion correspondence matches in Figure 8-7B are shorter than the two in Figure 8-7C; they are therefore more consistent with the nearest neighbour principle.
The nearest neighbour principle is a natural constraint because it arises from the geometry of the typical viewing conditions for motion (Ullman, 1979, pp. 114–118). When movement in a three-dimensional world is projected onto a two-dimensional surface (e.g., the retina), slower movements occur with much higher probability on the retina than do faster movements. A preference for slower movement is equivalent to exploiting the nearest neighbour principle, because a short correspondence match represents slow motion, while a long correspondence match represents fast motion.
Another powerful constraint on the motion correspondence problem is called the relative velocity principle (Dawson, 1987, 1991). To the extent that visual elements arise from physical features on solid surfaces, the movement of neighbouring elements should be similar. According to the relative velocity principle, motion correspondence matches should be assigned in such a way that objects located near one another will be assigned correspondence matches consistent with movements of similar direction and speed. This is true of the two matches illustrated in Figure 8-7B, which are of identical length and direction, but not of the two matches illustrated in Figure 8-7C, which are of identical length but represent motion in different directions.
Like the nearest neighbour constraint, the relative velocity principle is a natural constraint. It is a variant of the property that motion varies smoothly across a scene (Hildreth, 1983; Horn & Schunk, 1981). That is, as objects in the real world move, locations near to one another should move in similar ways. Furthermore, Hildreth (1983) has proven that solid objects moving arbitrarily in three-dimensional space project unique smooth patterns of retinal movement. The relative velocity principle exploits this general property of projected motion.
Other natural constraints on motion correspondence have also been proposed. The element integrity principle is a constraint in which motion correspondence matches are assigned in such a way that elements only rarely split into two or fuse together into one (Ullman, 1979). It is a natural constraint in the sense that the physical coherence of surfaces implies that the splits or fusions are unlikely. The polarity matching principle is a constraint in which motion correspondence matches are assigned between elements of identical contrast (e.g., between two elements that are both light against a dark background, or between two elements that are both dark against a light background) (Dawson, Nevin-Meadows, & Wright, 1994). It is a natural constraint because movement of an object in the world might change its shape and colour, but is unlikely to alter the object’s contrast relative to its background.
The natural computation approach to vision is an alternative to a classical approach called unconscious inference, because natural constraints can be exploited by systems that are not cognitive, that do not perform inferences on the basis of cognitive contents. In particular, it is very common to see natural computation models expressed in a very anti-classical form, namely, artificial neural networks (Marr, 1982). Indeed, artificial neural networks provide an ideal medium for propagating constraints to solve problems of underdetermination.
The motion correspondence problem provides one example of an artificial neural network approach to solving problems of underdetermination (Dawson, 1991; Dawson, Nevin-Meadows, & Wright, 1994). Dawson (1991) created an artificial neural network that incorporated the nearest neighbour, relative velocity, element integrity, and polarity matching principles. These principles were realized as patterns of excitatory and inhibitory connections between processors, with each processor representing a possible motion correspondence match. For instance, the connection between two matches that represented movements similar in distance and direction would have an excitatory component that reflected the relative velocity principle. Two matches that represented movements of different distances and directions would have an inhibitory component that reflected the same principle. The network would start with all processors turned on to similar values (indicating that each match was initially equally likely), and then the network would iteratively send signals amongst the processors. The network would quickly converge to a state in which some processors remained on (representing the preferred correspondence matches) while all of the others were turned off. This model was shown to be capable of modelling a wide variety of phenomena in the extensive literature on the perception of apparent movement.
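A toy relaxation network in the spirit of this description is sketched below. It is not Dawson's (1991) model: the weights, the update rule, and the clamping are invented for illustration. Each unit stands for one candidate match in the Figure 8-7 display; units that share an element inhibit one another (element integrity), units representing similar displacements excite one another (relative velocity), and shorter displacements receive more bottom-up support (nearest neighbour).

```python
import numpy as np
from itertools import product

# Element positions at time t and at time t + 1 (cf. Figure 8-7).
start = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
end = [np.array([0.5, 1.0]), np.array([1.5, 1.0])]

# One processing unit per candidate match (start element i -> end element j).
matches = list(product(range(len(start)), range(len(end))))
disp = {m: end[m[1]] - start[m[0]] for m in matches}

act = {m: 1.0 for m in matches}   # every match begins equally plausible
for _ in range(50):
    new_act = {}
    for m in matches:
        # Nearest neighbour: shorter displacements get more bottom-up support.
        support = 1.0 / (1.0 + np.linalg.norm(disp[m]))
        for n in matches:
            if n == m:
                continue
            if n[0] == m[0] or n[1] == m[1]:
                # Element integrity: matches sharing an element compete.
                support -= 0.5 * act[n]
            else:
                # Relative velocity: similar displacements support each other.
                support += 0.5 * act[n] / (1.0 + np.linalg.norm(disp[m] - disp[n]))
        new_act[m] = max(0.0, min(1.0, support))
    act = new_act

print(act)   # the two short, parallel matches win; the crossing matches go to 0
```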
The natural computation approach is defined by another characteristic that distinguishes it from classical cognitive science. Natural constraints are not psychological properties, but are instead properties of the world, or properties of how the world projects itself onto the eyes. “The visual constraints that have been discovered so far are based almost entirely on principles that derive from laws of optics and projective geometry” (Pylyshyn, 2003b, p. 120). Agents exploit natural constraints—or more precisely, they internalize these constraints in special processors that constitute what Pylyshyn calls early vision—because they are generally true of the world and therefore work.
To classical theories that appeal to unconscious inference, natural constraints are merely “heuristic bags of tricks” that happen to work (Anstis, 1980; Ramachandran & Anstis, 1986); there is no attempt to ground these tricks in the structure of the world. In contrast, natural computation theories are embodied, because they appeal to structure in the external world and to how that structure impinges on perceptual agents. As naturalist Harold Horwood (1987, p. 35) writes, “If you look attentively at a fish you can see that the water has shaped it. The fish is not merely in the water: the qualities of the water itself have called the fish into being.”
It was argued earlier that the classical approach to underdetermination, unconscious inference, suffered from the fact that it did not include any causal links between the world and internal representations. The natural computation approach does not suffer from this problem, because its theories treat vision as a data-driven or bottom-up process. That is, visual information from the world comes into contact with visual modules—special purpose machines—that automatically apply natural constraints and deliver uniquely determined representations. How complex are the representations that can be delivered by data-driven processing? To what extent could a pure bottom-up theory of perception succeed?
On the one hand, the bottom-up theories are capable of delivering a variety of rich representations of the visual world (Marr, 1982). These include the primal sketch, which represents the proximal stimulus as an array of visual primitives, such as oriented bars, edges, and terminators (Marr, 1976). Another is the 2½-D sketch, which makes explicit the properties of visible surfaces in viewer-centred coordinates, including their depth, colour, texture, and orientation (Marr & Nishihara, 1978). The information made explicit in the 2½-D sketch is available because data-driven processes can solve a number of problems of underdetermination, often called “shape from” problems, by using natural constraints to determine three-dimensional shapes and distances of visible elements. These include structure from motion (Hildreth, 1983; Horn & Schunk, 1981; Ullman, 1979; Vidal & Hartley, 2008), shape from shading (Horn & Brooks, 1989), depth from binocular disparity (Marr, Palm, & Poggio, 1978; Marr & Poggio, 1979), and shape from texture (Lobay & Forsyth, 2006; Witkin, 1981).
It would not be a great exaggeration to say that early vision—part of visual processing that is prior to access to general knowledge—computes just about everything that might be called a ‘visual appearance’ of the world except the identities and names of the objects. (Pylyshyn, 2003b, p. 51)
On the other hand, despite impressive attempts (Biederman, 1987), it is generally acknowledged that the processes proposed by natural computationalists cannot deliver representations rich enough to make full contact with semantic knowledge of the world. This is because object recognition—assigning visual information to semantic categories—requires identifying object parts and determining spatial relationships amongst these parts (Hoffman & Singh, 1997; Singh & Hoffman, 1997). However, this in turn requires directing attention to specific entities in visual representations (i.e., individuating the critical parts) and using serial processes to determine spatial relations amongst the individuated entities (Pylyshyn, 1999, 2001, 2003c, 2007; Ullman, 1984). The data-driven, parallel computations that characterize natural computation theories of vision are poor candidates for computing relationships between individuated objects or their parts. As a result, what early vision “does not do is identify the things we are looking at, in the sense of relating them to things we have seen before, the contents of our memory. And it does not make judgments about how things really are” (Pylyshyn, 2003b, p. 51).
Thus it appears that a pure, bottom-up natural computation theory of vision will not suffice. Similarly, it was argued earlier that a pure, top-down cognitive theory of vision is also insufficient. A complete theory of vision requires co-operative interactions between both data-driven and top-down processes. As philosopher Jerry Fodor (1985, p. 2) has noted, “perception is smart like cognition in that it is typically inferential, it is nevertheless dumb like reflexes in that it is typically encapsulated.” This leads to what Pylyshyn calls the independence hypothesis: the proposal that some visual processing must be independent of cognition. However, because we are consciously aware of visual information, a corollary of the independence hypothesis is that there must be some interface between visual processing that is not cognitive and visual processing that is.
This interface is called visual cognition (Enns, 2004; Humphreys & Bruce, 1989; Jacob & Jeannerod, 2003; Ullman, 2000), because it involves visual attention (Wright, 1998). Theories in visual cognition about both object identification (Treisman, 1988; Ullman, 2000) and the interpretation of motion (Wright & Dawson, 1994) typically describe three stages of processing: the precognitive delivery of visual information, the attentional analysis of this visual information, and the linking of the results of these analyses to general knowledge of the world.
One example theory in visual cognition is called feature integration theory (Treisman, 1986, 1988; Treisman & Gelade, 1980). Feature integration theory arose from two basic experimental findings. The first concerned search latency functions, which represent the time required to detect the presence or absence of a target as a function of the total number of display elements in a visual search task. Pioneering work on visual search discovered the so-called “pop-out effect”: for some targets, the search latency function is essentially flat. This indicated that the time to find a target is independent of the number of distractor elements in the display. This result was found for targets defined by a unique visual feature (e.g., colour, contrast, orientation, movement), which seemed to pop out of a display, automatically drawing attention to the target (Treisman & Gelade, 1980). In contrast, the time to detect a target defined by a unique combination of features generally increases with the number of distractor items, producing search latency functions with positive slopes.
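The contrast between these two kinds of search latency functions can be mimicked with a toy simulation. The sketch below uses invented timing parameters, not values fitted to any experiment: it treats feature search as a single parallel step and conjunction search as a serial, self-terminating scan of the display, and prints the predicted latencies for several display sizes.

```python
import random

PARALLEL_TIME = 50.0   # ms for one parallel pass over a feature map (invented)
SCAN_TIME = 25.0       # ms of attentional dwell per scanned item (invented)

def popout_latency(display_size):
    """Feature search: the target is the only activity in its feature map,
    so latency is independent of the number of distractors."""
    return PARALLEL_TIME

def conjunction_latency(display_size):
    """Conjunction search: attention scans items one at a time and stops
    when the target is found, so latency grows with display size."""
    target_position = random.randint(1, display_size)
    return PARALLEL_TIME + SCAN_TIME * target_position

random.seed(1)
for n in (4, 8, 16, 32):
    serial = sum(conjunction_latency(n) for _ in range(1000)) / 1000.0
    print(n, popout_latency(n), round(serial, 1))
# The pop-out column is flat; the conjunction column grows roughly linearly
# with display size, with a slope of about SCAN_TIME / 2 per item.
```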
The second experimental finding that led to feature integration theory was the discovery of illusory conjunctions (Treisman & Schmidt, 1982). Illusory conjunctions occur when features are mistakenly combined. For instance, subjects might be presented a red triangle and a green circle in a visual display but experience an illusory conjunction: a green triangle and a red circle.
Feature integration theory arose to explain different kinds of search latency functions and illusory conjunctions. It assumes that vision begins with a first, noncognitive stage of feature detection in which separate maps for a small number of basic features, such as colour, orientation, size, or movement, record the presence and location of detected properties. If a target is uniquely defined in terms of possessing one of these features, then it will be the only source of activity in that feature map and will therefore pop out, explaining some of the visual search results.
A second stage of processing belongs properly to visual cognition. In this stage, a spotlight of attention is volitionally directed to a particular spot on a master map of locations. This attentional spotlight enables the visual system to integrate features by bringing into register different feature maps at the location of interest. Different features present at that location can be conjoined together in a temporary object representation called an object file (Kahneman, Treisman, & Gibbs, 1992; Treisman, Kahneman, & Burkell, 1983). Thus in feature integration theory, searching for objects defined by unique combinations of features requires a serial scan of the attentional spotlight from location to location, explaining the nature of search latency functions for such objects. This stage of processing also explains illusory conjunctions, which usually occur when the attentional processing is divided, impairing the ability of correctly combining features into object files.
A third stage of processing belongs to higher-order cognition. It involves using information about detected objects (i.e., features united in object files) as links to general knowledge of the world.
Conscious perception depends on temporary object representations in which the different features are collected from the dimensional modules and inter-related, then matched to stored descriptions in a long-term visual memory to allow recognition. (Treisman, 1988, p. 204)
Another proposal that relies on the notion of visual cognition concerns visual routines (Ullman, 1984). Ullman (1984) noted that the perception of spatial relations is central to visual processing. However, many spatial relations cannot be directly delivered by the parallel, data-driven processes postulated by natural computationalists, because these relations are not defined over entire scenes, but are instead defined over particular entities in scenes (i.e., objects or their parts). Furthermore, many of these relations must be computed using serial processing of the sort that is not proposed to be part of the networks that propagate natural constraints.
For example, consider determining whether some point x is inside a contour y. Ullman (1984) pointed out that there is little known about how the relation inside (x, y) is actually computed, and argued that it most likely requires serial processing in which activation begins at x, spreading outward. It can be concluded that x is inside y if the spreading activation is contained by y. Furthermore, before inside (x, y) can be computed, the two entities, x and y, have to be individuated and selected—inside makes no sense to compute without their specification. “What the visual system needs is a way to refer to individual elements qua token individuals” (Pylyshyn, 2003b, p. 207).
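Ullman's proposal can be made concrete with a simple spreading-activation sketch. In the code below, the grid and the contour are invented and serve only to illustrate the general idea: activation spreads outward from the location of x, and if the activation is contained it is concluded that x is inside y, whereas if it reaches the edge of the display it is not.

```python
from collections import deque

def inside(grid, start):
    """Spread activation from `start`; if it reaches the border of the display
    without crossing a contour cell ('#'), `start` is not inside the contour."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if r in (0, rows - 1) or c in (0, cols - 1):
            return False    # activation escaped: the point lies outside
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if grid[nr][nc] != '#' and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return True             # activation was contained by the contour

# A closed contour (y) drawn on a small display.
grid = ["..........",
        "..######..",
        "..#....#..",
        "..#....#..",
        "..######..",
        ".........."]
print(inside(grid, (3, 4)))   # True: the point is enclosed by the contour
print(inside(grid, (0, 0)))   # False: activation reaches the display border
```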
With such considerations in mind, Ullman (1984) developed a theory of visual routines that shares many of the general features of feature integration theory. In an initial stage of processing, data-driven processes deliver early representations of the visual scene. In the second stage, visual cognition executes visual routines at specified locations in the representations delivered by the first stage of processing. Visual routines are built from a set of elemental operations and used to establish spatial relations and shape properties. Candidate elemental operations include indexing a salient item, spreading activation over a region, and tracing boundaries. A visual routine is thus a program, assembled out of elemental operations, which is activated when needed to compute a necessary spatial property. Visual routines are part of visual cognition because attention is used to select a necessary routine (and possibly create a new one), and to direct the routine to a specific location of interest. However, once the routine is activated, it can deliver its spatial judgment without requiring additional higher-order resources.
In the third stage, the spatial relations computed by visual cognition are linked, as in feature integration theory, to higher-order cognitive processes. Thus Ullman (1984) sees visual routines as providing an interface between the representations created by data-driven visual modules and the content-based, top-down processing of cognition. Such an interface permits data-driven and theory-driven processes to be combined, overcoming the limitations that such processes would face on their own.
Visual routines operate in the middle ground that, unlike the bottom-up creation of the base representations, is a part of the top-down processing and yet is independent of object-specific knowledge. Their study therefore has the advantage of going beyond the base representations while avoiding many of the additional complications associated with higher level components of the system. (Ullman, 1984, p. 119)
The example theories of visual cognition presented above are hybrid theories in the sense that they include both bottom-up and top-down processes, and they invoke attentional mechanisms as a link between the two. In the next section we see that Pylyshyn’s (2003b, 2007) theory of visual indexing is similar in spirit to these theories and thus exhibits their hybrid characteristics. However, Pylyshyn’s theory of visual cognition is hybrid in another important sense: it makes contact with classical, connectionist, and embodied cognitive science.
Pylyshyn’s theory of visual cognition is classical because one of the main problems that it attempts to solve is how to identify or re-identify individuated entities. Classical processing is invoked as a result, because “individuating and reidentifying in general require the heavy machinery of concepts and descriptions” (Pylyshyn, 2007, p. 32). Part of Pylyshyn’s theory of visual cognition is also connectionist, because he appeals to non-classical mechanisms to deliver visual representations (i.e., natural computation), as well as to connectionist networks (in particular, to winner-take-all mechanisms; see Feldman & Ballard, 1982) to track entities after they have been individuated with attentional tags (Pylyshyn, 2001, 2003c). Finally, parts of Pylyshyn’s theory of visual cognition draw on embodied cognitive science. For instance, the reason that tracking element identities—solving the correspondence problem—is critical is because Pylyshyn assumes a particular embodiment of the visual apparatus, a limited-order retina that cannot take in all information in a glance. Similarly, Pylyshyn uses the notion of cognitive scaffolding to account for the spatial properties of mental images.

8.07: Indexing Objects in the World
Pylyshyn’s theory of visual cognition began in the late 1970s with his interest in explaining how diagrams were used in reasoning (Pylyshyn, 2007). Pylyshyn and his colleagues attempted to investigate this issue by building a computer simulation that would build and inspect diagrams as part of deriving proofs in plane geometry.
From the beginning, the plans for this computer simulation made contact with two of the key characteristics of embodied cognitive science. First, the diagrams created and used by the computer simulation were intended to be external to it and to scaffold the program’s geometric reasoning.
Since we wanted the system to be as psychologically realistic as possible we did not want all aspects of the diagram to be ‘in its head’ but, as in real geometry problem-solving, remain on the diagram it was drawing and examining. (Pylyshyn, 2007, p. 10)
Second, the visual system of the computer was also assumed to be psychologically realistic in terms of its embodiment. In particular, the visual system was presumed to be a moving fovea that was of limited order: it could only examine the diagram in parts, rather than all at once.
We also did not want to assume that all properties of the entire diagram were available at once, but rather that they had to be noticed over time as the diagram was being drawn and examined. If the diagram were being inspected by moving the eyes, then the properties should be within the scope of the moving fovea. (Pylyshyn, 2007, p. 10)
These two intersections with embodied cognitive science—a scaffolding visual world and a limited order embodiment—immediately raised a fundamental information processing problem. As different lines or vertices were added to a diagram, or as these components were scanned by the visual system, their different identities had to be maintained or tracked over time. In order to function as intended, the program had to be able to assert, for example, that “this line observed here” is the same as “that line observed there” when the diagram is being scanned. In short, in considering how to create this particular system, Pylyshyn recognized that it required two core abilities: to be able to individuate visual entities, and to be able to track or maintain the identities of visual entities over time.
To maintain the identities of individuated elements over time is to solve the correspondence problem. How does one keep track of the identities of different entities perceived in different glances? According to Pylyshyn (2003b, 2007), the classical answer to this question must appeal to the contents of representations. To assert that some entity seen in a later glance was the same as one observed earlier, the descriptions of the current and earlier entities must be compared. If the descriptions matched, then the entities should be deemed to be the same. This is called the image matching solution to the correspondence problem, which also dictates how entities must be individuated: they must be uniquely described, when observed, as a set of properties that can be represented as a mental description, and which can be compared to other descriptions.
Pylyshyn rejects the classical image matching solution to the correspondence problem for several reasons. First, multiple objects can be tracked as they move to different locations, even if they are identical in appearance (Pylyshyn & Storm, 1988). In fact, multiple objects can be tracked as their properties change, even when their location is constant and shared (Blaser, Pylyshyn, & Holcombe, 2000). These results pose problems for image matching, because it is difficult to individuate and track identical objects by using their descriptions!
Second, the poverty of the stimulus in a dynamic world poses severe challenges to image matching. As objects move in the world or as we (or our eyes) change position, a distal object’s projection as a proximal stimulus will change properties, even though the object remains the same. “If objects can change their properties, we don’t know under what description the object was last stored” (Pylyshyn, 2003b, p. 205).
A third reason to reject image matching comes from the study of apparent motion, which requires the correspondence problem to be solved before the illusion of movement between locations can be added (Dawson, 1991; Wright & Dawson, 1994). Studies of apparent motion have shown that motion correspondence is mostly insensitive to manipulations of figural properties, such as shape, colour, or spatial frequency (Baro & Levinson, 1988; Cavanagh, Arguin, & von Grunau, 1989; Dawson, 1989; Goodman, 1978; Kolers, 1972; Kolers & Green, 1984; Kolers & Pomerantz, 1971; Kolers & von Grunau, 1976; Krumhansl, 1984; Navon, 1976; Victor & Conte, 1990). This insensitivity to form led Nelson Goodman (1978, p. 78) to conclude that “plainly the visual system is persistent, inventive, and sometimes rather perverse in building a world according to its own lights.” One reason for this perverseness may be that the neural circuits for processing motion are largely independent of those for processing form (Botez, 1975; Livingstone & Hubel, 1988; Maunsell & Newsome, 1987; Ungerleider & Mishkin, 1982).
A fourth reason to reject image matching is that it is a purely cognitive approach to individuating and tracking entities. “Philosophers typically assume that in order to individuate something we must conceptualize its relevant properties. In other words, we must first represent (or cognize or conceptualize) the relevant conditions of individuation” (Pylyshyn, 2007, p. 31). Pylyshyn rejected this approach because it suffers from the same core problem as the New Look: it lacks causal links to the world.
Pylyshyn’s initial exploration of how diagrams aided reasoning led to his realization that the individuation and tracking of visual entities are central to an account of how vision links us to the world. For the reasons just presented, he rejected a purely classical approach—mental descriptions of entities—for providing these fundamental abilities. He proposed instead a theory that parallels the structure of the examples of visual cognition described earlier. That is, Pylyshyn’s (2003b, 2007) theory of visual cognition includes a non-cognitive component (early vision), which delivers representations that can be accessed by visual attention (visual cognition), which in turn delivers representations that can be linked to general knowledge of the world (cognition).
On the one hand, the early vision component of Pylyshyn’s (2003b, 2007) theory of visual cognition is compatible with natural computation accounts of perception (Ballard, 1997; Marr, 1982). For Pylyshyn, the role of early vision is to provide causal links between the world and the perceiving agent without invoking cognition or inference:
Only a highly constrained set of properties can be selected by early vision, or can be directly ‘picked up.’ Roughly, these are what I have elsewhere referred to as ‘transducable’ properties. These are the properties whose detection does not require accessing memory and drawing inferences. (Pylyshyn, 2003b, p. 163)
The use of natural constraints to deliver representations such as the primal sketch and the 2½-D sketch is consistent with Pylyshyn’s view.
On the other hand, Pylyshyn (2003b, 2007) added innovations to traditional natural computation theories that have enormous implications for explanations of seeing and visualizing. First, Pylyshyn argued that one of the primitive processes of early vision is individuation—the picking out of an entity as being distinct from others. Second, he used evidence from feature integration theory and cognitive neuroscience to claim that individuation picks out objects, but not on the basis of their locations. That is, preattentive processes can detect elements or entities via primitive features but simultaneously not deliver the location of the features, as is the case in pop-out. Third, Pylyshyn argued that an individuated entity—a visual object—is preattentively tagged by an index, called a FINST (“for finger instantiation”), which can only be used to access an individuated object (e.g., to retrieve its properties when needed). Furthermore, only a limited number (four) of FINSTs are available. Fourth, once assigned to an object, a FINST remains attached to it even as the object changes its location or other properties. Thus a primitive component of early vision is the solution of the correspondence problem, where the role of this solution is to maintain the link between FINSTs and dynamic, individuated objects.
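The logic of this proposal can be illustrated with a small data-structure sketch: a pool holding at most four indices, each of which binds to an object token rather than to a location or a description, so that the binding survives changes in the object's properties. The class and method names below are mine, chosen for illustration; they are not Pylyshyn's notation.

```python
class FINSTPool:
    """Illustrative pool of visual indices: at most four may be assigned, an index
    binds to an object token (not to a location or a description), and the binding
    persists while the object moves or changes its features."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.bindings = {}                 # index number -> object token

    def assign(self, obj_id):
        """Individuate an object by binding a free index to it, if one remains."""
        if obj_id in self.bindings.values():
            return True                    # already indexed
        if len(self.bindings) >= self.capacity:
            return False                   # no free indices: the object cannot be indexed
        free = next(i for i in range(self.capacity) if i not in self.bindings)
        self.bindings[free] = obj_id
        return True

    def is_indexed(self, obj_id):
        """Targethood judgment: is this object one of the indexed (tracked) ones?"""
        return obj_id in self.bindings.values()

# Objects are tracked by identity, so their locations and features can change freely.
pool = FINSTPool()
for target in ["a", "b", "c", "d"]:
    pool.assign(target)
print(pool.assign("e"))      # False: only four indices are available
print(pool.is_indexed("c"))  # True, regardless of where "c" has moved to
```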
The revolutionary aspect of FINSTs is that they are presumed to individuate and track visual objects without delivering a description of them and without fixing their location. Pylyshyn (2007) argued that this is the visual equivalent of the use of indexicals or demonstratives in language: “Think of demonstratives in natural language—typically words like this or that. Such words allow us to refer to things without specifying what they are or what properties they have” (p. 18). FINSTs are visual indices that operate in exactly this way. They are analogous to placing a finger on an object in the world, and, while not looking, keeping the finger in contact with it as the object moved or changed— thus the term finger instantiation. As long as the finger is in place, the object can be referenced (“this thing that I am pointing to now”), even though the finger does not deliver any visual properties.
There is a growing literature that provides empirical support for Pylyshyn’s FINST hypothesis. Many of these experiments involve the multiple object tracking paradigm (Flombaum, Scholl, & Pylyshyn, 2008; Franconeri et al., 2008; Pylyshyn, 2006; Pylyshyn & Annan, 2006; Pylyshyn et al., 2008; Pylyshyn & Storm, 1988; Scholl, Pylyshyn, & Feldman, 2001; Sears & Pylyshyn, 2000). In the original version of this paradigm (Pylyshyn & Storm, 1988), subjects were shown a static display made up of a number of objects of identical appearance. A subset of these objects blinked for a short period of time, indicating that they were to-be-tracked targets. Then the blinking stopped, and all objects in the display began to move independently and randomly for a period of about ten seconds. Subjects had the task of tracking the targets, with attention only; a monitor ended trials in which eye movements were detected. At the end of a trial, one object blinked and subjects had to indicate whether or not it was a target.
The results of this study (see Pylyshyn & Storm, 1988) indicated that subjects could simultaneously track up to four independently moving targets with high accuracy. Multiple object tracking results are explained by arguing that FINSTs are allocated to the flashing targets prior to movement, and objects are tracked by the primitive mechanism that maintains the link from visual object to FINST. This link permits subjects to judge targethood at the end of a trial.
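A toy version of such a trial, building on the index pool sketched above (and assuming that block has been run), makes the explanation concrete: targets grab indices while they blink, every item then moves at random, and the final targethood judgment is read off the index bindings rather than off any stored description. The number of objects, the motion step, and the trial length are arbitrary illustrative values, not the parameters of Pylyshyn and Storm (1988).

```python
import random

def mot_trial(n_objects=8, n_targets=4, n_frames=100):
    """Toy multiple object tracking trial: targets are cued and indexed, everything
    then moves, and targethood at the end is read off the index bindings."""
    positions = {i: (random.random(), random.random()) for i in range(n_objects)}
    targets = random.sample(list(positions), n_targets)

    pool = FINSTPool()                     # from the previous sketch
    for t in targets:                      # indices grabbed while the targets blink
        pool.assign(t)

    for _ in range(n_frames):              # identical items move independently; the
        for i in positions:                # bindings follow object identity, so no
            x, y = positions[i]            # descriptions are stored or matched
            positions[i] = (x + random.uniform(-0.01, 0.01),
                            y + random.uniform(-0.01, 0.01))

    probe = random.choice(list(positions))
    return probe in targets, pool.is_indexed(probe)   # ground truth vs. judgment

print(mot_trial())   # e.g., (True, True) or (False, False): the judgment tracks targethood
```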
The multiple object tracking paradigm has been used to explore some of the basic properties of the FINST mechanism. Analyses indicate that this process is parallel, because up to four objects can be tracked, and tracking results cannot be explained by a model that shifts a spotlight of attention serially from target to target (Pylyshyn & Storm, 1988). However, the fact that no more than four targets can be tracked also shows that this processing has limited capacity. FINSTs are assigned to objects, and not locations; objects can be tracked through a location-less feature space (Blaser, Pylyshyn, & Holcombe, 2000). Using features to make the objects distinguishable from one another does not aid tracking, and object properties can actually change during tracking without subjects being aware of the changes (Bahrami, 2003; Pylyshyn, 2007). Thus FINSTs individuate and track visual objects but do not deliver descriptions of the properties of the objects that they index.
Another source of empirical support for the FINST hypothesis comes from studies of subitizing (Trick & Pylyshyn, 1993, 1994). Subitizing is a phenomenon in which the number of items in a set of objects (the cardinality of the set) can be effortlessly and rapidly detected if the set has four or fewer items (Jensen, Reese, & Reese, 1950; Kaufman et al., 1949). Larger sets cannot be subitized; a much slower process is required to serially count the elements of larger sets. Subitizing necessarily requires that the items to be counted are individuated from one another. Trick and Pylyshyn (1993, 1994) hypothesized that subitizing could be accomplished by the FINST mechanism; elements are preattentively individuated by being indexed, and counting simply requires accessing the number of indices that have been allocated.
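The proposed division of labour can be caricatured in a few lines: when a set is small enough for every item to receive an index, its cardinality is simply the number of indices assigned; otherwise the items must be counted one by one. The timing constants below are invented purely to convey the fast/slow contrast.

```python
def enumerate_items(items, index_capacity=4, t_subitize=0.05, t_count=0.35):
    """Illustrative contrast between subitizing and counting: if every item can grab
    a visual index, the cardinality is just the number of indices assigned (fast);
    otherwise the items are counted serially (slow, roughly linear in set size)."""
    if len(items) <= index_capacity:
        assigned = {i: item for i, item in enumerate(items)}   # preattentive indexing
        return len(assigned), t_subitize * len(assigned)
    return len(items), t_count * len(items)                    # serial counting

print(enumerate_items(["x"] * 3))   # (3, ~0.15 s): subitized
print(enumerate_items(["x"] * 9))   # (9, ~3.15 s): counted serially
```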
Trick and Pylyshyn (1993, 1994) tested this hypothesis by examining subitizing in conditions in which visual indexing was not possible. For instance, if the objects in a set are defined by conjunctions of features, then they cannot be preattentively FINSTed. Importantly, they also cannot be subitized. In general, subitizing does not occur when the elements of a set that are being counted are defined by properties that require serial, attentive processing in order to be detected (e.g., sets of concentric contours that have to be traced in order to be individuated; or sets of elements defined by being on the same contour, which also require tracing to be identified).
At the core of Pylyshyn’s (2003b, 2007) theory of visual cognition is the claim that visual objects can be preattentively individuated and indexed. Empirical support for this account of early vision comes from studies of multiple object tracking and of subitizing. The need for such early visual processing comes from the goal of providing causal links between the world and classical representations, and from embodying vision in such a way that information can only be gleaned a glimpse at a time. Thus Pylyshyn’s theory of visual cognition, as described to this point, has characteristics of both classical and embodied cognitive science. How does the theory make contact with connectionist cognitive science? The answer to this question comes from examining Pylyshyn’s (2003b, 2007) proposals concerning preattentive mechanisms for individuating visual objects and tracking them. The mechanisms that Pylyshyn proposed are artificial neural networks.
For instance, Pylyshyn (2000, 2003b) noted that a particular type of artificial neural network, called a winner-take-all network (Feldman & Ballard, 1982), is ideally suited for preattentive individuation. Many versions of such a network have been proposed to explain how attention can be automatically drawn to an object or to a distinctive feature (Fukushima, 1986; Gerrissen, 1991; Grossberg, 1980; Koch & Ullman, 1985; LaBerge, Carter, & Brown, 1992; Sandon, 1992). In a winner-take-all network, an array of processing units is assigned to different objects or to feature locations. For instance, these processors could be distributed across the preattentive feature maps in feature integration theory (Treisman, 1988; Treisman & Gelade, 1980). Typically, a processor will have an excitatory connection to itself and will have inhibitory connections to its neighbouring processors. This pattern of connectivity results in the processor that receives the most distinctive input becoming activated and at the same time turning off its neighbours.
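A minimal iterative sketch of this kind of dynamics is given below: each unit excites itself and inhibits its competitors, and after repeated updates only the unit receiving the most distinctive input remains active. The particular weights, the rectification-and-saturation rule, and the number of iterations are illustrative choices, not those of any specific model cited above.

```python
def winner_take_all(inputs, self_excitation=1.1, inhibition=0.2, steps=50):
    """Each unit excites itself and inhibits its competitors; after repeated
    updates, only the unit with the most distinctive input remains active."""
    activity = list(inputs)
    for _ in range(steps):
        total = sum(activity)
        # Self-excitation minus lateral inhibition, clipped between 0 and 1.
        activity = [min(1.0, max(0.0, self_excitation * a - inhibition * (total - a)))
                    for a in activity]
    return activity

print(winner_take_all([0.50, 0.55, 0.48, 0.20]))
# -> [0.0, 1.0, 0.0, 0.0]: the second unit, which received the largest input,
#    has "won" and silenced its neighbours.
```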
That such mechanisms might be involved in individuation is supported by results that show that the time course of visual search can be altered by visual manipulations that affect the inhibitory processing of such networks (Dawson & Thibodeau, 1998). Pylyshyn endorses a modified winner-take-all network as a mechanism for individuation; the modification permits an object indexed by the network to be interrogated in order to retrieve its properties (Pylyshyn, 2000).
Another intersection between Pylyshyn’s (2003b, 2007) theory of visual cognition and connectionist cognitive science comes from his proposals about preattentive tracking. How can such tracking be accomplished without the use of image matching? Again, Pylyshyn noted that artificial neural networks, such as those that have been proposed for solving the motion correspondence problem (Dawson, 1991; Dawson, Nevin-Meadows, & Wright, 1994; Dawson & Pylyshyn, 1988; Dawson & Wright, 1994), would serve as tracking mechanisms. This is because such models belong to the natural computation approach and have shown how tracking can proceed preattentively via the exploitation of natural constraints that are implemented as patterns of connectivity amongst processing units.
Furthermore, Dawson (1991) has argued that many of the regularities that govern solutions to the motion correspondence problem are consistent with the hypothesis that solving this problem is equivalent to tracking assigned visual tags. For example, consider some observations concerning the location of motion correspondence processing and attentional tracking processes in the brain. Dawson argued that motion correspondence processing is most likely performed by neurons located in Area 7 of the parietal cortex, on the basis of motion signals transmitted from earlier areas, such as the motion-sensitive area MT. Area 7 of the parietal cortex is also a good candidate for the locus of tracking of individuated entities.
First, many researchers have observed cells that appear to mediate object tracking in Area 7, such as visual fixation neurons and visual tracking neurons. Such cells are not evident earlier in the visual pathway (Goldberg & Bruce, 1985; Hyvarinen & Poranen, 1974; Lynch et al., 1977; Motter & Mountcastle, 1981; Robinson, Goldberg, & Stanton, 1978; Sakata et al., 1985).
Second, cells in this area are also governed by extraretinal (i.e., attentional) influences—they respond to attended targets, but not to unattended targets, even when both are equally visible (Robinson, Goldberg, & Stanton, 1978). This is required of mechanisms that can pick out and track targets from identically shaped distractors, as in a multiple object tracking task.
Third, Area 7 cells that appear to be involved in tracking appear to be able to do so across sensory modalities. For instance, hand projection neurons respond to targets to which hand movements are to be directed and do not respond when either the reach or the target is present alone (Robinson, Goldberg, & Stanton, 1978). Similarly, there exist many Area 7 cells that respond during manual reaching, tracking, or manipulation, and which also have a preferred direction of reaching (Hyvarinen & Poranen, 1974). Such cross-modal coordination of tracking is critical, because as we see in the next section, Pylyshyn’s (2003b, 2007) theory of visual cognition assumes that indices can be applied, and tracked, in different sensory modalities, permitting seeing agents to point at objects that have been visually individuated.
The key innovation and contribution of Pylyshyn’s (2003b, 2007) theory of visual cognition is the proposal of preattentive individuation and tracking. This proposal can be seamlessly interfaced with related proposals concerning visual cognition. For instance, once objects have been tagged by FINSTs, they can be operated on by visual routines (Ullman, 1984, 2000). Pylyshyn (2003b) pointed out that in order to execute, visual routines require such individuation:
The visual system must have some mechanism for picking out and referring to particular elements in a display in order to decide whether two or more such elements form a pattern, such as being collinear, or being inside, on, or part of another element, and so on. (Pylyshyn, 2003b, pp. 206–207)
In other words, visual cognition can direct attentional resources to FINSTed entities.
Pylyshyn’s (2003b, 2007) theory of visual cognition also makes contact with classical cognition. He noted that once objects have been tagged, the visual system can examine their spatial properties by applying visual routines or using focal attention to retrieve visual features. The point of such activities by visual cognition would be to update descriptions of objects stored as object files (Kahneman, Treisman, & Gibbs, 1992). The object file descriptions can then be used to make contact with the semantic categories of classical cognition. Thus the theory of visual indexing provides a causal grounding of visual concepts:
Indexes may serve as the basis for real individuation of physical objects. While it is clear that you cannot individuate objects in the full-blooded sense without a conceptual apparatus, it is also clear that you cannot individuate them with only a conceptual apparatus. Sooner or later concepts must be grounded in a primitive causal connection between thoughts and things. (Pylyshyn, 2001, p. 154)
It is the need for such grounding that has led Pylyshyn to propose a theory of visual cognition that includes characteristics of classical, connectionist, and embodied cognitive science.

8.08: Situation, Vision, and Action
Why is Pylyshyn’s (2003b, 2007) proposal of preattentive visual indices important? It has been noted that one of the key problems facing classical cognitive science is that it needs some mechanism for referring to the world that is preconceptual, and that the impact of Pylyshyn’s theory of visual cognition is that it provides an account of exactly such a mechanism (Fodor, 2009). How this is accomplished is sketched out in Figure 8-8, which provides a schematic of the various stages in Pylyshyn’s theory of visual cognition.
Figure 8-8. Pylyshyn’s theory of preattentive visual indexing provides referential links from object files to distal objects in the world.
The initial stages of the theory posit causal links between distal objects arrayed in space in a three-dimensional world and the mental representations that are produced from these links. The laws of optics and projective geometry begin by creating a proximal stimulus—a pattern of stimulation on the retina—that is uniquely determined, but because of the problem of underdetermination cannot be uniquely inverted. The problem of underdetermination is initially dealt with by a variety of visual modules that compose early vision, and which use natural constraints to deliver unique and useful representations of the world (e.g., the primal sketch and the 2½-D sketch). Pylyshyn’s theory of visual cognition elaborates Marr’s (1982) natural computation view of vision. In addition to using Marr’s representations, Pylyshyn claims that early vision can individuate visual objects by assigning them one of a limited number of tags (FINSTs). Furthermore, preattentive processes permit these tags to remain attached, even if the properties of the tagged objects change. This result of early vision is illustrated in Figure 8-8 as the sequences of solid arrows that link each visual object to its own internal FINST.
Once objects have been individuated by the assignment of visual indices, the operations of visual cognition can be applied (Treisman, 1986, 1988; Ullman, 1984, 2000). Attention can be directed to individuated elements, permitting visual properties to be detected or spatial relations amongst individuated objects to be computed. The result is that visual cognition can be used to create a description of an individuated object in its object file (Kahneman, Treisman, & Gibbs, 1992). As shown in Figure 8-8, visual cognition has created an internal object file for each of the three distal objects involved in the diagram.
Once object files have been created, general knowledge of the world (i.e., isotropic cognitive processes; Fodor, 1983) can be exploited. Object files can be used to access classical representations of the world, permitting semantic categories to be applied to the visual scene.
However, object files permit another important function in Pylyshyn’s theory of visual cognition because of the preattentive nature of the processes that created them: a referential link from an object file to a distal object in the world. This is possible because the object files are associated with FINSTs, and the FINSTs themselves were the end product of a causal, non-cognitive chain of events:
An index corresponds to two sorts of links or relations: on the one hand, it corresponds to a causal chain that goes from visual objects to certain tokens in the representation of the scene being built (perhaps an object file), and on the other hand, it is also a referential relationship that enables the visual system to refer to those particular [visual objects]. The second of these functions is possible because the first one exists and has the right properties. (Pylyshyn, 2003b, p. 269)
The referential links back to the distal world are illustrated as the dashed lines in Figure 8-8.
The availability of the referential links provides Pylyshyn’s theory of visual cognition (2003b, 2007) with distinct advantages over a purely classical model. Recall that a top-down model operates by creating and maintaining internal descriptions of distal objects. It was earlier noted that one problem with this approach is that the projected information from an object is constantly changing, in spite of the fact that the object’s identity is constant. This poses challenges for solving the correspondence problem by matching descriptions. However, this also leads a classical model directly into what is known as the frame problem (Ford & Pylyshyn, 1996; Pylyshyn, 1987). The frame problem faces any system that has to update classical descriptions of a changing world. This is because as a property changes, a classical system must engage in a series of deductions to determine the implications of the change. The number of possible deductions is astronomical, resulting in the computational intractability of a purely descriptive system.
The referential links provide a solution to the frame problem. This is because the tracking of a FINSTed object and the perseverance of the object file for that object occur without the need of constantly updating the object’s description. The link between the FINST and the world is established via the causal link from the world through the proximal stimulus to the operation of early vision. The existence of the referential link permits the contents of the object file to be refreshed or updated—not constantly, but only when needed. “One of the purposes of a tag was to allow the visual system to revisit the tagged object to encode some new property” (Pylyshyn, 2003b, p. 208).
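The contrast with constant description updating can be sketched as follows: the object file below stores only what has been encoded so far, and it reads a property from the indexed distal object only when a query demands it, so changes in the world trigger no recomputation until the object is revisited. The class, the 'world' dictionary standing in for another look at the scene, and the property names are all illustrative assumptions.

```python
class ObjectFile:
    """Illustrative object file: bound to a FINST-style index rather than to a
    stored description, so its contents are refreshed only on demand."""

    def __init__(self, index, world):
        self.index = index          # referential link back to the distal object
        self.world = world          # stands in for "looking again at the world"
        self.encoded = {}           # whatever has been noticed so far

    def query(self, property_name):
        """Revisit the indexed object to encode a property only when it is needed;
        no deductions run when unqueried properties change (cf. the frame problem)."""
        if property_name not in self.encoded:
            self.encoded[property_name] = self.world[self.index][property_name]
        return self.encoded[property_name]

# The distal scene keeps changing, but no stored description is updated.
world = {"obj-17": {"colour": "red", "shape": "square"}}
file_17 = ObjectFile("obj-17", world)

world["obj-17"]["shape"] = "circle"      # the world changes; nothing is recomputed
print(file_17.query("shape"))            # "circle": read from the world when asked
```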
The notion of revisiting an indexed object in order to update the contents of an object file when needed, combined with the assumption that visual processing is embodied in such a way to be of limited order, link Pylyshyn’s (2003b, 2007) theory of visual cognition to a different theory that is central to embodied cognitive science, enactive perception (Noë, 2004). Enactive perception realizes that the detailed phenomenal experience of vision is an illusion because only a small amount of visual information is ever available to us (Noë, 2002). Enactive perception instead views perception as a sensorimotor skill that can access information in the world when it is needed. Rather than building detailed internal models of the world, enactive perception views the world as its own representation (Noë, 2009); we don’t encode an internal model of the world, we inspect the outer world when required or desired. This account of enactive perception mirrors the role of referential links to the distal world in Pylyshyn’s theory of visual cognition.
Of course, enactive perception assumes much more than information in the world is accessed, and not encoded. It also assumes that the goal of perception is to guide bodily actions upon the world. “Perceiving is a way of acting. Perception is not something that happens to us, or in us. It is something we do” (Noë, 2004, p. 1). This view of perception arises because enactive perception is largely inspired by Gibson’s (1966, 1979) ecological approach to perception. Actions on the world were central to Gibson. He proposed that perceiving agents “picked up” the affordances of objects in the world, where an affordance is a possible action that an agent could perform on or with an object.
Actions on the world (ANCHORs) provide a further link between Pylyshyn’s (2003b, 2007) theory of visual cognition and enactive perception, and consequently with embodied cognitive science. Pylyshyn’s theory also accounts for such actions, because FINSTs are presumed to exist in different sensory modalities. In particular, ANCHORs are analogous to FINSTs and serve as indices to places in motor-command space, or in proprioceptive space (Pylyshyn, 1989). The role of ANCHORs is to serve as indices to which motor movements can be directed. For instance, in the 1989 version of his theory, Pylyshyn hypothesized that ANCHORs could be used to direct the gaze (by moving the fovea to the ANCHOR) or to direct a pointer.
The need for multimodal indexing is obvious because we can easily point at what we are looking at. Conversely, if we are not looking at something, it cannot be indexed, and therefore cannot be pointed to as accurately. For instance, when subjects view an array of target objects in a room, close their eyes, and then imagine viewing the objects from a novel vantage point (a rotation from their original position), their accuracy in pointing to the targets decreases (Rieser, 1989). Similarly, there are substantial differences between reaches towards visible objects and reaches towards objects that are no longer visible but are only present through imagery or memory (Goodale, Jakobson, & Keillor, 1994). Likewise, when subjects reach towards an object while avoiding obstacles, visual feedback is exploited to optimize performance; when visual feedback is not available, the reaching behaviour changes dramatically (Chapman & Goodale, 2010).
In Pylyshyn’s (2003b, 2007) theory of visual cognition, coordination between vision and action occurs via interactions between visual and motor indices, which generate mappings between the spaces of the different kinds of indices. Requiring transformations between spatial systems makes the location of indexing and tracking mechanisms in parietal cortex perfectly sensible. This is because there is a great deal of evidence suggesting that parietal cortex instantiates a variety of spatial mappings, and that one of its key roles is to compute transformations between different spatial representations (Andersen et al., 1997; Colby & Goldberg, 1999; Merriam, Genovese, & Colby, 2003; Merriam & Colby, 2005). One such transformation could produce coordination between visual FINSTs and motor ANCHORs.
One reason that Pylyshyn’s (2003b, 2007) theory of visual cognition is also concerned with visually guided action is his awareness of Goodale’s work on visuomotor modules (Goodale, 1988, 1990, 1995; Goodale & Humphrey, 1998; Goodale et al., 1991), work that was introduced earlier in relation to embodied cognitive science. The evidence supporting Goodale’s notion of visuomotor modules clearly indicates that some of the visual information used to control actions is not available to isotropic cognitive processes, because it can affect actions without requiring or producing conscious awareness. It seems very natural, then, to include motor indices (i.e., ANCHORs) in a theory in which such tags are assigned and maintained preattentively.
The discussion in this section would seem to place Pylyshyn’s (2003b, 2007) theory of visual cognition squarely in the camp of embodied cognitive science. Referential links between object files and distal objects permit visual information to be accessible without requiring the constant updating of descriptive representations. The postulation of indices that can guide actions and movements and the ability to coordinate these indices with visual tags place a strong emphasis on action in Pylyshyn’s approach.
However, Pylyshyn’s theory of visual cognition has many properties that make it impossible to pigeonhole as an embodied position. In particular, a key difference between Pylyshyn’s theory and enactive perception is that Pylyshyn does not believe that the sole goal of vision is to guide action. Vision is also concerned with descriptions and concepts—the classical cognition of represented categories:
Preparing for action is not the only purpose of vision. Vision is, above all, a way to find out about the world, and there may be many reasons why an intelligent organism may wish to know about the world, apart from wanting to act upon it. (Pylyshyn, 2003b, p. 133)

8.09: Scaffolding the Mental Image
In Chapter 3 we introduced the imagery debate, which concerns two different accounts of the architectural properties of mental images. One account, known as the depictive theory (Kosslyn, 1980, 1994; Kosslyn, Thompson, & Ganis, 2006), argues that we experience the visual properties of mental images because the format of these images is quasi-pictorial, and that they literally depict visual information.
The other account, propositional theory, proposes that images are not depictive, but instead describe visual properties using a logical or propositional representation (Pylyshyn, 1973, 1979b, 1981a, 2003b). It argues that the privileged properties of mental images proposed by Kosslyn and his colleagues are actually the result of the intentional fallacy: the spatial properties that Kosslyn assigns to the format of images should more properly be assigned to their contents.
The primary support for the depictive theory has come from relative complexity evidence collected from experiments on image scanning (Kosslyn, 1980) and mental rotation (Shepard & Cooper, 1982). This evidence generally shows a linear relationship between the time required to complete a task and a spatial property of an image transformation. For instance, as the distance between two locations on an image increases, so too does the time required to scan attention from one location to the other. Similarly, as the amount of rotation that must be applied to an image increases, so too does the time required to judge that the image is the same or different from another. Proponents of propositional theory have criticized these results by demonstrating that they are cognitively penetrable (Pylyshyn, 2003c): a change in tacit information eliminates the linear relationship between time and image transformation, which would not be possible if the depictive properties of mental images were primitive.
If a process such as image scanning is cognitively penetrable, then this means that subjects have the choice not to take the time to scan attention across the image. But this raises a further question: “Why should people persist on using this method when scanning entirely in their imagination where the laws of physics and the principles of spatial scanning do not apply (since there is no real space)?” (Pylyshyn, 2003b, p. 309). Pylyshyn’s theory of visual cognition provides a possible answer to this question that is intriguing, because it appeals to a key proposal of the embodied approach: cognitive scaffolding.
Pylyshyn’s scaffolding approach to mental imagery was inspired by a general research paradigm that investigated whether visual processing and mental imagery shared mechanisms. In such studies, subjects superimpose a mental image over other information that is presented visually, in order to see whether the different sources of information can interact, for instance by producing a visual illusion (Bernbaum &Chung, 1981; Finke&Schmidt, 1977; Goryo, Robinson,&Wilson, 1984; Ohkuma, 1986). This inspired what Pylyshyn (2007) called the index projection hypothesis. This hypothesis brings Pylyshyn’s theory of visual cognition into contact with embodied cognitive science, because it invokes cognitive scaffolding via the visual world.
According to the index projection hypothesis, mental images are scaffolded by visual indices that are assigned to real world (i.e., to visually present) entities. For instance, consider Pylyshyn’s (2003b) application of the index projection hypothesis to the mental map paradigm used to study image scanning:
If, for example, you imagine the map used to study mental scanning superimposed over one of the walls in the room you are in, you can use the visual features of the wall to anchor various objects in the imagined map. In this case, the increase in time it takes to access information from loci that are further apart is easily explained since the ‘images,’ or, more neutrally, ‘thoughts’ of these objects are actually located further apart. (Pylyshyn, 2003b, p. 376, p. 374)
In other words, the spatial properties revealed in mental scanning studies are not due to mental images per se, but instead arise from “the real spatial nature of the sensory world onto which they are ‘projected’” (p. 374).
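A toy calculation conveys the idea: if each imagined object is bound to an index on a real, visible feature, then the time to shift attention between two imagined objects simply tracks the real distance between their anchors, yielding the familiar linear scanning function without attributing any spatial format to the image itself. The anchor coordinates and the scanning rate below are invented for illustration.

```python
import math

# Imagined map objects anchored to positions of real features on a visible wall
# (coordinates in metres); these values are invented for illustration.
anchors = {"hut": (0.2, 1.5), "well": (1.1, 1.4), "lighthouse": (3.0, 0.6)}
SCAN_RATE = 0.5   # seconds per metre of real distance, an arbitrary choice

def scan_time(a, b):
    """Time to scan attention between two imagined objects: on the index projection
    account, this tracks the real distance between their visual anchors."""
    (x1, y1), (x2, y2) = anchors[a], anchors[b]
    return SCAN_RATE * math.hypot(x2 - x1, y2 - y1)

print(round(scan_time("hut", "well"), 2))         # ~0.45 s (short real distance)
print(round(scan_time("hut", "lighthouse"), 2))   # ~1.47 s (long real distance)
```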
If the index projection hypothesis is valid, then how does it account for mental scanning results when no external world is visible? Pylyshyn argued that in such conditions, the linear relationship between distance on an image and the time to scan it may not exist. For instance, evidence indicates that when no external information is visible, smooth attentional scanning may not be possible (Pylyshyn & Cohen, 1999). As well, the exploration of mental images is accompanied by eye movements similar to those that occur when a real scene is explored (Brandt & Stark, 1997). Pylyshyn (2007) pointed out that this result is exactly what would be predicted by the index projection hypothesis, because the eye movements would be directed to real world entities that have been assigned visual indices.
The cognitive scaffolding of mental images may not merely concern their manipulation, but might also be involved when images are created. There is a long history of the use of mental images in the art of memory (Yates, 1966). One important technique is the ancient method of loci, in which mental imagery is used to remember a sequence of ideas (e.g., ideas to be presented in a speech).
The memory portion of the Rhetorica ad Herennium, an anonymous text that originated in Rome circa 86 BC and reached Europe by the Middle Ages, teaches the method of loci as follows. A well-known building is used as a “wax tablet” onto which memories are to be “written.” As one mentally moves, in order, through the rooms of the building, one places an image representing some idea or content in each locus—that is, in each imagined room. During recall, one mentally walks through the building again, and “sees” the image stored in each room. “The result will be that, reminded by the images, we can repeat orally what we have committed to the loci, proceeding in either direction from any locus we please” (Yates, 1966, p. 7).
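Stripped of its imagery, the procedure amounts to binding an ordered sequence of familiar places to the items to be remembered and then recalling by walking the sequence in either direction, as the small sketch below illustrates; the loci and speech points are, of course, invented examples.

```python
# Method of loci as a data structure: an ordered walk through familiar places,
# each place holding one vivid image standing for an idea to be remembered.
loci = ["entrance hall", "first alcove", "chapel", "staircase", "bell tower"]
speech_points = ["greet the senate", "recall the harvest", "accuse the governor",
                 "propose the tax", "close with the oath"]

memory_palace = dict(zip(loci, speech_points))   # "writing" images onto the loci

for locus in loci:                               # recall by walking forward...
    print(f"{locus}: {memory_palace[locus]}")

for locus in reversed(loci):                     # ...or in the reverse direction,
    print(f"{locus}: {memory_palace[locus]}")    # as the classical rules promise
```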
In order for the method of loci to be effective, a great deal of effort must be used to initially create the loci to be used to store memories (Yates, 1966). Ancient rules of memory taught students the most effective way to do this. According to the Rhetorica ad Herennium, each fifth locus should be given a distinguishing mark. A locus should not be too similar to the others, in order to avoid confusion via resemblance. Each locus should be of moderate size and should not be brightly lit, and the intervals between loci should also be moderate (about thirty feet). Yates (1966, p. 8) was struck by “the astonishing visual precision which [the classical rules of memory] imply. In a classically trained memory the space between the loci can be measured, the lighting of the loci is allowed for.”
How was such a detailed set of memory loci to be remembered? The student of memory was taught to use what we would now call cognitive scaffolding. They should lay down a set of loci by going to an actual building, and by literally moving through it from locus to locus, carefully committing each place to memory as they worked (Yates, 1966). Students were advised to visit secluded buildings in order to avoid having their memorization distracted by passing crowds. The Phoenix, a memory manual published by Peter of Ravenna in 1491, recommended visiting unfrequented churches for this reason. These classical rules for the art of memory “summon up a vision of a forgotten social habit. Who is that man moving slowly in the lonely building, stopping at intervals with an intent face? He is a rhetoric student forming a set of memory loci” (Yates, 1966, p. 8).
According to the index projection hypothesis, “by anchoring a small number of imagined objects to real objects in the world, the imaginal world inherits much of the geometry of the real world” (Pylyshyn, 2003b, p. 378). The classical art of memory, the method of loci, invokes a similar notion of scaffolding, attempting not only to inherit the real world’s geometry, but to also inherit its permanence.

8.10: The Bounds of Cognition
The purpose of this chapter was to introduce Pylyshyn’s (2003b, 2007) theory of visual cognition. This theory is of interest because different aspects of it make contact with classical, connectionist, or embodied cognitive science.
The classical nature of Pylyshyn’s theory is found in his insistence that part of the purpose of vision is to make contact with perceptual categories that can be involved in general cognitive processing (e.g., inference and problem solving). The connectionist nature of Pylyshyn’s theory is found in his invocation of artificial neural networks as the mechanisms for assigning and tracking indices as part of early vision. The embodied nature of Pylyshyn’s theory is found in referential links between object files and distal objects, the use of indices to coordinate vision and action, and the use of indices and of referential links to exploit the external world as a scaffold for seeing and visualizing.
However, the hybrid nature of Pylyshyn’s theory of visual cognition presents us with a different kind of puzzle. How is this to be reconciled with Pylyshyn’s position as a champion of classical cognitive science and as a critic of connectionist (Fodor & Pylyshyn, 1988) and embodied (Fodor & Pylyshyn, 1981) traditions? The answer to this question is that when Pylyshyn writes of cognition, this term has a very technical meaning that places it firmly in the realm of classical cognitive science, and which—by this definition—separates it from both connectionist and embodied cognitive science.
Recall that Pylyshyn’s (2003b, 2007) theory of visual cognition was motivated in part by dealing with some of the problems facing purely cognitive theories of perception such as the New Look. His solution was to separate early vision from cognition and to endorse perceptual mechanisms that solve problems of underdetermination without requiring inferential processing.
I propose a distinction between vision and cognition in order to try to carve nature at her joints, that is, to locate components of the mind/brain that have some principled boundaries or some principled constraints in their interactions with the rest of the mind. (Pylyshyn, 2003b, p. 39)
The key to the particular “carving” of the system in his theory is that early vision, which includes preattentive mechanisms for individuating and tracking objects, does not do so by using concepts, categories, descriptions, or inferences. Time and again in his accounts of seeing and visualizing, Pylyshyn describes early vision as being “preconceptual” or “non-conceptual.”
This is important because of Pylyshyn’s (1984) characterization of the levels of analysis of cognitive science. Some of the levels of analysis that he invoked—in particular, the implementational and algorithmic levels—are identical to those levels as discussed in Chapter 2 in this volume. However, Pylyshyn’s version of the computational level of analysis is more restrictive than the version that was also discussed in that earlier chapter.
For Pylyshyn (1984), a computational-level analysis requires a cognitive vocabulary. A cognitive vocabulary captures generalizations by appealing to the contents of representations, and it also appeals to lawful principles governing these contents (e.g., rules of inference, the principle of rationality). “The cognitive vocabulary is roughly similar to the one used by what is undoubtedly the most successful predictive scheme available for human behavior—folk psychology” (p. 2).
When Pylyshyn (2003b, 2007) separates early vision from cognition, he is proposing that the cognitive vocabulary cannot be productively used to explain early vision, because early vision is not cognitive, it is preconceptual. Thus it is no accident that when his theory of visual cognition intersects connectionist and embodied cognitive science, it does so with components that are part of Pylyshyn’s account of early vision. Connectionism and embodiment are appropriate in this component of Pylyshyn’s theory because his criticism of these approaches is that they are not cognitive, because they do not or cannot use a cognitive vocabulary!
In the philosophy of G. W. F. Hegel, ideas developed by following a dialectical progression. They began as theses that attempted to explain some truth; deficiencies in theses permitted alternative ideas to be formulated. These alternatives, or antitheses, represented the next stage of the progression. A final stage, synthesis, approached truth by creating an emergent combination of elements from theses and antitheses. It has been argued that cognitive science provides an example of a dialectical progression. The current chapter begins by casting classical cognitive science as the thesis and considering both connectionist cognitive science and embodied cognitive science as viable antitheses. This argument is supported by reviewing some of the key differences amongst these three approaches. What remains is considering whether synthesis of these various approaches is possible.
Some of the arguments from previous chapters, including the possibility of hybrid accounts of cognition, are used to support the claim that synthesis in cognitive science is possible, though it has not yet been achieved. It is further argued that one reason synthesis has been impeded is because modern cognitivism, which exemplifies the classical approach, arose as a violent reaction against behaviourist psychology. Some of the core elements of cognitive antitheses, such as exploiting associations between ideas as well as invoking environmental control, were also foundations of the behaviourist school of thought. It is suggested that this has worked against synthesis, because exploring such ideas has the ideological impact of abandoning the cognitive revolution.
In this chapter I then proceed to consider two approaches for making the completion of a cognitive dialectic more likely. One approach is to consider the successes of the natural computation approach to vision, which developed influential theories that reflect contributions of all three approaches to cognitive science. It was able to do so because it had no ideological preference of one approach over the others. The second approach is for classical cognitive science to supplement its analytical methodologies with forward engineering. It is argued that such a synthetic methodology is likely to discover the limits of a “pure” paradigm, producing a tension that may only be resolved by exploring the ideas espoused by other positions within cognitive science.
9.02: Towards a Cognitive Dialectic
A dialectic involves conflict which generates tension and is driven by this tension to a state of conflict resolution (McNeill, 2005). According to philosopher G. W. F. Hegel (1931), ideas evolve through three phases: thesis, antithesis, and synthesis. Different approaches to the study of cognition can be cast as illustrating a dialectic (Sternberg, 1999).
Dialectical progression depends upon having a critical tradition that allows current beliefs (theses) to be challenged by alternative, contrasting, and sometimes even radically divergent views (antitheses), which may then lead to the origination of new ideas based on the old (syntheses). (Sternberg, 1999, p. 52)
The first two aspects of a dialectic, thesis and antithesis, are easily found throughout the history of cognitive science. Chapters 3, 4, and 5 present in turn the elements of classical, connectionist, and embodied cognitive science. I have assigned both connectionist and embodied approaches the role of antitheses to the classical thesis that defined the earliest version of cognitive science. One consequence of antitheses arising against existing theses is that putative inadequacies of the older tradition are highlighted, and the differences between the new and the old approaches are emphasized (Norman, 1993). Unsurprisingly, it is easy to find differences between the various cognitive sciences and to support the position that cognitive science is fracturing in the same way that psychology did in the early twentieth century. The challenge to completing the dialectic is exploring a synthesis of the different cognitive sciences.
One kind of tool that is becoming popular for depicting and organizing large amounts of information, particularly for various Internet sites, is the tag cloud or word cloud (Dubinko et al., 2007). A word cloud is created from a body of text; it summarizes that text visually by using size, colour, and font. Typically, the more frequently a term appears in a text, the larger is its depiction in a word cloud. The goal of a word cloud is to summarize a document in a glance. As a way to illustrate contrasts between classical, connectionist, and embodied cognitive sciences, I compare word clouds created for each of chapters 3, 4, and 5. Figure \(1\) presents the word cloud generated for Chapter 3 on classical cognitive science. Note that it highlights words that are prototypically classical, such as physical, symbol, system, language, grammar, information, expression, as well as key names like Turing and Newell.
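The computation behind such a figure is simple enough to sketch: count how often each content word occurs in a text and map the counts onto font sizes. The stop-word list, the scaling rule, and the sample sentence below are placeholders; they are not the procedure used to generate the figures in this chapter.

```python
import re
from collections import Counter

STOP_WORDS = {"the", "of", "a", "an", "and", "to", "in", "is", "that", "as", "are"}

def word_cloud_sizes(text, max_words=10, min_size=10, max_size=48):
    """Toy word-cloud computation: more frequent terms receive larger font sizes."""
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP_WORDS]
    counts = Counter(words).most_common(max_words)
    top = counts[0][1]
    return {w: round(min_size + (max_size - min_size) * c / top) for w, c in counts}

sample = ("The physical symbol system manipulates symbols; the symbol system "
          "uses expressions, and the system is a physical device.")
print(word_cloud_sizes(sample))
# e.g., {'system': 48, 'physical': 35, 'symbol': 35, ...}: frequent terms are drawn largest
```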
An alternative word cloud emerges from Chapter 4 on connectionist cognitive science, as shown in Figure \(2\). This word cloud picks out key connectionist elements such as network, input, hidden, output, units, connections, activity, learning, weights, and neural; names found within the cloud are McCulloch, Berkeley, Rescorla-Wagner, and Rumelhart. Interestingly, the words connectionist and classical are equally important in this cloud, probably reflecting the fact that connectionist properties are typically introduced by contrasting them with (problematic) classical characteristics. The word clouds in Figures \(1\) and \(2\) differ strikingly from one another.
A third word cloud that is very different from the previous two is provided in Figure \(3\), which was compiled from Chapter 5 on embodied cognitive science. The words that it highlights include behavior, world, environment, control, agent, robot, body, nature, extended, and mind; names captured include Grey Walter, Clark, and Ashby. Once again, embodied and classical are both important terms in the chapter, reflecting that the embodied approach is an antithesis to the classical thesis, and is often presented in direct contrast to classical cognitive science.
Another way to illustrate the differences between the different approaches to cognitive science is to consider a set of possible dimensions or features and to characterize each approach to cognitive science in terms of each dimension. Table \(1\) presents one example of this manoeuvre. The dimensions used in this table—core ideas, preferred formalism, tacit assumption, and so on—were selected because I viewed them as being important, but the list of these features could be extended.
Core Ideas. Classical: mind as a physical symbol system; mind as a digital computer; mind as a planner; mind as creator and manipulator of models of the world; mind as sense-think-act processing. Connectionist: mind as information processor, but not as a digital computer; mind as a parallel computer; mind as pattern recognizer; mind as a statistical engine; mind as biologically plausible mechanism. Embodied: mind as controller of action; mind emerging from situation and embodiment, or being-in-the-world; mind as extending beyond the skull into the world; mind as sense-act processing.
Preferred Formalism. Classical: symbolic logic. Connectionist: nonlinear optimization. Embodied: dynamical systems theory.
Tacit Assumption. Classical: nativism, naive realism. Connectionist: empiricism. Embodied: embodied interaction.
Type of Processing. Classical: symbol manipulation. Connectionist: pattern recognition. Embodied: acting on the world.
Prototypical Architecture. Classical: production system (Newell, 1973). Connectionist: multilayer perceptron (Rumelhart et al., 1986b). Embodied: behaviour-based robot (Brooks, 1989).
Prototypical Domain. Classical: language, problem solving. Connectionist: discrimination learning, perceptual categorization. Embodied: locomotion, social interaction.
Philosophical Roots. Classical: Hobbes, Descartes, Leibniz, Craik. Connectionist: Aristotle, Locke, Hume, James. Embodied: Vico, Dewey, Heidegger, Merleau-Ponty.
Some Key Modern Theorists. Classical: Chomsky, Dennett, Fodor, Pylyshyn. Connectionist: J. A. Anderson, Hinton, Kohonen, McClelland. Embodied: Brooks, Clark, Noë, Wilson.
Some Pioneering Works. Classical: Plans and the Structure of Behavior (Miller et al., 1960); Aspects of the Theory of Syntax (Chomsky, 1965); Human Problem Solving (Newell & Simon, 1972). Connectionist: Principles of Neurodynamics (Rosenblatt, 1962); Parallel Models of Associative Memory (Hinton & Anderson, 1981); Parallel Distributed Processing (McClelland & Rumelhart, 1986; Rumelhart & McClelland, 1986c). Embodied: Cognition and Reality (Neisser, 1976); The Ecological Approach to Visual Perception (Gibson, 1979); Understanding Computers and Cognition (Winograd & Flores, 1987b).
Table \(1\). Contrasts between the three schools of thought in cognitive science.
An examination of Table \(1\) once again reveals marked differences between the three approaches as described in this volume. Other features could be added to this table, but I suspect that they too would reveal striking differences between the three views of cognitive science, and would be less likely to reveal striking similarities.
The illustrations so far—with the word clouds and with the table—definitely point towards the existence of theses and antitheses. An obvious tension exists within cognitive science. How might a synthesis be achieved to alleviate this tension? One approach to achieving synthesis in the cognitive dialectic may involve considering why the differences highlighted in Table \(1\) have arisen.
One context for considering Table \(1\) is the Indian fable of the six blind men and the elephant, the subject of a famous nineteenth-century poem by John Godfrey Saxe (Saxe, 1868). Each blind man feels a different part of the elephant, and comes away with a very different sense of the animal. The one who touched the tusk likens an elephant to a spear, the one who felt the knee compares the animal to a tree, the one who grabbed the tail likens it to a rope, and so on. After each has explored their part of the elephant, they reconvene to discuss its nature, and find that each has a dramatically different concept of the animal. The result is a heated, and ultimately unresolved, dispute: “And so these men of Indostan / Disputed loud and long, / Each in his own opinion / Exceeding stiff and strong, / Though each was partly in the right, / And all were in the wrong!” (p. 260).
To apply the moral of this story to the differences highlighted in Table \(1\), it is possible that the different approaches to cognitive science reflect differences that arise because each pays attention to different aspects of cognition, and none directs its attention to the complete picture. This view is consistent with one characterization of cognitive science that appeared at the cusp of the connectionist revolution (Norman, 1980).
Norman (1980) characterized a mature classical cognitive science that had decomposed human cognition into numerous information processing subsystems that defined what Norman called the pure cognitive system. The core of the pure cognitive system was a physical symbol system.
Norman’s (1980) concern, though, was that the classical study of the pure cognitive system was doomed to fail because it, like one of the blind men, was paying attention to only one component of human cognition. Norman, prior to the rise of either connectionist or embodied cognitive science, felt that more attention had to be paid to the biological mechanisms and the surrounding environments of cognitive agents.
The human is a physical symbol system, yes, with a component of pure cognition describable by mechanisms. . . . But the human is more: the human is an animate organism, with a biological basis and an evolutionary and cultural history. Moreover, the human is a social animal, interacting with others, with the environment, and with itself. The core disciplines of cognitive science have tended to ignore these aspects of behavior. (Norman, 1980, pp. 2–4)
Norman (1980) called for cognitive scientists to study a variety of issues that would extend their focus beyond the study of purely classical cognition. This included returning to a key idea of cybernetics, feedback between agents and their environments. “The concept has been lost from most of cognitive studies, in part because of the lack of study of output and of performance” (p. 6). For Norman, cognitive science had to consider “different aspects of the entire system, including the parts that are both internal and external to the cognizer” (p. 9).
Norman’s (1980) position points out one perspective for unifying the diversity illustrated in Table \(1\): recognize that each school of cognitive science is, like each blind man in the fable, investigating an incomplete aspect of cognition and take advantage of this by combining these different perspectives. “I believe in the value of multiple philosophies, multiple viewpoints, multiple approaches to common issues. I believe a virtue of Cognitive Science is that it brings together heretofore disparate disciplines to work on common themes” (pp. 12–14).
One illustration of the virtue of exploring multiple viewpoints in the study of single topics is Norman’s own work on design (Norman, 1998, 2002, 2004). Another illustration is the hybrid theory of seeing and visualizing (Pylyshyn, 2003c, 2007) described in Chapter 8, which draws on all three approaches to cognitive science in an attempt to arrive at a more complete account of a broad and diverse topic. The key to such successful examples is the acknowledgment that there is much to be gained from a co-operative view of different approaches; there is no need to view each approach to cognitive science as being mutually exclusive competitors.
Norman (1980) called for cognitive science to extend its domain beyond the investigation of pure cognition, suggesting, for example, a return to some of the topics that were central to cybernetics, such as feedback between agents and environments. This was not the first time that such a suggestion had been made.
Twenty years earlier, in Plans and the Structure of Behavior (Miller et al., 1960), cognitive psychologist George Miller, mathematical psychologist Eugene Galanter, and neuropsychologist Karl Pribram argued for cognitivism to revisit the contributions of cybernetics. The reason for this was that Miller, Galanter, and Pribram, like Norman, were worried that if cognitivism focused exclusively on mental representations, then it would be incomplete. Such a perspective “left an organism more in the role of a spectator than of a participant in the drama of living. Unless you can use your Image to do something, you are like a man who collects maps but never makes a trip” (p. 2).
A related perspective was the theme of another key work that preceded Norman (1980), Ulric Neisser’s (1976) Cognition and Reality. Neisser, an eminent pioneer of cognitivism (Neisser, 1967), argued that the relevance of cognitive psychology required it to be concerned with factors that lay beyond mental representations. “Perception and cognition are usually not just operations in the head, but transactions with the world. These transactions do not merely inform the perceiver, they also transform him” (Neisser, 1976, p. 11). Rather than being inspired by cybernetics, Neisser was interested in reformulating cognitivism in the context of Gibson’s (1966, 1979) theory of ecological perception. “Because perception and action take place in continuous dependence on the environment, they cannot be understood without an understanding of that environment itself” (Neisser, 1976, p. 183).
It would appear, then, that there is an extended history of important cognitivists calling for cognitive science to extend itself beyond the study of what Norman (1980) called the pure cognitive system. It is equally clear that this message has not had the desired impact. For instance, had the main theme of Miller, Galanter, and Pribram (1960) been widely accepted, then there would have been no need for similar proposals to appear decades later, as with Neisser (1976) and Norman (1980).
Why has cognitive science stubbornly held firm to the classical approach, emphasizing the study of pure cognition? One possible answer to this question is that the development of cognitivism in one of cognitive science’s key contributors, psychology, occurred in a combative context that revealed thesis and antithesis but was not conducive to synthesis. This answer is considered in more detail below.
It is often claimed that cognitive science is chiefly concerned with the human cognitive capacities (Gardner, 1984; von Eckardt, 1995). Ironically, the one discipline that would be expected to have the most to say about human mental phenomena—experimental psychology—was one of the last to accept cognitivism. This was because around the time cognitive science emerged, experimental psychology was dominated by behaviorism.
Behaviorists argued that a scientific psychology must restrict itself to the study of observable behavior and avoid invoking theoretical constructs that could not be directly observed, such as mental representation.
So long as behaviorism held sway—that is, during the 1920s, 1930s, and 1940s— questions about the nature of human language, planning, problem solving, imagination and the like could only be approached stealthily and with difficulty, if they were tolerated at all. (Gardner, 1984, p. 12)
Other disciplines were quicker to endorse cognitivism and to draw upon the insights of diverse fields of study because they were not restricted by the behaviorist yoke. For instance, mathematician Norbert Wiener (1948) created the field of cybernetics after realizing that problems involving communication, feedback, and information were general enough to span many disciplines. He held “the conviction that the most fruitful areas for the growth of the sciences were those which had been neglected as a no-man’s land between the various fields” (p. 8).
Wiener realized that progress in cybernetics required interaction between researchers trained in different disciplines. He was a key organizer of the first joint meeting concerning cybernetics, held at Princeton in 1944, which included engineers, physiologists, and mathematicians. This in turn led to the Macy conferences on cybernetics that occurred regularly from 1946 through 1953 (Conway & Siegelman, 2005). The Macy conferences broadened the range of participants who attended the 1944 Princeton meeting to include psychologists, sociologists, and anthropologists.
The success of the Macy meetings prepared the way for a variety of similar interdisciplinary conferences that in turn set the stage for cognitive science. One of these was a 1956 conference organized by MIT’s Special Interest Group in Information Theory. This conference included presentations by Newell and Simon on their logic machine, and by Chomsky on generative grammar (Miller, 2003). Thus conference participant George Miller, trained in the behaviorist tradition, would have heard computer scientists and linguists freely using representational terms to great effect.
The success of cognitivism in other disciplines, communicated to psychologists who participated in these interdisciplinary conferences, led to a reaction against behaviourism in psychology. “No longer were psychologists restricted in their explanatory accounts to events that could either be imposed on a subject or observed in one’s behavior; psychologists were now willing to consider the representation of information in the mind” (Gardner, 1984, p. 95).
George Miller (2003) has provided a personal account of this transition. His first book, Language and Communication (Miller, 1951), deliberately employed a behaviorist framework, a framework that he would completely abandon within a few years because of the influence of the cognitivist work of others. “In 1951, I apparently still hoped to gain scientific respectability by swearing allegiance to behaviorism. Five years later, inspired by such colleagues as Noam Chomsky and Jerry Bruner, I had stopped pretending to be a behaviorist” (Miller, 2003, p. 141).
However, because cognitivism arose as a reaction against behaviorism in North American experimental psychology, cognitive psychology developed by taking an antagonistic approach to almost all of the central behaviorist positions (Bruner, 1990; Sperry, 1993). “We were not out to ‘reform’ behaviorism, but to replace it” said Bruner (1990, p. 3). In psychology, the cognitive revolution,
was not one of finding new positives to support the important role of cognition, many of which were already long evident. Rather, the story is one of discovering an alternative logic by which to refute the seemingly incontestable reasoning that heretofore required science to ostracize mind and consciousness. (Sperry, 1993, p. 881)
Consider but one example that illustrates the tone within psychology during the cognitive revolution. Skinner’s (1957) account of language, Verbal Behavior, elicited a review by Noam Chomsky (1959b) that serves as one of the pioneering articles in cognitivism and is typically viewed as the turning point against psychological behaviourism (MacCorquodale, 1970; Schlinger, 2008). Some researchers, though, have objected to the tone of Chomsky’s review: “It is ungenerous to a fault; condescending, unforgiving, obtuse, and ill-humored” (MacCorquodale, 1970, p. 84).
On the other side of the antagonism, behaviorists have never accepted the impact of Chomsky’s review or the outcome of the cognitive revolution. Schlinger (2008, p. 335) argued that fifty years after its publication, Verbal Behavior (and behaviorism) was still vital because it worked: “It seems absurd to suggest that a book review could cause a paradigmatic revolution or wreak all the havoc that Chomsky’s review is said to have caused to Verbal Behavior or to behavioral psychology.”
The tone of the debate about Verbal Behavior is indicative of the tension and conflict that characterized cognitivism’s revolt against behaviorist psychology. As noted earlier, cognitivists such as Bruner viewed their goal as replacing, and not revising, behaviorist tenets: “It was not a revolution against behaviorism with the aim of transforming behaviorism into a better way of pursuing psychology by adding a little mentalism to it. Edward Tolman had done that, to little avail” (Bruner, 1990, p. 2).
One behaviorist position that was strongly reacted against by cognitivism “was the belief in the supremacy and the determining power of the environment” (Gardner, 1984, p. 11). Cognitive psychologists turned almost completely away from environmental determinism. Instead, humans were viewed as active information processors (Lindsay & Norman, 1972; Reynolds & Flagg, 1977). For instance, the New Look in perception was an argument that environmental stimulation could be overridden by the contents of beliefs, desires, and expectations (Bruner, 1957). In cognitivism, mind triumphed over environmental matter.
Cognitive psychology’s radical rejection of the role of the environment was a departure from the earlier cybernetic tradition, which placed a strong emphasis on the utility of feedback between an agent and its world. Cyberneticists had argued that,
for effective action on the outer world, it is not only essential that we possess good effectors, but that the performance of these effectors be properly monitored back to the central nervous system, and that the readings of these monitors be properly combined with the other information coming in from the sense organs to produce a properly proportioned output to the effectors. (Wiener, 1948, p. 114)
Some cognitivists still agreed with the view that the environment was an important contributor to the complexity of behavior, as shown by Simon’s parable of the ant (Simon, 1969; Vera & Simon, 1993). Miller, Galanter, and Pribram (1960) acknowledged that humans and other organisms employed internal representations of the world. However, they were also “disturbed by a theoretical vacuum between cognition and action” (p. 11). They attempted to fill this vacuum by exploring the relevance of key cybernetic ideas, particularly the notion of environmental feedback, to cognitive psychology.
However, it is clear that Miller, Galanter, and Pribram’s (1960) message about the environment had little substantive impact. Why else would Norman (1980) be conveying the same message twenty years later? It is less clear why this was the case. One possibility is that as cognitivism took root in experimental psychology, and as cognitive psychology in turn influenced empirical research within cognitive science, interest in the environment was a minority position. Cognitive psychology was clearly in a leading position to inform cognitive science about its prototypical domain (i.e., adult human cognition; see von Eckardt, 1995). Perhaps this informing included passing along antagonistic views toward core behaviorist ideas.
Of course, cognitive psychology’s antagonism towards behaviorism and the behaviorist view of the environment is not the only reason for cognitive science’s rise as a classical science. Another reason is that cognitive science was inspired not so much by cybernetics as by computer science and the implications of the digital computer. Furthermore, the digital computer that inspired cognitive science—the von Neumann architecture, or the stored-program computer (von Neumann, 1993)—was a device that was primarily concerned with the manipulation of internal representations.
Finally, the early successes in developing classical models of a variety of high-level cognitive phenomena such as problem solving (Newell et al., 1958; Newell & Simon, 1961, 1972), and of robots that used internal models to plan before executing actions on the world (Nilsson, 1984), were successes achieved without worrying much about the relationship between world and agent. Sense-think-act processing, particularly the sort that heavily emphasized thinking or planning, was promising new horizons for the understanding of human cognition. Alternative approaches, rooted in older traditions of cybernetics or behaviorism, seemed to have been completely replaced.
One consequence of this situation was that cognitive science came to be defined in a manner that explicitly excluded non-classical perspectives. For example, consider von Eckardt’s (1995) attempt to characterize cognitive science. Von Eckardt argued that this can be done by identifying a set of domain-specifying assumptions, basic research questions, substantive assumptions, and methodological assumptions. Importantly, the specific members of these sets that von Eckardt identified reflect a prototypical classical cognitive science and seem to exclude both connectionist and embodied varieties.
Consider just one feature of von Eckardt’s (1995) project. She began by specifying the identification assumption for cognitive science—its assumed domain of study. According to von Eckardt, the best statement of this assumption is to say that cognitive science’s domain is human cognitive capacities. Furthermore, her discussion of this assumption—and of possible alternatives to it—rejects non-classical variants of cognitive science.
For instance, Simon’s (1969) early consideration of the sciences of the artificial cast intelligence as being the ability to adapt behaviour to changing demands of the environment. Von Eckardt (1995) considered this idea as being a plausible alternative to her preferred identification assumption. However, her analysis concluded that Simon’s proposal can be dismissed because it is too broad: "for there are cases of adaptive behavior (in Simon’s sense) mediated by fairly low-level biological mechanisms that are not in the least bit cognitive and, hence, do not belong within the domain of cognitive science" (p. 62). This view would appear to reject connectionism as being cognitive science, in the sense that it works upward from low-level biological mechanisms (Dawson, 2004) and that connectionism rejects the classical use of an explanatory cognitive vocabulary (Fodor & Pylyshyn, 1988; Smolensky, 1988).
Similarly, von Eckardt (1995) also rejected an alternative to her definition of cognitive science’s identification assumption, which would include in cognitive science the study of core embodied issues, such as cognitive scaffolding.
Human beings represent and use their ‘knowledge’ in many ways, only some of which involve the human mind. What we know is represented in books, pictures, computer databases, and so forth. Clearly, cognitive science does not study the representation and the use of knowledge in all these forms. (von Eckardt, 1995, p. 67)
If cognitive science does not study external representations, then by von Eckardt’s definition the embodied approach does not belong to cognitive science.
The performance of classical simulations of human cognitive processes led researchers to propose in the late 1950s that within a decade most psychological theories would be expressed as computer programs (Simon & Newell, 1958). The classical approach’s failure to deliver on such promises led to pessimism (Dreyfus, 1992), which resulted in critical assessments of the classical assumptions that inspired alternative approaches (Rumelhart & McClelland, 1986c; Winograd & Flores, 1987b). The preoccupation of classical cognitivism with the manipulation of internal models of the world may have prevented it from solving problems that depend on other factors, such as a cybernetic view of the environment.
As my colleague George Miller put it some years later, ‘We nailed our new credo to the door, and waited to see what would happen. All went very well, so well, in fact, that in the end we may have been the victims of our success.’ (Bruner, 1990, pp. 2–3)
How has classical cognitivism been a victim of its success? Perhaps its success caused it to be unreceptive to completing the cognitive dialectic. With the rise of the connectionist and embodied alternatives, cognitive science seems to have been in the midst of conflict between thesis and antithesis, with no attempt at synthesis. Fortunately there are pockets of research within cognitive science that can illustrate a path towards synthesis, a path which requires realizing that each of the schools of thought we have considered here has its own limits, and that none of these schools of thought should be excluded from cognitive science by definition. One example domain in which synthesis is courted is computational vision.
To sighted human perceivers, visual perception seems easy: we simply look and see. Perhaps this is why pioneers of computer vision took seeing for granted. One student of Marvin Minsky was assigned—as a summer project—the task of programming vision into a computer (Horgan, 1993). Only when such early projects were attempted, and had failed, did researchers realize that the visual system was effortlessly solving astronomically difficult information processing problems.
Visual perception is particularly difficult when one defines its goal as the construction of internal models of the world (Horn, 1986; Marr, 1976, 1982; Ullman, 1979). Such representations, called distal stimuli, must make explicit the three-dimensional structure of the world. However, the information from which the distal stimulus is constructed—the proximal stimulus—is not rich enough to uniquely specify 3-D structure. As discussed in Chapter 8, the poverty of proximal stimuli underdetermines visual representations of the world. A single proximal stimulus is consistent with, in principle, an infinitely large number of different world models. The underdetermination of vision makes computer vision such a challenge to artificial intelligence researchers because information has to be added to the proximal stimulus to choose the correct distal stimulus from the many that are possible.
The cognitive revolution in psychology led to one approach for dealing with this problem: the New Look in perception proposed that seeing is a form of problem solving (Bruner, 1957, 1992; Gregory, 1970, 1978; Rock, 1983). General knowledge of the world, as well as beliefs, expectations, and desires, were assumed to contribute to our visual experience of the world, providing information that was missing from proximal stimuli.
The New Look also influenced computer simulations of visual perception. Knowledge was loaded into computer programs to be used to guide the analysis of visual information. For instance, knowledge of the visual appearance of the components of particular objects, such as an air compressor, could be used to guide the segmentation of a raw image of such a device into meaningful parts (Tenenbaum & Barrow, 1977). That is, the computer program could see an air compressor by exploiting its pre-existing knowledge of what it looked like. This general approach—using pre-existing knowledge to guide visual perception—was widespread in the computer science literature of this era (Barrow & Tenenbaum, 1975). Barrow and Tenenbaum’s (1975) review of the state of the art at that time concluded that image segmentation was a low-level interpretation that was guided by knowledge, and they argued that the more knowledge the better.
Barrow and Tenenbaum’s (1975) review described a New Look within computer vision:
Higher levels of perception could involve partitioning the picture into ‘meaningful’ regions, based on models of particular objects, classes of objects, likely events in the world, likely configurations, and even on nonvisual events. Vision might be viewed as a vast, multi-level optimization problem, involving a search for the best interpretation simultaneously over all levels of knowledge. (Barrow & Tenenbaum, 1975, p. 2)
However, around the same time a very different data-driven alternative to computer vision emerged (Waltz, 1975).
Waltz’s (1975) computer vision system was designed to assign labels to regions and line segments in a scene produced by drawing lines and shadows. “These labels describe the edge geometry, the connection or lack of connection between adjacent regions, the orientation of each region in three dimensions, and the nature of the illumination for each region” (p. 21). The goal of the program was to assign one and only one label to each part of a scene that could be labelled, except in cases where a human observer would find ambiguity.
Waltz (1975) found that extensive, general knowledge of the world was not required to assign labels. Instead, all that was required was a propagation of local constraints between neighbouring labels. That is, if two to-be-labelled segments were connected by a line, then the segments had to be assigned consistent labels. Two ends of a line segment could not be labelled in such a way that one end of the line would be given one interpretation and the other end a different interpretation that was incompatible with the first. Waltz found that this approach was very powerful and could be easily applied to novel scenes, because it did not depend on specialized, scene-specific knowledge. Instead, all that was required was a method to determine what labels were possible for any scene location, followed by a method for comparisons between possible labels, in order to choose unique and compatible labels for neighbouring locations.
The use of constraints to filter out incompatible labels is called relaxation labelling (Rosenfeld, Hummel, & Zucker, 1976); as constraints propagate through neighbouring locations in a representation, the representation moves into a stable, lower-energy state by removing unnecessary labels. The discussion of solving Sudoku problems in Chapter 7 illustrates an application of relaxation labelling. Relaxation labelling proved to be a viable data-driven approach to dealing with visual underdetermination.
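To make the logic of this kind of constraint filtering concrete, the sketch below implements a generic discrete relaxation step in Python. It is not Waltz's (1975) program; the locations, labels, and compatibility test are invented for illustration. A label survives at a location only if every neighbouring location still offers at least one compatible label.

```python
# Minimal sketch of discrete relaxation labelling (not Waltz's actual program).
# Each location starts with a set of candidate labels; a label is deleted when
# no compatible label remains at some neighbouring location. Iteration stops
# when no further labels can be removed.

def relax(candidates, neighbours, compatible):
    """candidates: dict location -> set of labels
       neighbours: dict location -> list of neighbouring locations
       compatible: function (label_a, label_b) -> bool"""
    changed = True
    while changed:
        changed = False
        for loc, labels in candidates.items():
            for label in list(labels):
                for other in neighbours[loc]:
                    # Keep the label only if some label at the neighbour supports it.
                    if not any(compatible(label, l2) for l2 in candidates[other]):
                        labels.discard(label)
                        changed = True
                        break
    return candidates

# Toy example: two connected locations; labels are only compatible with
# themselves, so the ambiguous location B is disambiguated by its neighbour A.
candidates = {"A": {"convex"}, "B": {"convex", "concave"}}
neighbours = {"A": ["B"], "B": ["A"]}
print(relax(candidates, neighbours, lambda a, b: a == b))
# {'A': {'convex'}, 'B': {'convex'}}
```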
Relaxation labelling was the leading edge of a broad perspective for understanding vision. This was the natural computation approach to vision (Hildreth, 1983; Marr, 1976, 1982; Marr & Hildreth, 1980; Marr & Nishihara, 1978; Marr, Palm, & Poggio, 1978; Marr & Poggio, 1979; Marr & Ullman, 1981; Richards, 1988; Ullman, 1979). Researchers who endorse the natural computation approach to vision use naïve realism to solve problems of underdetermination. They hypothesize that the visual world is intrinsically structured, and that some of this structure is true of any visual scene. They assume that a visual system that has evolved in such a structured world is able to take advantage of these visual properties to solve problems of underdetermination.
The properties of interest to natural computation researchers are called natural constraints. A natural constraint is a property of the visual world that is almost always true of any location in any scene. For example, a great many visual properties of three-dimensional scenes (depth, texture, color, shading, motion) vary smoothly. This means that two locations very near one another in a scene are very likely to have very similar values for any of these properties. Locations that are further apart will not be as likely to have similar values for these properties.
Natural constraints can be used to solve visual problems of underdetermination by imposing restrictions on scene interpretations. Natural constraints are properties that must be true of an interpretation of a visual scene. They can therefore be used to filter out interpretations consistent with the proximal stimulus but not consistent with the natural constraint. For example, an interpretation of a scene that violated the smoothness constraint, because its visual properties did not vary smoothly in the sense described earlier, could be automatically rejected and never experienced.
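One standard way of operationalizing such a constraint, offered here as a generic sketch borrowed from the regularization tradition rather than as a formula taken from this chapter, is to prefer the scene interpretation \(m\) that fits the image data \(d\) while penalizing abrupt spatial variation:

\[ E(m) = \sum_{i} (d_i - m_i)^2 + \lambda \sum_{i} (m_{i+1} - m_i)^2 \]

Interpretations whose properties vary abruptly make the second term large and are rejected in favour of interpretations with a lower total cost \(E(m)\); the weight \(\lambda\), which balances data fit against smoothness, is a free parameter of this sketch.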
The natural computation approach triumphed because it was able to identify a number of different natural constraints for solving a variety of visual problems of underdetermination (for many examples, see Marr, 1982). As in the scene labelling approach described above, the use of natural constraints did not require scene-specific knowledge. Natural computation researchers did not appeal to problem solving or inference, in contrast to the knowledge-based models of an earlier generation (Barrow & Tenenbaum, 1975; Tenenbaum & Barrow, 1977). This was because natural constraints could be exploited using data-driven algorithms, such as neural networks. For instance, one can exploit natural constraints for scene labelling by using processing units to represent potential labels and by defining natural constraints between labels using the connection weights between processors (Dawson, 1991). The dynamics of the signals sent through this network will turn on the units for labels consistent with the constraints and turn off all of the other units.
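A minimal sketch of such a labelling network appears below. It is not Dawson's (1991) model; it simply assumes one processing unit per candidate label, hand-set weights that are positive between mutually consistent labels and negative between rivals, and a small iterative update that lets the activity settle.

```python
import numpy as np

# Minimal sketch of relaxation in a labelling network (not Dawson's 1991 model).
# Units 0 and 1 are rival labels for one location; units 2 and 3 are rival
# labels for a neighbouring location. Consistent labels excite one another,
# rival labels at the same location inhibit one another.
W = np.array([
    [ 0.0, -1.0,  1.0, -1.0],
    [-1.0,  0.0, -1.0,  1.0],
    [ 1.0, -1.0,  0.0, -1.0],
    [-1.0,  1.0, -1.0,  0.0],
])

a = np.array([0.6, 0.4, 0.5, 0.5])        # initial support for each label
for _ in range(25):
    net = W @ a                            # signal each unit receives
    a = np.clip(a + 0.1 * net, 0.0, 1.0)   # nudge activity, keep it in [0, 1]

print(np.round(a, 2))  # prints approximately [1. 0. 1. 0.]: rivals are shut off
```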
In the context of the current discussion of the cognitive sciences, the natural computation approach to vision offers an interesting perspective on how a useful synthesis of divergent perspectives is possible. This is because the natural computation approach appeals to elements of classical, connectionist, and embodied cognitive science. Initially, the natural computation approach has strong classical characteristics. It views visual perception as a prototypical representational phenomenon, endorsing sense-think-act processing.
The study of vision must therefore include not only the study of how to extract from images the various aspects of the world that are useful to us, but also an inquiry into the nature of the internal representations by which we capture this information and thus make it available as a basis for decisions about our thoughts and actions. (Marr, 1982, p. 3)
Marr’s theory of early vision proposed a series of different kinds of representations of visual information, beginning with the raw primal sketch and ending with the 2½-D sketch that represented the three-dimensional locations of all visible points and surfaces.
However representational it is, though, the natural computation approach is certainly not limited to the study of what Norman (1980) called the pure cognitive system. For instance, unlike New Look theories of human perception, natural computation theories paid serious attention to the structure of the world. Indeed, natural constraints are not psychological properties, but are instead properties of the world. They are not identified by performing perceptual experiments, but are instead discovered by careful mathematical analyses of physical structures and their optical projections onto images. “The major task of Natural Computation is a formal analysis and demonstration of how unique and correct interpretations can be inferred from sensory data by exploiting lawful properties of the natural world” (Richards, 1988, p. 3). The naïve realism of the natural computation approach forced it to pay careful attention to the structure of the world.
In this sense, the natural computation approach resembles a cornerstone of embodied cognitive science, Gibson’s (1966, 1979) ecological theory of perception. Marr (1982) himself saw parallels between his natural computation approach and Gibson’s theory, but felt that natural computation addressed some flaws in ecological theory. Marr’s criticism was that Gibson rejected the need for representation, because Gibson underestimated the complexity of detecting invariants: “Visual information processing is actually very complicated, and Gibson was not the only thinker who was misled by the apparent simplicity of the act of seeing” (p. 30). In Marr’s view, detecting visual invariants required exploiting natural constraints to build representations from which invariants could be detected and used. For instance, detecting the invariants available in a key Gibsonian concept, the optic flow field, requires applying smoothness constraints to local representations of detected motion (Hildreth, 1983; Marr, 1982).
Strong parallels also exist between the natural computation approach and connectionist cognitive science, because natural computation researchers were highly motivated to develop computer simulations that were biologically plausible. That is, the ultimate goal of a natural computation theory was to provide computational, algorithmic, and implementational accounts of a visual process. The requirement that a visual algorithm be biologically implementable results in a preference for parallel, co-operative algorithms that permit local constraints to be propagated through a network. As a result, most natural computation theories can be translated into connectionist networks.
How is it possible for the natural computation approach to endorse elements of each school of thought in cognitive science? In general, this synthesis of ideas is the result of a very pragmatic view of visual processing. Natural computation researchers recognize that “pure” theories of vision will be incomplete. For instance, Marr (1982) argued that vision must be representational in nature. However, he also noted that these representations are impossible to understand without paying serious attention to the structure of the external world.
Similarly, Marr’s (1982) book, Vision, is a testament to the extent of visual interpretation that can be achieved by data-driven processing. However, data-driven processes cannot deliver a complete visual interpretation. At some point—when, for instance, the 2½-D sketch is linked to a semantic category—higher-order cognitive processing must be invoked. This openness to different kinds of processing is why a natural computation researcher such as Shimon Ullman can provide groundbreaking work on an early vision task such as computing motion correspondence matches (1979) and also be a pioneer in the study of higher-order processes of visual cognition (1984, 2000).
The search for biologically plausible algorithms is another example of the pragmatism of the natural computation approach. Classical theories of cognition have been criticized as being developed in a biological vacuum (Clark, 1989). In contrast, natural computation theories have no concern about eliminating low-level biological accounts from their theories. Instead, the neuroscience of vision is used to inform natural computation algorithms, and computational accounts of visual processing are used to provide alternative interpretations of the functions of visual neurons. For instance, it was only because of his computational analysis of the requirements of edge detection that Marr (1982) was able to propose that the centre-surround cells of the lateral geniculate nucleus were convolving images with difference-of-Gaussian filters.
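The difference-of-Gaussians idea itself is easy to state in code. The sketch below is a generic illustration rather than Marr's implementation; it assumes SciPy's gaussian_filter and a synthetic test image containing a single luminance step. The image is blurred at two scales, the blurred versions are subtracted, and edges are read off as the zero-crossings of the result.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Generic difference-of-Gaussians (DoG) sketch; not Marr's own implementation.
# A synthetic image with a vertical luminance step stands in for real input.
image = np.zeros((64, 64))
image[:, 32:] = 1.0

# Blur at two scales and subtract: the result approximates convolution with a
# centre-surround, Laplacian-of-Gaussian-like operator.
dog = gaussian_filter(image, sigma=1.0) - gaussian_filter(image, sigma=1.6)

# Edges are signalled by zero-crossings of the DoG response: adjacent pixels
# whose filtered values change sign by a non-negligible amount.
sign_change = np.sign(dog[:, :-1]) != np.sign(dog[:, 1:])
big_enough = np.abs(dog[:, :-1] - dog[:, 1:]) > 1e-3
edges = sign_change & big_enough

print(np.where(edges[32])[0])  # reports a single column right at the luminance step
```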
The pragmatic openness of natural computation researchers to elements of the different approaches to cognitive science seems to markedly contrast with the apparent competition that seems to characterize modern cognitive science (Norman, 1993). One account of this competition might be to view it as a conflict between scientific paradigms (Kuhn, 1970). From this perspective, some antagonism between perspectives is necessary, because newer paradigms are attempting to show how they are capable of replacing the old and of solving problems beyond the grasp of the established framework. If one believes that they are engaged in such an endeavour, then a fervent and explicit rejection of including any of the old paradigm within the new is to be expected.
According to Kuhn (1970), a new paradigm will not emerge unless a crisis has arisen in the old approach. Some may argue that this is exactly the case for classical cognitive science, whose crises have been identified by its critics (Dreyfus, 1972, 1992), and which have led to the new connectionist and embodied paradigms. However, it is more likely that it is premature for paradigms of cognitive science to be battling one another, because cognitive science may very well be pre-paradigmatic, in search of a unifying body of belief that has not yet been achieved.
The position outlined in Chapter 7, that it is difficult to identify a set of core tenets that distinguish classical cognitive science from the connectionist and the embodied approaches, supports this view. Such a view is also supported by the existence of approaches that draw on the different “paradigms” of cognitive science, such as the theory of seeing and visualizing (Pylyshyn, 2003c, 2007) discussed in Chapter 8, and the natural computation theory of vision. If cognitive science were not pre-paradigmatic, then it should be easy to distinguish its different paradigms, and theories that draw from different paradigms should be impossible.
If cognitive science is pre-paradigmatic, then it is in the process of identifying its core research questions, and it is still deciding upon the technical requirements that must be true of its theories. My suspicion is that a mature cognitive science will develop that draws on core elements of all three approaches that have been studied. Cognitive science is still in a position to heed the call of a broadened cognitivism (Miller, Galanter, & Pribram, 1960; Norman, 1980). In order to do so, rather than viewing its current approaches as competing paradigms, it would be better served by adopting the pragmatic approach of natural computation and exploiting the advantages offered by all three approaches to cognitive phenomena.
Modern experimental psychology arose around 1860 (Fechner, 1966), and more than a century and a half later is viewed by many as still being an immature, pre-paradigmatic discipline (Buss, 1978; Leahey, 1992). The diversity of its schools of thought and the breadth of topics that it studies are a testament to experimental psychology’s youth as a science. "In the early stages of the development of any science different men confronting the same range of phenomena, but not usually all the same particular phenomena, describe and interpret them in different ways" (Kuhn, 1970, p. 17).
Cognitive science was born in 1956 (Miller, 2003). Because it is about a century younger than experimental psychology, it would not be surprising to discover that cognitive science is also pre-paradigmatic. This might explain the variety of opinions about the nature of cognition, introduced earlier as the competing elements of classical, connectionist, and embodied cognitive science. “The pre-paradigm period, in particular, is regularly marked by frequent and deep debates over legitimate methods, problems, and standards of solution, though these serve rather to define schools than produce agreement” (Kuhn, 1970, pp. 47–48).
The current state of cognitive science defines an as yet incomplete dialectic. Competition amongst classical, connectionist, and embodied cognitive science reflects existing tensions between thesis and antithesis. What is missing is a state of synthesis in which cognitive science integrates key ideas from its competing schools of thought. This integration is necessary, because it is unlikely that, for instance, a classical characterization of the pure cognitive system will provide a complete explanation of cognition (Miller, Galanter, & Pribram, 1960; Neisser, 1976; Norman, 1980).
In the latter chapters of the current book, several lines of evidence are presented to suggest that synthesis within cognitive science is possible. First, it is extremely difficult to find marks of the classical, that is, characteristics that uniquely distinguish classical cognitive science from either the connectionist or embodied approaches. For instance, classical cognitive science was inspired by the digital computer, but a variety of digital computers incorporated processes consistent with connectionism (such as parallel processing) and with embodied cognitive science (such as external representations).
A second line of evidence is that there is a high degree of methodological similarity between the three approaches. In particular, each school of cognitive science can be characterized as exploring four different levels of investigation: computational, algorithmic, architectural, and implementational. We saw in Chapter 6 that the different approaches have disagreements about the technical details within each level. Nevertheless, all four levels are investigated by all three approaches within cognitive science. Furthermore, when different approaches are compared at each level, strong similarities can be identified. This is why, for instance, it has been claimed that the distinction between classical and connectionist cognitive science is blurred (Dawson, 1998).
A third line of evidence accounts for the methodological similarity amongst the different approaches: cognitive scientists from different schools of thought share many core assumptions. Though they may disagree about its technical details, all cognitive scientists view cognition as a form of information processing. For instance, each of the three schools of thought appeals to the notion of representation, while at the same time debating its nature. Are representations symbols, distributed patterns, or external artifacts? All cognitive scientists have rejected Cartesian dualism and are seeking materialist explanations of cognition.
More generally, all three approaches in cognitive science agree that cognition involves interactions between the world and states of agents. This is why a pioneer of classical cognitive science can make the following embodied claim: “A man, viewed as a behaving system, is quite simple. The apparent complexity of his behavior over time is largely a reflection of the complexity of the environment in which he finds himself” (Simon, 1969, p. 25). However, it is again fair to say that the contributions of world, body, and mind receive different degrees of emphasis within the three approaches to cognitive science. We saw earlier that production system pioneers admitted that they emphasized internal planning and neglected perception and action (Anderson et al., 2004; Newell, 1990). Only recently have they turned to including sensing and acting in their models (Kieras & Meyer, 1997; Meyer et al., 2001; Meyer & Kieras, 1997a, 1997b, 1999; Meyer et al., 1995). Even so, they are still very reluctant to include sense-act processing—links between sensing and acting that are not mediated by internal representations—to their sense-think-act production systems (Dawson, Dupuis, & Wilson, 2010).
A fourth line of evidence is the existence of hybrid theories, such as natural computation (Marr, 1982) or Pylyshyn’s (2003) account of visual cognition. These theories explicitly draw upon concepts from each approach to cognitive science. Hybrid theories are only possible when there is at least tacit recognition that each school of thought within cognitive science has important, co-operative contributions to make. Furthermore, the existence of such theories completely depends upon the need for such co-operation: no one school of thought provides a sufficient explanation of cognition, but each is a necessary component of such an explanation.
It is one thing to note the possibility of a synthesis in cognitive science. It is quite another to point the way to bringing such a synthesis into being. One required component, discussed earlier in this chapter, is being open to the possible contributions of the different schools of thought, an openness demonstrated by the pragmatic and interdisciplinary natural computation theory of perception.
A second component, which is the topic of this final section of the book, is being open to a methodological perspective that pervaded early cognitive science and its immediate ancestors, but which has become less favored in more recent times. Synthesis in cognitive science may require a return, at least in part, to the practice of synthetic psychology.
Present-day cognitive science for the most part employs analytic, and not synthetic, methodological practices. That is, most cognitive scientists are in the business of carrying out reverse engineering (Dennett, 1998). They start with a complete, pre-existing cognitive agent. They then observe its behavior, not to mention how the behavior is affected by various experimental manipulations. The results of these observations are frequently used to create theories in the form of computer simulations (Newell & Simon, 1961). For instance, Newell and Simon (1972) collected data in the form of verbal protocols, and then used these protocols to derive working production systems. In other words, when analytic methodologies are used, the collection of data precedes the creation of a model.
The analytic nature of most cognitive science is reflected in its primary methodology, functional analysis, a prototypical example of reverse engineering (Cummins, 1975, 1983). Functional analysis dictates a top-down decomposition from the broad and abstract (i.e., computational specification of functions) to the narrower and more concrete (i.e., architecture and implementation).
Even the natural computation approach in vision endorsed a top-down analytic approach, moving from computational to implementational analyses instead of in the opposite direction. This was because higher-level analyses were used to guide interpretations of the lower levels.
In order to understand why the receptive fields are as they are—why they are circularly symmetrical and why their excitatory and inhibitory regions have characteristic shapes and distributions—we have to know a little of the theory of differential operators, band-pass channels, and the mathematics of the uncertainty principle. (Marr, 1982, p. 28)
An alternative approach is synthetic, not analytic; it is bottom-up instead of top-down; and it applies forward engineering instead of reverse engineering. This approach has been called synthetic psychology (Braitenberg, 1984). In synthetic psychology, one takes a set of primitive building blocks of interest and creates a working system from them. The behavior of this system is observed in order to determine what surprising phenomena might emerge from simple components, particularly when they are embedded in an interesting or complex environment. As a result, in synthetic psychology, models precede data, because they are the source of data.
The forward engineering that characterizes synthetic psychology proceeds as a bottom-up construction (and later exploration) of a cognitive model. Braitenberg (1984) argued that this approach would produce simpler theories than those produced by analytic methodologies, because analytic models fail to recognize the influence of the environment, falling prey to what is known as the frame of reference problem (Pfeifer & Scheier, 1999). Also, analytic techniques have only indirect access to internal components, in contrast to the complete knowledge of such structures that is possessed by a synthetic designer.
It is pleasurable and easy to create little machines that do certain tricks. It is also quite easy to observe the full repertoire of behavior of these machines—even if it goes beyond what we had originally planned, as it often does. But it is much more difficult to start from the outside and try to guess internal structure just from the observation of the data. (Braitenberg, 1984, p. 20)
Although Braitenberg proposed forward engineering as a novel methodology in 1984, it had been widely practiced by cyberneticists beginning in the late 1940s. For instance, the original autonomous robots, Grey Walter’s (1950a, 1950b, 1951, 1963) Tortoises, were created to observe whether complex behavior would be supported by a small set of cybernetic principles. Ashby’s (1956, 1960) Homeostat was created to study feedback relationships between simple machines; after it was constructed, Ashby observed that this device demonstrated interesting and complicated adaptive relationships to a variety of environments. This kind of forward engineering is currently prevalent in one modern field that has inspired embodied cognitive science, behavior-based robotics (Brooks, 1999; Pfeifer & Scheier, 1999; Sharkey, 2006).
Forward engineering is not limited to the creation of autonomous robots. It has been argued that the synthetic approach characterizes a good deal of connectionism (Dawson, 2004). The thrust of this argument is that the building blocks being used are the components of a particular connectionist architecture. These are put together into a working system whose behavior can then be explored. In the connectionist case, the synthesis of a working network involves using a training environment to modify a network by applying a general learning rule.
Classical cognitive science is arguably the most commonly practiced form of cognitive science, and it is also far less likely to adopt synthetic methodologies. However, this does not mean that classical cognitive scientists have not usefully employed forward engineering. One prominent example is in the use of production systems to study human problem solving (Newell & Simon, 1972). Clearly the analysis of verbal protocols provided a set of potential productions to include in a model. However, this was followed by a highly synthetic phase of model development.
This synthetic phase proceeded as follows: Newell and Simon (1972) used verbal protocols to rank the various productions available in terms of their overall usage. They then began by creating a production system model that was composed of only a single production, the one most used. The performance of this simple system was then compared to the human protocol. The next step was to create a new production system by adding the next most used production to the original model, and examining the behavior of the new two-production system. This process would continue, usually revealing better performance of the model (i.e., a better fit to human data) as the model was elaborated by adding each new production.
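The spirit of this incremental procedure can be conveyed with a toy sketch. The code below is not Newell and Simon's model; the working-memory tokens and the two productions are hypothetical, chosen only to show how a model grows as productions are appended in order of their ranked usage.

```python
# Toy sketch of incremental production-system construction; the productions and
# working-memory tokens are invented for illustration, not taken from Newell
# and Simon's protocols.

def run(productions, memory, max_cycles=10):
    """Repeatedly fire the first production whose condition matches memory."""
    trace = []
    for _ in range(max_cycles):
        for name, condition, action in productions:
            if condition(memory):
                action(memory)
                trace.append(name)
                break
        else:
            break  # no production matched: halt
    return trace

# Productions ranked by how often the verbal protocol suggested they were used.
P1 = ("set-subgoal",
      lambda m: "goal" in m and "subgoal" not in m,
      lambda m: m.add("subgoal"))
P2 = ("apply-operator",
      lambda m: "subgoal" in m and "solved" not in m,
      lambda m: m.add("solved"))

# Model 1: only the most-used production.  Model 2: the top two productions.
print(run([P1], {"goal"}))      # ['set-subgoal']
print(run([P1, P2], {"goal"}))  # ['set-subgoal', 'apply-operator']
```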
Forward engineering, in all of the examples alluded to above, provides a systematic exploration of what an architecture can produce “for free.” That is, it is not used to create a model that fits a particular set of data. Instead, it is used to show how much surprising and complex behavior can be generated from a simple set of components—particularly when that architecture is embedded in an interesting environment. It is used to explore the limits of a system—how many unexpected complexities appear in its behavior? What behaviors are still beyond the system’s capability? While reverse engineering encourages the derivation of a model constrained by data, forward engineering is concerned with a much more liberating process of model design. “Only about 1 in 20 [students] ‘gets it’—that is, the idea of thinking about psychological problems by inventing mechanisms for them and then trying to see what they can and cannot do” (Minsky, 1995, personal communication).
The liberating aspect of forward engineering is illustrated in the development of the LEGO robot AntiSLAM (Dawson, Dupuis, & Wilson, 2010). Originally, this robot was created as a sonar-based version of one of Braitenberg’s (1984) simple thought experiments, Vehicle 2. Vehicle 2 used two light sensors to control the speeds of two separate motors and generated photophobic or photophilic behavior depending upon its wiring. We replaced the light sensors with two sonar sensors, which itself was a departure from convention, because the standard view was that the two sensors would interfere with one another (Boogaarts, 2007). However, we found that the robot generated nimble behaviors and effortlessly navigated around many different kinds of obstacles at top speed. A slight tweak of the robot’s architecture caused it to follow along a wall on its right. We then realized that if the environment for the robot became a reorientation arena, then it would generate rotational error. The forward engineering of this very simple robot resulted in our discovery that it generated navigational regularities “for free.”
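The control scheme at issue can be sketched in a few lines. The code below is a generic Braitenberg-style illustration, not the actual AntiSLAM program; it models only the sensor-to-motor wiring, with a flag standing in for whether the connections run to the same side or cross over.

```python
# Generic sketch of Braitenberg's Vehicle 2 wiring (not the AntiSLAM code).
# Each of two sensors drives one of two motors; whether the connections stay
# on the same side or cross over determines whether the vehicle veers away
# from a stimulus or turns toward it.

def vehicle2(left_sensor, right_sensor, crossed=False, gain=1.0):
    """Return (left_motor, right_motor) speeds from two sensor readings."""
    if crossed:
        left_motor, right_motor = gain * right_sensor, gain * left_sensor
    else:
        left_motor, right_motor = gain * left_sensor, gain * right_sensor
    return left_motor, right_motor

# A stimulus on the left (stronger left reading):
print(vehicle2(0.9, 0.2, crossed=False))  # (0.9, 0.2): left wheel faster, turns away
print(vehicle2(0.9, 0.2, crossed=True))   # (0.2, 0.9): right wheel faster, turns toward
```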
The appeal of forward engineering, though, is not just the discovery of unexpected behavior. It is also appealing because it leads to the discovery of an architecture’s limits. Not only do you explore what a system can do, but you discover its failures as well. It has been argued that in the analytic tradition, failures often lead to abandoning a model (Dawson, 2004), because failures amount to an inability to fit a desired set of data. In the synthetic approach, which is not driven by data fitting, failures lead to tinkering with the architecture, usually by adding new capabilities to it (Brooks, 1999, 2002). The synthetic design of cognitive models is a prototypical instance of bricolage (Dawson, Dupuis, & Wilson, 2010; Turkle, 1995).
For instance, while the early version of AntiSLAM (Dawson, Dupuis, & Wilson, 2010) produced rotational error, it could not process competing geometric and local cues, because it had no capability of detecting local cues. After realizing that the robot was capable of reorientation, this issue was solved by adding a light sensor to the existing architecture, so that a corner’s brightness could serve as a rudimentary feature. The robot is still inadequate, though, because it does not learn. We are currently exploring how this problem might be solved by adding a modifiable connectionist network to map relations between sensors and motors. Note that this approach requires moving beyond a pure embodied account and taking advantage of connectionist concepts.
In my opinion, it is the limitations inevitably encountered by forward engineers that will provide incentive for a cognitive synthesis. Consider the strong anti-representational positions of radical embodied cognitive scientists (Chemero, 2009; Noë, 2004). It is certainly astonishing to see how much interesting behaviour can be generated by systems with limited internal representations. But how much of cognition can be explained in a data-driven, anti-representational manner before researchers have to appeal to representations? For instance, is a radical embodied cognitive science of natural language possible? If embodied cognitive scientists take their theories to their limits, and are then open—as are natural computation researchers—to classical or connectionist concepts, then an interesting and productive cognitive synthesis is inevitable. That some embodied researchers (Clark, 1997) have long been open to a synthesis between embodied and classical ideas is an encouraging sign.
Similarly, radical connectionist researchers have argued that a great deal of cognition can be accomplished without the need for explicit symbols and explicit rules (Rumelhart & McClelland, 1986a; Smolensky, 1988). Classical researchers have acknowledged the incredible range of phenomena that have yielded to the fairly simple PDP architecture (Fodor & Pylyshyn, 1988). But, again, how much can connectionists explain from a pure PDP perspective, and what phenomena will elude their grasp, demanding that classical ideas be reintroduced? Might it be possible to treat networks as dynamic symbols, and then manipulate them with external rules that are different from the learning rules that are usually applied? Once again, recent ideas seem open to co-operative use of connectionist and classical ideas (Smolensky & Legendre, 2006).
The synthetic approach provides a route that takes a cognitive scientist to the limits of their theoretical perspective. This in turn will produce a theoretical tension that will likely only be resolved when core elements of alternative perspectives are seriously considered. Note that such a resolution will require a theorist to be open to admitting different kinds of ideas. Rather than trying to show that their architecture can do everything cognitive, researchers need to find what their architectures cannot do, and then expand their theories by including elements of alternative, possibly radically different, views of cognition.
This is not to say that the synthetic approach is the only methodology to be used. Synthetic methods have their own limitations, and a complete cognitive science requires interplay between synthesis and analysis (Dawson, 2004). In particular, cognitive science ultimately is in the business of explaining the cognition of biological agents. To do so, its models—including those developed via forward engineering—must be validated. Validating a theory requires the traditional practices of the analytic approach, seeking equivalencies between computations, algorithms, and architectures. It is hard to imagine such validation not proceeding by adopting analytic methods that provide relative complexity, error, and intermediate state evidence. It is also hard to imagine that a complete exploration of a putative cognitive architecture will not exploit analytic evidence from the neurosciences.
Indeed, the inability to use analytic evidence to validate a “pure” model from one school of thought may be the primary motivation to consider alternative perspectives, fueling a true synthesis within cognitive science. According to Kuhn (1970), paradigms are born by discovering anomalies. The analytic techniques of cognitive science are well equipped to discover such problems. What is then required for synthesis is a willingness amongst cognitive scientists to admit that competing views of cognition might be co-operatively applied in order to resolve anomalies. | textbooks/socialsci/Psychology/Cognitive_Psychology/Mind_Body_World_-_Foundations_of_Cognitive_Science_(Dawson)/09%3A_Towards_a_Cognitive_Dialectic/9.05%3A_A_Cognitive_Synthesis.txt
The concept of community building has no universal definition. This may be more positive than negative, because the absence of a universal definition keeps the conception of community fluid, which we believe speaks more accurately to its diverse complexities. To give some idea of what we mean by community building, we (the editors) begin this overview of the first three chapters with a look at how we conceptualize community. Community can be geographical or virtual, or it can be a group of people with something in common, such as a community of community practitioners. Importantly, we see community overall as a way of being and doing, of navigating life with others. Community building, then, means actively seeking ways and resources to bring forward co-liberation. This might mean we find ourselves building community by holding space for difference. For example, as editors of this textbook, we are community psychologists working in academia and the community, an industrial/organizational psychologist, and an instructional designer. Our commonality is our passion for social and racial justice. We believe that this passion also shows up in the case stories you will find in this section.
The first case story, Cultural Development in Underrepresented Communities: Using an Empowerment and Citizen Participation Framework, is offered by Dr. Jacqueline Samuel. Dr. Samuel’s work illustrates community building alongside those who seek to use art and culture as viable avenues for strengthening community. Students interested in arts and culture, and in the health and well-being of communities of all types, will find a plethora of information to guide their journey, as using the arts to inform and reclaim power is not at all a far-reaching idea.
The second case story in this section is Dr. Amber Kelly’s and Dr. Kathleen McAuliff’s (2021) work alongside members of the disability community, seeking to advance inclusivity in all facets of community life. Dare2Dialogue brings awareness to the negative stigma associated with disability and to ways of addressing these challenges through dialogue and by centering and raising the voices of people with disabilities.
The third case story in this section is the work of Dr. August Hoffman. Dr. Hoffman’s passion is visible as he comes alongside a student who uses green space to promote communal experiences while completing an Individualized Studies program with an emphasis on Gardening Development at Metropolitan State University. The case story moves the reader through the student’s experiences growing healthy foods for the community as part of their education, with the ultimate goal of helping children understand the benefits of healthier eating and the actual origins of the foods they consume.
1.02: Cultural Development in Underrepresented Communities
This case story illustrates the significance of using the arts in community development work, which speaks not only to aesthetics but also to how we can heal communities and promote well-being.
The Big Picture
In the late 1960s, Chicago Black Theater companies began to grow during a time called the Black Arts Movement. Several Black women, all iconic by today’s standards, led the charge. First, there was the mother of Black Theater, Val Gray Ward, who gave us Kuumba Theatre, where I became a member in the early ’80s, and who mentored many. Abena Joan Brown, a former social worker and dancer, founded the Ebony Talent Agency, which later became the eta Creative Arts Foundation Theater. Then came Jackie Taylor, founder of the Black Ensemble Theater (BET) Company, whose mission was, and still is today, to eradicate racism. All of these women participated in the Black Arts Movement, along with Illinois Poet Laureate Gwendolyn Brooks and Margaret Taylor–Burroughs, founder of the South Side Community Art Center (home of Black visual arts) and of the Du Sable Museum of African American Art. These Black women were trailblazers and pre-community psychologists in their own right. If you were a performing, visual, or musical artist in Chicago between the 1960s and 2005, at some point you were touched by their grace and wisdom. The Black Arts Movement gave Black people a sense of community, pride, and voice by creating a space to tell our stories through the arts and cultural experiences. It was a holistic approach, led by several daring Black women with the intentionality to use the arts as a tool for liberation.
This experience motivated my work as a performing artist, a curator of multidisciplinary arts, and an arts consultant. By 2004, I was working in three different communities: Albany Park (north side of the city), Humboldt Park (northwest side of the city), and South Chicago (the far south side of the city), all culturally rich and diverse communities, yet each struggling in its own way to sustain its arts community. This is where I learned that I had to know the history of each community. What I found was:
• Albany Park is a cultural mecca, a trip around the world in one visit. It is a place where the average school has students that speak at least 50 different languages. The community is so physically dense that finding space for cultural activities and artistic expression was difficult.
• Humboldt Park is the home of a large Puerto Rican community that was gentrified out of their original homes in Lincoln Park, an affluent neighborhood near the lakefront. Humboldt Park is art activism at its best: a community where grassroots artists use the arts for social justice practices. They are determined never to be moved again, as the sculpture of the Puerto Rican flag hovers over the street and marks the gateway into the community. Their challenges are sustainable funding and competition with larger arts organizations in the area.
• South Chicago is a proud industrial community, known for its economic heyday before the steel mills shut down. It is tucked under a toll bridge called the Skyway, a visual marker that indicates your geographic location in the city of Chicago. South Chicago is 75% African American and 36% Mexican. At the time, plenty of artists lived there, but they needed to be reclaimed, invited to come home and work in their own community. They also lacked resources for art-making.
In 2007, I returned to South Chicago in a new role. I became the intermediary with an arts background and the responsibility to support the implementation of a Quality of Life Plan (QLP) written by the residents and stakeholders of South Chicago. The QLP consisted of projects and program strategies that responded to social conditions that were a concern of the community. South Chicago identified economic development, safety, education, environment, food access, health, housing, youth development, and the arts as their strategic focus.
The challenge was to integrate the arts into community development practices. This meant bringing city government, residents, and other stakeholders to the table with the arts community. The project was complex because arts and cultural exchanges had never been used strategically in this fashion in Chicago. Working with a small team of a funder, an artist, and a scribe, I knew that we were too small to reach the masses and would have to rely on a snowball effect for this concept to work. In each community, we were met with some form of skepticism. I learned that we had to wait to be invited into the community. When we allowed time for relationship building, we were eventually invited into each community. We engaged in cultural exchanges, tours, and educational and reflective conversations as we broke bread in order to find our commonalities and deepen our relationships. Blending arts/culture and bricks/mortar is not an easy task when neither group has experience working with the other. They see their worlds in very different ways. However, the QLP provided the template for action, and my past experience gave me the core knowledge I needed to engage both groups. What I did not realize was that I had more to learn.
Cultural Development and Prompting Creative Responses
At the beginning of this work, my colleagues and I spent many hours in meetings discussing issues, reflecting on history, looking at data, mapping where issues occurred, and discussing root causes, but we were not getting anywhere. These discussions were important and useful in understanding the issues, but they never seemed to change anything. It felt like we were stuck on information, not knowing what to do with it or how to move forward. We worked on collective behavioral and policy changes, but matters seemed more complicated when it came to social conditions deeply rooted in the community’s culture, such as violence, mental health, and the needs of Black women, who historically have been underrepresented. There were some wins in de-escalating gang violence, but domestic violence and its impact on women of color in the community was much more complex. Domestic violence seemed to be a silent ill of society, difficult to detect and prevent without witness support.
Mental health is similar, especially in communities of color, where it is rarely addressed because of stigma. However, when we looked at issues impacting the community through the lens of art and culture, there always seemed to be a stronger level of engagement from the residents. So we asked the question: how can we better engage the community to prompt quality responses that promote social change? The answer we were looking for was discovered after attending a conference on the facilitation of community meetings, where I was introduced to the Art of Hosting (AoH). AoH is the practice of using different methodologies of discourse with groups of any size, based on the context of the gathering. It is supported by principles that help maximize collective intelligence while being inclusive of diversity, in addition to minimizing and transforming conflict. When utilizing AoH, the result is collective clarity, wiser actions, and sustainable, workable solutions to the most complex problems. The approach ensures that stakeholders buy into the process because their participation in the design of the process is by definition transparent. This led me to change how I hosted the community as a community psychologist. To foster solutions, we followed the same processes as before: together we would educate ourselves about the issue.
I hosted café-style conversations where we would break into small groups and respond to questions related to the social condition. We took the time to reflect, and this method helped give voice to all who participated. We were becoming active listeners. In addition, it increased our collective efficacy. Sitting in a circle erased the hierarchy and reduced conflicts and high expectations. We invited everyone who wanted to have a seat at the table: residents, elected officials, business owners, youth, law enforcement, faith-based leaders, and artists. We were no longer limited by our titles. Using a cultural development lens gave the artists a context for how the social condition impacted the community; everyone became more transparent about their feelings, and they let go of their personal agendas. These community conversations changed my trajectory and experience of community psychology. Using AoH allowed me to bring my humility and willingness to experience what communities have to offer. We learned and healed together. It was restorative and engaging; it broke barriers and unleashed our truths. Now we were ready for change, using AoH to inform our use of the arts for healing-centered engagement. The term healing centered expands how we think about responses to trauma and offers a more holistic approach to fostering well-being (Ginwright, 2018). By using this approach, participants can use the arts as a tool to collectively address their trauma and restore their well-being.
Arts and Healing-Centered Engagement: Promoting Well-being Among Black Women
After one of our AoH café sessions, I recall several Black women expressing feelings of a lack of belonging in the community. They talked about feeling like outcasts because the housing complex where they lived was riddled with violence and trouble, and they felt that it reflected poorly on them and their families. I knew at that point I needed to connect them to something that would make them feel valued. The idea was to reduce and ease the impact of historical trauma by creating space through a healing-centered experience that encourages collective voice, celebrates shared identity, and builds a sense of community through belonging.
Historically, Black women have carried the weight of the family unit by taking on the challenges of caregiving. Without trusted support systems, Black women often forgo self-care. When self-care is ignored, so is overall health: lack of self-care increases stress and leads to chronic health conditions that impair Black women’s well-being. Therefore, participating in shared cultural engagement can play a significant role in defining good health and supporting one’s well-being, resilience, and healing. Experiences such as family connections, expression through spirituality or music, reliance on community networks, and the church can be great sources of strength and support (NAMI, 2020). Integrating the arts is a way of achieving these wellness goals. I thought about putting on a play, but as much as I love theater, I knew that it would take far too long with the planning, auditions, rehearsals, staging, and finding the right play and location. It would take a considerable amount of time and money to pull off, but I wanted these women to be a part of something. We needed a quick win: something just as powerful but more spontaneous, celebratory, and memorable. I felt it would be better to draw on another aspect of theater. I was thinking of a “spectacle,” something that would happen one time only and whose process would unite us all through the effort of participating in it.
Responding to the Needs of Black Women
Assets
I believe the voices of Black women are powerful assets when used collectively. This assumption is supported by the MeToo, GirlTrek, and Black Lives Matter movements. Even politicians recognized Black women’s power during the most recent election. The challenge is to center their voices universally. Black women today demonstrate that the road to their well-being is traveled by holding space and holistically sharing their hidden stories. These stories define the commonality and range of their lived experiences. When Black women gather in supportive settings, it creates a safe space away from the atrocities of abuse and assaults against their mental and physical health and stability. These gatherings occur in meaningful ways. Some might return to the ancient and traditional practices of their ancestors, such as hosting circles, Bible study, or something of a spiritual nature. They also use their creativity through cooking, art-making, writing, music, and performance. They form groups such as book clubs, sororities, or auxiliaries. They retain their childhood girlfriends by extending those relationships through social settings such as dinners or travel. The impact of COVID-19 has also increased gatherings through social media, for example, virtual panel discussions or talks, in addition to live internet events. Whichever way Black women come together, it provides them with a moment to collectively cleanse themselves of their griefs and sorrows, in addition to celebrating their shared identity and experiences.
Needs
A core need suggested for Black women is relief from Racial Battle Fatigue (RBF) in order to improve mental health. RBF comes from the impact of daily battles to deflect racism, stereotypes, and discrimination, and from the necessity of always being on guard or wary of the next attack one may face. Caregiving, work, and maintaining the household add further layers to this toxic stress. The result can be suppressed immunity and an increase in sickness that produces multiple ailments, from tension headaches to elevated blood pressure, among others (Goodwin, 2018). The challenge is that these gateway maladies, left untreated over time, can eventually lead to chronic illnesses. Although healthcare professionals have recognized how COVID-19 has added another layer to health disparities, particularly within communities of color, it is yet to be seen what actions will be taken to promote health equity. In the meantime, community-based healing-centered engagement can be an essential component of inspiring collective self-care experiences. So now the stage was set. We hoped to improve the way the community communicated and implemented the QLP strategies. We needed to address the feeling of exclusion from the community expressed by the Black women who participated in our gatherings. Now our goal was to find the right event.
When Black Women Gather: Gele’ Day
When Black women struggle to accept their body image, it is also a result of RBF. Black women are judged by the color of their skin, the size of their lips and hips, and often by the way they wear their hair. The irony is that it is acceptable for others to artificially create these features through tanning lotions, Botox, and fashion statements such as the bustle of the 19th century or modern-day buttocks injections. However, for Black women, even how they wear their hair can potentially interfere with keeping their job. The beauty of the Black woman has always been in question. Over the last decade, there has been a more concerted effort to acknowledge the beauty of Black women in commercials. Actress Pilar Audain was featured in a Dove/Walmart commercial in which she walks down the street singing a song, adorned in a beautiful head-wrap. Thus was born Pilar’s “Wrap Your beYOUty Movement.”
One day, after I had hosted a community meeting about trauma-informed care, Pilar shared with me her desire to host an event in the parks called Gele’ Day, a day to celebrate the beauty of Black women. When Pilar explained the event, I was immediately sold. I had the resources to produce the project, and Pilar knew the perfect location, a lakeside park on the south side of Chicago. Pilar shared how Gele’ Day had been done before on a smaller scale. All that was needed was a location for women of color to gather; we would meet in the park and symbolically celebrate womanhood through activities and by providing beautiful fabrics to wrap around each other’s heads. The adornment was called a Gele’. According to Pilar, Gele’ Day is another way of saying that you are “atoning your spirit.” There is also another custom out of Africa called the mass dancers, who come out to honor a secret society, the mother spirit of the universe. Taking on this idea, Pilar created Gele’ Day for Black women in Chicago.
Black women vendors are featured at these events and sell their wares while other Black women provide performances, meditations, testimonials, prayer, and African dance exercises. Men are also welcome but mostly serve in a supportive role. At the end of the day, the women gather and rhythmically walk to the beach led by the elders and continue walking into the lake to cleanse themselves of all of life’s challenges. Gele’ Day sounded so beautiful I immediately agreed to support the next event. The event was held in Jackson Park on the south side of Chicago, and Pilar used the support of the women of her church to help set everything up for that day. I arrived early as an organizer and participant, to claim my spot and I brought my 91-year-old mother with me. We arrived, found our spot on the grass, registered, and immediately got in line to get our heads wrapped.
The fabrics were beautiful and free of charge. Women wrapped the heads of other women in the most nurturing way. Another special moment was the acknowledgment of our elders. My mom was one of three women in their 90s, and it made the day very special for her. Women would come to my mom, kneel beside her, and ask to shake her hand or for a hug. They praised her longevity and asked for her youthful secrets. At one point I think I was getting a bit jealous because I wanted the attention from my mom. It was very humbling. Later, a sea of Black and Brown women filled the park while Pilar opened the day in prayer. They explained the meaning of Gele’ Day and summoned a group of men to lead us in meditation, playing their Tibetan bowls on the most perfect summer day. There were performances by children, face painters, poets, musicians, and singers. We were also led in a group African dance exercise. We laughed, talked, and shared our life experiences.
Near the end of the day, we rhythmically lined up by age and began to walk to the water. (Pilar is never short of giving back her time, guidance, and healing efforts to those that give and support her; there is an unspoken reciprocity between Pilar and her followers, which I will call “the Village approach.” Some call it “paying it forward.”) Since my mother was the elder in the group, she stood at the front of the line with Pilar. My mom looked at me and said, “Do I get in the water too?” I said, “Yes you do,” with a smile, wondering if they would actually do it. I didn’t think my mom would do it, but they marched right into that water. People in the park would stop me and ask who we were and what we were doing as they watched us in awe. I felt like a queen. There was such beauty in the collective cleanse. When my mother and I walked back to the car, I could not help but kneel to clean the sand off her feet. I hope my mom felt as special as I did being together on Gele’ Day.
Outcomes and Impact
Gele’ Day represented healing-centered experience through spectacle, preserving culture by sharing untold stories, promoting collective pride, acknowledging one’s ancestors, and encouraging citizen participation in an open and safe space. The spectacle, being both visual and performance-based, also successfully demonstrates that by utilizing the arts as a healing-centered tool, women can feel collectively empowered, cherished, and valued. As much as there is singing and laughter, there is also an emotional release that is symbolically enacted by the rhythmic walk to the beachfront and that actually manifests in the reactions you see and hear. There were inquiries of amazement from onlookers, demonstrations of love, respect, and affection among the participants, and a collective response through engagement in the mindfulness activities. The experience was an overall movement, a metaphoric dance, and a holistic healing experience. As intended, it is an experience that one will remember for a lifetime.
A most powerful aspect of the “Wrap Your beYOUty Movement” is Pilar’s skill and ability to use citizen participation to reach so many women without standard forms of marketing. Every year there is a noticeable increase in attendance at these events, resulting from a snowball effect of word-of-mouth. There are no subscriptions, no brochures, no flyers, no posters, and no ads in periodicals. The communication style of the Wrap Your beYOUty Movement can easily be equated to a modern example of the traditional use of the beat of the African drum to communicate to the Village. Gele’ Day represents Empowerment and Citizen Participation in its purest form. This can also be compared to the Black Arts Movement, where space was also intentionally created to support and sustain the aesthetic voices of the Black arts community.
Lessons Learned
Community Engagement and Citizen Participation
It is important to identify the facilitation approach that works best for you. I prefer the Art of Hosting (AoH) because it not only gives participants a voice, it also offers different facilitation options based on the needs of the community. When there were sensitive matters that needed addressing, such as listening to the victims of violence, we used methods that encouraged storytelling and active listening. When there were community disagreements (and there will be disagreements), there are methods for having courageous conversations that help get the group back on track. Always keep an open mind to new methods; that is how I discovered AoH. As comfortable as I feel working with community groups, my lesson learned through this event is that you have to decide what methods of engagement work best for you in order to achieve the outcomes you desire.
Collaborative Ways of Investing in Underrepresented Communities
Two positions held in my life influenced my work as a community psychologist: that of an arts consultant specializing in theater, and that of a community development intermediary for a neighborhood in Chicago. It was always a joy exploring a character in a play, but it was temporary. Exploring, connecting, and engaging with diverse populations and cultures in a community setting was more fulfilling; as much as I enjoyed performing, I enjoyed working with people in communities even more. The combination of the two careers built my foundation and prepared me for community psychology. As a community psychologist, the arts provide a platform for collaboration while promoting social change and working at different ecological levels to address social conditions through a creative lens. My work covers many domains, but the focus of this work is using arts and culture as tools to support underrepresented voices, specifically Black women, to improve social conditions. I have been fortunate to observe best practices through my travels and through engaging with creative people who have impacted the world.
A Community Psychologist’s Role in Disinvested Communities
It cannot be assumed that as a community psychologist one must take the lead, teach, or have power over the community. Although the role of a community psychologist can be subtle, it is imperative to be foremost an active listener and observer as this will guide your actions. Other best practices proposed are:
• Building your knowledge about the community that you are working with.
• Allowing yourself to be invited in and knowing the community’s history before you enter.
• Building your networking skills and leveraging resources but never offering anything that you cannot deliver.
• Finding the commonalities to connect and build relationships with the people you plan to engage with.
• Being willing to adapt to the unexpected and watching for any influence that creates barriers for others.
• Reflecting on your work, reflecting with others, while making the effort to center and raise all voices.
• Bringing humility and willingness to experience what the community has to offer you. This helps to form relationships where lessons are learned, trust is restored, and engagement is sustained; and
• Acting with intentionality to promote social change.
Recommendations
Community Engagement and Citizen Participation
When working with the community, planning is important, but I also believe in holding smaller events while planning in order to build relationships in the community. Educate the community on the issue that you are addressing so that everyone is on the same page when you begin to address it. Research the multiple ways you can host a meeting and determine what suits your leadership style. However, make sure your style of facilitation is open to everyone; those who live in the community have every right to participate if they so choose. Prepare to address conflicts, respect the time of others, and make sure meetings are engaging enough that people will want to come back for more.
Working with communities and diverse populations can be stressful, but equitable representation of the community must be part of the process. Identify the artists who live in and serve the community, and use their talents. Besides producing their own work, I have hired artists to do graphic note-taking in meetings and have found it to be more detailed than written notes (figures below). There is something about images that stimulates memory. Honor their talents by paying them; their time is money. Make sure you are working with artists who enjoy working with the community. Some great artists prefer to create on their own; this type of artist might be more suitable for showcasing through themed pop-up galleries, performances in the park, or public art. Artists who enjoy working with the community make great organizers, so engage them in every aspect of community planning and implementation. I have had stakeholders tell me that there are no artists in the area. I always laugh at that because artists are everywhere. If you can offer the use of free or shared space, believe me, they will appear out of nowhere. Just remember to draw up a contract or memorandum of understanding so that there are no surprises at the end of the day and everyone is on the same page.
My Work Relative to Community Psychology Practice
Empowerment Theory
Through this case story, we have looked at underserved communities where Black women have felt undervalued and where the communities themselves have been underrepresented. We looked at empowerment and citizen participation from a community psychology perspective to better understand how they were achieved. Empowerment theory comes to mind in this case story.
While there are many definitions of empowerment, I favor some of the tenets defined by the Cornell Empowerment Group (1989), such as intentionality, ongoing process, and mutual respect. For instance, intentionality is extremely important because it is the foundation of empowerment that gives the community psychologist their purpose. Intentionality is the reason behind the passion that fuels the action. In the Black Arts Movement, the iconic Black women who led the charge to make space for Black artists recognized their absence in society, and then determined to create space for them. Their mentorship and coalition building were the driving force for sustainability. Many Black actors have trailblazed alone, but having a vision and recognizing that your needs are shared with others becomes the catalyst for mobilizing others for the cause. So when you are working with a community and you recognize that the group is at a stalemate, you use your influence to expose the group to new ideas and challenge their thinking. In South Chicago, we recognized that the arts were a great tool, but it was AoH that unleashed their voices and their truths.
The Wrap Your beYOUty Movement recognizes the beauty and value of Black women, so Gele’ Day used its influence to help Black women see their beauty. The Wrap Your beYOUty Movement becomes a mirror to show and reflect the power of Black women. It also opens up space for others to see the beauty of Black women. Remember how people inquired about the Black women walking to the lakefront: the Black women’s presence was no longer invisible. They were shown respect through acknowledgment. The care and nurturing, the ongoing process of wrapping the beauty of each woman, commands mutual respect from those invited into the community of Black women. When you enter, you enter with humility and a willingness to experience what the community has to offer. This reminds the community psychologist of one’s own vulnerability and that we must treat everyone with care and compassion. The Empowerment Framework is shown below:
Empowerment Framework Table
Empowerment | Black Women in the Black Arts Movement | The Role of Community Psychology: Building Communities Through the Arts | Wrap Your beYOUty Movement for Black Women
Intentionality | To create opportunities and recognize the talent and contributions of Black artists | To use the arts as a tool for healing, empowerment, and citizen participation | Holistic healing, atonement, and collective empowerment of Black women
Ongoing Process | The development of Black institutions dedicated to the uplifting of Black culture | Program support and the reclamation of art as a community development tool | Annual tradition and celebration of Gele’ Day and She Through Me to promote collective pride
Mutual Respect | Engagement and mentoring of Black artists | Being invited into the community and entering with humility and willingness to experience what the community has to offer | The reciprocity of sharing and celebration of each other’s talents
Citizen Participation
Citizen participation is another aspect of empowerment, and knowing how to engage others over time requires skill. Critical awareness informs the purpose of the collective action; it is social barriers that ignite the need for action. The Black Arts Movement recognized the need to acknowledge the creative contributions of Black artists. The community psychologist sees the inequities and disinvestment that challenge the quality of life of underrepresented communities. The Wrap Your beYOUty Movement recognizes the need to bring attention to the value of Black women. Once awareness is achieved, we must reflect on how we arrived at this issue and what we learned. This is where the skills of community psychology are needed. Organizing and the ability to mobilize others to bring awareness require skills, relationship building, the leveraging of resources, and trust. Once this is achieved, the commitment of others will follow. In South Chicago, it was AoH that brought us to that level of commitment. This is where engagement becomes strong. In the Black Arts Movement, three Black women became founders of theater companies, Gwendolyn Brooks broke barriers to become the first Black woman Poet Laureate of the state of Illinois, and Margaret Burroughs opened a museum on city property that showcases Black excellence. South Chicago has a plan with a vision and mission to implement. The Wrap Your beYOUty Movement started with one day and became an annual event. Once all of these steps are achieved, long-lasting relational connections are made, which leads to sustainability. The Citizen Participation Table is shown below.
Citizen Participation Table
Citizen Participation | Black Women in the Black Arts Movement | The Role of Community Psychology: Building Communities Through the Arts | Wrap Your beYOUty Movement for Black Women
Critical Awareness | Recognizing the significance of Black talent and culture | Attention to resources: finding space and disinvestment | Recognizing the significance of Black women in society
Participatory Skills | Mobilizing and building relationships with Black artists and audiences | Mobilizing and building relationships with community artists, residents, and stakeholders | Mobilizing and building relationships with Black women
Participatory Values and Commitment | Respecting and honoring Black pride | Collectively respecting the writing and implementation of strategic plans (Arts and Quality of Life Plans) | Respecting and honoring the pride of Black women
Relational Connections | Mentoring actors and supporting the artistic community | Hosting the community in circle to share wants and needs; active listening | Sharing talent and resources to support celebration
Conclusion
I am hoping that by reading through this case story you are inspired to begin to think about ways you can use arts and culture in community psychology work or other work seeking to foster resilience and build community—leaning into advancing social and racial justice. I continue to do this work using my lived experiences and education as a community psychologist and educator. Here is a website with more on Gwendolyn Brooks: https://www.poetryfoundation.org/poets/gwendolyn-brooks
From Theory to Practice Reflection and Questions
• Share with your classmates or others the ways in which the information in this chapter challenged or expanded your thinking about how the arts and culture can be used to build community.
• Provide and discuss examples of how gaining an understanding of an underrepresented community will affect your ongoing work.
• What questions does this chapter raise for you related to community psychology, community practice work, or other related fields of study? | textbooks/socialsci/Psychology/Culture_and_Community/Case_Studies_in_Community_Psychology_Practice_-_A_Global_Lens_(Palmer%2C_Rogers%2C_Viola_and_Engel)/01%3A_Community_Building/1.01%3A_Prelude_to_Community_Building.txt |
The chapter will share insights on this innovative strategy for promoting inclusion among individuals with a disability using community dialogue and events.
The Big Picture
According to the World Health Organization (WHO), more than 1 billion people live with a disability (WHO, 2017). Globally and nationally, disabilities are more commonly found in areas of poverty. Approximately 26% of people in the U.S. have a disability, with the most common disabilities being related to mobility or cognition (Okoro et al., 2018). As people live longer, they are more likely to experience a disability (Bialik, 2020). Although there is a higher prevalence of disability than some might guess, individuals with a disability still experience adverse outcomes when compared to individuals who do not have a disability, including reduced access to healthcare, educational, and employment opportunities (Okoro et al., 2018). While the causes of these adverse outcomes are multi-faceted, one potential avenue for positive social change is increasing visibility and educating the community about the experiences of people with disabilities. While overall perceptions are changing, there are still many myths about disabilities. One of the most common myths is that all disabilities and chronic conditions are visible.
In fact, 96% of people with a chronic condition do not have a condition that is visible, and 73% of people with a severe disability do not use a personal assistive device such as a wheelchair or walker. Educating people through first-hand experiences can help dispel these myths and illuminate ways in which the community could change to promote inclusion for people with disabilities. To move our theoretical framework of awareness and storytelling to practice, we sought funding and began to put the project together. See the highlights below of statistics regarding disabilities:
World Health Organization (WHO)
Disability and Health, 1 December 2020
Key Facts
The number of people with disability is dramatically increasing. This is due to demographic trends and increases in chronic health conditions, among other causes.
• Over 1 billion people live with some form of disability.
• Almost everyone is likely to experience some form of disability - temporary or permanent - at some point in life.
• People with disability are disproportionately affected during the COVID-19 pandemic.
• If health services for people with disability exist, they are invariably of poor quality or under-resourced.
• There is an urgent need to scale up disability services in primary healthcare, particularly in rehabilitation interventions.
The goal of Dare2Dialogue was to bring awareness and engage in storytelling around challenging topics to promote change, focusing specifically on individuals with a disability as a means to remove the negative stigma that can be associated with disability. This goal was initially achieved by having two individuals who live with disabilities share their stories to highlight challenges and encourage dialogue among those who may have the privilege of not living with a disability. Subsequently, three community discussions (including one documentary screening) created an opportunity for 96 Dare2Dialogue attendees to challenge their thinking around inclusion. This chapter shares insights on this innovative strategy for promoting inclusion of individuals with a disability through community events.
Community Needs
Traditionally, local organizations that serve individuals with a physical or cognitive impairment ensure that events prioritize ability. Examples include using locations that are wheelchair accessible and providing seating for caregivers. These organizations share the challenge of attracting attendees who do not typically come to ability-focused events: attendees tend to already be ability advocates, and there is an interest in broadening the diversity of the audience interested in ability-related issues. One local initiative emphasizes inclusion through the medium of film. With an open door to help close this gap in initiatives that focus on living with a disability, there were opportunities to create additional events. Further, People’s Liberty was a philanthropic lab that awarded grants to 120 visionaries to impact their community in an innovative way. The five-year initiative focused on empowering individuals to help improve the well-being of the communities where they lived and thrived. At the 2.5-year mark of this initiative, People’s Liberty hosted a midway celebration, “Intermission,” which allowed twenty grantees to host an innovative event in a storefront space. The first event using this storefront space was the pilot for Dare2Dialogue, and it was the only event during the series that focused on the inclusion of individuals living with a disability. It was one of the most attended events during the celebration, which prompted the development of additional events focused on ability.
Collaborative Partners
Collaborative partners should share the collective vision for promoting the inclusion of individuals living with a disability. The Community Engagement Collective (CEC) partnered with Starfire, an organization focused on empowering leaders to build community and inclusion alongside people with developmental disabilities, and with People’s Liberty, whose storefront building served as the venue for all Dare2Dialogue events.
Language Matters
Using language that does not demean a population is critical when working within community settings. The disability field has evolved in terms of preferred language and continues to grow. Current guidance recommends using language that promotes solidarity and respect and that ultimately honors all individuals as human beings. Person-first language focuses on the individual, while identity-first language emphasizes the disability. Dunn and Andrews (2015) make a case for using both person-first and identity-first language when doing work in the disability field, and taking a flexible approach is respectful. Ultimately, using the term preferred by the individuals with the lived experience is the best option (American Psychological Association, 2019).
What Does Inclusion of Individuals with a Disability Really Mean?
Inclusion ensures that people with disabilities have their voices and experiences heard by people who may not see the same barriers or opportunities that exist in a community. If only people without disabilities are making decisions, certain barriers and opportunities may not be identified, thus excluding vital community members. For example, a community may not offer physical public spaces which are ADA (Americans with Disabilities Act) compliant, or a library may not offer spaces for people with sensory processing disorders, such as autism spectrum disorder. As no two people without disabilities are the same, people with disabilities are not a monolith, even if they have the same type of disability. Different disabilities bring different strengths and challenges, so creating an inclusive environment where all voices can be respected and heard is critical. Because most people do not have a disability, openness to learning and listening is imperative to make our communities truly inclusive and create larger community change. Storytelling in diverse settings allows speakers to share their stories and make connections with others that may not have previously had the opportunity.
Storytelling and Inclusion
One method of including individuals with disabilities is through storytelling. Storytelling allows people with disabilities to share their experiences in a structured format where participants actively listen to another person’s experiences. One of the 10 principles that is key to intergroup relations is to provide opportunities for members of different groups to get to know one another as individuals. Through listening to individual stories from people with disabilities, people without disabilities can put a face to an experience rather than stereotyping. It also provides an opportunity for people to build a sense of community with one another. Storytelling is a unique type of participatory action research method that empowers a participant to shape and share their own narrative, communicating the truth and emotional impact of their experiences (Bailey & Tilley, 2002). Without using prompts or being guided by a researcher’s agenda, participants with disabilities can decide which events and experiences they will share.
Community conversations are another method of bringing together people with disabilities and people without disabilities (Carter & Bumble, 2018). Community conversations use an asset-based focus, solution-focused framing, awareness building, and a shared commitment to improving the community for individuals with disabilities. While storytelling and community conversations are similar, storytelling was selected for this event so that the focus could be on the speakers with disabilities sharing their experiences and shaping the narrative to promote awareness, so that the community members could listen, reflect, and, for some, redefine the ways they perceived someone with a disability.
Dare2Dialogue’s Linkage to Community Psychology Practice
Empowering individuals with disabilities through creating spaces where they can shape the narrative and a community can listen is both an act of social justice and empowerment.
Dare2Dialogue fused together community conversation and storytelling (Dare2Dialogue, 2021) through the use of multiple principles of community psychology. Community psychology explicitly states a sense of community, respect for human diversity, social justice, and empowerment/citizen participation as some of its core values (Prilleltensky & Nelson, 1997). This particular event is focused on fostering a sense of community through including and listening to the experiences of individuals with disabilities. As inclusivity is often lacking for people with disabilities, creating a sense of community where everyone feels like they belong requires community members to understand the ways in which they (individually, interpersonally, or as a community) may be excluding community members who have disabilities. Furthermore, as disability is a type of diversity, it is also important that people with disabilities are represented and respected in the community. As mentioned earlier in this case story, individuals with disabilities often experience adverse outcomes because of social injustices, such as reduced access to employment, education, and discriminatory practices.
Description of the Project
The Dare2Dialogue Series on Inclusion consisted of events where an individual (or individuals) with a disability shared their experiences with the larger community, including people without disabilities, in order to educate, enlighten, and promote dialogue about community inclusion for people with disabilities. The series included three distinct events: (1) storytelling from the perspective of someone living with a physical disability, (2) storytelling from the perspective of someone living with a cognitive disability, and (3) a movie screening sharing the lives of individuals with a disability.
For recruitment, a snowball sampling strategy was used (Noy, 2008). A snowball sampling strategy is commonly used in qualitative research and involves one participant using their network (i.e., people they know) to recruit additional participants. It is particularly useful in populations that may be difficult to reach or recruit. In some cases, a researcher will ask the participant to provide the contact information for additional potential participants to facilitate recruitment. In other cases, the initial participant(s) will recruit the potential participants, as they already have an established relationship. This event used the latter method for recruiting participants. Each event took place in the same venue, located in downtown Cincinnati, Ohio, close to public transportation, and was accessible. The first event took place during lunchtime, and the second and third events took place in the evening. When arriving at the first two events, attendees received a card to complete a word cloud. For word clouds, the prompt should allow for a one-word response only, and the same prompt was used at both events. Below is a picture of a cloud containing the prompt, which reads, “In one word, what comes to mind when you think about inclusiveness?”
Attendees had the option of texting their response, completing it via a website link, or sharing their reaction with a volunteer. A Quick Response (QR) code on the card made connecting to the link easier. The word cloud was shown on a projector for attendees to view. Before the community member spoke, guests enjoyed free food and drinks, since the events were held close to mealtimes. Attendees took time to mix and mingle before each event to facilitate community building.
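For readers curious about the mechanics, the sketch below shows one way the collected one-word responses could be turned into a word-cloud image. It is a minimal illustration using the third-party Python packages wordcloud and matplotlib, not the tool used at the Dare2Dialogue events; the sample responses are hypothetical, echoing a few of the words visible in the figure.

```python
# Minimal sketch of turning collected one-word responses into a word cloud.
# Illustrative only: the Dare2Dialogue events presumably used an off-the-shelf
# live-polling tool, and these sample responses are hypothetical.
from collections import Counter

import matplotlib.pyplot as plt
from wordcloud import WordCloud  # third-party package: pip install wordcloud

# Hypothetical responses to "In one word, what comes to mind when you think
# about inclusiveness?" (a few echo the words visible in the figure).
responses = [
    "fairness", "access", "opportunity", "ability", "belonging",
    "respect", "community", "access", "belonging", "fairness",
]

# Count how often each word was submitted; the cloud sizes words by frequency.
counts = Counter(word.strip().lower() for word in responses)
cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate_from_frequencies(counts)

# Display the cloud, roughly as it might appear on the projector.
plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```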
Physical Ability
Word Cloud from 1st Event
The image above is a “word cloud,” or collage of words such as fairness, access, opportunity, ability…
Narrative One
Speech from 1st Event
Hello everyone, I would like to start by thanking everyone for showing up for our discussion on inclusion in a midwestern city. So for starters, what is Inclusion? According to Webster, it’s the act of or state of being included. That’s it, it’s just that simple. Now add people to inclusion and it becomes much more convoluted because every one of us is different. Just take a look around the room and you can see the diversity here. Every one of us has multiple qualities that make us unique, be it our physical differences such as our height, color of our skin, body shape, hair, you name it. But we also have different nonphysical qualities which aren’t so prevalent, such as our upbringing, education, and financial backgrounds, the way we process information, biases, etc. There is no cookie-cutter form of inclusion that will work for everyone so inclusion will look different in different settings. For example, at my job, I am the only person there who doesn’t have to walk anywhere. Steps… pssh I’m not walking up steps. I have my own way of getting around and it doesn’t cause me to burn much energy. Just look at this bad boy. 2 all-terrain front wheels, good for climbing obstacles, fog headlights, padded leather back and armrests, and a hydraulic system that’d put most low-riders to shame. It doesn’t get any better than this.
“This is an awesome chair but I haven’t always been this blessed, over 9 years ago I used to have to walk like you all until one morning as I was riding my bicycle to my job at the hospital and was hit by a guy making a left turn who didn’t see me”.
I spent the next 6 months learning how to breathe on my own again, I was determined not to have to be dependent on a ventilator for the rest of my life. So when they tested to see if I could breathe on my own, I’d do it until I couldn’t take it anymore. Each time I’d go a little longer. I relearned how to eat without choking because I was on a feeding tube for 3 months. You never know how much you actually miss chewing and tasting food until you can’t anymore. Just being able to suck on small pieces of crushed ice was a treat. Eventually, I learned to feed myself, and I was able to strengthen my muscles enough to go home.
All of these tasks were difficult but I would prefer them over not being able to see my daughters for 3 months. You see, my accident happened during “flu season” at the hospitals, and during that time children aren’t allowed past a certain point. Thankfully, I had my family who brought me pictures of them. It wasn’t the same as seeing them in person but it was better than nothing. My family helped tremendously with my recovery so when I think of inclusion, I think of FAMILY because to be included in a group sometimes can be like gaining a new family.
After my accident, I saw my city differently. I became aware of how inaccessible it really was. I have come across sidewalks that I cannot access so I had to ride in the street with busy traffic, businesses without a ramp or elevator. Transportation for wheelchair users is inconvenient. I began thinking of ways to improve the city in that arena, but I didn’t have to think I could change anything alone nor did I have the resources, nor did I know where to start because all of this was still new to me, so my ideas were placed on the back burner. It wasn’t ‘til I interviewed at my job that I started back thinking about Inclusion. Because when I went for my interview, I needed help getting inside because there weren’t any automated doors. My now Supervisor asked if I saw any obstacles for me to work there and the doors were my only issue. Within a few months, plans were in place for an automated door to be put in and now it is truly accessible all because of the pebbles I threw in the pond. So I’ve learned that little things can cause big ripples and that my city is ready for change.
Cognitive Ability
Narrative Two
Speech from 2nd Event
I was born 3 months premature. Doctors thought I would not be able to walk (paralyzed). I weighed 1lb. Mom said she could fit me in the palm of her hand). My disability is called cerebral palsy. I have 1 older sister and 2 younger brothers. It is hard being the oldest brother with a disability because I thought I would be the first one to do everything like drive. I do not like asking people to do stuff for me because I feel like I can do it all on my own. But we all need help sometimes.
Elementary School
My mom asked me did I want to go to a regular school or a school for people with disabilities. My answer was a regular school. I wanted to go to a regular school because I felt like I had to do it. I had a choice. Everyone does not have a choice. Sometimes people make the choice for you. When I came home from school one day, I asked my sister “Why am I different?” She did not have a response to my question. Her mind was puzzled. I asked her because I wanted to know what her response would be. Later, I was bullied in the 1st grade. A student at school picked on me. I stood up for myself. It felt good. Today, I do not like seeing people getting picked on.
I played football as a kid. I played football ages 7-12 (except age 9). It was important for my dad to coach me so I could feel like a regular kid. I loved having my dad coach me because he understood me. I loved playing football. When I was 10 years old, my 7-year-old brother played in a game with me. I remember I asked my brother did he want to fill in because we did not have enough players to play that game. He said “Yeah.” I said, “These are 10-year-olds, you are 7, are you sure you want to play?” I asked him twice. He said “Yeah.” He held his own and he made tackles. I was proud of my little brother. At 7 years old, I would fall to the ground so I wouldn’t get tackled. I played basketball too and my dad coached me as well. I was a better basketball player than a football player.
Gym Class Story
This one particular afternoon in school, the gym teacher had asked me to jump rope. My response was no because I did not feel comfortable jumping rope in front of the whole class. I felt embarrassed to be put on the spot like that because she knew of my circumstances. I was upset and that was the hardest thing I went through as a child. It felt great to have my classmates have my back. They asked the teacher why she had me jump rope in front of the whole class when she knew of my circumstance. She did not have a response.
Junior High School
When I first went to Junior High School, I was nervous because it was a new environment. I did not know how I would adapt to the new place. Being accepted in the new school was an excellent thing. It was great to realize that I was doing good in school. However, in the 7th grade, there was a teacher who did not want to modify the work for me in my social studies class. To avoid the problem, I just went to a special education social studies class instead. I was disappointed because I wanted to be in a regular social studies class with all of my friends.
High School
I had a wonderful person that scribed for me and helped me with my thoughts. She worked with me since junior high school. The gym teacher was nice. She was very understanding and helpful. My senior year was the most exciting out of my 4 years of high school because I got to participate on the football team. I wanted to motivate people and be an encourager. I did not let anything stop me from doing what I wanted to do. Graduation was emotional. I thought about all the things from elementary school from when teachers did not want to modify my work to graduating high school. I felt like I had completed my goal to graduate from high school.
StarfireU
I wanted to do StarfireU because it was exciting and a 4-year program. I was the youngest in the program. It helped me to learn how to communicate with others. It helped me realize how to be a good friend to people. I learned how to make sure everybody felt like they were included. I remember a few times in the building some people would ask me to pray for them. I would take them to a room and pray for them. This is how I found my calling.
My Calling
My calling is to preach in ministry. It was hard at first to accept the calling because I did not know if people would accept me because of my disability. There were a lot of sleepless nights struggling to accept my calling. Once I accepted my calling, it felt great. The goal of my sermons is to encourage people to be great. I want to let people know they should live life to the fullest and be great at what they do.
Movie Screening
The final event, a movie screening, was a collaboration between two local nonprofit organizations and Community Engagement Collective. Partnering organizations shared the event on their social media platforms. Attendees enjoyed fresh popcorn and pizza before the movie. The previous event used in-person storytelling as the foundation; however, this event used movies as a form of art. The movie follows protagonists Tracy and Larry as they travel the world sharing their advocacy message for individuals with autism. After the movie, attendees discussed their reactions to the film in small groups. The group sharing included statements like “Having my voice heard is life-changing,” “More like you than not,” and “The humanity of acceptance.” (See Figure 3)
Figure 3 Illustration from Movie Screening Dialogue
Outcomes and Impact
The goal of Dare2Dialogue was to bring awareness and engage in dialogue around challenging topics to promote change (See Figure 4). The purpose of the community discussions on inclusion was to bring awareness to the lack of inclusion of individuals with a disability using storytelling, community conversations, and a movie screening. After the launch of the Dare2Dialogue inclusion series, the community partners collaborated on various initiatives including training, speaking engagements, and conference planning. The common thread of these initiatives was the inclusion of individuals living with a disability. Figure 4 below shows that the program reached 96 attendees across 3 events held to bring awareness to the importance of inclusion.
Lessons Learned
Through this process we learned three important lessons: (1) do not label events with identity-first language (e.g., disability), (2) partner with other organizations to host events, and (3) engage audience members in multiple ways.
Hosting three different events gave insight into how to plan future events that emphasize individuals of diverse abilities. When striving to foster inclusion within events, it is important not to label the event with disability or ability. Such labels limit the diversity of audience members in attendance. In turn, the conversation’s potential impact is limited because the audience members who attend already tend to understand the issues within the ability space. People are creatures of habit and tend to participate in events that address a topic of interest versus an unfamiliar issue. For instance, ability advocates are more likely to attend events that focus on ability. To reach non-advocates, use terminology that describes the event’s activities, not the storytellers and/or community members. For example, if a storyteller is living with autism and is sharing about gardening, emphasize gardening in marketing materials. In this example, the audience members will build connections due to the collective interest in gardening, not autism. Words to include in event marketing materials are diversity, inclusion, conversation, storytelling, dialogue, and community connections. Intentional marketing helps to reach a broad audience that may have been less likely to connect otherwise.
Partnering with organizations for event planning and implementation can achieve the following: (1) expanded outreach by marketing to multiple networks, (2) greater diversity among attendees, and (3) more cost-effective use of combined resources.
Utilize multiple techniques to engage audience members at events, such as word clouds, small group discussions, rotating group conversations, live polls, surveys, and narrative illustrations.
Looking Forward/Recommendations
In order to promote inclusion for people with disabilities, it is imperative that we create spaces and opportunities in the community to make sure their voices are not only heard but are the focus of the conversation. Additionally, storytelling is a relatively straightforward participatory action research (PAR) method of creating knowledge and understanding of a phenomenon. Using PAR allows the participants to shape other people’s minds with their own experiences, rather than the researcher doing so. As one speaker shared, telling their story was “life-changing.” It is important for researchers to consider the power of storytelling from the perspective of the person with a disability. After periods of feeling excluded, being not only included but also respected and heard can help ameliorate previous pain and can foster a sense of community. Furthermore, attending a storytelling event permits attendees to reflect and think about ways in which they or their community can be more inclusive for people with disabilities. Consider the following suggestions as you move forward in your own practice:
Strategies for Replicating in Communities
The storytelling method used in Dare2Dialogue is something that can easily be customized and implemented in other communities. Below are some strategies:
• Pick a community space that is accessible to all
• Use preferred language of the individual with the lived experience (e.g., first-person, identity-first etc.)
• Focus on a topic or interest that will engage community members with and without disabilities
• Promote community events in physical and virtual spaces frequented by people with and without disabilities
• Use community connectors to help market event
• Provide an option for attendees to reach out to event coordinators for any accessibility needs (e.g., wheelchair access in front of the building, assistive technology, accessible parking, and a sign language interpreter)
• Apply recommendations of the DisABILITY Resources Toolbox (DART) for Practitioners for events (e.g., ensure access to the building, provide accessible parking, use accessible modes of communication, and create a welcoming environment)
Conclusion
We must create and hold space for people living with disabilities whenever we can. We read somewhere that “diversity” is inviting people who differ in every facet (e.g., ethnicity, age, religion or spirituality, physical or mental ability, etc.) to the party, but “inclusion” is when they show up, asking them to dance. We understand that some people living with a disability cannot physically stand up to dance or cognitively grasp what this means, but in this context, we mean “dance” as a metaphor for engaging in building a relationship or community.
DisABILITY Resources Toolbox (DART) for Practitioners
From Theory to Practice Reflections and Questions
• What does the inclusion of individuals living with a disability mean to you?
• What feelings surface for you when you think about becoming involved in community building alongside people living with any type of disability?
• In order to promote inclusion for people with disabilities, it is imperative that we create spaces and opportunities in the community to make sure their voices are not only heard but are the focus of the conversation. How would you create spaces and opportunities in the community to make sure their voices are not only heard but are the focus of the conversation? | textbooks/socialsci/Psychology/Culture_and_Community/Case_Studies_in_Community_Psychology_Practice_-_A_Global_Lens_(Palmer%2C_Rogers%2C_Viola_and_Engel)/01%3A_Community_Building/1.03%3A_Promoting_Inclusion_Through_Storytelling_and_Dialog.txt |
This case study narrative describes the personal growth process, learning experiences, and development of a young Somali-American woman (Amian) through the lens of a community gardening and green space program located in St. Paul, MN.
The Big Picture
Amian1 is currently completing a graduate program in Individualized Studies with an emphasis in Gardening Development at Metropolitan State University and actively participates in developing healthy foods at several gardens in the Twin Cities region. Her focus is providing members of underrepresented groups with greater access to food plots and the resources to grow healthier foods. Community growth, collaboration, and the communal development of healthy foods form the central thesis of this paper, which describes how one young woman’s efforts to provide underserved and marginalized community members with access to healthier foods gradually became realized through her work at the Inver Hills – Metropolitan State Community Garden during the Summer 2020 growing season. Amian’s long-term goals include promoting food justice for marginalized groups residing in the upper Midwest region and developing a more sustainable ecosystem that promotes autonomy and healthy living conditions. This case study identifies important psychosocial factors such as community engagement, collaboration, inclusion, and superordinate goals as vital concepts, not only in helping to produce a successful community garden but, perhaps more importantly, in showing how these intersectional qualities can help us to better understand one another and coexist within a more harmonious society.
Community gardening, natural outdoor and green space activities have recently gained empirical support in providing a broad range of health-related benefits, including reduced obesity (Ornelas, et al., 2018), ecological resilience among indigenous populations (Shava, et al., 2010), social capital (Alaimo, et al., 2010) and increased community resilience among immigrant populations (Okvat & Zautra, 2011). While community gardens, green space, and natural environments have long remained popular activities among individuals and family members living in both rural and urban environments, only recently have these environments been examined as providing specific benefits such as psychological well-being (Soga, et al., 2017) and improved quality of life among younger populations such as adolescents and children (McCracken, et al., 2016).
More recently, current research has identified activity and participation within community gardening and green space environments as particularly beneficial to immigrant and refugee populations (Hartwig & Mason, 2016). Community gardening and environmentally sustainable green space activities support fundamental principles shared by community psychologists in that they provide unique opportunities for diverse groups of individuals to work collaboratively and establish a stronger sense of community inclusion, engage in social change, and promote psychosocial health (Fetterman, 2015).
The Inver Hills – Metropolitan State University Community Garden
The Inver Hills – Metropolitan State University Community Garden was established in 2010 for the purpose of providing healthy foods to low-income families of the Dayton’s Bluff area located in St. Paul, MN. The garden was established primarily as a cooperative between two higher educational institutions of the Minnesota State System (Metropolitan State University and Inver Hills Community College) and provides an environmentally sustainable learning environment for students from both institutions to work collaboratively in producing a broad range of healthy foods for community residents. The community garden (approximately one-half acre) is located on the southeast portion of the Inver Hills campus which is located in Inver Grove Heights, MN. The garden is comprised of three separate components or divisions: A cooperative garden area where students work in producing healthy foods for local food banks; a fruit tree orchard (over 60 fruit trees) consisting of over 16 different apple tree cultivars (i.e., Honeycrisp, FrostBite, Haralson, etc.); and the final segment consists of 40 vegetable garden plots (10’ X 10’) designated for community residents. Since 2010, the Inver Hills – Metropolitan State University Community Garden has produced over 15,000 lbs. of a variety of fresh vegetables and over 3,000 lbs. of apples which have been donated to local charities and food distribution centers throughout St. Paul and Minneapolis. The purpose of this current case study is to provide a personal and in-depth description of how a young Somali-American woman (Amian) has participated and worked in several community gardens in the upper Midwest region (i.e., Big River Farms located in Marine on St. Croix, MN, The Interfaith Garden located in Minneapolis, MN, and The Inver Hills – Metropolitan State Community Garden).
The current case study will focus primarily on Amian’s work in developing her own community garden at the Inver Hills – Metropolitan State Community Garden located in Inver Grove Heights, MN. An interesting component of this case study is understanding how one individual uses her personal experiences of witnessing malnutrition and food insecurity as a young child in Somalia as both a transformational and motivational force in growing healthy foods for underserved populations in the Twin Cities region. Amian’s primary motivation for green space development and community gardening activities has been her early childhood experiences while growing up in Somalia, where food and other natural resources (i.e., potable water) remain increasingly in short supply. Amian comments: “My goal [at the garden] is to grow food for the community specifically for the elders as well as teach healthy food options … and encourage health and wellness during these difficult times.” Additionally, we hope to provide useful information to other immigrant families who would like to participate in community stewardship programs such as community gardening and forestry programs to improve access to healthier foods.
Community Gardening, Social Integration and Health Promotion among Immigrant Families
Given the rapid increases in the populations of individuals living in impacted and urban environments, green space and community gardens are becoming both necessary and important activities that help in sustaining optimal physical and psychological health. For specific refugee and immigrant populations, often just having access to potable water and minimal amounts of sustainable foods are challenges faced on a daily basis. According to recent estimates published by the Food and Agricultural Organization (an international agency devoted to promoting healthy foods to impoverished families throughout the world), over 153 million people, or approximately 26% of the population of sub-Saharan Africa, will suffer from starvation or food insecurity (Food and Agriculture Organization of the United Nations, 2016). The need to promote community gardening programs and skills in producing healthy foods that are similar to those of the refugee’s native homeland and food environment is critical to a successful transition to the United States (Gichunge & Kidwaro, 2014). As a child growing up in Somalia, Amian experienced firsthand how food shortages can have serious negative consequences on the health and welfare of community residents, and how immigrant and POCI populations here in the United States often face disproportionate levels of food insecurity. More specifically, as the number of refugees and immigrants arriving in the United States has increased over time, researchers have discovered that access to native foods through horticultural (i.e., community gardening) and green space programs plays a critical role in successful adaptation, assimilation, and improved mental health (Hartwig & Mason, 2016; Wilson, et al., 2010).
Communities that teach how native foods are grown, and that provide tools and resources to facilitate a successful harvest, can help immigrant families adapt to their new communities more successfully and also improve both psychosocial and physiological measures of optimal health, including a greater “sense of identity with their former selves” (Hartwig & Mason, 2016, p. 1158). Measures of improved physical and psychosocial health included greater access to organic foods (Carney, et al., 2012), reduced cardiovascular disease, depression (Tracey, et al., 2020) and body mass index (BMI) (Soga, et al., 2017). Recent data suggest that community gardening programs are associated with the development of healthier and more sustainable lifestyles (i.e., connection with nature, social health, and increased physical activity) that are compatible with ethnically diverse families currently living in urban environments (Tharrey, et al., 2019). Additionally, recent research within the discipline of community psychology has identified green sustainable programs such as community gardening as a viable approach to promoting wellness and mental health, especially among vulnerable populations (Androff, et al., 2017).
Communities that provide residents with opportunities to share knowledge and their own personal experiences in the development of a community garden not only improve food security but also provide numerous other benefits to the community, such as increased social capital, resilience, and empowerment among the residents of those neighborhoods (Alaimo, et al., 2010). Indeed, communities that offer residents an opportunity to participate in green space and community gardens are perceived as desirable living environments that contribute to a greater sense of psychological well-being, social cohesiveness, and trust (Spano, et al., 2020). Amian immigrated to the United States when she was eight years old and has had a life-long passion for providing healthy foods to the community members where she currently resides in Minneapolis, MN. Growing up in a drought-stricken community in Somalia, she never took food and potable water for granted; they are considered precious commodities. Amian has seen families struggle just to maintain enough food to survive and has commented that she would like to see a more “collective effort from the community” in participating and contributing to the development of more sustainable gardening programs. The culture to which Amian is accustomed is just that: collectivistically oriented, with groups working together to promote a better way of life and survival. In the United States, Amian has commented, people are more dispersed, competitive, and concerned about getting ahead at the expense of others rather than working collectively to share benefits with each other. “I came from a communal environment,” Amian explains, and “when families experience a crisis, such as a death in the family, the first thing we do is to bring food. Food has a wonderful way of bringing people together especially during times of stress and grief.”
Growing Food for a Healthy Community as an Educational Process
An important component of growing healthy foods for the community is education. Part of Amian’s goal in developing a healthy foods program is helping children understand not only the benefits of healthier eating, but also the actual origins of the foods they consume. Amian is convinced that when children play a role in the development and maintenance of a community garden, they not only improve their knowledge about food but also are more likely to consume the foods that they have helped propagate. This is especially important with vegetables (i.e., leafy greens) that are healthy but are often less preferred and palatable for younger children (i.e., broccoli, spinach, and kale). In more collectivistically-oriented environments, groups of individuals (of all ages) work together and share the benefits of common goals that are vital to the survival and well-being of community members. Perhaps more importantly, children who share in the responsibility of growing healthy and sustainable foods, through tasks such as mulching, cultivating, planting seeds, and harvesting, learn the delicate balance of sustainable eco-systems and the human responsibility to respect the fragile and finite resources of the environment. Amian has indicated that she is “committed to serving the refugee and immigrant community in Minnesota because . . . I have seen first-hand from my own experiences [in Somalia] coming from an immigrant family. I know that I cannot do this alone, and that is why I have partnered with Big River Farms and The Interfaith Garden.”
From Somalia to Minnesota: Foods that Facilitate Resettlement & Assimilation
Community gardening programs, green space, and natural environments are unique in that they provide numerous health benefits to community residents in both urban and rural areas. Adapting to a new environment as an immigrant or refugee can be very stressful, and providing native foods from one’s homeland can be an effective stress-coping mechanism that holds numerous benefits for individual health among vulnerable populations (Tracey, et al., 2020). Minnesota is rapidly becoming a common resettlement destination for refugees and immigrants from around the world who are escaping a number of threats to their personal safety and well-being, including malnutrition, oppressive governments (i.e., ethnic genocide), and persecution for religious beliefs. Currently, Minnesota ranks as the 13th leading state within the United States in resettlement for refugees, primarily from Sub-Saharan Africa and Southeast Asia (Hartwig & Mason, 2016, p. 1154).
A Greener Vision for the Future
An important purpose of community gardening and green space programs is the future itself and how to get more community members involved in sustainable and healthy foods production. Several of the student participants in the community gardening program indicated that they enjoyed working with other students and community participants in providing healthier foods for low-income families. One of the student participants (Abdiaziz) commented that he “loved every minute of working outside to help produce healthy foods for the community members. Providing people with these kinds of opportunities gives us an opportunity to get to know each other better and help people who are less fortunate than us.” Amian has indicated that she is trying to help community members work with immigrant families in a more collaborative process that will not only teach participants the nutritious benefits of her native foods from Somalia, but also show that most food-related problems (i.e., shortages of healthy foods) are actually preventable and human-related. “When I started growing my vegetables in the gardens I noticed how much food is wasted here in the United States and that hunger is actually a ‘man-made’ phenomenon.” If she is provided with the resources and opportunities, Amian plans to build an even larger sustainable foods program of up to 20 acres in southeast Minnesota (Kenyon). Amian has indicated that some of her most formidable challenges have been in finding community stakeholders who are willing to help provide resources for the continued development of sustainable green space activities and community gardening programs.
Providing opportunities for immigrants to work in a more collaborative process in the development and proliferation of ethnic foods is an empowering process that can help people from all backgrounds to better understand different cultures. Community psychologists can help facilitate the process of bridging cultural gaps and reducing racial stereotypes by serving as advocates in the development of sustainable green space environments such as community gardens. Amian has found the process of developing sustainable community gardens and providing healthier foods from her native homeland of Somalia to be a personally rewarding and intrinsically satisfying experience. Her long-term goals are to provide healthier and organic foods for vulnerable Somali populations and older adults who are currently facing economic hardships. Amian has recognized that “the voices within the community need to be heard and . . . I would like to see more families eating healthier foods together and growing their own foods.” Amian’s proposal includes planting seeds from her native homeland of Somalia, including leafy green vegetables, beans, and even fried bananas, for families with low incomes in the United States, including immigrant families from Somalia, Ethiopia, Kenya, and Southeast Asia. Amian is also concerned about increasing global environmental pollution (i.e., the proliferation of plastics), which directly impacts the quality of soil and water used for agricultural purposes. Her future plans include working on environmentally sustainable projects in the St. Paul and Minneapolis areas and promoting greater access to healthier foods among Somali and other underrepresented populations in these areas. Additionally, Amian is trying to reduce the impact of global pollution through the education and practice of simple environmentally-responsible behaviors, such as recycling, composting, and developing rainwater irrigation systems for garden sites.
Conclusion
The benefits of community gardening cannot be overstated. Results of evaluations and studies offer clear evidence that community gardens provide numerous health benefits, including improved access to food and related nutritional needs, as well as improved mental health. An important aspect that can be overlooked is their promotion of social health and community cohesion, both essential aspects of a healthy community.
From Theory to Practice Reflections and Questions
• Community gardening and environmentally sustainable green space activities support fundamental principles shared by community psychologists in that they provide unique opportunities for diverse groups of individuals to work collaboratively and establish a stronger sense of community inclusion, engage in social change, and promote psychosocial health (Fetterman, 2015). Share with your classmates or others at least one other way, outside of formal education, that can foster a sense of community, engagement in social change, and the promotion of psychosocial health?
• This case story mentions that a challenge in doing social justice work involving food has been in finding community stakeholders who are willing to help provide resources in the continued development of sustainable green space activities and community gardening programs. Consider and provide examples of how you might address this challenge.
• In addition to physiological well-being, what are some psychological influences of food on one’s sense of community?
Note:
1The person centered in this chapter summary, Amian (name changed), is aware that this chapter is being published and has provided her consent. Due to cultural restrictions of her native country (Somalia) she has requested to remain anonymous. | textbooks/socialsci/Psychology/Culture_and_Community/Case_Studies_in_Community_Psychology_Practice_-_A_Global_Lens_(Palmer%2C_Rogers%2C_Viola_and_Engel)/01%3A_Community_Building/1.04%3A_Green_Space_Programs_as_a_Shared_Growth_and_Communa.txt |
When we use the term global we are referring both to physical regions, communities, and spaces around the world and to viewing everything through a global lens, which in turn broadens our perspectives, strengthens our foundational truths, and helps all of us live authentically. Abdul Kalam (n.d.) offers that living globally means, “… to take into consideration the cultures, ethnicity, religions, and living situations of everyone around the world…” (para. 1). From this foundation we bring to the reader two case stories from outside of the United States. We believe you will be enlightened and, importantly, drawn just a little closer to human lives that may seem so far away.
The first case story is Better Together: Creating Alternative Settings to Reduce Conflict Among Youth in Lebanon. Upcoming community psychologist Ramy Barhouche shares a case study of an effort in Lebanon within the Nongovernmental Organization (NGO) sector to collaboratively create alternative and preventive settings to reduce discrimination and prejudice and develop collaborative living and conflict transformation among youth and young adults. Barhouche provides important historical context and rationale for using a process-based relational approach to develop the relationships for collaboration.
The second case story is Promoting Community-Driven Change in Family and Community Systems to Support Girls’ Holistic Development in Senegal. Dr. Judi Aubel draws you into the country of Senegal, where you get to see the lives of people in a place often called “The Gateway to Africa”. The case study describes an innovative program designed to address the issue of female genital mutilation (FGM), a seldom-discussed subject; here we go beyond the most popular topics, centering and raising the voices of Senegalese girls and women. Come with us. You’ll never be the same again.
2.02: Better Together - Creating Alternative Settings to Reduce Conflict Among Youth in Lebanon
This case story illustrates community psychology in action within the region of Lebanon, where a collaborative partnership worked to create alternative settings for youth to reduce conflict.
The Big Picture
Community psychology in action can be seen through program implementation by Non-Governmental Organizations (NGOs – Nonprofits) and Civil Society in much of the international community. There are two main broad subcategories of NGOs: Humanitarian Aid and International Development. Humanitarian Aid responds to an incident or event (e.g., conflict, natural disaster, poverty, or mass human displacement) and focuses on short-term disaster relief and meeting the immediate needs of the impacted communities. However, these services often take much longer periods than expected due to systemic dysfunctions. International Development programs, on the other hand, respond to long-term systematic problems and focus mainly on economic, social, and political development. It does so through human rights, diplomacy, and advocacy programs; as well as, economic, infrastructure, and capacity development. Both fields, along with others, often are implemented interchangeably and are impacted by and impact local realities.
Lebanon, for example, experienced an influx of Syrian refugees (a quarter of the population) due to the ongoing 2011 Syria conflict. As a result, the country also experienced an influx of International Non-Governmental Organizations (INGO) and Humanitarian Aid funding to support the refugee population. The situation eventually increased tension between host communities and refugees, and between different Lebanese sectarian groups, which can be attributed to the following factors shown below:
Factors Increasing Tensions
• Lebanon is a country of multiple minority groups with a fragile socio-politico-economic system and power-dynamics that can be impacted by minimal demographic changes.
• Lebanon has a history of long-term mass resettlement (e.g., Armenian, Palestinian, and Iraqi).
• Lebanon came out of a gruesome 15-year civil war, which involved local and external forces (Lebanese, Palestinian, Syrian, and Israeli), ending in 1990, with two occupations. After the end of the Israeli (2000) and Syrian (2005) occupations the country was left even more divided, polarized, and with increased corruption, creating the perfect environment for foreign influence (e.g., US-Saudi Arabia vs Iran-Syria).
• Much of the population still has resentment and ongoing fear from the Syrian occupation and influence, which was displaced towards the Syrian refugees.
• The Lebanese economy was impacted by the regional situation and tension, which also led to the increase of unemployment and poverty. Thus, creating more sectarian divide and resentment towards the refugees, which were seen as getting unlimited aid and taking Lebanese jobs.
• The Sectarian system and political elite has been threatened by secular civil society movements, which led to the increase of the sectarian and xenophobic rhetoric; and
• The international community preferred to fund initiatives to support the refugees in Lebanon, Turkey, and Jordan, while restricting their movement to Europe and other neighboring and western countries.
Taking this overview and factors together, this case study will focus on a 2014 project that I (author Ramy Barhouche) worked on, to empower Syrian and Lebanese youth, reduce prejudice and discrimination, and create a culture of dialogue, collaboration, and conflict transformation.
Community Assets/Needs
In the business development process of the project proposal, no formal community assets/needs assessments were conducted. Instead, a brief literature review of past reports, projects, and context was conducted to better understand the situation and needs. In addition, the proposal was developed in consultation/collaboration with local partners.
We found that youth and young adults in Lebanon have been facing a high level of unemployment. Thus we determined there was a need for capacity development to support prospective job seeking. In addition, we identified a growing market in the arts/entertainment field with little to no opportunities to further develop certain skills. Importantly, there was a rise in tension, prejudice, and discrimination, as mentioned above. As a result, the project team, through local partners, reached out to youth and young adults between the ages of 15 and 25, from multiple socio-economic backgrounds (e.g., nationality: Lebanese, Syrian, Palestinian; religion: Muslim, Christian, Druze; and economic class). With these factors in mind, the project was then designed around the following five objectives:
• Empower youth and young adults
• Develop skills that can be used in their prospective professional career in arts and entertainment (e.g., acting, drawing, filming/audio-visual, singing/playing instruments)
• Reduce tension in certain areas
• Reduce prejudice towards Syrian and Palestinian refugees
• Develop professional, interpersonal and collaborative relations between the participants (directly) and their communities (indirectly).
Some of the participants had previous experience with the designated arts (i.e. acting, drawing, filming/audio-visual, singing/playing instruments), while others had interest but never had the opportunity to be exposed to them. The project hired Lebanese, Syrian, and Palestinian coaches/artists to mentor the participants. The project also asked some of the more experienced participants (active volunteers in local organizations and those with experience in the arts) to act as peer mentors.
Collaborative Partners
The project team included a project coordinator, a project associate, and a monitoring and evaluation coordinator from the lead partner, as well as, the coaches and the implementing partners’ teams. We worked with two main local partners that were well established in the South and the Bekaa areas. Their relationship with the communities allowed us to better understand the local context and gaps/needs, reach out to Syrian and Lebanese youth and their parents, and recruit interested participants. The partners were part of the strategy team and were also responsible for the local implementation, support, and follow-up with community members, and logistics. The coaches and the coaches-artists came from Lebanese, Syrian, and Palestinian backgrounds and were responsible for teaching and mentoring participants in the four arts/entertainment skills: acting, drawing, filming/audio-visual, singing/playing instruments.
Description of the Project
With the foundation in place, the project was set to begin. The following discussion describes the components of the project:
The Summer Camp
Participants were invited to a one-week summer camp in each of the two areas. On the first day, the participants went through orientation and were matched with youth from different communities, and then were assigned separate tents. The sessions began the next day, and 100% of the participants went to all art sessions to explore and decide which to focus on. The sessions had a theoretical and practical aspect. Figure 1 below shows the model used for the summer camp.
Figure 1: Summer Camp Model
The participants also went through several sessions that focused on social capital and conflict transformation. They were exposed to activities related to the topics, and then had a chance to discuss and reflect on these issues, while linking them to their life experiences, which included narratives on Identity and Perception and Perspective shown below:
Identity - Reflections and discussion included:
• The complexity of identity and its multi-layered nature.
• The fluidity of identity and the inherited vs acquired aspects of it.
• The commonality of identity with those we view as the other vs the indifference that we might experience with those closest to us.
Perception and Perspective - Activities to further explore this topic, and build on the reflections from identity included:
Leadership: We reflected and discussed what it means to be a leader in relation to the other, to our communities, and to the self. We also discussed the ideal characteristics of a leader.
Conflict transformation and common ground approach: We reflected on and discussed positions vs. interests vs. needs, dignity, empathy, and active listening, and we practiced conflict transformation.
The project team also provided individual psycho-social support with a relational needs approach[1]. The participants shared things related to their struggles as refugees, family, relationships, life, as well as to issues that arose because of the camp. We also made sure to address every conflict that arose during the camp and had sessions with the individuals and/or groups that were involved in them. At the end of the day, the team also conducted debrief sessions for the participants and the project team (organizers + coaches) to reflect and assess the day and discuss possible changes.
After the Camp
A similar model was implemented the remainder of the year, where participants met regularly and continued their sessions while collaborating together to develop their art and content. After the year ended, they presented their work to their communities.
The following year, new participants were recruited, while some of the previous ones were asked to be peer mentors for the incoming ones. Meanwhile, the project included a monitoring and evaluation aspect that recorded all the progress of the participants, activities, and impact.
Outcomes and Impacts
We began seeing signs of possibly reduced prejudice by the end of camp. This assessment was based on the following observations:
• During the first two days of the camp, the participants avoided hanging out or socializing with people outside of their in-groups.
• We began seeing a shift on the third day, when people from different backgrounds (nationality and class) began spending time with each other. By the end of the camp, several youth had made plans together outside of the project context.
The participants continued building their relationships throughout the year, where they continued meeting regularly for the sessions and collaborating. They also continued meeting outside of the project context, even after it ended. The information about ongoing relationships is based on their social media accounts, as many followed the project team and stayed in touch. This is significant to us because many youths initially reported not having any friends or romantic relationships outside of their socio-economic background. Some of the participants today still collaborate on art and entertainment projects, while others took an NGO and civil society path. The participants have been actively involved with issues related to social justice, human rights, and anti-racism and gender equality campaigns.
Little impact was noticed within local communities or nationally. During the events held at the end of the project, some of the participants’ family members stated that their views changed due to their children’s relationships with others. However, no long-term or in-depth follow-up has been made for clearer results. In fact, in the following years, sectarian and xenophobic rhetoric increased, likely due to multiple reasons. At the time of this case story, we do not have the data at hand that sheds light on the reasons. However, as shared in the Lessons Learned and Recommendations section, moving forward, a summative evaluation on long-term changes should, if possible, be included in the program design.
Lessons Learned and Recommendations
The programmatic components of the project had some observed successes with the youth. The process allowed the participants to feel heard and be open to the experiences of others. Meanwhile, they were guided through the process and given tools to explore their realities and emotional well-being. Through the reflection sessions, the youth discussed topics related to their personal, relational, and systematic struggles and possible ways to overcome them and collaborate. This process was extremely important to provide the youth with some tools to further explore their perspectives, build relations with the other, and seek alternative options.
It seems that the project did a good job with relational-based changes, however, with little to no evidence of systematically impacting local communities or nationally.
Importantly, it must be noted that the project did not apply a multi-dimensional lens, especially when it came to policy and systemic change, and power dynamics. Rather, the project mainly focused on discrimination prevention and reduction, and cohesive living through interpersonal relationship-building (e.g., conflict transformation) skills. This approach comes with the assumption that if the right tools are given to individuals and community members, they will learn to transform conflict and collaborate to achieve common interests. That perspective might work if ideal conditions are in place; unfortunately, however, there are too many factors at play that hinder such ideals. That was seen on several occasions in Lebanon.
The NGO I (Ramy) worked with had several projects running simultaneously; some worked with the community as a whole and with municipalities, while others focused on youth, women, police, and/or refugees. We began seeing more leniency and openness towards Syrian refugees and collaboration with some of the communities we were working with. However, the sectarian and xenophobic rhetoric rose again due to the socio-politico-economic situation in the country and the region. This was especially seen with the rise of unemployment and a decreased sense of security, and during political and economic crises. Further, with the recent rise of secular/social justice movements and revolution threatening the sectarian political elite and system, the bigoted and fear-based rhetoric has become more of a norm. Thus, working in the field came with limitations due to realities outside of the project team’s control. However, programming and organizational structure and process could be adapted to better meet the needs of the communities, despite these circumstances.
Creating alternative and preventative settings needs long-term planning and multi-level and multi-dimensional collaborations. However, this is extremely challenging to do with the structure of industrial non-profits/NGOs and the current socio-politico-economic systems in place. Challenges include: (1) limited grants for projects, (2) difficulty in conducting long-term need and asset assessments, and (3) not having the appropriate systems in place.
• Grant-based projects often come in a pilot format and are rarely funded for more than one year.
• This makes it difficult to conduct long-term need and asset assessments to plan for long-term programming and impact.
• This points to not having the appropriate systems in place to conduct long-term follow-up evaluations.
Additionally, it leaves NGOs in a constant cycle of seeking and applying for funding, which takes up much of their focus. At the same time, this forces the organizations to adapt and adjust their program objectives and proposals to attract grantmakers and increase their likelihood of receiving grants. In addition, the bureaucratic structure of NGOs often forces program teams to focus on administrative tasks and reporting, leaving less time to focus on the programs and communities’ needs.
The overall budget and two-year duration of the ‘Better Together’ project helped some with implementation and allowed for more support. However, the team’s capacity was spread thin, because most people were working on several other projects at the same time. Furthermore, there were challenges with the partnership’s distribution of tasks and communication, which created several obstacles along the way. Thus, relationship building, clearer task definitions, and conflict resolution processes need to be further developed and agreed on prior to a project.
Moreover, grants often come with predetermined objectives, agendas, and/or restrictions that better meet the interests of the international donors (e.g., governmental agencies, INGOs, foundations). The donor determines where and with whom to work or whom to exclude, as well as the structure and limitations of the program. Consequently, this restricts the freedom to truly meet the communities’ needs and to sincerely take into consideration local knowledge and lived experience, and it is a form of imposing soft power (neo-colonialism) on the country, which creates a vicious cycle. For example, for much of the duration of the war, European and U.S. donors rarely attempted diplomacy or funded peace initiatives in Syria, due to their opposition to the Assad Syrian regime and the complexity of the conflict. Rather, they focused their funding on relief programs in host countries, such as Lebanon, Turkey, and Jordan. This funding later included more social cohesion and international development programs as the war continued for years.
Looking deeper into the context, the situation in Syria and the region was caused by multiple factors, from foreign interference (geopolitics) to internal injustice and drought (climate change). Similarly, regional instability can be traced to the involvement of the U.S. and its allies in the Middle East long before the current Syria conflict. Their actions facilitated the creation of many extremist groups such as ISIS, which prolonged the Syria conflict. The same regime that the U.S. and its allies oppose today was endorsed to occupy Lebanon in the early 1990s because the regime supported their efforts against Saddam Hussein during the first Gulf War. The Syrian regime, in turn, used intelligence, brutal force, and collaboration with and oversight of Lebanese public agencies and government to control and oppress the people. This dynamic also created more division (Sunni vs Shia)[3] and more corruption in the country. Thus, much of the Lebanese population was traumatized by a 15-year sectarian civil war, two occupations, and a corrupt system that did not allow the country to sustainably grow. In turn, that trauma and anger were displaced toward the other and those most vulnerable: other sectarian groups and Syrian refugees.
Thus, the donors and countries that are trying to support and fund international development are the same ones that had a hand in creating the current conditions that led to conflict and division in the region. When we discuss issues surrounding social justice and community psychology, we should include geopolitics, coloniality, and global power dynamics. Coloniality in the Middle East takes multiple forms. Sometimes it has a European or U.S. face, while other times it takes the face of regional powers (i.e., Iran, Russia/USSR, Egypt, Israel, Saudi Arabia). Other times it takes an ideological form (e.g., religion, pan-Arabism, communism, capitalism). This cycle often creates more local divisions and injustices. The reason this narrative regarding historical context is included is that to better support and collaborate with communities, we need to understand their context, history, struggles, and needs. While on the project in this case study, we barely had the chance to do so, and that seems to be indicative of most NGOs.
It is unfortunate that foreign aid and international development funding often serve the interests of those in power.
Although the funding is needed to support those lacking resources and the means to support each other, the funding often acts as a band-aid rather than a transformative solution. Other recommendations include thinking through how to design and implement preventative programming, along with the creation of alternative NGO structures that include multidimensional community and participatory-based programming. This programming should include process and relational-based aspects, as well as policy and systemic advocacy and change, in addition to communication and outreach aspects. This would include human rights and social justice, education, investing in local economies, and long-term local and regional stability initiatives. It is also important to monitor and evaluate these efforts before, during, and after their implementation for learning opportunities.
Looking Forward
I (Ramy) decided to move away from the NGO and nonprofit field for the time being. I am continuing my higher education, earning my Ph.D. in Community Psychology. My applied research interests focus on social movements, power dynamics, social transformation, and decoloniality. I will be working in communities in Lebanon and North America, with a multi-disciplinary, decolonial, intersectional, non-binary, and non-hierarchical approach.
Conclusion
Community psychology practice is integrated at four levels in this case study. The objective of the project was to create alternative and preventive settings that would reduce discrimination and prejudice, and develop collaborative living and conflict transformation. The project used a process-based relational approach, which was important for beginning the conversation and developing the relationships for collaboration. Lastly, the program exposed participants to new perspectives, and many of them went on to seek roles and activism opportunities related to social justice, gender equality, and LGBTQ+ rights, as well as entertainment and arts.
However, it is important to note that the project did not include a policy, systemic, and advocacy-based approach. If included, this could have created tension with the local and/or national government, according to the NGO’s perspective. In addition, the project raised the team’s awareness of local contexts, but not of international power dynamics or the structure of the nonprofit field. Therefore, this case study highlights the need for critical applied research on the impact and structure of NGOs, funders, geopolitics, and systemic change.
From Theory to Practice Reflections and Questions
• The case study shared that the summer camp participants engaged in training sessions covering the topics of social capital and conflict transformation (Barhouche, 2021). What does social capital mean for you and how would you cultivate it?
• Why does the funding that NGOs or nonprofit organizations in the U.S. receive sometimes create a “band-aid” approach rather than accomplishing true individual, family, or community healing and change?
• Consider what, if anything, you would have done differently when trying to support the reduction of conflict, tension, and highly prejudiced bigotry in the context of the refugee community in Lebanon. If you would have done something differently, what resources would be needed to make that happen?
Endnotes
[1] An approach that is based on the premise that everyone has relational needs (acceptance, approval, affection, appreciation, attention, respect, security, comfort, support, encouragement). Those needs can only be met by having an interdependent community, and with the golden rule of “treating others as they like to be treated”. Link: https://www.relationalcare.org/
[2] Pettigrew, T. F. (1998). Intergroup contact theory. Annual Review of Psychology, 49, 65–85. https://doi.org/0066-4308/98
[3] Lebanon has a confessional democratic system that represents the sects/religious groups in the country. The situation has often created fluctuating power dynamics and alliances that have been used by geopolitical powers for their benefit. Traditionally the Sunni Muslims have been the main power in the Middle East for centuries, with Shia, Christians, Alawites, Jews, and other minority groups being treated as second class. The situation changed with the creation of Lebanon, moving the leadership to the Christian Maronites, which created tension between them and the Sunni and Druze, which traditionally ruled the Lebanon region, and eventually led to the 1975 civil war. The Ta’if Agreement ended the civil war, by redistributing power which allowed the rise to the Sunni and Shia leadership, and weakening the Maronite leadership and role in the government. The Sunni leadership was supported by Saudi Arabia (Sunni) and the US, while the Shia leadership was supported by Syria’s Assad regime (Alawite Muslim) and Iran (Shia). Thus, this situation created more rivalry and division. | textbooks/socialsci/Psychology/Culture_and_Community/Case_Studies_in_Community_Psychology_Practice_-_A_Global_Lens_(Palmer%2C_Rogers%2C_Viola_and_Engel)/02%3A_Global_Perspectives/2.01%3A_Prelude_to_Global_Perspectives.txt |
This case study takes a look at an innovative intergenerational approach to promoting girls’ rights and development in Senegal.
The Big Picture
This is the story of the Girls’ Holistic Development Program, designed and implemented by the non-profit organization Grandmother Project – Change through Culture in Southern Senegal starting in 2010 and evaluated on several occasions by outside researchers. The case study describes this innovative program, its results, and the lessons learned that are relevant to other African contexts and to other collectivist cultures in the Global South or Global North. The author is the co-founder and Executive Director of the Grandmother Project initiative.
The Grandmother Project’s Mission
…is to improve the health and well-being of women, children, and families
in countries in the Global South, by empowering communities to drive their own development
by building on their own experience, resources, and cultural realities.
Senegal Community Context
The American and Senegalese non-profit organization Grandmother Project – Change through Culture (GMP) works mainly in Senegal, on the west coast of Africa. GMP’s work is in southern Senegal, a rural area where farming and cattle raising are the main economic activities. This area is severely economically challenged, and many men migrate to the capital city and in some cases to Europe. The quality of health services and of schools is generally poor. Only very young, inexperienced health workers and teachers tend to work in this area, which is a 10-hour drive from the capital, Dakar. Only about 5% of all teachers in the area are women. Very few communities have electricity, which makes evening studying very difficult, even for very motivated students.
More than 95% of Senegalese people are Muslim. People’s lives are influenced by Muslim and African values that include respect for elders, solidarity, generosity, and interdependency. While western societies value individual rights and achievement, African cultures are built on collectivist, or relational, values and accord greater importance to interdependency and support of group values and achievement than to autonomy and individual accomplishment.
As in other African countries, families and communities in Senegal are organized hierarchically, with men having more power and influence than women and with elders having authority over younger family members. The role of elders is to transmit cultural and religious values to younger generations. An often-heard saying in Senegal, and across Africa, is: “What an elder can see sitting on the ground, a young person cannot see even if she/he is on the top of a tree”.
Intergenerational relationships were traditionally very strong; nowadays, however, they are in many cases strained by a breakdown in communication between elders, parents, and children. The lives of Senegalese people are very much influenced by non-western, collectivist values, many of which differ markedly from those of western societies. In the Velingara area of Senegal where GMP is working, extended families predominate, and multi-generational decision-making and caregiving for younger family members are prevalent.
Growing up in this context is full of challenges, especially for girls. Some deep-seated social norms greatly limit their opportunities for growth and development. Most families prioritize boys’ education over that of girls, and early marriage of girls, sometimes as young as 12, is prevalent, often with families playing a major role in identifying a spouse for their young daughters. For girls who stay in school beyond puberty, teen pregnancy is another very problematic phenomenon.
Female genital mutilation (FGM) is practiced by some ethnic groups in Senegal. A 1988 study by Environmental Development Action in the Third World (ENDA) found that approximately 20 percent of the female population had undergone an FGM procedure; other estimates suggest the figure is between 5 and 20 percent. Among the Halpularen (Peul and Toucouleur) populations in rural areas of eastern and southern Senegal, DHS statistics from 2017 indicate that 50% of women aged 15 to 45 have undergone the practice. The lower national figures refer to the total female population, not only to the groups that practice FGM.
Background of the Girls’ Holistic Development Program in Senegal
At the outset, the international non-profit organization World Vision asked the Grandmother Project to develop a strategy to specifically address female genital mutilation (FGM) in the Velingara area of Senegal. The World Vision Director told me (Dr. Aubel) that their earlier efforts to discourage the practice using traditional health talks had not been effective, and they wondered whether involving the grandmothers participating in our programs might be a good way to address the issue. I told them that FGM is a very complex issue and that there are no simple strategies to promote the abandonment of this harmful practice. I proposed an initial Preparatory Phase, composed of two activities to understand communities’ attitudes toward the practice, followed by three additional components: an Implementation and Learning Phase, a Program Evaluation Phase, and the Development of Lessons Learned, for a total of four components. This process was carried out over a 12-year period as an iterative action learning process.
Preparatory Phase
To develop the Girls’ Holistic Development Program, the two initial activities under the proposed Preparatory Phase were a participatory and rapid qualitative assessment and a series of dialogue forums to discuss the results of the assessment with community actors.
Participatory and Rapid Qualitative Assessment
It is important to foster an environment where community members are viewed as the experts on the situation in their own communities. The initial assessment therefore sought, first, to understand families’ priorities and concerns regarding girls’ education and upbringing, and second, to understand the attitudes, roles, and experiences of community members and of health and development workers related to FGM. This assessment was conducted by the author (Dr. Aubel) in conjunction with members of the Grandmother Project team. A participatory approach was used, consisting of small in-depth group interviews with traditional male community leaders, religious leaders, grandmothers, men, women, local authorities, health workers, and staff of other non-governmental organizations (NGOs) working in the area.
The rapid qualitative assessment provided critical information on the attitudes of both community and health and development workers toward FGM. However, of greater significance was the information that revealed community members’ concerns regarding the education and upbringing of children, and specifically of girls. Three main themes emerged from the interview data related to families’ concerns regarding 1) the breakdown in communication between elders, parents, and children; 2) the resulting decrease in transmission of moral, cultural, and religious values and traditions to children, e.g. showing respect for elders and story-telling; and 3) families’ concerns regarding children’s attitudes and behavior that conflict with priority family and cultural values.
“We should go back to our roots. We need to recognize what is positive within our culture and hold on to it jealously.”
Demba, NGO community development worker
“If we lose our cultural values, we will be forced to replace them with other people’s values.”
Abdoulaye, Teacher
These insights into community members’ concerns regarding children’s education and development were critical in subsequently developing an intervention that would respond to their concerns while at the same time catalyze reflection on an issue that we viewed as a problem, but that they did not.
Series of Forum-Dialogues
The second activity in the Preparatory Phase consisted of forum dialogues held in four communities. Each forum involved 25 participants: male and female elders and parents, many of whom were community leaders. The objectives of the forums were to share the results of the community study; to elicit dialogue on how cultural values and traditions that are being lost could be revitalized in families and communities to ensure the development and well-being of children, and specifically of girls; and to identify strategies to promote discussion of FGM in communities. Based on participatory adult education principles, the team developed a training design for the forums to encourage open discussion of these objectives, recognizing that FGM had never before been discussed in a public setting.
In Africa, community resistance to many social programs is engendered by the fact that they are critical of and aim to change certain ingrained cultural norms and practices. A fundamental principle that shaped the forums and subsequently the entire GHD program, is the idea that programs that promote change in certain harmful traditions, e.g. FGM, should simultaneously promote positive cultural values and traditions.
A key idea that shaped the discussions during these initial forums was a profound statement by Amadou Hampâté Ba, a Malian philosopher (1901-2002), a member of the same Halpular ethnic group as forum participants.
“Become rooted in yourselves. Preserve positive traditional values and let the abusive customs disappear.”
Hampâté Ba studied for many years in France and, upon his return home, addressed this statement to African youth in a letter. This important quote from a respected Malian intellectual was used in the forums to catalyze community reflection on the relevance of different cultural values and traditions for today’s children.
The forum dialogues consisted of two days of discussions, often based on small group exercises. The first day dealt with communities’ expectations and concerns regarding girls’ education and development and the second day focused on FGM and community ideas on how to catalyze discussion of this issue within the wider community. On both days, community members formulated recommendations for actions to be taken by families, community leaders, teachers, and NGO partners.
Feedback from forum participants in all four sites was similar and very positive. Many participants expressed appreciation for the respectful way in which culture was addressed and for the non-directive approach used which, they said, contrasts with the directive techniques often used by other NGOs.
“The approach is very good because the discussion took place in the cultural context and was based on the idea of promoting what is good and discouraging what is negative. In the past, development workers would come only to criticize our traditions and propose strange ideas. You have begun by appreciating what exists in our tradition and not criticizing it directly.”
Bassirou, Community Health Volunteer
“Even though we didn’t go to school, we understood everything, we shared our knowledge and everyone appreciated our ideas.”
Fatamata, Grandmother Leader
Traditionally in community meetings, there was no open communication between men and women. The inclusive nature of the forums, with men and women of different ages and statuses within the community, was appreciated by almost all community members. However, a few of the elders said that they felt uncomfortable being in the same meeting with people much younger than themselves.
“There is often a constraint in community discussions because different categories of community members do not openly speak up. It is good to bring together men and women of different social classes and ages so that everyone can learn from each other.”
Mballo, Former National Parliamentarian
“In other workshops, we grandmothers were criticized for our traditional ideas. That’s why, before coming, we were afraid. But we are happy that we could contribute to the discussion without being criticized.”
Oumou, Grandmother Leader
During the forums, there was much discussion of the central role of grandmothers in families and specifically in the lives of girls. Participants stated that any efforts to promote the well-being of girls and to discourage FGM should involve grandmothers, first, because they are responsible for preserving cultural norms and, second, because they have a close relationship with the cutters and hence have the greatest possibility of discouraging them from continuing this practice.
The combined results of the initial community assessment and forum dialogues provided substantive insight into communities’ priorities and concerns regarding the development and upbringing of children, especially of girls, and specific information on community views on FGM. Based on these results, as well as key principles from community development, adult education, anthropology, and community psychology, the GMP team identified a set of concepts and priorities that informed the design of the GHD intervention. These objectives included:
• to promote multiple facets of girls’ development related to positive cultural values and traditions, not only FGM, in order to increase both program relevance to communities and their involvement;
• to address both community and NGO concerns related to GHD;
• to promote positive cultural values and traditions while discouraging harmful ones;
• to relate program goals and activities to religious values and to include religious leaders;
• to build on existing community resources, or assets, in terms of experience, knowledge, and influential roles of community actors;
• to actively involve elders, including traditional and religious leaders on an ongoing basis;
• to strengthen communication between the generations e.g. elders, parents, and adolescents, and between men and women;
• to acknowledge and strengthen the role of grandmothers in families and communities as key transmitters of cultural values and as allies of young girls;
• to strengthen relationships between girls, mothers, and grandmothers;
• to strengthen the skills and commitment of community leaders, both male and female, and of three generations, to work collectively to promote positive change for girls in their communities;
• to use participatory adult learning approaches that catalyze reflection and community consensus-building for change.
Another critical factor related to the interface between the GHD program and communities was the development of respectful and ongoing relationships between GHD staff and community leaders and groups. Understanding of and sensitivity to local cultural values and traditions, and humility, were key criteria for the selection of GHD staff. In African societies, positive relationships are the basis for all interaction and collaboration.
Conceptually the GHD program draws on several disciplines namely, community psychology (especially the work of Foster-Fishman et al., 2007; Trickett et al., 2011; Hawe et al., 2009; Schensul & Trickett, 2009; O’Donnell & Tharp, 2011; Zimmerman et al., 2011); community development (especially Lasker & Weiss, 2003; Chaskin et al., 2001; Hughes et al., 2005); anthropology (Airhihenbuwa, 1995); adult education (Freire, 1972; Brookfield, 1984; White, 1999) and social work (Hartman & Laird, 1983).
Partnership with the Ministry of Education (MOE)
The GHD program is implemented in close collaboration with the District Education Office in Velingara. All of GMP’s objectives align with the MOE’s priority concerns regarding children’s education and development, and specifically those concerning girls’ education, child marriage, and teen pregnancy. GMP’s long-term objective is for the MOE to integrate the GHD intergenerational and grandmother-inclusive approach into its own programs with communities.
Implementation of the Girls’ Holistic Development Program
A Holistic Approach for Systemic Change
GHD Model
The first funding for GHD came from World Vision, whose initial concern was only FGM. However, based on the insights obtained during the Preparatory Phase regarding community concerns and priorities, we proposed an approach that addresses girls’ needs holistically. The GHD circle, which has been widely used with communities, teachers, and partner organizations, presents the key facets of girls’ development that are important to local communities, namely: moral, cultural, intellectual, spiritual, emotional, health, physical, and civic responsibility. Unfortunately, many national and international programs in support of girls ignore other aspects of girls’ development that communities value, namely their moral, spiritual, and cultural development.
Program Goal and Objectives
The goal of the GHD program is to strengthen community capacity to promote girls’ health and well-being, with two general objectives:
1. To strengthen communication and social cohesion within communities and families in order to promote systemic change in harmful social norms related to girls’ education, child marriage, teen pregnancy, and FGM; and
2. To promote positive cultural roles, values, and practices that contribute to girls’ development and upbringing.
Implementation of the GHD program began in 2008 in the Velingara area in southern Senegal and has evolved over the past 12 years through an iterative action research and learning process. It has involved testing, evaluation, revision, and expansion of the program in response to strong support and input from communities and local elected officials.
Holistic Focus on Girls’ Rights and Needs
Many international programs to improve the lives of girls address the four priority GHD issues (see objective no. 1 above), all widespread problems across Africa. The predominant pattern in such programs is to target girls, either primarily or solely, in a linear fashion based on the assumption that if they are empowered they will be able to catalyze change in families and communities. From the perspective of both anthropology and community psychology, in African societies girls are embedded in family, community, and cultural systems, as visualized in the Onion Model (Aubel & Rychtarik, 2015). Those systems impose expectations on girls while at the same time providing them with critical support as they grow up and face life’s challenges.
The Onion Model (Figure II) presents key facets of the context in which Senegalese girls are embedded, and it has several implications for the design of programs to support them: adolescent girls are not isolated and rarely make decisions on their own; various family members are involved in decisions affecting their wellbeing and options in life; grandmothers play a central role in the socialization of young girls; and grandmothers typically have more influence on men’s decision-making within the family than do the mothers of young girls.
GHD aims to create an enabling environment around girls so that they can flourish. This is achieved by directly supporting girls while at the same time encouraging community-wide consensus building for the adoption of social norms and attitudes that are more supportive of girls. This two-pronged strategy is presented in the simplified Theory of Change in Diagram I below.
Building Communication Relationships
To promote community-wide change, key community actors must be involved. Also, strong communication relationships between them are the foundation for open dialogue and reflection on existing social norms and on alternative concepts and practices. In the initial assessment and forum dialogues, community members discussed the serious breakdown in communication between generations that exists in virtually all communities.
The following quote from a community elder illustrates the importance of communication relationships and the situation that existed in many communities at the outset.
“Communication is the foundation of life for any group. Without communication and understanding, there will never be any development. Many interventions failed in our communities because there was not enough dialogue and understanding between people. As long as there is a conflict or the absence of communication, the community will not progress.”
Diallo, Community Headman
Involvement of Community Leaders
A prerequisite for the success of any effort to improve community life is the existence of committed leadership, strong relationships between leaders and other community members, and a sense of solidarity often referred to as social cohesion. In light of the role and influence of both formal and informal community leaders, the GHD activities primarily target leaders of three generations (elders, adults, and adolescents), male and female, as well as recognized traditional leaders.
Building on Cultural and Religious Roles and Values
In all communities, there are leaders whose roles are determined by cultural and religious values and structures, and they have moral authority and influence on the attitudes of other community members. In the Velingara area, more than 98% of the population is Muslim. Traditional leaders and local Imams share responsibility for ensuring the well-being of their communities. In each community where the GHD program was launched, the GMP team first identified and established rapport with local formal and informal leaders.
A very erudite and respected imam, Oustaz Balde, has been a key resource for the GHD program. He has participated in many key activities, and he is able to articulate the need to create a bridge between “traditional” cultural and religious values and more “modern” ideas related to various aspects of GHD, including girls’ education and FGM.
“Through a participatory approach that encouraged communication between people, the GHD program has reinforced the sense of celebration among community members and acceptance of those with different opinions. Before, there was a real breakdown in communication between neighbors, within families, and between generations. GHD has encouraged introspection and self-critique. It seems that GHD has brought about an incredible reconciliation between the generations who now accept each other, understand each other, and are more tolerant of each other.”
Oustaz Balde, Imam, Velingara
Program Components
Change through Culture Process of Change
In designing any community change strategy, two fundamental concepts, articulated in both community psychology and community development, are central: building social cohesion within communities, and specifically between community leaders and groups; and adopting an asset-based approach in which existing social resources (e.g., elders and leaders of all ages) are identified and strengthened. In Diagram II below, key elements in the Change through Culture process of change are laid out, with both of those concepts (in the first column) alongside the initial weaknesses in the cultural and community context (second column) identified during the Preparatory Phase.
Community Dialogue for Consensus-Building for Change
In order to promote community-wide change, a Community Dialogue for Consensus-Building approach was developed to involve various community actors in a series of participatory activities to elicit discussion on different facets of GHD. The objective of these activities was to catalyze dialogue and reflection, primarily between formal and informal leaders of both sexes and of three generations, first, to develop a consensus regarding the need for change, for example, to abandon child marriage, and second, to collectively decide on actions to be taken to promote change in existing norms and practices.
In light of the initially weak communication relationships in all communities where GHD has been introduced, an initial and ongoing priority in the GHD program is to strengthen those relationships in two ways: first, to strengthen existing relationships within communities, for example, between girls and grandmothers; and second, to encourage the creation of new communication relationships, for example, between fathers and adolescent daughters. The objective is to create a synergistic effect through discussions of the same GHD issues by different community groups, which can lead to a community-wide consensus on actions to be taken to promote GHD.
The core elements of the GHD program are a series of dialogical activities that involve different categories and combinations of community actors, of three generations, of both sexes, traditional and religious leaders, teachers, and local health workers. In all of these activities, grandmothers are key actors. Along with other community members, they participate in dialogue and debate, and their involvement strengthens their capacity and commitment to lead positive change.
These key activities are briefly described below in terms of their purpose and participants.
1) Intergenerational Forums
This is the foundational activity in GHD. Participants include leaders of both sexes and of 3 generations, traditional and religious leaders, teachers, and local health workers. The two-day forums build solidarity between participants through a series of carefully designed small and large group exercises, all involving dialogue, problem-solving, and consensus-building. Key topics addressed include the 4 priority GHD issues, intergenerational communication, cultural values, and identity.
“The intergenerational forums are very important as they help to re-establish dialogue between elders, parents, and youth. In the recent past, elders and adults didn’t listen to young people’s ideas and underestimated them. Thanks to the forums there is now more communication between older and younger community members.”
Grandmother Leader
“The forums have increased women’s confidence in themselves. Before, they didn’t dare express themselves in front of men. During community meetings, only men were allowed to speak. But now men know that women also have good ideas and encourage them to speak up.”
Young mother
“Before there was not enough discussion between men and women in families. It was a problem that separated them and was the root of frequent misunderstandings and arguments. The intergenerational forums have helped to solve this problem.”
Young adolescent boy
“We never before had the opportunity to sit all together and discuss like this, although it is the best way to promote the development of our community.”
Elder leader
2) Days of Praise of Grandmothers
The purpose of these events is to celebrate grandmothers’ role and commitment to promoting the wellbeing of children, especially girls. Participants in these one-day gatherings include grandmother leaders from 8 surrounding communities, traditional and religious leaders, local musicians, local elected officials, and teachers.
Songs of Praise of Grandmothers are used during these events and provide relaxed interludes of singing and dancing.
These special days of recognition of grandmothers reflect psychologist Carl Rogers’ (1980) concept of Unconditional Positive Regard: when GMs are recognized and their self-confidence is reinforced, they will be more open to revisiting their existing attitudes and practices. The following quotes reflect community attitudes toward these events.
“This is a very important day because we are here to honor the grandmothers who are the teachers of young couples and of children. Before this project, the grandmothers were practically dead in the village and now they have been revived. It is since the grandmothers have resumed their role that teen pregnancies have greatly decreased.”
Mamadou, Village Elder
3) GM Leadership Training
GMP realized that in all communities there are natural grandmother leaders who are recognized by other community members for their dynamism and selfless commitment to promoting community well-being. Through discussions with grandmother groups, five natural grandmother leaders were identified in each community to participate in the under-the-tree GM Leadership Training. The objectives of the training were to increase GMs’ knowledge of adolescence, improve their communication with girls, and empower them to act collectively to promote and protect girls, building on their status and authority at family and community levels. The training lasted for 8 days, divided into four two-day modules conducted over a period of 6 months.
Four months after the leadership training was completed, individual in-depth interviews were conducted with 40 grandmother leaders to assess the outcomes of the training. Analysis of the interview responses revealed three key results:
1. Strengthened relationships between grandmothers,
2. Strengthened relationships between girls, mothers, and grandmothers, thereby constituting a source of power and influence to protect and promote girls’ rights and well-being in a culturally consonant way, and
3. Strengthened relationships between grandmothers and other influential community actors.
“Since we participated in the grandmother leadership training, the relationship between us grandmothers has changed. Now there is a permanent dialogue between us. Whenever one person has an idea of what we should do regarding our girls, we get together to discuss. Since the training, the relationships and communication between us have been strengthened”.
Grandmother Leader
“Thanks to these training sessions, I have become more confident. I no longer hesitate when there is something that needs to be said or done. I no longer bow my head when speaking before a group of men.”
Grandmother Leader
“Before we used to scold our granddaughters all the time and they were rather afraid of us. Through the training, we realized that that is not a good way to communicate with them. Now we talk softly to the young girls and they listen to our advice with regards to sexuality and other things.”
Village headman’s wife
4) Teacher workshops on “Integrating Positive Cultural Values into Schools” (IPCVS)
Many families do not have a strong motivation to send their children to school or to let them stay in school for many years. A major reason for families’ reticence is that schools do not teach the cultural values that are important to communities. In 2019, one of the priorities defined in the Ministry of Education’s five-year plan was to expand the teaching of cultural values in schools. In partnership with the District Education Office in Velingara, GMP developed the Integrating Positive Cultural Values (IPCV) into Schools strategy. The initial activity in launching this strategy in schools consists of teacher workshops to increase teachers’ commitment to developing children’s knowledge of cultural values and traditions, in addition to the “modern” knowledge inscribed in the official curriculum, and to strengthen their relationships with communities. Local education officials are very supportive of this strategy, as suggested in the following quote.
“These workshops support national priorities to increase children’s understanding and adoption of positive cultural values and to strengthen teacher-community relationships and communication”.
Amadou Lamine Wade, District Education Office Director
5) Grandmother-Teacher Workshops
Both teachers and grandmothers have frequent contact with and influence on children, and specifically on girls, related to their education, child marriage, and teen pregnancy. However, in most communities, direct contact between teachers and grandmothers is very limited; teachers often have a sense of superiority over illiterate grandmothers, while these guardians of tradition suffer from a sense of inferiority toward well-educated teachers. The objective of these workshops is to strengthen relationships between teachers and grandmothers in order to increase their collaboration, both in promoting the teaching of positive cultural values in schools and in the community and in promoting GHD. These innovative workshops are strongly supported by education officials, teachers, and grandmothers.
“Teachers alone do not have all of the knowledge that children need to learn. They also need to learn about positive cultural values and behavior. I don’t know of anyone else in the community who is more knowledgeable regarding the values that children should acquire. That is what justifies the presence of the grandmothers here today. And increased communication between teachers and grandmothers is very beneficial to children, especially to girls.”
Mr. Ba, Supervisor, District Education Office
“We are honored to have been invited to participate in this workshop along with teachers and school directors. We are going to work together with the teachers to encourage all children, those in school and not, to learn the values that are important in our culture.”
Maimouna, Grandmother
An important component of the IPCVS strategy is the participation of grandmothers in classrooms to facilitate value education sessions with children. This further contributes to strengthening relationships between schools and communities.
6) Under-the-Tree Sessions with Grandmothers, Mothers, and Girls
Building on grandmothers’ traditional advisory role with adolescent girls, these sessions primarily involve grandmothers and girls, but also mothers. A major GHD activity is these frequent participatory dialogue and learning sessions, which strengthen communication between the generations and foster discussion of topics related to girls’ education and development. A variety of activities using stories, songs, games, and discussion pictures are used to elicit dialogue and increase understanding between the generations.
7) All Women Forums
An activity initiated in 2019, these two-day forums strengthen communication between girls, mothers, GMs, and female teachers; promote the collective empowerment of girls; and catalyze dialogue on concerns shared by girls, their mothers, and their GMs. During these forums, a variety of participatory exercises encourage girls to express their feelings, concerns, and ambitions related to school and life beyond it, and encourage mothers and grandmothers to listen to, empathize with, and encourage girls. This activity aims to strengthen the collective sense of responsibility that mothers and grandmothers have not only for their own daughters but for all girls in their communities.
Below are several comments by adolescent girls who participated in under-the-tree sessions with grandmothers and who also attended the All Women Forums.
“There is a change in our relationships with our grandmothers. Before, we preferred to go to dancing parties or to watch television instead of being with them. Now, we spend more time with the grandmothers, listening to their stories that teach us about important values”.
Adolescent girl
”We are closer to our grandmothers now. If we have questions related to sexuality we can discuss them with our grandmothers, more easily than with our mothers. Now we are comfortable talking to the grandmothers”.
Adolescent girl
8) Days of Dialogue and Solidarity
In all communities where GHD is implemented, elders play an important role in families and communities and they have a big influence on the social norms that define acceptable attitudes related to many aspects of life. Many facets of GHD, e.g. girls’ education and FGM, are influenced by the attitudes of the elders, specifically the elder men in each family, traditional community male leaders, Imams, and grandmothers. These are the community actors who are involved in the Days of Dialogue and Solidarity. The purpose of this activity is to elicit reflection by the elders from several adjacent communities on the role that they can play to promote GHD. As with other GHD activities, the idea is to strengthen the knowledge and the role played by existing community actors. In other terms, the elders constitute a community resource, or asset, that can be strengthened to promote the programs’ objectives. In each of these events, participants articulate their plans for actions they can take in their respective communities.
“This meeting has been very useful because it has allowed us to discuss important issues with others from our same village and with people from other villages. In our community we plan to organize meetings with all generations, to discuss what we can do together to prevent child marriage and FGM.”
Moussa, Village Headman
“These meetings are very beneficial because they encourage communication and understanding between us. During this meeting, I realized that FGM is not recommended by Islam. Many Imams were present and none of them support the practice”.
Cissé, Grandmother Leader
Referring back to Figure 3, I think that you can see how the various dialogical activities organized by the GHD Program created a synergy between the different community actors by promoting community-wide discussion of various issues related to GHD.
GHD Theory of Change and Program Results
At the outset of the GHD Program in 2008, a Theory of Change (TOC) was developed with the GMP team in Senegal. However, during the ten years of development of GHD, there was a conscious effort to encourage a process of continuous learning. To support this learning process, a series of evaluations and studies were conducted by external researchers, in collaboration with GMP staff, for two purposes: 1) to understand communities’ attitudes toward and response to the GHD Program, and 2) to identify changes that may have come about as a result of the GHD Program. Several external evaluations looked broadly at program results. Additional studies focused on: family decision-making related to child marriage; the process of abandonment of FGM/C in some communities; communication between the generations; changes in gender roles and the status of women; the effects of the grandmother leadership training; and the relationship between the culturally-grounded program and community engagement in it. In 2019, the Institute of Reproductive Health (IRH) at Georgetown University, in collaboration with the University of Dakar, conducted extensive qualitative and quantitative research on the GHD Program (described below).
Based on the conclusions of the various evaluations and studies, and the GMP team’s lived experience with communities carrying out the program, we revised the Theory of Change to reflect what has happened as a result of implementing the program.
Diagram IV (below) synthesizes the relationship between the GHD program and its intermediary and long-term results. We can also refer to this sequence of events as the pathway to change.
As the diagram indicates, the implementation of the GHD Program was initiated through the development of respectful relationships between GMP facilitators and community actors. Those relationships were the foundation for a series of dialogical activities addressing GHD. The intermediary results of the GHD Program are observed first in terms of increased communication and social cohesion between generations and between the sexes. Strengthening those communication relationships supported subsequent changes at the community level and with schools in support of girls, and of children more broadly. The combination of those changes has supported change within families related to family roles and communication, which ultimately has had a positive impact on girls. These multi-level results are contributing to long-term changes: increased community capacity to collectively promote GHD in an ongoing fashion, and changes in social norms to support girls related to their education, marriage, teen pregnancy, and FGM/C.
Georgetown University Evaluation of the GHD Program
The most extensive research on the GHD Program was carried out by the Institute of Reproductive Health (IRH) in the context of the USAID-funded PASSAGES project. Between 2017 and 2019, IRH provided support for several smaller studies (mentioned above) and for the larger Realist Evaluation conducted in collaboration with the University of Cheikh Anta Diop in Senegal. Key conclusions of the two-part IRH research are presented below.
Key conclusions of the Georgetown University evaluation of GMP’s Girls’ Holistic Development Program
• The GHD Program has created safe spaces for dialogue and consensus building between the three generations, e.g., elders, adults and adolescents, as well as between men and women. The inclusive strategy has contributed to increased communication between the sexes and between age groups.
• The dialogue-based approach has provided a platform for the community to discuss norms and practices that are harmful to girls and to identify their own solutions through consensus-building.
• Decision making in families and communities has become more inclusive, more participatory and more gender equitable.
• GHD reestablishes grandmothers as traditional family counselors and advocates for girls. The approach has increased grandmothers' role and power in family decision-making related to all issues concerning girls.
• In GHD-supported communities, relationships between grandmothers and girls have been greatly strengthened, and girls have greater self-confidence to discuss and seek grandmothers' advice and support on a variety of issues, including sexuality.
• Grandmothers who participated in leadership training demonstrate more open, progressive attitudes and support for girls' education, delayed marriage and pregnancy and abandonment of FGM/C.
• GHD promotes change in culturally embedded social norms and practices related to girls' education, child marriage, extramarital teen pregnancy, and female genital mutilation, both by empowering girls and by creating an enabling environment in which family and community actors support change for girls.
• Most programs promoting GHD primarily target girls. Based on the experience of the GHD Program, interventions that focus more on involving community and family members than on adolescent girls alone can have a greater impact on changing the norms and behaviors that affect girls.
Lessons Learned from the Girls’ Holistic Development Program
Based on the various studies and experience of the GHD team working in southern Senegal, a number of lessons are identified that have wide application for other programs addressing GHD across Africa and elsewhere in the Global South where societies are hierarchically structured, elders are respected and have influence, and where grandmothers play a role in socializing and supporting adolescent girls.
1. In all African communities, elders have status and power over younger generations and determine the social norms that structure family and community life. When an approach based on respect and dialogue is used, they are not automatically opposed to change.
2. When programs respect and build on cultural and religious roles and values that communities cherish, community actors are more receptive and more engaged. The opposite is also true.
3. In non-western cultural contexts, where extended family networks are stronger, grandmothers play a role in all aspects of the upbringing and development of children, especially girls. Using an assets-based approach, programs should acknowledge and build on this cultural resource.
4. Families are concerned about all facets of girls’ upbringing and development. Communities are more receptive to programs that address various facets of girls’ development, rather than single-issue strategies.
5. Community involvement in programs is greater when programs supporting girls’ development address issues that are of concern to communities in addition to the priority concerns of development organizations.
6. Communication between three generations (elders, adults, and adolescents) should be strengthened in order to promote harmonious change within family and community systems rather than creating conflict between generations with differing opinions.
7. Both formal and informal leaders, of all three generations and both sexes, should be involved in all efforts to promote change in communities. Given their influence with their respective peer groups, they are powerful gatekeepers who can either support or block new ideas and behaviors.
8. Communication and education methods used with community groups should be based on adult education methods which elicit critical reflection among community actors rather than on persuasion and messages disseminated to passive beneficiaries, or audiences, to convince them to adopt expert-identified solutions.
9. Communities are more open and engaged in programs that adopt an asset-based approach where positive roles, values, and practices are encouraged and reinforced while harmful ones are discouraged. This lesson builds on Carl Rogers’ concept of Unconditional Positive Regard.
10. In any program, it is important to determine at the outset what the roles and influence are of different family and community actors in order to involve all categories of people who influence or who could influence the issue being addressed.
Conclusion
The GHD Program was initiated primarily through the development of respectful relationships between GMP facilitators and community actors. These relationships served as a foundation for a series of dialogical activities addressing GHD. Intermediary results of the GHD Program were observed first in terms of increased communication and social cohesion between generations and between the sexes. This work made us even more aware of the value of an asset-based, rather than deficit-based, approach, no matter the context.
From Theory to Practice Reflections and Questions
• As you think through this case study, identify one or two ways it can be challenging to work alongside different generations of individuals and families.
• We all have differences of opinions and worldviews on social issues facing our world. What are your thoughts on how we advance social and racial justice internationally when worldviews diverge regarding what is culturally appropriate?
• We believe that all work involving social and racial justice should begin with engaging in introspection and self-awareness processes. What are some ways you have engaged in self-reflection concerning working across groups who have different customs and beliefs from your own? Share at least two ways you will do so moving forward.
• The Girls’ Holistic Development Program was developed and implemented in Senegal. We believe that the concepts and methods used in Senegal are relevant to other African settings and that many are relevant to other contexts in the non-western world and also to communities in the global north.
References
Airhihenbuwa, C. O. (1995). Health and culture: Beyond the western paradigm. Sage Publications, Inc.
Aubel, J. & Rychtarik, A. (2015) funded by USAID, Washington D.C.
Brookfield, S. D. (1991) Developing critical thinkers: Challenging adults to explore alternative ways of thinking and acting. Jossey-Bass.
Chaskin, R. J., Brown, P., Venkatesh, S., & Vidal, A. (2001). Building community capacity. Aldine de Gruyter.
Foster-Fishman, P. G., Nowell, B., & Yang, H. (2007). Putting the system back into systems change: A framework for understanding and changing organizational and community systems. American Journal of Community Psychology 39, 197-215. https://doi.org/10.1007/s10464-007-9109-0
Freire, P. (1970) Pedagogy of the oppressed. Continuum.
Hartman, A. & Laird, J. (1983). Family-centered social work practice. Free Press.
Hawe, P., Shiell, A. & Riley, T. (2009). Theorizing interventions as events in systems. American Journal of Community Psychology, 43, 267-276. https://doi.org/10.1007/s10464-009-9229-9
Institute of Reproductive Health, Georgetown University (2019) Grandmother Project – Change through Culture: Program for Girls’ Holistic Development: Qualitative Research Report. Washington D.C., for USAID.
Institute of Reproductive Health, Georgetown University (2019) Grandmother Project – Change through Culture: Program for Girls’ Holistic Development: Quantitative Research Report. Washington D.C., for USAID.
Lasker, R. D., & Weiss, E. S. (2003). Journal of Urban Health, 80(1), 14-60.
O’Donnell, C. R., & Tharp, R. G. (2011). Integrating cultural community psychology: Activity settings and the shared meanings of intersubjectivity. American Journal of Community Psychology. https://doi.org/10.1007/s10464-011-9434-1
Rogers, C. (1980). A way of being. Houghton Mifflin.
Schensul, J. J., & Trickett, E. (2009). American Journal of Community Psychology, 43, 232-240. https://doi.org/10.1007/s10464-009-9238-8
Trickett, E. et al. (2011). Advancing the science of community-level interventions. American Journal of Public Health 101(8),1410-1419. https://doi.org/10.2105/AJPH.2010.300113
White, S.A. (1999). The art of facilitating participation. Sage Publications, Inc.
Note
*All photographs are courtesy of Dr. Judi Aubel
Evaluation research, also known as program evaluation, refers to a research purpose rather than a specific method. It is the systematic assessment of the worth or merit of the time, money, effort, and resources spent in order to achieve a goal. Evaluation research is a type of applied research, and so it is intended to have some real-world effect. Many methods, such as surveys and experiments, can be used to do evaluation research. Evaluation research is a rigorous, systematic process that involves collecting, analyzing, and reporting data about organizations, processes, projects, services, and/or resources. It enhances knowledge and decision-making, and leads to practical applications.
The three case studies found in this section provide real-world evaluation research for you to see how community psychology practitioners conduct this work in community settings. In Lessons from Conducting an Equity-Focused, Participatory Needs Assessment, Brown et al. describe their process of engaging in community-based collaborative work with the LGBTQIA community in North Texas with a partnership that consisted of a full-time community psychology practitioner, an academic partner, and other stakeholders.
Program Evaluation: A Fundamental Component in Effective Community Practice, contributed by Dr. Patricia O’Connor, expands the traditional single-case study format to include multiple mini-case studies from which “lessons learned” are highlighted through the evaluation-based practice of community psychology (CP). In this study, CP practitioners and relevant stakeholders work together to design and implement needed community-based programs.
Our third case story, Showing up and Standing with: An Intersectional Approach to a Participatory Evaluation of a Housing First Program on O’ahu, contributed by Dr. Anna Pruitt, takes us to the lovely island of O’ahu and captures the work conducted in an ongoing five-year participatory evaluation partnership between Housing First program participants, staff, and community psychologist evaluators in the multicultural context of the Island of O‘ahu in Hawai‘i. Using an intersectional lens (Crenshaw, 1989; Weber, 2009), this case study explores the challenges and successes of building this partnership among individuals from diverse racial and ethnic backgrounds with varying degrees of power, housing experiences, and mental and physical health issues.
3.02: Lessons from Conducting an Equity-Focused, Participatory Needs Assessment
This case story focuses on the authors’ work to conduct a comprehensive Ryan White HIV/AIDS needs assessment in North Texas. We highlight three key issues from a community psychology perspective.
The Big Picture
In the United States, an estimated 1.2 million people are living with the human immunodeficiency virus (HIV). Men who have sex with men (MSM) and bisexual men of color, African American heterosexual women, and Latina heterosexual women continue to be disproportionately impacted by HIV. For thirty years, service organizations funded by the Ryan White HIV/AIDS program have been mandated to conduct comprehensive needs assessments every three years to help planning councils develop and implement strategies to improve access, reduce barriers, and enhance service delivery and satisfaction. This case story focuses on the authors’ work to conduct a comprehensive Ryan White HIV/AIDS needs assessment in North Texas. We highlight three key issues from a community psychology perspective.
First, previous needs assessments have consistently demonstrated needs and barriers influenced by structural or political forces, yet the service landscape largely focuses on first-order strategies in which individual behaviors, knowledge, and attitudes are the targets of programs and services.
Second, consultants leading previous needs assessments in this community have rarely used participatory approaches which are known to enhance buy-in, trust, and participation, especially among people living with HIV who are further marginalized based on race/ethnicity and/or gender identity.
Third, previous needs assessments have failed to incorporate an equity perspective that includes the impact of sexism, homophobia, racism, and intersectionality on prevention and intervention.
About the Community or Big Picture Concerns
In the United States (U.S.), an estimated 1.2 million people are living with the human immunodeficiency virus (HIV). Although overall HIV rates have declined in the U.S., persistent disparities exist based on race/ethnicity, gender and gender identity, and sexual orientation. Black heterosexual women, transgender women of color, and others continue to be disproportionately impacted by HIV. For example, although Black people make up 13% of the U.S. population, they account for 43% of U.S. adults living with HIV and 42% of new HIV diagnoses. An estimated 14% of transgender women in the U.S. are living with HIV; for Black transgender women, the figure is 44% (Lancet, 2020).
Disparities persist within the HIV care system as well. For example, in the Dallas metropolitan area, 79% of people living with HIV were linked to care after diagnosis in 2018 (Wolfe & Brown, 2020). When disaggregated, fewer Black (76.3%) and Hispanic (77.8%) people were linked to care after diagnosis compared to White people (83.1%). In 2018, 72.9% of people living with HIV were retained in care, but lower retention rates were observed among Black people (68%), youth and young adults (ages 13-24 and 25-34), and people whose mode of transmission was intravenous drug use (67%). Retention in care is associated with viral suppression (i.e., the amount of HIV in the blood is at a very low level) and is an important determinant of health. Overall, 64% of people living with HIV in the Dallas metropolitan area were virally suppressed in 2018. Black (57%) and Hispanic (64%) people were less likely to have viral suppression than White people (72%). Also, compared to men who have sex with men (66%), viral suppression was relatively lower among people whose mode of transmission was intravenous drug use (56%). Youth ages 13-24 (51%) and young adults ages 25-34 (57%) also had lower viral suppression compared to older adults and the overall population (Wolfe & Brown, 2020). Many of these disparities are attributable to system-wide inequities in access to health and behavioral health care (as well as gender-affirming care), housing stability, and economic stability (Lancet, 2020).
In this work, it is important to recognize that systems of oppression (i.e., racism, classism, transphobia, homophobia) operate in a mutually reinforcing manner to produce inequities in HIV risk, testing, infection, and mortality (McGibbon, 2012; Ford, 2017; Gee & Ford, 2011; Wilson et al., 2016). Structural racism is a system of oppression and a key driver of persistent poverty, pay inequity, unequal educational attainment, mass incarceration, residential segregation, and housing instability. In the U.S., Black people are less likely to be prescribed pre-exposure prophylaxis (PrEP), less likely to have access to healthcare, and underrepresented in clinical trials. Additionally, healthcare providers are less likely to discuss PrEP with Black patients. Income inequality and socioeconomic deprivation are also two key socioeconomic drivers of HIV diagnosis and transmission. The authors of one study (Ransome et al., 2016) found that income inequality and socioeconomic deprivation were associated with higher rates of late HIV diagnosis in unadjusted models, that Black racial concentration robustly predicted late HIV diagnosis, and that Black residential segregation was positively correlated with HIV incidence.
From a societal perspective, people living with HIV systematically experience disparate health and healthcare outcomes depending on their proximity to oppression and privilege (McGibbon, 2012). For example, in the U.S., Black men who have sex with men (MSM) are closer in proximity to systems of oppression based on race than white MSM, and white MSM systematically experience better health and healthcare outcomes. Community psychologists have interrogated these systems by examining the historical narratives behind the HIV movement and the crediting of white gay men as the victors of the social and political response to the HIV epidemic (Wilson et al., 2016). Such narratives fail to acknowledge the diverse and intersectional identities (and experiences) of people living with HIV, and these groups are thus further marginalized.
For thirty years, service organizations funded by the Ryan White HIV/AIDS program have been mandated to conduct comprehensive needs assessments every three years to help planning councils develop and implement strategies to improve access, reduce barriers, and enhance service delivery and satisfaction. Eligibility for Ryan White HIV/AIDS Program funding is based in part on the number of confirmed HIV/AIDS cases within a specified metropolitan area, which is characterized by a central urban area surrounded by other urban areas that work together economically or socially. The Dallas eligible metropolitan area includes eight counties with the city of Dallas representing the largest population of people living with diagnosed HIV infection.
The findings from a comprehensive needs assessment also help to identify and address persistent health disparities in the population. In fact, the Ryan White HIV/AIDS program emphasizes the importance of focusing the needs assessments on high-priority populations, such as Black MSM, heterosexual Black women, and transgender men and women.
The needs assessment we describe in this chapter includes a seven-county area, although the largest share of people living with HIV/AIDS (PLWHA) in North Texas reside in only one of the seven counties (Dallas County). This is likely attributable to resource allocation, whereby most of the services and specialized health care are in the City of Dallas, with little or no HIV/AIDS specific services available in the outlying, rural counties. In fact, in 2018, 81% of people living with HIV resided in Dallas County (Wolfe & Brown, 2020). Further, the Pew Research Center found increasing income and racial segregation over the past three decades in the Dallas Metro Area, with most low income, predominantly non-white households located in the southern part of the city of Dallas, particularly south of Interstate 30. Most of the health care and other services for PLWHA are located north of Interstate 30.
Background of How We Came to Work with this Community Partner
Prior to receiving the contract to conduct the needs assessment, the community consultant (SMW) and academic partner (KKB), both trained community psychologists, had a history of strong relationships with organizations throughout the Dallas metropolitan area. In particular, the community consultant’s company is a woman-owned firm located in the Dallas metropolitan area. For over 10 years, her firm has partnered with organizations and foundations in the area to build their capacity for evaluation, program development, and coalition building. In addition to having conducted resource and needs assessments in other communities, the community consultant had a history of working with Dallas County HHS and the Dallas County public hospital system.
In early 2019, the Ryan White HIV/AIDS Program (RWHAP) issued a request for proposals (RFP) for their 2019 Comprehensive HIV/AIDS Needs Assessment. After the initial deadline passed with no proposal submissions, a community colleague informed the community consultant that the RFP deadline had been extended and encouraged her to submit a proposal. The community consultant invited an academic partner (also trained in community psychology) at a local university (KKB) to collaborate and form the evaluation team.
The evaluation team consisted of:
• the community consultant as the lead,
• the academic partner,
• three student assistants (TB, JS, & CJP) who worked as paid project assistants, and
• over 15 trained undergraduate and graduate public health students who assisted with needs assessment data collection and data entry.
Evaluation Team and Positionality
The consultant (SMW) is a white, cisgender, heterosexual woman from a blue-collar, working-class background. She holds a PhD and has over 35 years of experience working with programs and systems change initiatives that address health, education, and other racial and ethnic disparities. She explicitly recognizes her positionality and privilege as she engages in this work.
The academic partner (KKB) is a Black, cisgender, heterosexual woman from a middle-class background. She has a PhD and over 10 years of experience working in diverse community and organizational settings as well as academic expertise in health equity, racial and ethnic health disparities, and women’s health.
TB is a cisgender, heterosexual Black woman from a lower middle-class background. She has a Master’s in Social Work with a concentration in Mental Health and Substance Abuse. Her background includes research on maternal and child health and postpartum depression. She also has experience in community health and mental health, working with individuals living with HIV/AIDS. She previously served for seven years in the United States Air Force.
JS is a Black, heterosexual woman from a working-class background. She is an Air Force Veteran with eight years of exemplary service. Her experience includes qualitative research in maternal and infant health, food deserts, PTSD in low-income communities, and drug and alcohol abuse among veterans who received a dishonorable discharge.
CJP is a Black, cisgender, heterosexual man from a middle-class background. He has a Bachelor of Science in Public Health and is currently pursuing his Master’s degree in Public Health. He has experience working with different populations, including in women’s health and with people living with HIV/AIDS. He also works internationally to help women in Haiti engage in healthy behaviors to prevent pregnancy complications.
Collaborative Partners and Our Approach to this Work
Dallas County Health and Human Services (DCHHS) Ryan White HIV/AIDS Program and the Ryan White Planning Council (RWPC) commissioned this needs assessment project. Therefore, our evaluation team worked closely with the RWPC as its primary collaborative partner; the RWPC is a community group consisting of 33 people living with HIV (PLWH), an HIV service provider representative, and other community members. Our evaluation team also worked closely with the RWPC Health Planner (JMH), who shares responsibility with the RWPC to oversee the planning and implementation of the needs assessment. The RWPC’s mission is to optimize the health and well-being of people living with HIV/AIDS through coordination, evaluation, and continuous planning to improve the North Texas regional system of medical, supportive, and prevention services. Our evaluation team met on a bi-weekly basis with the RWPC Health Planner and on a routine basis with the RWPC committee members based on their existing meeting schedule. Although not directly involved in the planning, more than 60 individuals (many of whom were PLWH) and HIV health and service organizations in the care continuum assisted with implementation in many ways, including scheduling data collection activities, serving as points of contact for data collection activities, and sharing information about the needs assessment.
Through initial discussions, our evaluation team and the RWPC determined that a participatory evaluation approach, guided by the evaluation team, was most appropriate for this project. A participatory evaluation approach ensures that community partners are involved in a collaborative and meaningful way throughout every phase of the needs assessment. Our evaluation team did discuss the importance of an empowerment approach whereby community members and RWPC members could be trained and take ownership of needs assessment activities. However, the final decision for a participatory approach was made because the amount of time to complete the project was shortened due to system delays, the implications of not meeting the timeline (e.g., loss of federal funding), the RWPC’s desire to be engaged in the planning and implementation in such a way that did not exceed their own capacity, and the fact that our evaluation team was hired to ensure the completion of the work as contracted. The RWPC was empowered in that they had the authority to fire us at any time during the process if we did not provide what they needed, and they had oversight authority for our work as well. We discuss the nature of this participatory approach later in this case study (also see Table 2).
During the process of gathering background information in preparation for this collaborative work, our evaluation team reviewed the comprehensive HIV/AIDS needs assessment reports from the previous cycle (Dallas County Department of Health and Human Services, 2017). We identified three key shortcomings from a community psychology perspective and tailored our approach to addressing them.
First, the prior needs assessment failed to incorporate a health equity perspective, which involves understanding and acting on the relationships between social determinants of health, health inequities, and health disparities (or population-based differences in health outcomes). Health equity requires a justice orientation and a commitment to addressing avoidable inequalities and historical and contemporary injustices, and to eliminating health and healthcare disparities. Community health needs assessments that lack a health equity perspective can unintentionally increase inequities (which result in greater disparities). Community health needs assessments are used to guide community action, and when the factors that drive inequities are left unacknowledged, those factors can inadvertently be left unaddressed. Consequently, our evaluation team applied a health equity lens by:
1. supporting honest dialogue with collaborative partners about the community’s history and systems of oppression that marginalize various groups of people living with HIV,
2. engaging in informal conversations with marginalized consumers (e.g., transgender people of color, heterosexual women) to ensure that their voices were included across multiple data sources and the final report, and
3. continuously framing the needs assessment within a health equity lens when engaging in conversations with providers and HIV service organizations.
Second, the prior needs assessment failed to frame the identified community assets and needs from an intersectional perspective. An intersectionality perspective holds that people’s social and political identities (and the socially constructed meanings tied to those identities) interact to create contextualized experiences of oppression and privilege. This perspective holds that people’s intersectional experience is “greater than the sum” of racism, sexism, classism, and other forms of oppression (Crenshaw, 1989). Accordingly, our evaluation team was committed to preparing a report and community presentation that explicitly considered how the lived experiences of people living with HIV are shaped by their proximity to oppression and privilege along the lines of health status, (dis)ability, gender identity and expression, race, class, sexuality, language, and so on. We used the available data to identify and report the varying needs of sub-populations of people living with HIV and provided tailored recommendations for these groups.
For example, we made sure to triangulate focus group data and consumer survey data to create an infographic of perceived assets and needs among the following priority sub-populations:
• Black men who have sex with men,
• cisgender heterosexual Black women,
• Latina/o/x people,
• Transgender people,
• youth and millennials,
• senior adults, and
• rural residents.
This approach was appreciated because early on in the process, we were informed by heterosexual Black women as well as Transgender women and advocates about concerns related to being unheard and overlooked in broader HIV prevention efforts.
Third, the prior needs assessment did document service needs and barriers shaped by structural and political factors. However, the discussion of these factors was perfunctory, and recommendations focused heavily on first-order rather than second-order changes. There was also no mention of structural oppression and its contribution to the disparities identified across populations. This is problematic because improvements in HIV prevention and treatment cannot be accomplished by focusing exclusively on the individual level (i.e., providers and consumers). There is a need for more enduring solutions situated in systems and policy. The social-ecological perspective recognizes individuals as embedded within larger social contexts and considers the complex interplay of individual, interpersonal, community, institutional, and societal factors (Bronfenbrenner, 1979; Lewin, 1936; Kelly, 1966). Accordingly, our team applied this perspective to frame the community assets and needs from the needs assessment. This was important because, in the HIV/AIDS health and social service system, the biomedical model of illness remains a predominant paradigm that often reinforces victim-blaming and fails to consider how institutions and systems shape health risk and health behaviors. It was important to our team to apply a social-ecological perspective to help us place more emphasis on the policies and systems beyond the individual that impact people living with HIV. Therefore, during the data analysis and reporting phase, we organized the results regarding assets and needs into three multi-level categories: individual/interpersonal level, socio-economic level, and systems and structural level. We also made sure that our recommendations were organized in this manner to assist the community partners with system-wide action planning.
Description of Project
At the beginning of August 2019, our evaluation team was awarded the contract to conduct the comprehensive HIV/AIDS Needs Assessment. The project period was August 2019 to January 2020, with the final report due in early February 2020.
Based on the scope of work, the project period for the comprehensive needs assessment should have been at least 15 months. Our evaluation team communicated initial concerns about the limitations of completing this amount of work within such a tight timeline. Under different circumstances, we would recommend re-scoping the work; however, our team agreed to take the project on because (1) the RWPC had prior experience and competence in planning and implementing a comprehensive needs assessment, (2) our evaluation team had the human and material resources needed to accomplish the task, and (3) our participatory approach allowed for diffusion of some of the planning and implementation activities needed to get the work done.
Initial Planning Phase
Our first step was to convene the RWPC and other community partners to define roles for the project. During the planning phase of the HIV/AIDS needs assessment, we met with the RWPC, our primary collaborative partners, to identify their needs and assets. Our meetings focused on discussing the following questions: What did you dislike about the product or process of the last needs assessments? What does a successful needs assessment look like? How do you hope to be able to use the information from this needs assessment? What human, technical, or financial resources do you currently have that may help contribute to the success of the needs assessment? What additional human, technical, or financial resources are needed to make this needs assessment successful?
Our biggest surprise during this process was learning that the prior consultants, who were located in another state, had relied upon the RWPC and service providers to collect all of the needs assessment data. This included scheduling data collection activities, administering surveys at local sites, conducting the focus groups at sites, and manually entering data from paper surveys. This process was problematic because it placed an undue burden on the community partners, which took time away from service and care provision. As we discussed assets and needs amongst ourselves, our evaluation team assured the community partners that our team would handle the ‘heavy lifting’ with these activities while emphasizing that they could be involved in the process to the extent that they were able.
Based on discussions around these questions, our evaluation team was able to identify pertinent information about partners’ perceived assets and needs, which helped inform the planning and implementation of the needs assessment (see Table 1). The benefit of identifying assets and needs among community partners before the start of the needs assessment was that everyone on the team understood the strengths that they brought to the team and was clear about their role on the project. After these initial discussions, our team drafted a document that outlined each partner’s role on the project (see Table 1).
Table 1. Community Partner Assets and Needs
Assets
• The Ryan White program had a community advisory board which consisted of community members living with HIV/AIDS.
• Ryan White program staff and community advisory board members had strong relationships within the community and among people who were out of care (not receiving HIV/AIDS care or services).
• Ryan White program staff and other partners had existing relationships with health and social service system provider sites where data collection could occur.
• The Ryan White program had staff available who could support the coordination of site visits for data collection.
• The Ryan White program had a budget and tracking system available for the distribution of gift cards to participants.
Needs
• The Ryan White program and stakeholders expressed a need for data collection software and analysis tools.
• The Ryan White program and stakeholders expressed a need for technical expertise in data collection, sampling, and analysis.
• The Ryan White program and stakeholders expressed a need for a needs assessment report that could be used to inform action by stakeholders.
Our community partners, the RWPC and the RWPC Health Planner, had prior experience with planning and implementing needs assessments. They developed the survey questions and the focus group questions. Our team worked collaboratively with them to modify the length, add skip-logic instructions, create an online version of the survey, and provide a Spanish version of the instruments and flyers. All final changes to the data collection tools and recruitment materials were reviewed and approved by the collaborative partners before data collection started. The implementation of the needs assessment was characterized by our evaluation team and the community partners completing activities in a complementary manner (e.g., the RWPC would help schedule and recruit people for focus groups; RWPC partners would provide trusted interpreters for data collection activities) and in a collaborative manner (e.g., both RWPC partners and members of our evaluation team would set up at local sites to recruit for surveys or to conduct focus groups). Although the evaluation team took the lead on data analysis and preparing the report, the RWPC was involved via progress meetings to review the information and provide feedback along the way. Table 2 below provides a breakdown of the roles and responsibilities of each of the partners.
Table 2. Approach to Key Needs Assessment Activities
Activities carried out by both RWPC partners and our evaluation team:
• Bi-weekly meetings and updates
• Develop data collection tools (survey, key informant, focus group questions) and materials
• Disseminate information about the needs assessment (e.g., sharing flyers, word-of-mouth, social media posting)
• Scheduling and making on-site visits for data collection
• Report and presentation preparation (including provision of ongoing feedback)
Activities carried out by our evaluation team:
• Develop the epidemiological profile and resource inventory (partners assisted with providing information needed to complete this task)
• Conduct key informant interviews
• Data entry and management
• Data cleaning and analysis
Activities carried out by RWPC partners:
• Coordinate incentive distribution (gift cards)
During this comprehensive needs assessment, we sought to understand the diverse perspectives within the communities of people living with HIV by working closely with the local Ryan White planning council whose members included consumers. With any needs assessment, prior to starting data collection, it is important to ensure everyone shares common definitions of key terms. In this instance, we needed a clear definition and conceptualization of what we meant by “assets” and “needs.”
Defining and Conceptualizing Assets and Needs
The field of community psychology emphasizes the importance of adopting an asset-based perspective, which means building on existing skills or resources rather than fixating on community deficits. Accordingly, we were intentional about balancing the discussion of community assets and needs in the HIV/AIDS needs assessment. To ensure a comprehensive understanding of need, our team incorporated Bradshaw’s taxonomy of need (Bradshaw, 1972). According to this taxonomy, need may be defined based on (1) the extent to which groups within a community meet a standard of health that has been established by experts (normative need), (2) the extent to which one group experiences a greater health burden compared to another group within a community (comparative need), (3) the extent to which groups within a community use (or do not use) available services or programs (expressed need), and (4) the explicit input from groups in the community who have a related lived experience (perceived need). For the HIV/AIDS needs assessment, our team collected a variety of primary and secondary data sources to capture the community assets and needs (see Table 3).
Table 3. Data Sources and Strategies
Assets/Strengths: Our team conducted a survey of Ryan White funded and non-Ryan White funded organizations to identify the scope of available services and capacity. We also conducted key informant interviews with Ryan White funded providers to gather additional information.
Normative Need: Our team used epidemiologic data collected from the Texas Department of State Health Services, the U.S. Census Bureau, the Centers for Disease Control and Prevention, and other official data sources to identify trends in HIV infection and mortality, and other health status indicators.
Comparative Need: Our team used epidemiologic and demographic data collected from the Texas Department of State Health Services, the U.S. Census Bureau, the Centers for Disease Control and Prevention, and other official data sources to identify disparities in HIV infection, mortality, modes of transmission, and other health status indicators. Our team also conducted focus groups to gather additional information about groups that experienced a greater health burden compared to other groups within the community.
Expressed Need: Our team used epidemiologic and demographic data collected from the Texas Department of State Health Services to examine trends related to the number of people living with HIV/AIDS who were linked to care after diagnosis and were retained in medical care. Our team also used survey data to understand what services consumers reported using.
Perceived Need: Our team used survey data collected from HIV/AIDS service providers to identify provider-reported needs, such as barriers to successful linkages to care and prevention challenges. Our team also used targeted focus groups and survey data collected from people living with HIV to understand service utilization patterns and perceptions of the most important services.
Conducting a Consumer Survey
The consumer survey was designed to collect information about socio-demographics, health history, medical care, health behaviors, intimate relationships, use of prevention and intervention services, and barriers to services. A cover sheet explained the purpose of the survey, risks and benefits, planned data uses, and consent.
We calculated the sample size based on the current total HIV prevalence for the Dallas Eligible Metropolitan Area (2018), with a 95% confidence interval at a 5% margin of error. Eligibility criteria included individuals who were age 18 years or older, lived in one of the Dallas EMA/HSDA counties, had been diagnosed with HIV and/or AIDS, and had not already completed the survey. Efforts were taken to over-sample rural locations, youth (via social media), and those out of care. However, the two-month timeframe for data collection presented a key challenge.
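For readers unfamiliar with this kind of calculation, the sketch below shows one common way to compute a target sample size for estimating a proportion: Cochran’s formula with a finite population correction. It is offered only as an illustration, not as the exact procedure the evaluation team used, and the population count in the example is a hypothetical placeholder rather than the actual 2018 prevalence figure for the Dallas Eligible Metropolitan Area.

```python
import math

def required_sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Cochran's formula with a finite population correction.

    population      -- estimated number of people living with HIV in the area
                       (placeholder; a real calculation would use the 2018
                       surveillance count)
    margin_of_error -- desired precision (0.05 = 5%)
    z               -- z-score for the confidence level (1.96 ~ 95%)
    p               -- assumed proportion; 0.5 gives the most conservative
                       (largest) required sample
    """
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)  # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)                  # finite population correction
    return math.ceil(n)

# Illustrative call with a hypothetical prevalence count of 17,000 people:
print(required_sample_size(population=17_000))  # -> 376
```

With an assumed prevalence anywhere in the tens of thousands, this approach yields a target of roughly 370 to 385 completed surveys, which is in the same range as the number of eligible surveys the team ultimately analyzed.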
We administered consumer surveys at pre-scheduled sessions at Ryan White HIV/AIDS Program provider sites, housing facilities, and specific community locations and organizations. Staff contacts at each location were responsible for session promotion and participant recruitment. Out-of-care consumers were recruited through flyers, word-of-mouth, social media, and staff promotion. Surveys were self-administered in English and Spanish, with bilingual staff and interns available to administer them verbally for individuals who needed assistance. Members of the evaluation team and RWPC who administered surveys asked each survey participant if they would prefer to have the questions read to them or to complete the survey on their own. This ensured that individuals with literacy challenges or visual impairments would be able to participate without having to disclose their disabilities. In fact, during data collection, there were many instances of our teams assisting consumers with their surveys in this manner. Our evaluation team included individuals who were fluent in English, Spanish, French, and French-Creole. Participation was voluntary, anonymous, and monetarily incentivized ($15); respondents were advised of these conditions verbally and in writing. Most surveys were completed in 20 to 30 minutes. Surveys were reviewed on-site by trained staff, interns, and the evaluation team for completeness and translation of written comments. Completed surveys were logged into a centralized survey database. Online survey participants were provided with an auto-generated unique code at the end of the completed survey. Participants were instructed to contact the Ryan White Planning Council Health Planner to provide the code and arrange a time to retrieve their gift cards.
In total, 421 consumer surveys were collected from December 2019 to January 2020 during 10 sessions at six survey sites (including one rural location and one housing facility). The final sample size was 392 after eliminating ineligible cases.
A major limitation was that we used a convenience sampling strategy, rather than random sampling, for this portion of the needs assessment. As a result, the majority of the sample represents PLWHA in urban settings (Dallas County) who were in care and receiving Ryan White Program services. The sample is less representative of youth (18 to 24-year-olds), transgender women and men, heterosexual women, individuals experiencing homelessness, and individuals living in rural settings. A longer project timeline would have allowed the team to overcome these challenges. For example, our team made on-site visits to several rural communities, spending several hours set up to recruit individuals. However, those facilities had so few walk-in consumers that more visits would have been needed to recruit enough people from those communities.
Creating an Inventory of HIV Service Providers and Assessing Service Capacity
To conduct the inventory of HIV service providers, the evaluation team trained a group of five graduate public health students to generate a resource inventory of agencies serving people living with HIV and/or AIDS without Ryan White HIV/AIDS Program funding. The students completed this project to fulfill requirements for their graduate course. Using the resource inventory template, students performed internet searches and made phone calls to organizations to verify key information. The student team used a snowball sampling technique to identify additional organizations.
There were four key challenges during data collection: (1) two organizations had websites that contained incomplete information, (2) personnel at five organizations were difficult to identify and contact, (3) two organizations had websites that were out of date, and (4) two organizations on the original list were no longer in business.
The Ryan White Planning Council Health Planner provided the evaluation team with a list of nine organizations funded by the Ryan White HIV/AIDS Program along with contact information. The Ryan White HIV Service Provider Capacity Survey was administered to these nine organizations. Eight of the nine organizations (88%) completed the survey. Once data collection was complete, services information from the non-Ryan White funded organizations was combined with services information obtained from the provider capacity survey.
The evaluation team experienced some challenges with obtaining responses from providers. It is possible that the nature of some of the questions (e.g., number of unduplicated clients served by service type) posed a challenge for respondents, which delayed survey completion. Additionally, it is possible that some providers interpreted certain questions differently than others. For the next administration of the survey, the evaluation team will address survey question specificity and clarity. Also, the evaluation team used the provider capacity survey from previous years; this version of the survey does not capture detailed information about service capacity. Therefore, steps will be taken to ensure that the survey is designed to address this topic.
Key Informant Surveys
The Key Informant Surveys were conducted by the community consultant. The RWPC provided the community consultant with a list of organizations, contact names, and contact information for individuals who play a key role in the development and provision of services to PLWHA in the Dallas EMA. Organizations represented housing services, health care services, mental health services, children’s health services, consumers, policy and advocacy services, transgender services, and other service providers serving PLWHA in the Dallas EMA. Nineteen respondents served Dallas County and one respondent served a rural area.
Email invitations were sent to individuals from 27 different organizations requesting their participation. Recipients were asked to click on a link to Sign-Up Genius to select a date and time slot to schedule their interview. Follow-up invitations were sent to non-respondents after the sign-up deadline passed. Twenty-three individuals responded and signed up to be interviewed. One individual was unable to participate at her designated time due to an unforeseen event; one had to cancel because of a conflict and did not reschedule, and another did not show at the scheduled time. The final number of interviews was 20 key informants.
Interviews were conducted using a semi-structured interview protocol via Zoom conferencing technology on the computer or telephone. All key informants agreed to have their interviews recorded. Interviews averaged one hour. Three interviewees were unable to complete the entire interview because of scheduling conflicts or other time limitations.
We also prepared the focus group and key informant interview protocols using questions that were developed by the RWPC. Our goal was to ensure that we answered the questions they needed to be answered and obtained the information they needed for planning.
Consumer Focus Groups
We conducted 12 in-depth focus groups with various groups, including transgender individuals, youth (18-24), Black heterosexual women, aging MSM, MSM of color, and Latina/o/x people, to gain an understanding of their experiences and worldviews. A major gap in previous work has been a lack of participatory approaches and insufficient attention to African American and Latina heterosexual women and transgender individuals. For example, there are strong sentiments among African American women against being categorized as “women of color”. Also, transgender individuals, especially transgender people of color, can be especially marginalized within the LGBTQIA+ community. Our partners provided an interpreter for the groups with Latina/o/x people since many spoke Spanish as their first or primary language. At the time this needs assessment was conducted, a number of Black transgender women had been murdered in this community, and therefore safety was a concern. Our team honored the need expressed by the Black transgender community to protect itself from the threats of structural violence and accepted that we would not be able to recruit many Black transgender women for focus groups. Although it was not a planned data collection strategy, our team did have informal conversations with some transgender women of color and advocates for the transgender community in an effort to capture their voices in this process. This reality illustrates how systems of oppression shape marginalized communities’ feelings of safety and ability to share their voices on matters that directly impact them.
To demonstrate respect for this community, we were careful about language and dynamics within the community of PLWHA. We made sure to listen carefully to program staff and consumers about the language and focus that they desired for this needs assessment. Most importantly, we did not go in using our Dr. titles, nor did we position ourselves as “experts.” We conducted focus groups at sites chosen by the group and at times that were most convenient for them, mostly during the evenings. Our partners provided snacks or meals (depending on the timing and the group), and we stayed to socialize when the group invited us.
Five of the focus groups were conducted by the Care Coordination Ad Hoc Committee or a researcher from the local public hospital system before the consultants were retained, and the remaining seven focus groups were conducted by our team. All focus groups used a standard, semi-structured protocol. Participants were asked if they consented to being recorded; one participant in one group asked that the focus group not be recorded, so 11 of the 12 focus groups were recorded. Participants were asked to sign an informed consent form, and each participant received a gift card as compensation for their time and input. All focus groups were arranged by the RWPC in collaboration with service providers.
As mentioned earlier, one of the focus groups consisted of transgender men and women. The team member who facilitated the group was a heterosexual, white woman in her 60’s. The facilitator built trust and comfort in several ways.
First, the group was arranged by a trusted health care provider in a space that was familiar to the group.
Second, the facilitator arrived early which offered an opportunity to engage in casual conversations with participants who arrived early. One was a transgender woman of a similar age, well-known and respected within the community, who had been an activist for the transgender community for many years. The facilitator enjoyed an opportunity to hear of the history of activism and the methods that had been employed, in addition to the rich stories that were shared. By the time the remaining three participants arrived, she had built rapport and trust with this key individual.
Third, the facilitator had many years of experience conducting such groups and interviewing individuals that were different than herself. She has developed the ability to listen carefully and a level of comfort with difficult conversations.
When the focus group was completed, she shared a meal with the participants and continued with more casual and social conversations about topics such as hobbies, jobs, and food.
Outcomes or Impact
Despite the challenges encountered during this needs assessment, we were able to successfully complete it on time. In alignment with our focus on health equity, we wanted to ensure that the needs assessment results were presented in an equitable manner so that anyone from the general community could consume the information. We accomplished this by delivering a user-focused needs assessment report informed by design best practices. During early conversations, our collaborative partners expressed their desire for a report that was easy to understand and navigate, and one that could be used to effectively inform action. Our team studied prior HIV/AIDS needs assessments completed by different consultants and reviewed examples from other communities. Our team initially planned to apply Stephanie Evergreen’s 1-3-25 reporting model, wherein reports consist of a one-page handout, a three-page executive summary, a 25-page report, and appendices that include more detailed information (i.e., a description of the methodology, larger tables and figures, etc.). While our team was unable to limit the report to 25 pages, we did limit it to 80 pages (162 pages including references and appendices). This was in stark contrast to prior needs assessments, which ranged from 300 to 400 pages in total.
We also divided the 80 pages into sections using additional design principles to facilitate easier reading and navigation. For example, we color-coded sections so that headings, icons, and graphs within each section were the same color. We minimized text to the extent possible and used visual representations such as infographics and graphs wherever possible. We de-cluttered graphs as much as possible so that results from quantitative data were easily digested. We also applied data visualization design principles to a presentation for community members, which featured easily interpreted graphs and key findings.
Finally, a student member of our team was producing infographics to share with community members, but this work was disrupted when the student contracted COVID-19 and her recovery period extended beyond the time period for this project. However, as of January 2021, our team is working to complete the infographics in collaboration with the RWPC partners.
The COVID-19 pandemic significantly disrupted action planning and community engagement activities based on the needs assessment. At this point, we are not yet able to assess the longer-term impact of the needs assessment. However, as of March 2021, the RWPC is preparing to engage in data-driven action planning using the data and recommendations provided in the needs assessment report. Also, our team has maintained a relationship with the RWPC partners through ongoing communication and, hopefully, through partnership in future needs assessment cycles.
The key findings and conclusions presented in this needs assessment were written to guide solutions targeting structural change and to address disparities at a systemic level. We described needs in a way that put the onus on the health care and service providers to make changes to improve access and availability, as well as to decrease stigma within healthcare systems and communities and to reduce other systemic barriers. We avoided describing our findings in ways that would suggest individual-focused interventions such as educating the PLWHA or changing their behaviors in some ways.
In terms of intermediate impacts, the needs assessment was effective in identifying key findings that will help to guide further system-wide action planning. More detailed information about the needs assessment and its key findings is publicly available (Wolfe & Brown, 2020).
Lessons Learned
It is important to recognize that when community organizations such as local health departments commission the funding for projects, they determine the timelines, plans, and resources, and the community psychologist has to adhere to those designated timelines, resources, and plans. Our case study of this work has broader implications that can advance interdisciplinary work in community psychology and public health. Based on our work, we recommend that community psychologists conducting needs assessments (or other community-based evaluative work) advocate for longer timeframes where possible, in order to employ participatory and empowerment approaches and to build trusting relationships. There is a responsibility to educate communities about the time and resources that may be needed to undertake such work in a way that is truly inclusive.
The needs assessment covered a vast target area—much of it rural and without many resources or other centralized organizations where the target populations would be found. Additional time was needed to develop strategies and identify avenues to reach more PLWHA in the rural areas via surveys and focus groups.
It is also important to go beyond singular categories, recognize how intersectionality shapes needs, and incorporate it into the data collection plan. As an example, if we had simply looked at males as a group, we would have missed all the unique needs of gay Black males, gay and straight Hispanic males, transgender individuals, and adolescent males.
We recognized the importance of acknowledging our own positionality as PhDs. The education level of our partners and the focal population varied widely. Even though the members of the RWPC had the authority to delegate responsibilities and even fire us, we still needed to acknowledge the perception of the “expert” power we hold in this society. Additionally, the consultant is white. In the past, we relied on being “culturally competent” and more recently on exercising “cultural humility,” and this is insufficient. As a white heterosexual woman working in LGBTQIA+ communities of color and Black communities, deep reflection and additional work were required. This includes developing a deep understanding of concepts such as white privilege and white fragility, heterosexual privilege, anti-racism, and the impact colonization practices have had on different communities. It also requires ongoing, consistent reflection on what this means and on how, as a white person, the consultant may be (even unintentionally) upholding systemic racism and supporting oppressive systems. Most importantly, it requires recognition that this learning and reflection isn’t one-and-done. It is ongoing and must be persistent because, no matter how “woke” we may be, we are all capable of saying or doing things that support racism, heterosexism, and other forms of oppression. We need to be ready to be called out, and when we are, to acknowledge what we have said or done.
We also learned about the need to engage with and be present in the focal communities. This includes a need to spend more time in rural, outlying communities where LGBTQIA+ and HIV-positive individuals who need Ryan White services are not easily identified. Little information was obtained from six of the seven counties for this needs assessment. We were unsure whether this was because most of those who are affected by HIV or are HIV-positive reside in urban areas that are closer to services, or because individuals residing in rural areas remain hidden due to the stigma of HIV or of being LGBTQIA+.
Recommendations
Be ready to start where the community is and adjust expectations to what is, not what should be in an ideal world. This project had a limited timeline which required our team to be less participative than desirable. We did not have time to pilot instruments and test strategies and had to hit the ground running with what we had. The budget was also limited which required that we find the best way to stretch the resources that were available.
Work with partners to teach them how they can get better results by engaging the consultant earlier. Spend time clearly defining roles and responsibilities. Provide partners with adequate resources so they are not overly burdened by adding needs assessment tasks to their already busy schedules.
Partner with community members and community-based organizations to reach populations that may be otherwise inaccessible. Consider the role of intersectionality when you are thinking of the different populations you need to reach. We found substantial differences between White and Black gay men and Black men and women in regard to needs and accessibility of assets, demonstrating that identifying by a single category will yield insufficient and potentially misleading results.
Ensure there are resources for translation services for surveys and for conducting focus groups and interviews. The prior needs assessment consumer survey was translated by a Spanish-speaking staff member at one of the partner organizations. RWPC members reported that this had not worked well because the individual had translated the survey using a local dialect rather than a universally understood, standard Spanish. When selecting translation services, it is important to choose professional services with linguistic and cultural translation expertise.
Needs and Resources as Participatory Research Projects
Community needs and resources assessments have many uses across community psychology practice. They are frequently requirements of funders and the Affordable Care Act and produce information that is useful for planning and writing grant applications. They are also useful for identifying policy needs and developing policies to meet them. By focusing on disparities and inequity, they can also support anti-racism and racial justice initiatives. Community psychologists are specifically skilled at participatory methods. The participatory process ensures higher quality results. Consumers know how to best reach the focal population, can provide endorsements that help to build trust, and provide expert advice for everything from what questions to ask to how to word the questions so they are understandable.
This assessment, like the majority of needs and resources assessments, required interaction with people of multiple races, ethnicities, and cultures. Cultural humility ensures that questions are appropriate to the focal population; that participants are not subjected to microaggressions and other discomfort or potentially traumatizing treatment; and that results are interpreted accurately. Additionally, community psychologists must be ready and willing to serve as advocates by consistently identifying and calling out racism, homophobia, coloniality, and other behaviors and practices that support oppressive structures. Doing so requires prior introspection and acknowledgment of their own positionality.
Looking Forward
In summary, needs and resources assessments such as the one presented here present opportunities for community psychologists to conduct meaningful work that has potential for impact. The data gathered will be used as the basis for planning services for PLWHA in this community for the next three years, while ensuring that the needs of specific groups are considered. We recommended that the RWPC retain a consultant and begin the work at least a year before the report is due, ideally 15-18 months before. With more time we would have been able to increase data collection efforts, engage graduate and undergraduate student interns, create course projects to provide more meaningful evaluation experiences, and make more connections to generate a more representative sample.
Conclusion
Community psychologists work in a number of areas in the community, and many work within the research evaluation or program evaluation field. Organizations and companies call on community psychologists in this area because they have been trained to see through a lens of co-creating health and wellness together with others rather than being the expert in the space. With this framework, community psychologists hold space for difference and play integral roles in fostering and nurturing community resilience and healing.
From Theory to Practice Reflections and Questions
• In this case study, Brown et al. (2021) shared the importance of recognizing that systems of oppression (i.e., racism, classism, transphobia, homophobia) operate in a mutually reinforcing manner to produce inequities in HIV risk, testing, infection, and mortality (McGibbon, 2012; Ford, 2017; Gee & Ford, 2011; Wilson et al., 2016). When you consider this statement, share with others what immediately comes up for you.
• How, if at all, did this case story challenge your beliefs or thinking about people living with HIV or programs serving them?
• As you think through the importance of research or program evaluations, what new considerations will you take away from this case story? | textbooks/socialsci/Psychology/Culture_and_Community/Case_Studies_in_Community_Psychology_Practice_-_A_Global_Lens_(Palmer%2C_Rogers%2C_Viola_and_Engel)/03%3A_Evaluation_Research/3.01%3A_Prelude_to_Evaluation_Research.txt |
This case story expands the traditional single-case study format to include multiple mini-case studies from which “lessons learned” are extracted through evaluation-based community psychology practice.
The Big Picture
In this chapter I modify and expand the traditional single-case study format to include multiple mini-case studies from which I extract “lessons learned” through my evaluation-based practice of community psychology (CP).
Program evaluation plays an important, structural role in its contributions to the assessment of the intervention work of change agents, here CP practitioners, and relevant stakeholders who work together to design and implement needed community-based programs. My aim here and in all of my work is to encourage an evaluation mentality in CP practitioner-change agents. If these change agents develop interventions with an evaluation mentality, that is, with program evaluation as a core part of planning, design, and implementation, the resulting evaluative feedback can provide validation of the effectiveness of programmatic efforts, and thus, of change agents, or illustrate the need for substantive changes in aspects of the intervention efforts. The inclusion of evaluation strategies can assist program implementers/change agents in identifying the critical elements to ensure meaningful interventions and to provide evidence of the viability of replication. Additionally, we must recognize both the CP-based values (social justice, sense of community, empowerment, etc.) which underlie the development of community practice interventions and the critically important role of a change agent who incorporates a program evaluation mentality into the design of those interventions. Program evaluation thus becomes an essential tool in the practice of CP.
The overall aim of community psychologists’ work is the improvement of participants’ quality of life; some examples include Beauregard et al. (2020), Lin et al. (2020), O’Connor (2013), O’Shaughnessy and Greenwood (2020), Stewart and Townley (2020), and Suarez-Balcazar (2020). Improving quality of life may range from enhancing individuals’ sense of well-being to ensuring needed supports; some examples include DaViera et al. (2020), Goodkind et al. (2020), Maleki et al. (2020), Shek et al. (2017), and Wadsworth et al. (2020). However, confirming the value of such work or appropriately modifying it can only be accomplished through the inclusion of community-based program evaluations. The essential questions for program implementers or change agents are whether a proposed program is appropriate, whether the implemented program is as planned or how it has changed, and whether the program outcomes are as hoped for or as expected. Thus, developing and implementing interventions must be paired with evaluating the initial designs, implementations, and/or outcomes of those programmatic interventions, all of which can improve participants’ quality of life.
My work and my career focus have been two-fold: as a professor at a small college in upstate New York teaching a program evaluation course in a master’s program, and as an evaluation consultant engaging in numerous program evaluations, ranging from small, primarily local projects to large, state-based and national ones. From that work, I have identified seven lessons regarding the use of program evaluation strategies that are offered as guides to those in the CP practice of evaluating community-based programs. I also provide three principles that serve as guides for program evaluators. The seven lessons, with illustrative mini-case studies, are based on two kinds of evaluation projects: student-based, through my graduate program evaluation course, and consultation-based, through my CP practice. The former evaluations emerge from a course requirement for students to participate in the design and implementation of a group evaluation project, and the latter projects include my consultation-based evaluations of specific programs or organizations. These mini-case studies, with their “lessons learned” immediately following, document that some efforts were successful and, not surprisingly, some were not.
Mini-Case Study One
The Executive Director (ED) of a human services agency that provides residential treatment for adolescents was interested in front-line employees’ perceptions of their work environments. The ED, a manager, and an assistant met with the graduate students and me to discuss the ED’s purpose for the evaluation: to learn how to make the organization a “best place” to work. We agreed that interviews with front-line workers would be the most appropriate way to collect data as there could be flexibility with open-ended questions. The manager would provide access to front-line workers. The meeting ended quite satisfactorily, with a potential schedule for the next steps, and the ED, manager, and assistant left.
Students walking down the hallway after the meeting overheard the manager say to the assistant, “They should just do their f*** jobs!” illustrating that the manager had little interest in soliciting feedback from employees. This attitude was reflected in the difficulty students had throughout the project, first in getting access to employees, and second, in having employees agree to be interviewed. Some employees expressed concern that their interview information might not remain anonymous; the manager was not considered trustworthy. Students completed interviews but fewer than expected and with less useful information than planned or expected. Seven overarching lessons learned are depicted in tables below.
Lesson Learned:
• Success of an evaluation project relies on the effective participation of all levels of stakeholders, not just the leader or person seeking the evaluation.
Mini-Case Study Two
A local County Commissioner of Mental Health was interested in whether children’s visits to an Emergency Department (ED) could have been avoided, particularly among those children receiving assistance from the County. The question was whether the children were in contact with any service agencies and whether that contact should have resulted in interventions that would have precluded the ED visit. Students in the graduate Program Evaluation class and I met with the Commissioner and formulated a plan for record reviews of intake information at the ED of the local hospital. Although students were reviewing unredacted records, Institutional Review Board (IRB) approval was not considered necessary; unpublished program evaluations do not require IRB review.
Students presented the results to the Commissioner who was able to work with the Department of Social Services to develop preventive interventions to reduce children’s unnecessary use of the ED. The success of this project resulted in the Commissioner taking the methodology to the relevant State Offices and the data collection strategy was replicated in nine counties. Three factors led to this generalizability: the quality of the student-designed project, the Commissioner’s appreciation of, and reliance on relevant data, and the Commissioner’s interest in expanding the use of relevant methodologies and usable information as a foundation for decision-making.
Lesson Learned:
• The generalizability of methodology and the usability of results rely on the involvement of appropriate policymakers.
A local prevention program focused on specific issues related to illegal substance use among younger people, as required by its funding sources. Through my ongoing relationship with the program director as the program’s evaluation consultant, we conducted multiple evaluations over a period of approximately six years, including focus groups/interviews with key leaders in the community, an asset-liabilities assessment of a specific neighborhood, and pre- and post-surveys with a summer leadership program for high school students, among several other projects. Below are examples of successful and not-so-successful implementations of those evaluations.
Mini-Case Study Three
In one particularly effective evaluation, students conducted an observational assessment of a neighborhood to identify both assets (open stores, schools, churches, shops, etc.) and liabilities (closed stores, vacant houses, empty lots with trash). Although the student evaluation groups typically included only five or six participants, this self-selected group comprised 14 very dedicated students, divided into seven pairs for the observations and interviews. The paired students divided the neighborhood into approximately equal, manageable areas, conducted the observations, and interviewed residents (using a structured interview developed during class time) to obtain their perceptions of the neighborhood. The collected information enabled the program director to develop strategies to advocate for neighborhood improvements and to identify specific locations for program development. The degree of determined and dedicated student involvement led to the clear success of this evaluation effort.
Mini-Case Study Four
In another evaluation, the program director of the same substance abuse prevention program requested that students conduct interviews with people presumed to be key stakeholders to obtain their feedback on the program. Working with the program director, students identified approximately 40 locally based, potential stakeholders, including religious leaders, politicians, educators, local business owners, and others. The project itself was built on the expectation that people in the community would be familiar with, if not involved in, the work of the project. However, these stakeholder-leaders, all of whom the students contacted directly, were not sufficiently knowledgeable about, or in some cases invested in, the work of the program to participate in the interview process, resulting in inadequate numbers of completed interviews and thus inadequate feedback regarding program implementation. Here the lack of success seems tied to the lack of interest or commitment on the part of the external stakeholders, most of whom did not view themselves as stakeholders at all.
Mini-Case Study Five
To evaluate a summer leadership program for high school students offered by the same substance abuse prevention program, the graduate students in the program evaluation course and I met with the program coordinator to identify the aims and activities of the program, which would enable the students to develop pre- and post-surveys. The coordinator, who reported to the program director, did not seem particularly interested in any kind of evaluation. After the initial meeting, the students were virtually unable to connect with the program coordinator, who simply did not respond to emails or phone calls. The students, under my direction, finally developed a draft survey to enable some completion of the project before the end of the semester. The lack of success here reflected the lack of commitment on the part of the internal stakeholder.
Mini-Case Study Six
In working with a program director in an agency that provides support to underserved, generally homeless, people, I suggested conducting a focus group with people who were receiving services to solicit their input in developing strategies to address their needs, which could result in modifications of existing programs. The program director asked approximately six or seven people to participate, and four arrived at the designated time. Bus passes to cover transportation costs and gift certificates to a local chain were offered to encourage participation and to compensate participants for their time. However, the focus group did not achieve the expected outcome in that all participants had extensive experience with such agencies/programs and were familiar with the kinds of questions that might be asked and with the range of responses they perceived agencies might consider acceptable. Thus, the circumstances under which the focus group was conducted, that is, in the agency itself with a peer as a co-facilitator, led to repetitions of stories and statements that only affirmed what was already happening, rather than suggestions for novel approaches to addressing the needs of program participants. Here the previous experiences of the participants framed and even limited the range of their contributions.
Lesson Learned:
• The success of community-based projects relies on the commitment or interest of both community psychologists and community stakeholders in the program.
Mini-Case Study Seven
As director of a graduate program in Community Psychology, I have consistently encouraged student-designed and student-implemented process and outcome evaluations of the program itself and of other offices at the College, for example, access to the registration and financial aid offices, availability of the library, and, separately, food services.
The program-based evaluations provided useful and useable information, including:
• students’ preferences regarding the timing of classes: evenings
• the development of a student organization: organized, then dissolved
• the availability of advisors: more needed
• the helpfulness of field placement coordinators: helpful
• the employment outcomes of graduates: particularly useful for current students
• suggestions for program improvement: for example, add, modify, or eliminate courses; increase field placement experiences; add electives
Participation in these kinds of evaluations provided the students with meaningful, hands-on experiences with the process of evaluation and with the programmatic commitment to assessing the usefulness and value of one’s work. Only one among numerous CP program evaluations yielded a particularly negative response; when asked the reason for not continuing in the program, one person responded, “I hate [the program director who happens to be the author!].”
Lesson Learned:
• Engaging in the evaluation of one's own program plays a critical role in establishing the legitimacy of program evaluation for internal and external audiences, including those in the program.
Several of my evaluation experiences have reinforced the importance of effective process evaluations, particularly of observation. Three mini-case studies below illustrate that importance, two from one setting and the third from another. The first setting was a well-funded arts-education program in which artists collaborated with teachers in the delivery of primary school curricula. For example, storytellers emphasized the logical progression of a story (beginning, middle, end) for kindergartners and first-graders and math operations (addition and subtraction) for second- and third-graders; dancers expressed the meaning of words in movement (leaning forward, then backward, for wax and wane or ebb and flow).
This arts-education collaboration can result in improved grades for students, which can be documented over time through appropriate outcome measures, for example, quarterly grades compared with the previous year or with a unit taught without the arts. However, the actual viability and replicability of the program depend on two factors: first, the support of the classroom teachers through their involvement in the collaborative process, and second, the actual use of the arts by the change-agent artists.
Mini-Case Study Eight
An illustration of the first factor, support of the classroom teachers, came from my effort to observe both the teacher-artist collaboration and the artists’ presence in the classroom, with at least two observations of each teacher. One second-grade teacher was determinedly not interested in participating in any aspect of the process, though expected to do so by the principal; the teacher even stated to me, “You can do your little [arts-education dance] program here [in the classroom] but I am not going to be involved.” That teacher retired at the end of that school year. Most other observations were conducted with the enthusiastic involvement of teachers and artists. One other significant observation was of the grade-level teacher-artist planning meetings to select the curriculum for the artistic mode of delivery. After the first year of the program, the planning meetings became more about setting up the calendar than about modifying or expanding the content and mode of the artists’ delivery of the curriculum. That focus, on the calendar rather than content, reflected the decreasing commitment of the teachers to effective participation in the process of teacher-artist collaboration.
Mini-Case Study Nine
As an illustration of the second factor, the actual use of the arts by the artist: an effective songwriter/poet/musician collaborated with a fourth-grade teacher in the delivery of a poetry unit, with the expectation that the artist would use music to demonstrate the role of rhythm in poetry. In the observed classroom experience, the artist used her own skills in poetry-writing to deliver the lesson rather than her musical talent and musical instruments. The outcome of improved grades for the students was, in fact, related more to the artist’s skill as a poet than to her skill as a musician in the delivery of the curriculum. This effectively precluded the presumed replicability of the teacher-artist collaboration. Although such a conclusion would not have been drawn without the evaluator’s observation of the classroom exercise, there were also numerous observations of the effective and appropriate implementation of the collaboration as designed.
Mini-Case Study Ten
A different example reflects the importance of observation in an entirely different setting: a national organization focused on a specific medical condition. The organization had developed an extensive curriculum, a set of nine chapters with accompanying slides, for medical professionals to bring current, in-depth information to those with the condition and to inform the general public about the condition. The aim of the evaluation was to assess the effectiveness and usefulness of this standardized curriculum. As the evaluator, I included observation of each of the three planned implementations of the curriculum: one in a rural setting with people with the condition, one in a university with providers, caretakers, and people with the condition, and a third in an urban setting with providers and caretakers, primarily parents and family members of children with the condition. The observations revealed that the actual use of the curriculum varied widely across the three settings. The physician-presenter in the rural setting discussed the first several chapters; the multiple presenters in the university setting each reviewed their own areas of expertise without reference to the curriculum; and the presenters in the urban, primarily family setting focused on one chapter in the curriculum that did not overlap at all with the rural presentation. Participants in each setting completed pre- and post-surveys, which demonstrated some increase in knowledge about, and understanding of, the condition across the three settings, but clearly those improvements were not related to the actual use of the curriculum. Again, the importance of observation is demonstrated in that this conclusion could only have emerged through my observation of each implementation.
Lesson Learned:
• Program evaluations can identify successful programs which can be replicated; however, such programs require careful analyses, typically through observation, to ensure that the implementation processes are well-documented.
Mini-Case Study Eleven
In the mid-1990s a local philanthropic foundation began to support locally-based academic-community collaborations through mini-grants, and I applied for and received one of the first. Upon the completion of that grant, I was subsequently approached to collaborate with a variety of community-based programs and agencies over a period of years. These included focus groups with elderly residents of a public housing project to assess their satisfaction (which impacted planned renovations of the housing project), and observations of an advisory board for a child sexual abuse intervention program to identify strategies to enhance the Executive Director’s success with the Advisory Board (one obstreperous person resigned; the Chair reorganized meeting structure). The success of each led to my being contacted by subsequent community agencies and programs to participate in a joint submission to the funding source as the value and usefulness of engaging in evaluation activities became more evident. Here the overall success emerged out of my previous experiences and my local reputation.
Mini-Case Study Twelve
As part of an overall assessment, another national organization/foundation with a focus on differently-abled individuals was interested in whether the locally based programs that it funded were using strategies that matched the vision and mission of the national organization and whether the implementations were resulting in the desired outcomes. Most local program directors were understandably proud of their own efforts, the extent of local participation, and the outcomes of the programs. As the evaluator for the national organization, I undertook the task of assessing six of the local programs (somewhat randomly selected) to identify both aspects that were congruent with the national organization’s goals and objectives and those that needed modification to increase their rates of success. These program directors were willing to participate in the evaluation activities but were also accustomed to receiving only praise for their efforts in initiating and managing their programs. My evaluation reports for each program documented their successes but also included recommendations for improvement. The reports were not well received; directors who had welcomed me, participated actively in the evaluation activities, and seemed to accept and even welcome verbal recommendations at the end of each visit did not appreciate having anything they perceived as less than positive results in a written report. The outcome was the termination of the entire evaluation project.
Lesson Learned:
• The value of program evaluations is learned primarily through the experience of having results that easily lead to program improvements, which highlight the usefulness of conducting assessments.
Mini-Case Study Thirteen
In one New York State-based evaluation, six counties were selected to participate in a public health intervention and were asked to develop their own program designs in their efforts to achieve the desired public health outcome. At the end of the evaluation period, some strategies were clearly more effective than others, which led to the adoption, or at least the encouragement of the adoption, of those strategies state-wide. As the evaluator, I had assured each of the participating counties that their results would be anonymous, that is, the State as the funding source would not know which counties were successful and which were not. That promise of anonymity was essential because the local staff were concerned that future funding could be affected by the State staff’s knowledge of specific outcomes. At the end of the project, with positive results clearly disseminated, the State staff requested rather strongly that the anonymity be lifted so that the successful and not-so-successful counties could be identified. I refused, based on the ethics of adhering to my promise. That ethical decision led to the termination of that relationship!
Mini-Case Study Fourteen
Another instance of ethical difficulty arose with the final first-year evaluation report of a two-year, community-based, federally funded project, which required collaboration across multiple human service agencies. Funding for year two was based on the viability of the project and the commitment of the agencies, both of which were to be documented in the evaluation report. A new project manager, who started just weeks before the first-year report was due, requested changes to the report that would enhance the appearance of a positive outcome for year two but that somewhat misrepresented the actual data. Discussion ensued, with the project manager asserting her position as manager and me asserting my role as evaluator and my intention to preserve the independence of the evaluation process and outcome. The awkwardness of the situation led me to submit only a hard copy (in the days before electronic submissions) on the day the report was due, precluding the manager’s ability to make changes to the report.
Lesson Learned:
• Adhering to ethical boundaries can be difficult in some circumstances.
Conclusion
This chapter presented a series of mini-case studies to illustrate factors that can affect the critical role that program evaluation plays in the community-based practice of community psychology. Each of these factors emerges from my on-the-ground, in-the-trenches, front-line experiences of working with community-based agencies and programs that rely on county, state, and/or federal funding, that is, public monies, or on local, state-based, or national foundations for private money. Extracted from these seven “lessons learned” are three principles that have emerged as guides for my work in the field of program evaluation: (1) ensuring that the evaluation results are useful, (2) making sure the evaluation is simple and doable, and (3) ensuring that evaluation efforts are congruent with the program efforts.
Finally, program evaluation serves as a critical part of the practice of community psychology, providing essential information for funding sources, and crucial feedback for those aiming to improve individuals’ quality of life and well-being. Those of us who work/practice in the community most assuredly value the consistencies and, at the same time, the idiosyncrasies of that work, as reflected in the seven takeaways and the illustrations of each. Those who are change agents or interventionists also intuitively or actually know the value of building assessments or measures into their change efforts from the beginning to identify both areas in need of improvement and areas of success. Using appropriate program evaluation strategies based on the three principles cited above will enhance the efficacy of community-based interventions.
From Theory to Practice: Reflections and Questions
• Program evaluation plays an important, structural role in its contributions to the assessment of the intervention work of change agents, where community psychology practitioners and relevant stakeholders work together to design and implement needed community-based programs (O’Connor, 2021). What lens might a community psychologist bring to the table in a program evaluation? What lens would another psychologist bring (e.g., a social psychologist or a clinical psychologist)?
• Describe why it is important, when conducting program evaluations, to analyze the data collected at an ecological level.
• What conceptions did you hold prior to reading this case story about program evaluations?
This case story describes the process of forming a five-year, ongoing participatory evaluation partnership between Housing First program participants, staff, and community psychologist evaluators in the multicultural context of the Island of O‘ahu in Hawai‘i.
The Big Picture
This chapter describes the process of forming a five-year, ongoing participatory evaluation partnership between Housing First program participants, staff, and community psychologist evaluators in the multicultural context of the Island of O‘ahu in Hawai‘i. Housing First is a community-based program that quickly provides permanent housing to individuals experiencing homelessness and emphasizes “consumer choice” in housing and service plans (Tsemberis, 2010). What started out as a top-down evaluation design conducted by traditional academic researchers became a weekly support group that engages in participatory research, often utilizing arts-based methods. Using an intersectional lens (Crenshaw, 1989; Weber, 2009), this case study explores the challenges and successes of building this partnership among individuals from diverse racial and ethnic backgrounds with varying degrees of power, housing experiences, and mental and physical health issues. While many members of the partnership experienced challenges that typically deter traditional researchers from engaging in collaborative research, our partnership demonstrates many strengths, including valuable lived experience, resourcefulness, and critical insight that allowed for the creation of a space that is both supportive and conducive to rigorous participatory research and advocacy.
The main objectives of this case study are to demonstrate the application of community psychology values—particularly, respect for diversity, collaboration and participation, and historical context—in building research partnerships among individuals located at multiple axes of oppression. In particular, this case study demonstrates that respect for diversity is incomplete without attention to intersectionality and colonial trauma and argues for community psychology practice that is explicitly intersectional. The learning outcomes include (1) gaining an awareness of the complexities of participatory research, (2) being able to recognize the steps taken in long-term participatory research, (3) critically examining the role that historical context plays in community-based participatory research, and (4) recognizing the value of going beyond respect for diversity in community psychology research and practice.
Intersectionality as Critical Praxis
Intersectionality is a field of study, an analytical strategy, and a critical praxis that understands race, ethnicity, class, gender, sexuality, age, ability, and other salient social categories as interconnected, reciprocal phenomena that interact to influence complex social inequalities (Collins, 2015). Within an intersectional framework, these identity categories are not mutually exclusive but rather are socially constructed categories whose intersections manifest in experiences of oppression and privilege (Bilge, 2014; Collins, 2015). For example, an individual identifying as a white, heterosexual, low-income woman is privileged along lines of race and sexual orientation but potentially oppressed along lines of gender and class. Importantly, these identities are tied to systems of power that are embedded within specific geographic, social, political, and historical contexts that have implications for lived experiences (Weber, 2009). For example, a person living in the American South who identifies as a white woman will likely have different experiences than an individual identifying as a Black woman in the same context. And these experiences are likely to shift because identities—and different aspects of identities—have different meanings in different contexts. Indeed, a 20-year-old individual identifying as a Black woman in New York City has different experiences than an 80-year-old individual identifying as a Black woman in the rural American South. An intersectional approach challenges the notion that singular identities can explain lived experiences of oppression and directs attention to the interdependent and structural forces, processes, and practices that result in complex inequalities (Grzanka, 2020).
Fig. 1. HERstory of Intersectionality: examples of Black women’s and other women of color’s contributions to intersectionality.
• Sojourner Truth’s “Ain’t I a Woman?” speech: In this speech, delivered at the 1851 women’s rights convention, Sojourner Truth spoke to the intersection of the women’s rights movement and the abolition movement, explaining how her experience as a Black woman was not represented by either. She articulated how African American women’s experiences must be uniquely acknowledged and included in the fight for women’s rights (Moradi & Grzanka, 2017).
• The Combahee River Collective: This group of African American women activists, many of whom identified as queer, helped solidify the link between intersectional ways of thinking and social justice action. In their 1982 paper titled “A Black Feminist Statement,” they described the interdependence of systems of oppression including classism, heterosexism, racism, and sexism, arguing that race-only or gender-only politics neither reflected nor promoted action against the social injustices that characterized their experiences as Black queer women (Collective, 1982).
• Borderlands/La Frontera (1987) by Gloria Anzaldúa: This book represents an example of Chicana/Latina feminist work that has shaped the concept and theory of intersectionality. Through this semi-autobiographical piece, Anzaldúa tells her story of holding multiple, intersecting identities, including being Chicana and lesbian, and explores the borderlands between these identities and the systems of oppression attached to them.
Grounded in Black feminism, the concept and field of intersectionality were created and shaped by African American women and other women of color scholars and activists. In fact, the HERstory of intersectionality traces back centuries, with prominent contributions from Sojourner Truth, the Combahee River Collective, and Gloria Anzaldúa (see Fig. 1). In the late 1980s, Kimberlé Crenshaw applied the framework to the legal realm and introduced the term “intersectionality” to describe the colliding systems of racism and sexism that Black women experience and that result in a unique form of oppression that single-identity politics and legal protections had yet to address (Crenshaw, 1989; Crenshaw, 1991). For example, because Black women are discriminated against on the basis of race and gender, they often fall through the cracks of a legal system that implicitly assumes racism to affect Black men and sexism to affect White women.
Part of the ingenuity of this framework is that, despite its grounding in the experiences, scholarship, and activism of Black women and other women of color, it can be used in novel contexts and across diverse interpretive communities (Collins, 2015; Moradi & Grzanka, 2017).
Fig. 2 - Consider
Consider your multiple identities (race, class, gender, sexual orientation, etc.) and the ways in which they are related to systems of power. How might that relationship to power change depending on the context? For example, do you have more power in certain contexts than others based on the intersections of your socially constructed identities? Do you have more or less power when interacting with people with different intersecting identities?
While this chapter relies on all three of these conceptualizations at times, intersectionality’s conceptualization as a form of critical praxis is most relevant to this case study. Critical praxis refers to the merging of critical thinking and social and political activism, with the ultimate goal of transforming systems of oppression (Gramsci, 1971). As a form of critical praxis, intersectionality not only seeks to understand experiences resulting from interdependent identities and systems of oppression but also seeks to critique and change the systems we study in order to create more just systems (Collins, 2015; Grzanka, 2020). Through this conceptualization, authentic community engagement, social and political impact, and centering voices and stories of resistance become essential (Moradi & Grzanka, 2017). Indeed, this type of engagement and transformation was a major goal of our evaluation partnership.
Intersectionality and Community Psychology
While community psychologists rarely refer to intersectionality explicitly in academic literature, the overlap between intersectionality and community psychology exists, and we (community psychologist evaluators) found that an intersectional approach was helpful in guiding our community psychology practice. From a theoretical standpoint, both intersectionality and community psychology emphasize the impacts of macro-level systems (e.g., policies, economic processes, etc.) on individuals and communities and the role of power in constructing lived experiences. Importantly, both recognize the interactions between macro- and micro-level processes, and both community psychology and intersectionality emphasize the importance of social action and the potential of collective power to respond to the inequities created by oppressive systems of power. Additionally, community psychology’s focus on context is reminiscent of intersectional frameworks that highlight the interaction of individuals and their context and the fact that different identities and intersections are more or less salient in certain contexts (Weber, 2009). Community psychology has long argued that any community practice and research must begin with an understanding of the social and historical context. In fact, one of community psychology’s four guiding principles is that social problems are best understood by viewing people within their social, cultural, economic, geographic, and historical contexts.
In addition to understanding others in context, community psychologists should engage in ethical and reflective practice—a key competency for community psychologist practitioners. Thus, it is important for community psychologists not only to understand the context that impacts community partners but also to reflect and understand their place within it. Intersectionality also highlights the necessity of the examination of one’s own place within power structures (Weber, 2009). It is especially important for community psychologists to seek out such an understanding given our field’s stated values of respect for diversity and inclusion (SCRA, 2020). We cannot live up to these values without an understanding as to what makes our social experiences diverse. As we turn to the project of focus, consider the potential systems of power interacting and impacting the ongoing partnership and resulting research. We start with attention to historical and sociopolitical context.
Community Context
This partnership takes place on the Island of O‘ahu in Hawai‘i. O‘ahu, known as “The Gathering Place,” is home to almost a million people, with approximately 115,000 tourists visiting the island on any given day pre-COVID-19. Hawai‘i is one of the most diverse states in the United States, with no ethnic group holding a majority, and O‘ahu is the most diverse of the islands, with 43% of residents identifying as Asian, 23% identifying as multiracial, 10% as Native Hawaiian or Other Pacific Islander, and only 22% identifying as White only, compared to 76% nationwide. The state capital, Honolulu, as well as the internationally known tourist hotspot, Waikīkī, are located on O‘ahu. Not surprisingly, tourism is the major economic engine of the state. In 2019, the state brought in over 17 billion dollars in tourism monies and enjoyed one of the lowest unemployment rates in the nation (Hawai‘i Tourism Authority, 2019b; US Bureau of Labor Statistics, 2020b). However, tourism also drives up the cost of living and reduces the affordable housing stock. As expensive apartments and tourist lodgings replace affordable housing, rental rates and housing costs increase, and local residents are “priced out” (Moore, 2019). Indeed, Hawai‘i has both the highest cost of living and the lowest wages in the nation (after adjusting for said cost of living). Thus, despite its low rates of unemployment, Hawai‘i has high rates of poverty and homelessness.
Homelessness in Hawai‘i
Hawai‘i has one of the highest homelessness rates in the United States. In 2019, Hawai‘i had the fourth-highest homelessness rate in the nation, behind Washington D.C., Guam, and New York. Additionally, its homelessness rate has grown since 2007, while the overall national homelessness rate has fallen during this same time period. On any given night in 2020, approximately 6,458 individuals were experiencing homelessness in Hawai‘i. The majority—4,448 individuals—lived on O‘ahu. Notably, the majority of these individuals (53%) were living unsheltered (e.g., in parks, on beaches), making homelessness highly visible (Partners in Care, 2020). Additionally, between July 1, 2018 and June 30, 2019, a total of 16,527 people received some form of housing services or assessment, suggesting that homelessness affects a significant number of people in Hawai‘i (Pruitt, 2019). Given its high visibility and its perceived impact on tourism, the “homelessness problem” is especially salient in local public policy and local media (Pruitt et al., 2020). Unfortunately, due to the economic fallout from the global pandemic, the homelessness rate is expected to increase, and 19,000 low-income people are projected to fall into poverty in the coming year (Hawai‘i Data Collaborative, 2020; Partners in Care, 2020).
Decades of research reveal that homelessness in the United States results from a lack of affordable housing, high rates of poverty, and social exclusion on the basis of certain individual characteristics (Shinn & Khadduri, 2020). While certain individual characteristics are associated with increased risk for homelessness (e.g., experiencing mental illness, being a member of an ethnic or racial minority), social exclusions (e.g., racist housing policies, such as redlining) actually “turn individual characteristics into vulnerabilities for homelessness” (Shinn & Khadduri, 2020, p. 52). Homelessness, in turn, exacerbates existing risk factors and can lead to further social exclusion and isolation from community support networks. From an intersectional perspective, individual characteristics interact to produce identities that are associated with different intersecting systems of power that lead to homelessness.
The local context of O‘ahu reflects these research findings. Honolulu’s high fair market rent rate is positively associated with its high homelessness rate (Barile & Pruitt, 2017). Importantly, not all residents are affected by poverty and homelessness equally. Despite prominent narratives that claim Hawai‘i is a “racial paradise,” stark inequalities exist related to race, class, and native ancestry. For example, Native Hawaiians are disproportionately represented in the island’s homeless population, comprising 43 percent of individuals experiencing homelessness, while representing only 19 percent of the general population on O‘ahu (OHA, 2019).
A recent racial equity report suggested that racial disparities may exist in housing service provision as well (Pruitt, 2019). For example, Native Hawaiians and other Pacific Islanders were less likely to receive permanent supportive housing compared to Whites and Asians. Additionally, Native Hawaiians make up a larger percentage of the unsheltered than the sheltered homeless population. In Hawai‘i, large encampments of homeless communities are not uncommon, offering social support and a return to kauhale living. Disparities between social classes are prominent as well. Hawai‘i was rated second among all states in tax rates on low-income households, further increasing inequities between the wealthiest and the poorest residents. Class intersects with race and ethnicity, as non-White and non-Asian groups are more likely to live in poverty and rely on housing subsidies.
Colonial History
Homelessness in Hawai‘i cannot be understood without an understanding of Hawai‘i’s colonial history. Prior to Western contact, Hawaiians—Kānaka Maoli—lived in kauhale living systems, sharing sleeping and living spaces, often under the stars (Watson, 2010). Each island was divided into ahupuaʻa, wedge-shaped pieces of land that stretched from the mountains to the sea. These ahupuaʻa were ruled by local chiefs, and each ahupuaʻa was meant to be self-sustaining, ensuring that everyone, including commoners (makaʻāinana), had necessary resources from both the land and sea (Minerbi, 1999).
Private land ownership did not exist within the Native Hawaiian system. Native Hawaiian homelessness has been attributed, in part, to two major historical events: the Great Māhele and the illegal overthrow of the Hawaiian Kingdom by the United States. With the dispossession of land and the fragmentation of Hawaiian communities came Western homelessness. Even after contact with Western nations, the Kingdom of Hawai‘i remained a sovereign nation until the end of the 19th century (Goodyear-Ka‘ōpua, 2014). However, foreign pressures led to changes to the Hawaiian way of life. In 1848, under pressure from foreign advisors, King Kamehameha III introduced the Great Māhele (division of land), marking the beginning of private land ownership in Hawai‘i. To be awarded newly privatized land, makaʻāinana were required to file a claim, provide testimony, pay for a survey of the land to be completed, and obtain a Royal Patent. Only around 30% of makaʻāinana completed all of these steps, and those who did were awarded, on average, 3.3 acres. Thus, the Great Māhele displaced a sizable number of makaʻāinana from their ancestral lands (Stover, 1997).
The late 19th century saw further challenges to the Hawaiian way of life and the sovereignty of the monarchy. In 1887, the Hawaiian League, a group of mostly White American businessmen, forced a new constitution upon King Kalākaua at gunpoint. This “Bayonet Constitution” diminished the power of the monarchy (Osorio, 2001). In response to later attempts by the king’s successor, Queen Lili‘uokalani, to restore these powers, a group of European and American businessmen backed by the United States military overthrew the monarchy. On January 17th, 1893, Queen Lili‘uokalani surrendered in an effort to save lives and in hopes she would be reinstated. The Kingdom of Hawai‘i was proclaimed to be the “Republic of Hawai‘i” by coup members (“The Overthrow,” 1999).
Since the Great Māhele and the illegal overthrow of the monarchy, Kānaka Maoli have fought to maintain their connection to the land. For example, in the 1970s, the rural communities of Waiāhole and Waikāne successfully resisted evictions meant to make room for suburban and tourism developments (Lasky, 2014). Local “houseless” communities on Oʻahu also continue to fight for their right to define community and access ancestral land. For example, Puʻuhonua O Waiʻanae is a self-governed village, where on average 250 houseless people live, two-thirds of whom are Kānaka Maoli. What began as a village on the edge of the Waiʻanae Boat Harbor has transitioned into a permanent village community meant to be a place of refuge for all people who have been unable to afford the cost of living in Hawaiʻi. There, people have access to social services and a return to kauhale living. Other such communities exist on the Windward and South sides of the island, with Kānaka Maoli community leaders stepping in to address local homelessness.
Building a Partnership
Housing First on O‘ahu
In addition to local community leaders responding to the homelessness crisis, government officials have invested in solving the problem. In 2014, the City and County of Honolulu responded to O‘ahu’s increasing homelessness rates with a flurry of housing policies, including funding for a program based on the Housing First model. In contrast to “treatment first” models, which assume people need to be “housing ready” (e.g., achieving sobriety, employment, etc.) before “earning” housing, Housing First, as a philosophy and program model, considers homelessness to be primarily an affordable housing problem, solved by providing individuals with housing quickly and then providing wraparound services if desired by participants—or “clients” (Tsemberis, 2010). The approach had been successful in other major US cities, and Honolulu government officials hoped the model would have an impact on O‘ahu.
Housing First Evaluation
The first-year evaluation revealed:
• High housing retention and improvements on many quality-of-life metrics (Smith & Barile, 2015);
• Approximately 97% of clients did not return to homelessness in the first year; and
• Monthly survey data showed decreased exposure to violence/trauma and improved physical health.
However, results also indicated that:
• Clients’ mental health and physical health still were significantly worse than the general public’s; and
• Stress increased for clients between months three and six in the program (Pruitt & Barile; Smith & Barile, 2015).
Evaluators hoped to better understand these findings and turned to those who knew best: clients.
A local service agency implemented the program and contracted with community psychologist evaluators (Drs. Barile and Pruitt) to conduct an evaluation of the program. An evaluation is a systematic investigation of program merits, outcomes, and processes, using social science methodologies (Cousins & Chouinard, 2012). In particular, Drs. Pruitt and Barile were tasked with evaluating the program for fidelity to the model (i.e., how well does the program adhere to the original program model?), housing retention, and cost-benefit analysis (i.e., do the benefits outweigh the costs?). The original evaluation plan was a mixed-methods design, including staff and client interviews as well as monthly client surveys.
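For readers unfamiliar with these evaluation tasks, the sketch below shows, in rough terms, how a housing retention rate and a simple benefit-cost ratio might be computed from client records. It is a minimal illustration only, not the evaluators’ actual code or data: the ClientRecord fields, cost categories, and sample values are hypothetical, and a real cost-benefit analysis would draw on far more detailed service-use and cost data.

```python
# Minimal illustrative sketch (hypothetical data and field names):
# a 12-month housing retention rate and a simple benefit-cost ratio.

from dataclasses import dataclass

@dataclass
class ClientRecord:
    client_id: str
    returned_to_homelessness: bool    # did the client lose housing in year one?
    avoided_service_cost: float       # hypothetical pre-program costs (ER, shelter, etc.)
    program_cost: float               # hypothetical housing subsidy + case management

def retention_rate(records: list[ClientRecord]) -> float:
    """Share of clients who did not return to homelessness in year one."""
    retained = sum(1 for r in records if not r.returned_to_homelessness)
    return retained / len(records) if records else 0.0

def benefit_cost_ratio(records: list[ClientRecord]) -> float:
    """Avoided service costs divided by program costs."""
    avoided = sum(r.avoided_service_cost for r in records)
    spent = sum(r.program_cost for r in records)
    return avoided / spent if spent else 0.0

if __name__ == "__main__":
    sample = [
        ClientRecord("A", False, 42000.0, 18000.0),
        ClientRecord("B", True, 35000.0, 15000.0),
        ClientRecord("C", False, 28000.0, 17000.0),
    ]
    print(f"Retention rate: {retention_rate(sample):.0%}")
    print(f"Benefit-cost ratio: {benefit_cost_ratio(sample):.2f}")
```

Fidelity to the program model, by contrast, is assessed against the model’s own criteria (e.g., consumer choice, separation of housing and services) rather than by a simple calculation like the one above.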
Participatory Evaluation
In an effort to better understand the experiences of individuals in the program, community psychologist evaluators decided to engage in a participatory evaluation, which engages non-evaluator stakeholders in the research and evaluation process (Cousins & Chouinard, 2012). Participatory evaluations can be grouped into practical participatory evaluations and transformative participatory evaluations (T-PE). Our project falls within a transformative participatory evaluation approach, which aims to create conditions in which individuals who have traditionally had little access to power can empower themselves. Evaluators felt that T-PE would work well with the Housing First program philosophy, particularly the value of “consumer choice.” By creating a new evaluation process in which the researched become co-researchers, T-PE allowed clients to have more say in the policies and research that affect them. Additionally, with its attention to power and explicit goal of transforming systems, T-PE complemented Dr. Pruitt’s intersectional approach to research and evaluation. Finally, this approach seemed to fit better within the local context, which values cooperation and collaboration over power, competition, and traditional hierarchical Western approaches.
Fig. 3 - Consider
What are some of the ways in which T-PE might encourage or work well with intersectional praxis?
Collaborative Partners
In general, participatory evaluations focus on relationships as an important outcome, and this project, likewise, prioritized building relationships between partners. Collaborative partners have included program staff, community psychology evaluators, and program participants—“clients”. While the individuals who hold these roles have changed over time, the partnership has remained stable. In particular, consistency in lead evaluators and a core group of clients has helped maintain the partnership even as case managers and other program staff have shifted. The initial community psychology evaluators consisted of Drs. Barile and Pruitt. Dr. Pruitt was in graduate school at the beginning of the partnership and has now taken on a leadership role in the project. Another community psychologist (McKinsey) joined in 2018 and, along with Dr. Pruitt, worked closely with community partners, attending weekly meetings. These two community psychologist evaluators—two White women in their 20s and 30s from the southern United States—had the most on-the-ground contact with partners. While evaluators stayed relatively stable, program staff has shifted over time. At any one time, program staff consisted of four case managers, two housing specialists, two administrative staff, and an interfaith chaplain/community liaison. However, the individuals who have served in these roles have changed over time. Clients involved in the partnership have also changed over time; however, for the most part, the core group of clients has remained consistent.
In evaluation, it is important to consider the ways in which different partners had different stakes in the project and varying levels of power. An intersectional approach to evaluation necessitates attention to power. In our partnership, clients had the least amount of power—both within the program and within the greater community—and also had the most at stake (e.g., their housing). Program staff also had much at stake (e.g., their jobs) but had considerably more power than clients. Various levels of power also existed among program staff. Case managers, for example, had less decision-making power than upper-level administration. While evaluators had significant power (e.g., determining what results and recommendations are passed along to funders), they were under contract with the program and largely depended on program staff for access to data and ultimately, a successful evaluation project. For the partnership to be successful and equitable, evaluators knew they had to consider these dynamics and how they were informed by the larger socio-historical context. For example, as members of the colonizer group, the community psychologist evaluators were constantly considering the ways in which systems of power attached to their social identities were impacting the group dynamics. In this case, intersectionality was employed as an analytical tool.
Community Assets/Needs
Despite power differentials, the partnership offered the potential to meet the various needs of the diverse partners involved. The T-PE approach was embraced by the program, which was looking for a way to build client feedback into the program model. The participatory evaluation was one way to formalize that process. Additionally, community psychologist evaluators initially had difficulty accessing program data. Clients were hesitant to fill out monthly surveys and case managers were hesitant to encourage clients to complete them for fear of compromising program fidelity related to consumer choice. Case managers who were overburdened with high caseloads in the first year needed a way to see multiple clients at once (Smith & Barile, 2015). Clients expressed a desire for social support, community, structure, and something meaningful to do. The chaplain/community liaison was looking for a way to address these issues. Overall, community psychologist evaluators were looking for a way to better engage with the program without adding to the workload of program staff or stress of the clients.
Assets and Strengths
Thankfully, the partnership allowed partners to meet these needs by capitalizing on the existing strengths and resources of various partners. For example, one of the biggest strengths was the ongoing commitment to building community and social support among program staff and clients as part of an optional weekly support group. Led by the chaplain/community liaison, the group met to discuss challenges with housing, to provide peer support, and to (re)learn “life” skills. Case managers also participated, some of them having experienced similar challenges in the past. Dr. Pruitt began attending the group as part of an initial Photovoice project aimed at engaging clients and staff in the evaluation through the use of photography. Building the project and partnership into an ongoing program component was beneficial in that trust between group members had already been established. Over the next five years, the Housing First Community Group, comprising HF clients, program staff, and community psychologist evaluators, became an integral part of the program and evaluation design. Additional strengths included the fact that upper-level program staff had important contacts in the community that paved the way for future dissemination of evaluation findings. The program evaluators had training in participatory and arts-based methods and were able to use this training to inform the evaluation research project. Importantly, program staff and clients were open to collaboration, and all partners were committed to learning from each other from the start. Perhaps most notable were the clients’ strengths. Rarely are individuals who have experience with homelessness or severe poverty considered to have strengths and assets that are beneficial to society. Our work together revealed that this is a significant miscalculation by “housed” individuals. We identified the following:
• Clients were resourceful and insightful, making connections between quantitative outcomes and qualitative outcomes.
• They helped case managers check in on other clients in the program who may have been dubious of case management.
• They assisted in outreach.
• They helped the evaluators interpret survey results and were invested in the evaluation process as a whole.
For example, when survey data showed a decrease in physical and mental wellbeing after six months of housing, program staff and evaluators assumed clients were struggling with transitioning to housed life and considered offering more “life skills” classes. However, the interpretations from the client co-researchers revealed a more complex story. They explained that it took many months of housing before they felt safe enough to come out of the constant state of “fight or flight” they experienced prior to housing. Once they emerged from this state, they were able to take stock of their wellbeing and recognize the trauma they had endured. Some described their reactions in terms similar to post-traumatic stress disorder. Due to their insights, the program was able to address this issue by providing more comprehensive services beyond life skills classes. The value of such contributions is often overlooked in evaluation projects that do not take a participatory approach, and the value of the contributions of those who have traditionally been excluded from the process cannot be overstated.
Participatory Research Project: Photovoice Studies
Throughout our partnership, the Community Group—or “the group”—has collectively produced multiple evaluation reports, participated in community arts projects, and conducted participatory research projects. This section focuses on the two biggest projects which both used Photovoice, a participatory research methodology.
Photovoice Project One
In January 2016, the group chose to conduct a Photovoice project that examined clients’ experiences in the program and gave them the opportunity to speak to the program and the larger community about these experiences. Photovoice seemed an appropriate choice given that it is a community-based participatory research method in which participants use photography to (a) identify and record their personal and community strengths and concerns; (b) engage in critical dialog about them; and (c) communicate these strengths and concerns to policymakers (Wang & Burris, 1997).
As a participatory method, all partners are involved at each stage of the research process, from the development of the research question to the dissemination of findings. Photovoice works to center the voices and experiences of individuals traditionally left out of the research process (Tsang, 2020; Wang, 1999). Thus, the method worked well with our T-PE and intersectional approach. Individuals experiencing homelessness, particularly those who also experience severe mental illness or chronic health conditions, rarely have a say in the research and policies that greatly impact them. This exclusion is largely due to social exclusionary policies and assumptions by the larger society that such individuals are incapable of meaningful contribution to research and practice. Photovoice allowed for both the creation of an inclusive space that centered on client voices and analysis of the processes that tend to restrict inclusion and voice.
As part of this initial three-month study, 18 Housing First clients and two case managers took more than 300 photographs over a four-week period. At that point, most clients had been housed less than a year, and thus the group decided to focus on the initial transition from homelessness to housing. They took photographs in response to prompts aimed to examine this process (e.g., “How is your life different now?” “What is everyday life like for you?”). Each week, clients and case managers shared photographs with each other and discussed their relevance to the prompt and the overall research question. Then, the group collectively conducted participatory analysis on the photos (see Image 2) and reported the findings in the yearly evaluation report (Pruitt & Barile, 2017). After the conclusion of the Photovoice project, Dr. Pruitt continued attending the weekly community group. As staff turned over, she often took on a facilitator role. A core group of clients also continued to attend and contribute to group agendas. This commitment helped ensure the group continued even in the midst of staff turnover. As several clients noted, having that consistency was meaningful. Throughout the next year, the group engaged in continued participatory evaluation research and practice, assisting in evaluation reports, helping interpret evaluation results, providing peer support to others in the program, and assisting Dr. Pruitt in her research on local media coverage of homelessness. Importantly, the group co-authored an academic journal article in 2017.
Photovoice Project Two
In 2017, HF clients in the group asked Dr. Pruitt to help them design and conduct a follow-up Photovoice study. With increased knowledge of the research process, clients wanted to examine the long-term and continuous nature of the recovery process from homelessness. The group applied for and was awarded a Society for Community Research and Action Mini-Grant to purchase higher-quality cameras. From August–November of 2018, 22 individuals participated in this project (15 clients, four staff members, and three evaluators), most of whom had participated in the 2016 study and had been housed for an average of 3.4 years.
Participatory Analysis
In both projects, the community group conducted a participatory analysis of the photos. During meetings, group members would select a few meaningful photos to share, with the photographer contextualizing the photo by describing where and when it was taken, why it was meaningful, and/or what it represented. The group then collectively analyzed the photos by identifying patterns in the photos and drawing connections to previously shared photos. In 2018, the group coded photographs using large theme boards (see Image 3). The ultimate goal of the analysis was to identify key themes relevant to the long-term process of transitioning into housing and recovering from the trauma of homelessness. Based on the themes identified during the participatory analysis stage, community psychologist evaluators also conducted a secondary content analysis of all meeting transcripts. Content analysis is a classification process consisting of codifying and identifying themes within qualitative data (e.g., transcripts; Collins et al., 2016). The goal of the secondary analysis was to examine the unique contributions of the Photovoice method and to gain a comprehensive understanding of the recovery process.
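For readers less familiar with content analysis, the sketch below illustrates one small mechanical piece of such an analysis: tallying how often each theme appears across coded transcript segments. It is a hypothetical illustration only, assuming themes and coded segments have already been produced through the group’s coding process; the theme names and transcript identifiers are invented for the example, and the project’s actual analysis was far richer than a frequency count.

```python
# Minimal sketch of tallying theme codes across meeting transcripts,
# assuming coded segments have already been assigned by the group's
# participatory coding. Theme names and transcript IDs are hypothetical.

from collections import Counter

# Each coded segment: (transcript_id, theme) pairs produced during coding.
coded_segments = [
    ("2018-08-07", "social support"),
    ("2018-08-07", "stigma"),
    ("2018-08-14", "projects, hobbies, and goals"),
    ("2018-08-14", "social support"),
    ("2018-09-04", "reflection on life before and after housing"),
    ("2018-09-04", "stigma"),
]

# Count how many coded segments fall under each theme.
theme_counts = Counter(theme for _, theme in coded_segments)

# Count how many distinct transcripts each theme appears in.
transcripts_per_theme = {
    theme: len({t for t, th in coded_segments if th == theme})
    for theme in theme_counts
}

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} coded segments across "
          f"{transcripts_per_theme[theme]} transcript(s)")
```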
Outcomes
Participatory and content analyses of the projects suggested that although housing brought stability to many aspects of life, challenges such as stress, stigma, and everyday struggles persisted for many clients once housed. Additionally, social support and community reintegration continuously appeared as prominent indicators and promoters of recovery from the trauma of homelessness. Other prominent themes identified as relevant to transition to housing and recovery included:
• the importance of projects, hobbies, and goals;
• appreciation of people and environment;
• the stigma surrounding homelessness; and
• reflection on life before and after housing.
While both studies demonstrated the difficulties clients faced with stigma, the first study found that clients felt that the level of interpersonal stigma they experienced had been reduced with their housed status. However, the pain of previous treatment was still poignant. In the second study, clients continued to discuss stigma; this time, from a macro-level perspective. They discussed the stigma toward “the homeless” that they perceived in the media, local policies, and the larger community’s attitudes (see Image 4). Notably, they wanted to use the findings from these projects to try to address this stigma and advocate for the homeless community. For more information on study findings, please see Pruitt et al. (2018) or visit our project website.
Dissemination
With social and political transformation as central goals of Photovoice methodology, dissemination of findings was essential to the project. Further, clients consistently explained that part of their motivation for participating in the project was its potential to create social change. As such, extensive dissemination of findings was an important piece of these projects.
Given that findings are informed by the lived experiences of individuals whose perspectives are rarely seen in the dominant public discourse, exhibits were one of the main dissemination tactics used to reach a broad audience. In 2016, the group held an exhibit for the first Photovoice project’s photos and findings at Honolulu Hale (City Hall) in collaboration with the program, the City and County of Honolulu, and the University of Hawaiʻi at Mānoa. The exhibit was also displayed at the university’s Hamilton Library in November 2018, along with the second project’s photos and findings. In 2019, the second project’s exhibit was displayed at two other community events, including the Hawai‘i Art and Mental Health Summit and the Homelessness Interfaith Summit. The goal was to reach diverse audiences with varying levels of power and stakes in the program and to center the perspectives of individuals from the margins. Findings have also been disseminated within the academic community. Throughout 2017, HF clients and program evaluators co-authored a research article that reported findings from the first project, which was published in the American Journal of Community Psychology in January 2018 (see Pruitt et al., 2018). Evaluators are currently in the process of preparing a manuscript that shares both the participatory analysis findings of the second Photovoice project and the secondary content analysis findings. Findings and corresponding recommendations have also been included in annual evaluation reports to the program. Lastly, evaluators developed a website detailing the Photovoice process, projects, and findings.
Impact
The two Photovoice projects also resulted in varied impacts: transformative change, knowledge creation, and methodological insights.
Transformative impact: Both projects sought to achieve a transformative impact by providing a time and space for HF clients to actively reflect on their own lived experiences, offer feedback to the program, and engage in social action. Recognizing that they had “won the lottery” in being chosen for this pilot program, clients wanted to help others by sharing findings from the studies in an effort to change the local homeless service system. Analysis of the impacts of the first Photovoice project revealed transformative change on the individual, program, community, and policy levels (see Table 1; Pruitt et al., 2018).
Table 1. Transformative Impacts
Level of Change: Examples of Change
Individual-level: Decreased perceived interpersonal stigma
Program-level: Increased client voice in program
Community-level: Community education on homelessness, mental illness, and Housing First
Policy-level: Extension of program funding, expansion of the program to neighboring islands, and adoption of Housing First as a statewide model
Knowledge Creation: The projects also built knowledge from the lived experiences of transitioning to housing and recovering from homelessness by taking a phenomenological approach to community-based participatory research (Bush et al., 2019). Phenomenology is a research approach grounded in the belief that knowledge can be derived from experience (Racher & Robinson, 2003); it seeks to build knowledge by relying on the accounts of those experiencing the phenomenon of interest (Giorgi et al., 2017). Indeed, as the field of community psychology teaches, the true experts on any given issue are those most impacted by that issue.
Fig. 4 - Consider
How might knowledge of lived experiences connect to intersectional praxis? How might knowledge built on lived experience disrupt dominant systems of power?
Methodological Insight: Lastly, the use of various research methods throughout the projects increased understanding of research approaches used to study lived experience. Intersectionality scholarship argues that research on any given issue calls for diverse approaches and methods (Moradi & Grzanka, 2017). In line with this argument, community psychologist evaluators conducted both participatory and content analysis to draw meaning from photos and discussions. Comparison of these methods further emphasized the need for multiple research approaches to comprehensively capture the essence of lived experience. For instance, evaluators found that some themes that were prevalent in photos were not frequently discussed and that some themes prevalent in discussions were not frequently photographed. These findings signal that some topics are hard to put into words, while others are hard to capture visually. Additionally, evaluators encountered difficulty interpreting themes identified through content analysis on their own, revealing how participation and insight from HF clients were crucial to comprehensively and accurately analyzing the data generated from the projects.
Fig. 5 - Consider
How might method diversity help community psychologists understand complex phenomena like recovery from homelessness? How does intersectionality necessitate diverse methods?
Lessons Learned
Overall, this partnership has been a process of mutual learning for all partners, and this section details the lessons learned while engaging in the participatory research process. Notably, this section reflects the lessons learned by the community psychology evaluators involved in the projects. The initial plan for this case study was to include the perspectives of all partners. However, due to COVID-19, this collaboration has not been possible. Thus, this section may be most useful for community psychologist evaluators working in multicultural environments with marginalized communities. We recognize that given the lack of our partners’ voices, this section is necessarily incomplete. Perhaps the first lesson, then, is for community psychologists to be aware that our perspectives are not universal but are situated in a larger context informed by intersecting systems of power. One of the most important lessons was that taking on the researcher role could be difficult for clients.
While participatory researchers generally assume that more ownership of and voice in the research project is desirable and “empowering,” our work showed that taking on this role comes with unique challenges. For example, clients did not take any photographs the first few weeks of the second Photovoice project, despite the fact that they had initiated the project themselves. While one staff member worried that this hesitancy meant they did not want to participate, group discussions suggested that clients were taking more time because they wanted to “do it right,” and that they were extremely anxious about potentially making a mistake. Community psychologists working with marginalized groups should consider these challenges and address them throughout the project; some of the strategies we used to address these challenges are described in the sections that follow.
Another lesson learned through the partnership is that dissemination can be one of the most power-laden stages of research. For example, balancing power among stakeholders during exhibit planning proved difficult. During the first exhibit at Honolulu Hale (city hall) in 2016, homelessness was a hot topic in the local media, and the exhibit was taking place during an election year. The program also wanted to use the exhibit as an opportunity to educate the community on its other housing programs. Thus, many higher-powered stakeholders now had a vested interest in the project—which we acknowledged could be both advantageous and challenging for the group’s goals. Beforehand, we talked as a group about the potential for the media or politicians to usurp our project to push agendas not necessarily in line with our own. The group collectively decided that the risk was worth the potential benefit of advocating for the program and others still experiencing homelessness. The exhibit received significant press coverage and was attended by the mayor and other prominent politicians, and the group was satisfied with the overall event. However, the negotiation and planning were daily stressors for evaluators, who served as mediators in these negotiations.
Additionally, the co-authoring of the journal article revealed the power dynamics inherent in academic writing. As hooks (1989) reminds us, the writing of research takes place within a context, and this context often supports White dominance. For example, hooks (1989) points out that White scholars make the mistake of not recognizing that writing occurs within a “culture of domination” and fail to understand power and context, and thus, their work often reinforces that domination. As White community psychologists, we found that we had to be cognizant of the impact of this context at all stages of the research process if we were to (co)produce socially responsible research. In fact, researcher self-reflexivity, while necessary, was not enough. In addition to being aware of our own positionality within this web of power, we also needed to be critical of the very conventions that we relied upon and the processes of power inherent in those conventions. We found that we needed to shift our focus from identity groups (based on race, gender, homeless status, etc.) to context or process.
Other lessons included learning to balance calling attention to differences among partners (e.g., related to power and skills) with recognizing our similarities. We also discovered that relationships were the most integral part of the research and knowledge-building process. It was clear that clients found the relationships they built with each other, the staff, and the evaluators to be most important to their continued investment in the partnership and even to their own recovery processes. Similarly, staff and evaluators found these relationships enriching and sustaining. Evaluators realized that they were as much a part of the group as any other members and found that over-emphasizing differences between themselves and the group members, however well-intentioned, could be offensive to some group members. For example, Dr. Pruitt found that calling attention to client “expertise” in understanding homelessness and the housing process served to distance herself from clients, and clients responded by emphasizing that they were more similar to her than different. One of the strategies that helped to reconstruct the assumed hierarchy of knowledge and skills (in any direction) was engaging in arts-based projects. With these projects, we were all learning new skills at the same time, and at times clients with art backgrounds helped teach the group. In this way, clients, staff, and evaluators were on the same level, learning together.
Looking Forward
Recognizing the value of art in relationship-building, self-healing, and social action, at the conclusion of the second Photovoice project, the community group decided to form an arts hui (group) to learn various art techniques and to create art for themselves and for social action. For example, one of the art projects included a “positive signs project.” During one of the Photovoice discussions on stigma, group members questioned why public signs were always so negative, saying “don’t do this; don’t do that.” So, the group decided to engage in a sign project that distributes positive messages (see Image 7). For example, in response to signs instructing people not to sit or lie on public sidewalks (effectively criminalizing homelessness), one group member suggested the group make signs that inform people where they can sit (see Image 8). The group has also engaged in the art of cooking, learning new methods for creating affordable and healthy dishes.
Unfortunately, in March 2020, the COVID-19 pandemic halted the weekly group meetings that have sustained this partnership. While all partners hope to continue the weekly meetings as soon as it is safe, it is unclear when or how to go about reconvening, given that many group members (including staff, clients, and evaluators) experience chronic health conditions that put them at risk for severe disease. While the group attempts to stay in contact via phone and email, many group members lack regular access to this technology. Some of us have remained in contact via the mail, and evaluators are working to build capacity to meet virtually. For now, however, the future of the group is uncertain, although the overall partnership continues.
Recommendations
Based on this partnership experience, the community psychologist evaluators have recommendations for other community psychologists working in similar settings with similar groups—particularly those community psychologists interested in intersectional and participatory approaches. First, rigorous qualitative, quantitative, and mixed-methods research can be conducted collaboratively with individuals who have significant housing, mental health, and physical health challenges. Even individuals with ongoing psychosis were significant and essential contributors to the research project and were core leaders in the partnership. Community psychology practitioners working with such groups should not discount the abilities of community partners, and rather, should consider amending their practice by:
• Remaining flexible. Always have a plan but be willing to change it based on partner needs or changes in the context. Importantly, be willing to try something new and continue to seek ways to use the strengths of the community in meeting the needs and goals of the partnership.
• Thinking outside the box. Consider multiple avenues for participation and contribution. For example, we found that having a non-hierarchical, flexible group format with less structure helped produce a more inclusive environment for individuals who may have mental health challenges. It was also more culturally appropriate. Additionally, consider alternative or innovative research methods and ways of disseminating findings. We have already pointed to the value of arts-based approaches when working to center voices typically overlooked in research and practice. Intersectional community psychology practice might consider engaging in similar methods.
• Being willing to be wrong. Rarely do we get it right the first time. Mistakes and conflict are an unavoidable part of any partnership and, indeed, of any authentic relationship. We found that some of our community partners reacted to conflict differently and more subtly than we expected. This reaction reflected an interaction of power differentials as well as cultural and class differences. Indeed, differences even existed amongst community psychologist evaluators. Therefore, we had to ask for input from partners regularly and investigate acceptable ways to address conflict. Importantly, we had to be willing to be wrong and work toward making it right without being defensive.
• Building authentic relationships. Partnerships will be greatly enriched if they are built upon authentic relationships between people who genuinely enjoy each other’s company. We found that laughing together, eating together, and being vulnerable with each other were important aspects of building relationships among partners.
Additionally, we encourage relying on intersectionality analysis and praxis at each stage of the partnership. This approach will likely require constant attention to power and researcher flexibility. Often our community contexts can unintentionally reinscribe hierarchical and oppressive structures. Because partnerships consist of multiple and complex relationships that work together to produce new knowledge, community psychologists should consider how power affects the knowledge produced and what role they might be unintentionally playing in the reproduction of oppressive structures. With such attention to power, community psychologists can facilitate the co-production of meaningful, transformative knowledge which extends beyond the patronizing trope of researchers “giving voice” to marginalized groups. Of course, this approach will require flexibility. As TallBear asserts, “A researcher who is willing to learn how to ‘stand with’ […] is willing to be altered, to revise her stakes in the knowledge to be produced” (2014, p. 2). For community psychology practitioners, especially those working with marginalized populations, the importance of showing up and doing what you say you are going to do cannot be overstated.
In Hawaiʻi, as in homelessness services, people come and go frequently. Hawaiʻi sees high rates of turnover in residents and is said to be a “revolving door,” and homeless services is a field notorious for high turnover among social workers, outreach workers, and case managers. Many clients simply expected evaluators and other group members to leave, and they frequently mentioned how much it meant to them that we continued to come every week. One of us (Dr. Pruitt) recalls how one of the core client members continued to be shocked that she remembered his name every week, more than four years into the project. In other words, consistency was key to building trust. One of the clients mentioned that it meant a lot to him to know that he was an important part of the group and that if he didn’t show up, other group members would wonder where he was. Indeed, if someone did not show up for a couple of meetings in a row, the group would often designate someone to check in on them to see if they were doing alright. Showing up, while it may seem like the least we can do, makes all the difference in building a strong partnership among those who have been socially excluded.
The reframing of the role of the trained researcher from offering expertise and “teaching” to showing up, standing with, and learning from the community is central to achieving equitable engagement and partnerships. Indeed, it became abundantly clear early in the Photovoice projects that HF clients did not expect perfection from us (i.e., in the ways we facilitated discussions or explained certain research concepts), but they did expect us to be present. It was this shared dedication to the project’s goals, commitment to showing up every week, and excitement in creating transformative change that strengthened the partnership most. Additionally, it can be easy to get caught up in the notion that trained researchers have something to teach community members, especially marginalized community members. However, it can be more difficult to see the ways in which traditional researchers can learn from community members. As trained researchers, we often left Photovoice sessions shaken and awakened by clients’ insights.
Conclusion
We have found that when an alternative type of space is created, people and knowledge thrive. Going beyond respect for diversity, this project demonstrates a space in which individuals of multiple races, classes, and genders worked together to build community, conduct rigorous research, and advocate for social justice. Ultimately, this case study sought to emphasize how to use community psychology values to conduct rigorous, long-term participatory research using innovative methods, and we argue that community psychology practice is incomplete without an intersectional approach. Community psychologist researchers and practitioners should not assume that collaboration and participation are enough to overcome pre-existing power dynamics and oppressive structures. Instead, they should also engage in constant reflexivity and recognize moments of subtle resistance. We hope we have demonstrated in this case study that community psychology practice should involve an intersectional critical praxis that investigates power dynamics related to various and intersecting oppressions and identities. “Practitioners who would be drawn to intersectionality as critical praxis seek knowledge projects that take a stand; such projects would critique social injustices that characterize complex social inequalities, imagine alternatives, and/or propose viable action strategies for change” (Collins, 2015).
For the detailed Field Notes and Reflections on Field Notes for this case study, please contact Dr. Anna Pruitt.
From Theory to Practice Reflections and Questions
• How is the theory of intersectionality helpful in a participatory evaluation process? What are other areas where using an intersectionality approach might be beneficial?
• Pruitt et al. (2021) shared, “We hope we have demonstrated in this case study that community psychology practice should involve an intersectional critical praxis that investigates power dynamics related to various and intersecting oppressions and identities.” Reflect on how power dynamics and research relate to one another, consider what this means in general, and provide a short statement.
• Name one way you can think outside of the box within your own work or area(s) of interest, whether as a community psychologist or as a professional in another field.
The CDC Foundation defines public health as “the science of protecting and improving the health of people and their communities.” You will find public health professionals working to prevent the spread of illness within large populations or within smaller segments, such as a single community. Further, their work revolves primarily around prevention, but they also attempt to understand how disease spreads and its effects on populations. Community health is similar, and when working within communities, community psychologists have similar goals. However, they take a different approach, such as focusing on eradicating the impact of health disparities that result from socioeconomic, cultural, and ethnic factors.
The three case stories in this section will provide those interested in public and community health an excellent lens through which to see this type of work being done by three community psychologists. The first story, A Plan for Prevention: Measuring Equity from the Start, contributed by Dr. Tonya Roberson, focuses on a community-based participatory research (CBPR) methodology to promote health equity for African American students at an HBCU.
The second story, Working with Survivors of Gender-Based Violence, authored by Dr. Dessie Clark and Joshua Brown, LCSW, tackles the task of bringing awareness to gender-based violence and working with survivors of this prevailing social issue, particularly those who might be living with traumatic brain injury as a result.
The third story, Journeying Past Hurt: Creating and Sustaining Trauma-Informed Healing Practices With Black Pregnant and Parenting Mothers, is contributed by Dr. Deidra Somerville, who centers her work on individual, family, and community healing. The story provides an excellent narrative on how to incorporate the knowledge and experiences of Black pregnant and parenting mothers into training programs and curricula designed to support family health and well-being.
4.02: A Plan for Prevention - Measuring Equity from the Start
This case study discusses using culturally tailored data collection tools when applying practical community-based participatory research methods to promote health equity for African American students at an HBCU.
The Big Picture
The United States has historically been a country struggling with racial and health disparities. Disparities in health outcomes and healthcare persist between racial, ethnic, and socioeconomic groups in the United States, with African Americans (AA) suffering more from chronic diseases such as cancer, heart disease, stroke, dementia, HIV/AIDS, and diabetes; morbidity and mortality rates for African Americans far exceed those of White Americans (The State of Health Disparities). Working as a community psychologist and studying the impact of diseases and inconsistencies, I see patterns in health care inequities. African Americans get sicker and die at a younger age from preventable ailments and diseases than White Americans. Furthermore, the history of medical mistrust among African Americans has been justifiably consistent and long-lasting. In a 1966 speech on health care injustice, Dr. Martin Luther King Jr., an African American Baptist minister and activist, shared a profound remark: “Of all the forms of inequality, injustice in health care is the most shocking and inhumane.” These inequities result from a combination of individual and group behavior, lack of health and research knowledge, and systemic inequality in economics, housing, and health care systems. African Americans have been underrepresented in medical research. Improving health disparities will require a methodical, purposeful, and sustainable effort to address issues including but not limited to health education and health literacy. The healthcare system cannot begin to address health issues or engage in prevention efforts if patients do not understand what is being said to them. The patient needs to understand the information given to them to make informed decisions. Improved health education and health literacy will enable people to make informed decisions that lead to better health outcomes, family and social support, and access to health care, which will ultimately help reduce disparities. Implementing culturally specific research data collection tools to increase diversity, inclusion, and the research participation of African Americans across ages is imperative to reducing existing health inequities.
What Do Health Disparities Have to Do With It?
The United States (U.S.) government defines a health disparity as “a particular type of health difference that is closely linked with social or economic disadvantages.” For example, did you know that in the U.S., Black adults are nearly twice as likely as White adults to develop type 2 diabetes? This racial health disparity has been rising over the last 30 years and continues to rise. Disparities exist in nearly every aspect of health, including quality of health care, access to care, utilization of health care, medication adherence, and health outcomes. These disparities are believed to be the result of the complex interaction among genetic variations, health literacy, environmental factors, existing zip codes, and specific health behaviors. Closing these multi-layered gaps in health outcomes is no easy task. Community psychologists can better identify and address the needs of a diverse population when we consider health disparities. Solutions are more likely to endure if they address both the current cause of a given disparity as well as the circumstances that caused it to occur initially.
The Covid-19 pandemic has heightened our awareness regarding health disparities in the United States. Drivers of health inequities have been debated through the years, but most notably include social determinants of health (SDOH) such as poverty, employment in low-wage but essential worker jobs, and crowded housing situations (Riordan, Ford, & Matthews, 2020). In the public health arena, the importance of addressing all facets of SDOH to advance health equity has long been recognized and discussed. When developing solutions to health problems, community psychologists recognize the importance of culture and context rather than assuming a “one size fits all” approach will be effective. Often, adaptations are needed to fit individuals.
Think Globally, Act Locally: Lived Experiences
I have always considered myself a public servant and have been inspired by the words and work of Dr. Martin Luther King Jr. The following quote is from a 1965 speech and speaks to the meaning of being a global citizen: “Our lives begin to end the day we become silent about the things that matter.” I take this statement to mean that those who recognize that a wrong is being committed and fail to make a change are not necessarily guilty of the wrongdoing, but neither are they acquitted of the harmful outcomes.
When my mother, a former educator, was diagnosed with breast cancer during a routine mammogram, I witnessed firsthand what health disparities look like. During her bout with breast cancer, she and our family faced many racial and cultural barriers, provider stereotyping, and communication difficulties between my mother and her provider. This experience made me think seriously about other patients who did not have the capacity our family had to advocate for them, yet faced the same challenging experiences. What was their outcome going to be? Would they live or die if they decided not to go through with suggested procedures because of feeling uncertain, intimidated, or like a ‘guinea pig’? Who was going to be the voice for them? At that point, I said, “I WILL!” I began volunteering with community-based organizations and then working to conduct community-engaged research, recruiting research volunteers in underrepresented communities with large academic medical centers, healthcare organizations, and community-based settings. The more involved I became, the more I witnessed the medical mistrust that existed and the health and racial disparities. I started to focus on two important areas: health education and disease prevention strategies to promote improving the quality of care and quality of life of persons coping with a life-threatening illness.
Community Assets/Needs
“If we take the time to care about people, we can transform whole communities.
You never know how you can change someone’s life by showing him or her that you care.”
– Rehema Ellis/NBC News
In my experience, community-engaged research benefits from initially using a community-based participatory research (CBPR) approach. CBPR involves an equitable partnership among all research parties in all aspects of the research process, from inception to dissemination. CBPR relies on “trust, transparency, dialogue, extending and building community capacity, and collaborative inquiry toward its goal of improving health and well-being” (Minkler & Wallerstein, 2003). I’ve found that “true” CBPR approaches in health disparities research take at least two years to develop, and the process needs to be ongoing. CBPR combines the best of community and academic wisdom, experience, and knowledge to promote social change to improve community health and reduce disparities. The CBPR approach has been effective in impacting health outcomes such as asthma, diabetes, and cardiovascular disease (Israel, Eng, Minkler, & Parker, 2012).
Principles of CBPR:
• Community initiation
• Capacity building
• Varied methods
• Joint data ownership
• Social action outcomes
• Community relevance
• Process oriented
• Ethical review
Within the last twenty years, CBPR has gained recognition within the public health sector and shown that community engagement is vital for effectively identifying and addressing health disparities. Many causes of health disparities and inequities include poor education, poor health behaviors of the group, poverty (inadequate financial resources), and personal and environmental factors (USDHHS). Most of these factors are related to access. To impact health disparities and inequities, we must strive toward holistic health equality for African Americans and other populations of color, and the healthcare system must begin working to abolish the protracted consequences of racism. More meaningful data at the individual and cultural level should be collected among people of all ages and then reviewed and considered by community members as well as public health leaders and government decision-makers. This data and its review can be used to develop tailored health initiatives to improve health outcomes and increase equity. Including partnerships at the inception of the data collection process and using culturally relevant data collection tools will increase the likelihood that the results are appropriately attained and accurately interpreted. The time has come for the political, economic, and social powers that have negatively impacted American medicine to reshape decisions that affect African American health policies. Assessments are conducted for various audiences, including researchers, funding agencies, private agencies, and policymakers, and they must be culturally tailored for the target group. Researchers must then report factual and credible findings. Only then can health disparities be addressed and measured thoroughly and accurately.
Why Culturally Tailored Interventions?
Often, evidence-based interventions are not tested with culturally diverse populations. Distinct cultural groups have unique needs and often fall through the cracks of service and healthcare systems. Interventions tailored for specific populations can address these needs and reduce disparities. In order to improve African Americans’ health knowledge and willingness to participate in research, data collection instruments must be developed with the understanding that respondents will only interpret questions and terms based on their own experience and context. Community psychologists working as researchers must aim to construct questions that are understandable and relevant to the group being studied in order to obtain the information needed to combat the issue.
Dr. Robert Williams, an African American psychologist and professor, created the Black Intelligence Test of Cultural Homogeneity (BITCH-100) in 1972 because he saw the bias in intelligence tests toward White Americans. Dr. Williams also saw this as a problem because low test scores among African Americans were hurting their chances to secure jobs, gain entrance to certain schools, and access other opportunities supporting academic and economic success. Furthermore, receiving low intelligence test scores was affecting African Americans’ self-esteem, confidence, and motivation to achieve and succeed. The test consisted of a multiple-choice questionnaire in which the examinee was asked to identify the meaning of 100 words as they were then used in what were labeled Black ghettos. It took about two years for him to develop this culturally tailored test, and its purpose was to determine if his theory was correct. The results showed that Black participants performed much better than their White counterparts; White students performed more poorly on this test than Black Americans, suggesting that there are important dissimilarities in the cultural backgrounds of Black and White participants. The results of these tests and examination of the BITCH-100 confirmed Robert Williams’ belief that his intelligence test dealt with content material that was familiar to Blacks.
Where Do I Start?
I put on my community psychologist’s hat and reasoned that if Dr. Williams could show that background is an essential part of Black students’ success on standardized testing, then background and culture are also vital in improving health outcomes and eradicating health disparities.
A Plan for Prevention
African American college students represent a unique population for promoting health. However, African American participation in research relevant to health disparities is limited, especially among young African American adults. Limited data exist concerning the health of African Americans (AA). When health assessments are conducted at universities, AA students typically do not participate. Therefore, further engaged research with African American college students is necessary to collect accurate data. This additional data is vital in the development of health prevention and promotion interventions, activities, and services for this vulnerable student body. Armed with the data, AA students can take on leadership roles and become advocates for health and peer-to-peer educators in eliminating racial/ethnic disparities and improving the quality of life of African Americans. Students at Historically Black Colleges and Universities (HBCUs) can serve as a model for promoting health equity and prevention, and HBCUs are in an ideal position to serve as excellent public health partners. Therefore, I developed a mixed-methods study, which was culturally tailored for Black college students at a private HBCU in Atlanta, Georgia. The study was designed to answer questions about health beliefs and health behaviors that tapped into the unique factors related to the disparities in their health and wellness.
This study proposed to:
1. Assess the health perceptions, behaviors, and knowledge of Black college students at the HBCU,
2. Identify and define critical problems and barriers to health, and
3. Explore strategies to design sustainable health education and disease prevention interventions leading to better health.
Building Collective Impact
Building a productive and collaborative team of research partners is just the beginning. The team members’ ideas must be aligned and promote sustainability. Many factors, such as identifying the right team members, building trust through good communication, and effective negotiation skills, are needed to advance collaborative projects and to prevent and manage disputes and conflicts. I had to use my communication and interpersonal skills daily. I had to be open, forthcoming, and transparent while making a concerted effort not to over-deliver.
I conducted the study at Clark Atlanta University (CAU), an HBCU. Although I was from a different state, I was familiar with the operations, location, and culture of the school and campus because my daughter was a student. I strategically identified collaborative partners, which included the Chicago local office of the American Heart Association and a nearby Walmart Neighborhood Market, to provide incentives for participants. Faculty members from Charles Drew University (another HBCU) and students helped with data collection. CAU Research and Sponsored Programs provided leadership in the establishment of a partnership, which included a CAU research mentor, administering contracts and IRBs, and engaging faculty, staff, and students, the institution, and its constituents. I further developed relationships at CAU with the PanHellenic Council (Greek organizations), Student Affairs, the Student Health and Wellness Center, and nurses. A unique approach was designed for this campus setting in Atlanta, Georgia, to begin my work to identify and increase student, faculty, and staff participation.
It became time to deliver. All hands were on deck! Initial recruitment relied on word of mouth until Student Affairs approved the recruitment flyer. Two to three weeks prior to the survey administration dates, the student health service director, the research mentor, eleven student volunteers, and I circulated recruitment flyers across the campus. Over 500 flyers were posted in high-traffic areas in program departments, classroom buildings, bus stops, dormitories, the library, and the student cafeteria. The Clark Atlanta University (CAU) administration also sent out a campus-wide email blast advertising the study and encouraging students to participate. All college faculty members from various departments were contacted and asked to allow their classes to complete the 15-20 minute survey during class time. On the day of the survey collection, tables were set up, with volunteers and a variety of healthy snacks, in the Student Center, a heavy student traffic area. We offered incentives provided by the American Heart Association (ink pens, pedometers, towels, can strainers, and healthy cookbooks), which were given to each student who completed the survey. Upon completion of the survey, each student was also eligible to be entered into a raffle for $25 Walmart gift cards.
Description of the Project/Engagement
“Identify your problems but give your power and energy to solutions.” Tony Robbins
My mom would always say, “There are a lot of problems, but what are the solutions?” I believe that by conducting research in communities that emphasizes participation and action, we can move toward developing effective solutions.
This project consisted of three main parts:
1. The first part was an overall attempt to understand the structure and organization of student health and counseling services at universities and colleges across the country.
2. Secondly, after reviewing this information, the task force identified university health centers defined as integrated and queried them more in-depth, focusing on the issue of integration.
3. The third part consisted of follow-up case study interviews with selected center directors. Using the findings from the literature review, the task force developed a web survey including questions relevant to counseling, health perception, and knowledge.
Participatory Action Research Approach
Participatory Action Research (PAR) is useful as stakeholders seek to understand communities and help facilitate their advancement because its approach:
• views research as conducted with people, not on or for people; and
• embraces processes that include “bottom-up organizing”.
To ensure an ethical stance is taken with this population, this study used a PAR approach. Unobservable community issues and problems can be identified through PAR. PAR usually involves any number of processes that include bottom-up organizing. PAR methods were used in every aspect of the development of this study including survey development, discussing the use of the data, and student interest in health promotion. Clark Atlanta University (CAU) students needed to be educated about the importance of participating in research, as there was limited data on the health of African American college students and further research was necessary to collect accurate and useful data. The goal was to use the data collected to develop sustainable, culturally-tailored programs to promote health equity on CAU’s campus.
Study Participants
Recruitment was remarkably successful. The participants (N=402) included CAU students 18 to 27 years old. Additional descriptive statistics are shown in Table 1 below.
Table 1: Demographics of the Participants
Classifications: Number of Participants
Freshman: 68
Sophomore: 103
Junior: 92
Senior: 100
Graduate/Professional: 18
Not seeking a degree: 12
Other: 9
Case Study Methodology
Research suggests that to address and attempt to eliminate health disparities, studies must incorporate a broad range of methodological approaches and attend to cultural issues regarding the collection of data from racially, ethnically, and socioeconomically diverse participants and other hard-to-reach populations (Stewart & Napoles-Springer, 2003; Sue & Dhindsa, 2006).
Examples of the methodological approaches of the project include:
1. Mixed-method approaches to increase participation of otherwise hard-to-reach groups and to provide context for quantitative data;
2. Community-based participatory action research (CPAR); and
3. Collection of data across the life-course (Bulatao & Anderson, 2004; Halfon & Hochstein, 2002; Zarit & Pearlin, 2005).
Our survey also contained questions about the structure, rationale, and subsequent impact of integrating health and counseling services.
Survey Measures & Interview Protocol
The results from our study revealed the health perceptions, beliefs, and knowledge of this population. The analyses provided foundational information to strategize and design sustainable health education and disease prevention interventions for African Americans in the future.
The assessment contained demographic information as well as questions relevant to spirituality, family history, ethnic identity, HBCU culture, students’ health perceptions, behavior, and willingness to participate in future health promotions. The survey also contained questions about racism, health disparities, and clinical trials. The focus group protocol consisted of open-ended questions for the students to answer, and their answers were expected to support the assessment’s findings.
The complete culturally tailored survey questions were grouped into nine main domains:
1. Health, Health Education, Family History, and Safety
2. Alcohol, Tobacco, and Drugs
3. Sex Behavior & Contraception
4. Weight, Nutrition, and Exercise
5. Mental Health
6. Physical Health
7. Oral Health
8. Spirituality/Religiosity/Social Health
9. Academic Performance
Focus Groups
Through this exploratory pilot, a focus group interview protocol and health assessment inquiries on the health needs of AA students were designed to establish whether the tools used were reliable and valid instruments for Clark Atlanta University to use in the future to help inform and develop health interventions. Interviews were open-ended, and responses appeared to be candid and detailed. The data analysis process consisted of five stages: (1) familiarization with the transcripts, (2) identifying thematic structures, (3) interpretation and selective coding, (4) formulating categories, and (5) linking the research findings to the research questions. A common set of techniques for identifying themes, patterns, and relationships was vital to this study. Unlike quantitative methods, qualitative data analysis has no commonly agreed-upon procedures that can be applied to generate findings; the analytical and critical thinking skills of the researcher are needed, and no qualitative study can be repeated to generate the exact same results. Our analyses identified five focus group themes derived from the most common responses during the discussion. In this data analysis step, participants’ quotes were paired with the coded topic themes.
What Went Well
This project showcased positive collaboration building and teamwork. The communication was transparent and effective. This project set a pace for a truly equitable partnership, with buy-in from all parties: internal and external partners, faculty, staff, and students. Each party had ideas and a vision that aligned. The project involved collecting valuable information from and in partnership with a vulnerable population that could benefit from the results. The student volunteers as well as the student participants themselves were extremely interested, recommended great ideas, and were eager to learn and contribute. The Greek organizations served as leaders to help spread the word to a target population that we might not otherwise have been able to reach, demonstrating positive school connectedness and valuable student leadership for disseminating health information in the future. Word-of-mouth about this much-needed project circulated among many other HBCUs, increasing awareness and concern about the issue of health disparities. Since the completion of this study, I have been approached by Charles Drew University College of Medicine and Science about conducting a similar study and project with their students.
Lessons Learned
The current manner of assessing college students’ health perceptions, beliefs, and knowledge was developed using the American College Health Association-National College Health Assessment (ACHA-NCHA) standards, which do not provide a holistic, culturally tailored health report for people of color, especially African Americans. To reduce or eliminate health inequities and increase the overall quality of life, researchers must be able to identify which disparities need to be addressed and tailor data collection tools to do so. By regularly assessing, monitoring, and improving HBCU students’ health, African American students can leap forward to increase research participation and improve health and health outcomes not only for their campuses but also for their communities. Evaluation of this data led to conversations about continuing this work and developing a combined student and staff steering committee to design and develop a sustainable, culturally tailored, holistic health education and disease prevention initiative to promote health equity.
Implications for Policy
The data collected in this study can be an important resource for government leaders, policymakers, philanthropic foundations, and community non-profit organizations that are seeking to impact health, end racial health inequities and improve health outcomes.
This case study focused on a collaborative study that explored disparities in selected specific health determinants and identified promising programs and interventions that might be effective in reducing disparities. Focusing public and policymaking attention on fewer, more critical disparities that are potentially modifiable by universal and targeted interventions can help reduce disparities (Robert Hahn, CDC, personal communication, 2010). However, until more evidence of effectiveness is available, I suggest the following actions:
1. Increase community awareness of disparities as problems with solutions;
2. Set priorities among disparities to be addressed at the federal, state, tribal, and local levels;
3. Articulate valid reasons to expend resources to reduce and ultimately eliminate priority disparities; and
4. Support disadvantaged groups by allocating resources in proportion to need and by committing to closing modifiable gaps in health, longevity, and quality of life among African Americans within the United States.
Relationship to Community Psychology Practice
This case study highlights the importance of more community-engaged initiatives for impacting health disparities and inequities among African Americans. Consistent with community psychology frameworks, community engagement activities are informed by local context and involve the community in problem identification and social justice action. Efforts to support and equip community members, leaders, and organizations to address the pressing and urgent needs of African American and minority families and communities are crucial.
Community psychologists can provide education and promote resilience within communities of color, fostering proactivity regarding personal and community health to reduce disparities and inequities. Partnerships are vital to the success of community psychology practice. When we seek to address problems through community interventions, we discover the importance of the environment and potential community partners (Shinn & Toohey, 2003). In other words, we need to understand the context of the neighborhoods and community settings where we offer our thoughts on community interventions. Community psychologists can help develop culturally appropriate interventions that support and honor existing protective community practices, while helping to change those practices that may have a negative impact on health (Bronheim & Sockalingam, 2003). Thus, the community psychologist can act as a change-maker, health educator, and researcher, meeting individuals and families where they are to promote health equity.
In general, health outcomes are inextricably linked with lifestyle choices, personal decisions, resources, and environmental factors and are influenced by culture, history, and values. Therefore, community-engaged interventions must focus on holistic development, community engagement in defining health promotion goals, and advocacy toward addressing the impact of racism and discrimination.
Marquita (name changed), a program participant, shared, “The survey was… informative, descriptive and was very specific! It inspired me to want to have seminars about all the topics that were included, and I realized how much knowledge I didn’t have regarding the different types of health issues prevalent in the Black community. I feel that the survey will change the world!”
Conclusion
Community psychologists can use the advances in social sciences to provide education and support and to build resiliency within communities regarding proactivity about their health to assist in reducing disparities and inequities. Health disparities ultimately impact everyone: they erode human capital, and the labor market suffers. As we have seen with the Covid-19 pandemic, the world is still trying to recover economically, culturally, politically, and environmentally. We can push forward a restored world when we work together. Partnerships are vital to the success of community psychology practice in co-creating spaces of health and wellness.
From Theory to Practice Reflections and Questions
• Improving health disparities will require methodical, purposeful, and sustainable efforts to address issues such as health education and health literacy for individuals and families to foster informed decision-making processes that can lead to better health outcomes…(Roberson, 2021). What immediately comes up for you or resonates with you when reading this statement?
• How would you use the narrative in this case story to promote health equity among African Americans or other populations?
• Discuss your understanding of the term “health inequities” and how would you co-create equitable health with those impacted? | textbooks/socialsci/Psychology/Culture_and_Community/Case_Studies_in_Community_Psychology_Practice_-_A_Global_Lens_(Palmer%2C_Rogers%2C_Viola_and_Engel)/04%3A_Community_and_Public_Health/4.01%3A_Prelude_to_Community_and_Public_Health.txt |
The purpose of this case study is to explore the process and outcomes of a collaboration between researchers and a community-based organization serving survivors of gender-based violence, Fort Bend Women’s Center.
The Big Picture
Gender-based violence is a rapidly growing social concern, even more so as the world continues to grapple with the effects of Covid-19. The implications of gender-based violence are too numerous to name here, but there are several special considerations for this population, including attrition (this population can be more transient than others), high caseloads and rates of burnout for frontline workers, as well as the physical and psychological effects of abuse, including traumatic brain injury (TBI) and initial reluctance to trust others. Further, gaps in existing work with survivors of gender-based violence include a mismatch between the expectations of researchers and the realities of those who are on the frontlines in these organizations and the people that they serve. The purpose of this case study is to explore the process and outcomes of a collaboration between researchers and a community-based organization serving survivors of gender-based violence, Fort Bend Women’s Center. We propose that focusing attention on communication, trust, buy-in, and burnout is critical for collaborations between researchers and community organizations that serve survivors of gender-based violence.
It is important to understand that collaborations, such as the one detailed in this case study, do not begin by happenstance. Strong collaborations can take time to develop. For this reason, the authors find it important to explain the origins of this project. In 2012, Abeer Monem, the (now former) Chief Programs Officer of Fort Bend Women’s Center (FBWC), began to explore reasons why a portion of the agency’s Intimate Partner Violence (IPV) survivors were struggling to progress toward self-sufficiency, despite the agency’s existing program offerings such as case management and counseling. As the agency explored the reasons behind the lack of progress, it became clear that one of the main reasons could be potential traumatic brain injury (TBI) in the survivor population. Eager to confirm their suspicion, agency personnel embarked on the discovery and research phases of the intervention’s lifecycle.
It was first deemed necessary to determine if agency survivors indeed exhibited a likelihood of traumatic brain injury. FBWC personnel began administering the HELPS Screening Tool for Traumatic Brain Injury (HELPS) (M. Picard, D. Scarisbrick, R. Paluck, International Center for the Disabled, TBI-NET, and U.S. Department of Education, Rehabilitation Services Administration). The HELPS Screening Tool is a simple instrument designed to be given by professionals who are not TBI experts. FBWC personnel began by offering the HELPS upon intake to survivors seeking services. Initial screening showed that over 50% of survivors screened positive for a potential brain injury incident. With this knowledge, FBWC program leadership began exploring neurofeedback as an innovative approach to assisting survivors exhibiting symptoms of TBI. FBWC approached another non-profit organization that was focused on researching and propagating neurofeedback in public school-based settings. After deliberations between leadership groups, a budget and project plan were finalized.
FBWC leadership began seeking funding from various sources and, after several attempts over approximately 18 months, two sources (one governmental, one non-governmental) agreed to fund the initial work of the neurofeedback project. Initial funding covered the neurofeedback equipment as well as the cost of setup, training, and mentoring by a board-certified neurofeedback clinician. In late 2014, FBWC began an initial pilot program to determine the impact and efficacy of a neurofeedback training program for Intimate Partner Violence (IPV) survivors with potential brain injury in an agency setting.
In 2017, I (Dessie Clark) traveled to Houston, Texas, where I was introduced to Abeer Monem, then Chief Programs Officer of FBWC. During this meeting, Abeer shared information about an innovative neurofeedback program happening at FBWC. She described the approach and noted that the agency had been collecting data on the program to try to assess efficacy and impact. I was intrigued and agreed to visit the site later that week. Upon arrival, I was introduced to Joshua Brown, a board-certified neurofeedback clinician who was the Director of Special Initiatives (now Chief Programs Officer) and one of the founders of the neurofeedback program. After multiple discussions, an agreement was reached between the two parties to begin a collaboration.
Timeline of Project
2011-2012: FBWC learns about intersection of brain injury and IPV and begins screening survivors for potential brain injury
Late 2012: FBWC leadership meets with Joshua Brown to discuss a possible neurofeedback pilot
Late 2014: Funding becomes available, and the neurofeedback pilot program is launched
2017: Dessie Clark is introduced to Abeer Monem and Joshua Brown and the collaboration officially begins
Community Assets & Needs
Fort Bend Women’s Center provides comprehensive services for survivors such as emergency shelter, case management, counseling, housing services, and legal aid. It is important to acknowledge that there are components of Fort Bend Women’s Center that are unique to the way the agency approaches service provision. Community assets will vary widely, even amongst similar populations. That being said, we believe the following assets are important to intimate partner violence survivors generally. First, FBWC emphasizes a trauma-informed care approach to working with survivors. The service model is voluntary (as opposed to other models that may have mandatory or compulsory services) and non-judgmental. Specifically, at FBWC, there was existing trust between staff and survivors. This is largely due to a trauma-informed care model that focuses on enhancing internal motivation in survivors and open and honest communication with staff. This helps to create a culture where survivors feel willing and able to be more open about their experiences and the challenges they are facing. Important elements of this model include the offering of voluntary services and non-judgmental advocacy. This model has led to survivors developing a vested interest in the success of FBWC as an agency. Many survivors participated in the research because they felt motivated to share with others their positive experiences at FBWC. Also, many survivors return to FBWC to volunteer following service provision. Please note that this intrinsic motivation to stay involved is not a common phenomenon in this community. This illustrates the importance of continual work on trust, safety, and confidentiality.
The IPV survivor community has myriad needs, and no one IPV community will have the same needs. For this partnership, there were several key needs of the survivors at Fort Bend Women’s Center that became vital to address in order to successfully implement the partnership. These needs included trust, safety, confidentiality, and adaptability. Given the trauma that survivors have experienced, these components need to be taken into consideration in interactions with other survivors, staff, and members of the research team.
Trust
The Domestic Violence Power and Control Wheel (developed by the Domestic Abuse Intervention Project)
Power and Control
• Using coercion and threats: Making and/or carrying out threats to do something to hurt her/him, threatening to leave her/him, to commit suicide, or to report her/him to welfare, making her/him drop charges, and making her/him do illegal things
• Using intimidation: Making her/him afraid by using looks, actions, gestures, smashing things, destroying her/his property, abusing pets and displaying weapons
• Using emotional abuse: Putting her/him down, making her/him feel bad about herself/himself, calling her/him names, making her/him think she's/he's crazy, playing mind games, humiliating her/him and making her/him feel guilty
• Using isolation: Controlling what she/he does, who she/he sees and talks to, what she/he reads, where she/he goes, limiting her/his outside involvement and using jealousy to justify actions
• Minimizing, denying, and blaming: Making light of the abuse and not taking her/his concerns about it seriously, saying the abuse didn't happen, shifting responsibility for abusive behavior and saying she/he caused it
• Using children: Making her/him feel guilty about the children, using the children to relay messages, using visitation to harass her/him and threatening to take the children away
• Using male privilege: Treating her/him like a servant, making all the big decisions, acting like the "master of the castle", being the one to define men's and women's roles
• Using economic abuse: Preventing her/him from getting or keeping a job, making her/him ask for money, giving her/him an allowance, taking her/his money, not letting her/him know about or have access to family income
Survivors at Fort Bend Women’s Center have experienced violence from a family member and/or sexual assault. FBWC data suggest that over half of the survivors seeking services at FBWC have experienced multiple traumatic events. Additionally, some survivors may have had difficult experiences with the justice system, the medical establishment, and other helping professionals. Experiences of trauma can cause distrust when survivors seek services. Additionally, there is a high incidence of mental health disorders in survivors, including paranoia, that can impede the creation of a trusting relationship. Establishing trust became a vital step in survivor recruitment. For effective recruitment, it was imperative that survivors trusted that their information would be kept in confidence and that what they were being asked to participate in would not harm them. As previously mentioned, a culture of trust already existed between survivors and staff. The research team was able to build on this by including staff in the research collaboration. Staff members were included in project development, which allowed them to have a deeper understanding of the work. Staff’s enthusiasm for the collaboration, which was shared with survivors, helped extend the pre-existing trust that survivors had with staff members to the research collaboration.
Safety
Safety is of utmost importance to survivors who are fleeing violence. While one might only think of physical safety with this population, it is also important to take psychological safety into account. Steps were taken in this program to ensure that participating survivors understood the likelihood of any psychological harm due to sharing their traumatic experiences. Mental health staff was identified on a rotating basis to act as an on-call resource should survivors need it.
Confidentiality
Confidentiality of personal information directly involves trust and safety. Many survivors in our program feared for their lives and did not want anyone to know where they were or what they were doing. It is important when working with this population to ensure confidentiality. This is not only ethically and legally important, but also important in building a long-lasting program. When thinking about effective confidentiality, one should consider the applicable agency, state, and federal confidentiality rules and regulations. At a minimum, it is important to execute a confidentiality agreement with the participant.
Collaborative Partners
Identifying an effective community partner is often the first, and sometimes arduous, step in building an effective research collaboration. While individuals and organizations in academia understand the merits of research, this is not always the case for community organizations. Even in cases where community organizations recognize these benefits, there may be barriers, such as trust, due to historical harms done to communities by researchers. Additionally, community organizations often face strains due to limited resources or capacity to support research, which researchers may fail to acknowledge or understand. These issues can create barriers for researchers who are interested in partnering with organizations that have survivor populations. This also causes issues for community organizations, which may be less likely to have access to research, including best practices, given academic publishing practices.
Building understanding and trust between researchers and community partners is at the heart of a successful collaboration. A solid research partnership with a community organization requires buy-in from both sides. Whether from the perspective of the community organization or the researcher, it is imperative to find a partner that communicates effectively. This involves clearly defining each party’s expectations upfront, making sure that the terms that are used are clearly understood (including any potential jargon), and discussing the importance of flexibility in timelines. A telling example of what happens when this work on understanding and trust is neglected is the story of the (now-defunct) nonprofit Southwest Health Technology Foundation (SWHTF), where one author, Joshua Brown, was an employee. SWHTF was a small organization that was focused on evaluating the effectiveness of neurofeedback in existing systems (such as public schools). SWHTF began several data-driven pilot projects without the assistance of a research partner. These data showed the potential positive effects of neurofeedback interventions on behavior and academic performance. However, these data never saw the light of day. SWHTF leadership attempted to partner with four different research institutions to analyze the data. All four attempts ultimately failed without yielding tangible results. This was due to a failure on the part of SWHTF and the researchers to build trust and understanding through defining clear expectations, clearly understanding each party’s role, and agreeing on expected outcomes.
An important part of building trust is approaching a partnership with strategies aimed at educating the staff and survivors who will be involved. While SWHTF is a relevant example, the focus of this case study is the project conducted with Fort Bend Women’s Center (FBWC). We found the most effective approach to building trust and understanding was to focus on educating staff about our project first. We identified the case managers as the staff who have the most contact with survivors and have built up the most trust. When educating staff members, we found that it wasn’t as important that they fully understood all the specifics of the project but, rather, that they had enough of a basic level of knowledge about the project to introduce the information to the survivor. Because case managers had built up trust with the survivors, the survivors were much more likely to take the recommendation of the case manager and enroll in the program. Because our case managers were not subject-matter experts, survivors were willing to speak with researchers even without a full understanding of the specifics; those specifics were provided by researchers and program staff before enrollment.
In order to understand how a community navigates issues related to gender-based violence, it is important to understand the ways in which the culture of that community may impact their perspective. The key stakeholders in this project included the funding entities, the Chief Programs Officer, the Neurofeedback Program Lead, and members of the neurofeedback team. The population consisted of survivors of gender-based violence, both those who had completed a neurofeedback program and those who had not yet done so but desired to in the future. This research collaboration was between researchers at Michigan State University and Fort Bend Women’s Center. The organization, which served as the primary setting for the project, provides emergency shelter, housing/rental assistance, and supportive services. The project team was predominantly made up of agency staff members, while researchers at Michigan State University primarily served as guides and consultants for the research portion of the project. For this project, funders included the Texas Health and Human Services Commission, the George Foundation, the Simmons Foundation, and the Office for Victims of Crime.
Project Description
This case study involved a multi-year collaboration. During the initial six months, a series of site visits were conducted with the goal of researchers getting to know staff members of the community organization, as well as the survivors who received services from the organization. Before engaging in any research, a multi-day feedback session was held in which staff members from the organization gave feedback on the research, including the approach, questions, assessment tools, and logistics of how the research would be conducted. In these sessions, it became clear that given the population being served at this site, including high numbers of disabled or immigrant survivors, there would need to be adjustments for accessibility and safety. Other barriers specific to this collaboration included staff burdens, the distance between researcher and community partner, and various considerations given the population such as trauma responses, trauma history, and the relatively transient nature of the population. Additionally, the research and data collection experience of the staff at the community organization was limited. These conversations were critical in helping the community organization familiarize themselves with the research and the researcher obtain a better understanding of the unique strains the organization was facing in conducting the research. As such, novel processes had to be developed and reinforced to ensure adequate data collection and analysis. Our data collection approach incorporated considerations for the variety of perspectives survivors might share. In future work, we would build on this with more frequent check-ins with participants and staff.
Outcomes
The collaboration consisted of frequent communication between the researchers and the organization. Additionally, site visits happened semi-annually. Key outcomes included establishing the efficacy of the intervention, creating adaptive technology, and finding evidence of a successful collaboration. The neurofeedback intervention resulted in statistically significant decreases in depression, anxiety, PTSD, and disability symptomology for survivors. Survivors also experienced normalization of brain activity. This provides evidence that neurofeedback can benefit the well-being of survivors. Given the distance between Michigan and Texas, a system was created for checking in, transferring data, and ensuring that all necessary tasks were completed. A particularly novel component of this collaboration was the creation of an app for mobile phones that allowed data to be transferred securely from Texas, in areas where Wi-Fi may not be present, to Michigan. This was an important aspect of conducting research with a population where safety was critical and Wi-Fi may not always be accessible, and the approach can be replicated. Finishing the project, and creating tools to do so, is in itself an indicator that the collaboration was successful. However, this project also resulted in a host of other products such as publications, technical reports, presentations, and an awarded grant, which indicate that this collaboration produced well-recognized resources. Not a traditional metric, but one of great importance to the authors, is the fact that both the agency and the researchers wish to work together in the future.
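For readers curious about the mechanics behind such a tool, the sketch below illustrates the general offline-first, "store and forward" pattern an app of this kind might use: records are saved to a local queue on the device and uploaded over HTTPS whenever a connection becomes available. This is a minimal illustration under stated assumptions, not the actual FBWC/MSU application; the endpoint URL, database file, field names, and function names are all hypothetical.

```python
# Minimal, hypothetical sketch of an offline-first "store and forward" client.
# Records are saved locally (works with no Wi-Fi) and uploaded over HTTPS
# whenever a connection is available. All names and the endpoint URL are
# illustrative only, not the actual FBWC/MSU application.

import json
import sqlite3
import urllib.error
import urllib.request

DB_PATH = "pending_records.db"               # local queue on the device (assumed name)
ENDPOINT = "https://example.edu/api/submit"  # hypothetical secure server endpoint

def _connect():
    """Open the local queue database, creating the table on first use."""
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS queue ("
        " id INTEGER PRIMARY KEY AUTOINCREMENT,"
        " payload TEXT NOT NULL,"
        " synced INTEGER NOT NULL DEFAULT 0)"
    )
    return conn

def queue_record(record: dict) -> None:
    """Store a session record locally; no network connection is needed."""
    with _connect() as conn:
        conn.execute("INSERT INTO queue (payload) VALUES (?)", (json.dumps(record),))

def sync_pending(timeout: float = 10.0) -> int:
    """Try to upload all unsynced records; return how many were sent.

    If the device has no connectivity, the POST fails and records simply
    stay in the local queue until the next attempt.
    """
    sent = 0
    with _connect() as conn:
        rows = conn.execute("SELECT id, payload FROM queue WHERE synced = 0").fetchall()
        for row_id, payload in rows:
            req = urllib.request.Request(
                ENDPOINT,
                data=payload.encode("utf-8"),
                headers={"Content-Type": "application/json"},
                method="POST",
            )
            try:
                with urllib.request.urlopen(req, timeout=timeout):
                    conn.execute("UPDATE queue SET synced = 1 WHERE id = ?", (row_id,))
                    sent += 1
            except (urllib.error.URLError, OSError):
                break  # offline or server unreachable; retry later
    return sent

if __name__ == "__main__":
    queue_record({"participant": "P-001", "phq9_total": 7})  # hypothetical example data
    print("uploaded:", sync_pending())
```

The design point this sketch tries to capture is that nothing is lost when connectivity is absent: failed uploads simply remain in the local queue until the next synchronization attempt, while successful uploads are marked so they are not sent twice.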
Lessons Learned
Over the course of the 3-year collaboration, we learned many lessons about conducting rigorous research with community partners who serve survivors of gender-based violence. Here we highlight the four biggest takeaways from our collaboration – (1) communication, (2) trust, (3) buy-in, and (4) resources. It’s important to note that there is no such thing as a perfect collaboration. Collaborations can be successful, produce important results, and still face challenges. That is true of our collaboration, in which we did face challenges. For domestic violence agencies, there is one overarching consideration that impacts the four takeaways we will discuss below – turnover and movement of staff. Turnover and movement of staff is relatively common in community agencies, and that means there is a constant need to set and reset expectations to make sure everyone has the same information about the project and what is expected of them. These expectations need to be reinforced frequently to ensure there are no issues that need to be addressed. For example, in the first 3 years of our collaboration, we experienced 3 different neurofeedback team leads and have seen the neurofeedback clinical team have full turnover twice. Our experience underlines the importance of consistently resetting expectations.
Communication
We have found that in collaborations with multiple parties, particularly those conducted long-distance, communication is perhaps the most important ingredient for success. How do you work collaboratively when not in the same space? For us, it was critical to use technology. We used secure platforms to share information and have important conversations. However, having conversations is not enough. It is important that these conversations avoid jargon. For example, there are certain technical terms that researchers or practitioners may use which are not clear and can lead to confusion. It’s also important to share perspectives on what the priorities are for different parties. For example, researchers may be worried about things such as missing data, whereas counselors may be more willing to collect minimal data in an effort to move on to the next survivor more quickly or to protect survivor confidentiality. Communication is complicated, especially at a distance. It’s important to realize you may do the best you can and still have problems communicating. Defining modes of communication, and what the expectations are, is also critical: for example, what warrants an email versus a call, how frequent those communications will be, and when they are expected to be returned.
Trust
It is critical that there is trust between all parties. The survivors must trust the community partner and the researchers, the community partners must trust the researchers, and the researchers must trust the community partners and the survivors. This can create a complex web of dynamics that can be vulnerable to changes and miscommunications. In our collaboration, there were moments where trust and understanding between the researchers and the community partners were limited. In retrospect, it was important for community partners and researchers to sit down and share their perspectives and approach to work. For example, researchers may be more focused on details like completing paperwork properly or recruitment and retention of participants. Community partners, in contrast, may be more focused on completing interventions or connecting survivors to resources. In both cases, these duties are appropriate for the position, but given the rapidly changing needs of survivors, the priorities may not be in alignment across groups. It is critical for both parties to understand where the other is coming from and trust that the necessary steps for the project will be completed. Collaborations should tackle this issue by communicating freely and openly rather than resorting to micromanagement or avoiding these issues. Collaborators must trust that all members of the collaboration will do their part and be transparent if and when issues arise. In our case, this impacted survivors’ access to the project because at times, due to other survivor or agency demands, staff members were not actively talking to survivors about the project and what it may entail to become involved and learn more.
Buy-In
Ensuring that researchers, community stakeholders, and survivors have bought into the project and understand the project plan is important in ensuring things run smoothly. While the project itself may involve conducting research, it’s important to elicit feedback from the other collaborators at all stages of the process. In our case, we asked staff members and survivors to provide feedback on the project design and survey. We hosted a multi-day training to talk through the process and the questions we were asking, and to gather feedback on what we should know to inform the project moving forward. However, as referenced above, these agencies may experience frequent turnover and staff movement. As such, it is important to check in about buy-in over the course of the project – particularly when there are transitions.
Resources
Working with community partners, who are often over-burdened and under-resourced, requires acknowledgment and supplementation from other collaborators. In our case, it was important for the researchers to design and adjust the project to best meet the needs of the community partner and survivors. We did this by:
1. Taking over aspects of the project like data collection to the extent that was allowable given distance and travel,
2. Hiring staff members as research assistants to help with data collection; and
3. Creating a phone application that allowed for information necessary for the project to be directly transmitted to a secure server at MSU.
Burnout
While for many of our lessons learned we have concrete suggestions for future work, we do find ourselves with one lesson we have learned but have not solved. A constant challenge on this project was learning how to deal with staff burnout – both in their roles and in regard to the research project. For example, as we’ve mentioned, staff at these agencies are often overburdened and under-resourced. Participating in research can exacerbate these issues and lead to a faster rate of burnout, or what we observed and have called “research fatigue.” We believe that giving staff a place to vent frustrations about the research project, so those frustrations can be dealt with, is a promising approach. However, this is complicated because staff may be unsure of the appropriate avenue for sharing these concerns – whether it should be their agency supervisor or a member of the research team.
Research Process
A relatively unique aspect of this project was the willingness of staff to engage in all aspects of the research process. Members of FBWC were engaged throughout the entire process from project conceptualization to dissemination of information. The key stakeholder, Joshua Brown, was eager to be involved in research. However, this was only possible because Dessie Clark suggested the possibility; Joshua did not know that it is unusual for community partners to be so involved. This highlights the importance of researchers and community partners talking about the research, the degree to which each member wants to be involved, and what the expectations will be. The authors of this chapter had many conversations about authorship on all the produced works and what the workload and timeline would look like to live up to these expectations.
Looking Forward
As we continue to move forward, we would be remiss if we did not acknowledge the impacts of COVID-19 on intimate partner violence survivors, the agencies (such as Fort Bend Women’s Center) that serve them, and research for those housed in a university setting. Before COVID-19 we imagined continuing our work in many of the same ways. We had applied for future grants and dreamed of expanding our work to examine the children’s neurofeedback program at Fort Bend Women’s Center. While we hope that eventually our in-person work will continue, it seems prudent to reimagine what working together will look like in our altered state. It is the intention of the authors to continue collaborating. However, this may require adjusting to continue working in a virtual manner. Given that technology has already been an important part of our process as long-distance partners, we hope that future work uses those technologies (digital survey platform, phone app for information transfer, etc.) to continue to collect important information that ultimately benefits survivors and their communities.
Recommendations
While gender-based violence is often examined at the individual level, communities play an important role in how gender-based violence is addressed and how survivors are supported. Communities can be a tremendous source of support for survivors, providing social support through which these individuals can access resources and connect to services. In contrast, communities can also impose substantial barriers on survivors and their families. Since, structurally, communities are located closest to survivors, understanding how gender-based violence is addressed by, and within, communities is important in understanding and confronting gender-based violence as a society. This work was relatively clinical in nature (e.g., neurofeedback), a context in which community relationships and community-engaged research are not a typical fixture. This effort provides suggestions for how those in clinical disciplines, like clinical psychology or social work, may conduct work with community psychologists that is more interdisciplinary in nature.
Further recommendations include:
• Examining how those who do more clinical/individual work may engage with communities,
• The use of technology to conduct and engage in community work, and
• How researchers may do work in communities that is rigorous, such as the waitlist control trial done here, and is not limited to that which can be done inside a lab.
A frequent conversation between the authors of this article was about the wall that exists between researchers and communities. Often, it is assumed that communities do not value or understand research. Or, conversely, that any research that can be done in community settings is not rigorous or worthwhile. Our experiences show the inaccuracy of these assumptions. Fort Bend Women’s Center created the neurofeedback program with research in mind. They implemented best practices and collected necessary data. While they didn’t have the resources to compile and analyze the data in ways that could be presented to the scientific community, they were certainly open and eager for the opportunity. Additionally, the research produced through this collaboration so far has been widely recognized, with invitations to contribute to special issues and conference keynotes – a marker of success in the scientific community. This was successful because people, located in very different spaces, were willing to discuss how they could meet in the middle to accomplish a common goal. The experiences of survivors happen in real-world settings, and it was important to capture their lived experiences in that setting.
It is important to take the time on the front end to develop a plan, but also to recognize that the plan will likely change. There should be explicit plans for handling turnover and communication. The priorities of the work should be established and reinforced. This includes defining which priorities overlap and which priorities matter most to researchers and to community partners, so they can work together effectively. Given the fluid nature of research, domestic violence organizations, and survivors’ lives, it is important for everyone to be willing to adapt. Researchers may be forced to make changes to the research plan, particularly to meet staff and survivors’ needs. The agency may need to adapt to ensure that the research components fit their own expectations, and be willing to give feedback if they do not, so adjustments can be made.
Implications for Community Psychology
This project has produced a multitude of promising results, including establishing the efficacy of the intervention, the creation of adaptive technology, and evidence of a successful collaboration. In our case, researchers and community partners have published and presented in academic spaces and created a technical report for practitioners. Community psychology theory often focuses on engaging local communities that are relatively close to the research team. This case study has implications for how to do community-engaged research over a long distance using various technologies. This has the potential to further the conversation on how we can engage and work with communities when physical access may not be possible. This is important as funding and travel can pose barriers to certain populations, and novel ways of doing this work may present additional opportunities for other researchers.
Conclusion
The authors of this case story believe that finding creative ways to manage mostly virtual relationships, as we have done here, has always been a critical component of doing community psychology work. However, as we wrote this chapter during COVID-19 we realized that what has been important to those of us striving to reach vulnerable populations in hard-to-reach locations is now a standard challenge. While community psychology has always pushed innovative ways to do community work, limited conversations have evolved on how adaptive technology could and should be used to try to ensure successful collaborations, particularly collaborations across distance.
From Theory to Practice Reflections and Questions
• Gender-based violence is a rapidly growing social concern and even more so as the world continues to grapple with the effects of Covid-19 (Clark & Brown, 2021). How does the discussion in this case study challenge your thinking regarding traumatic brain injury (TBI) and gender-based violence?
• Reflect upon conversations you have heard and/or had about gender-based violence. List 3-5 statements you have heard. Based on these statements, what would you conclude about society’s response to this issue? If you have not heard any stereotypical or other statements, research 3-5 statements and answer the same question.
• How would you go about creating an alternate setting to address this challenge in the community? | textbooks/socialsci/Psychology/Culture_and_Community/Case_Studies_in_Community_Psychology_Practice_-_A_Global_Lens_(Palmer%2C_Rogers%2C_Viola_and_Engel)/04%3A_Community_and_Public_Health/4.03%3A_Working_with_Survivors_of_Gender-Based_Vio.txt |
This chapter explores the process of development for the Journey’s Curriculum, a program designed to center the voices and knowledge of Black pregnant and parenting mothers to support families.
The Big Picture
The Community Exodus
Chicagoans often tell the story of Chicago and Chicago’s suburbs as if they are separate and apart, when their stories are intimately tied together. The fast-changing economic climate, coupled with unrelenting housing and employment discrimination affecting Chicago’s Black families, reached a boiling point that spilled over into a mass migration to the quiet, unsuspecting south suburbs, peaking from the early 1990s through 2010. Surges in crime in Chicago’s Black neighborhoods were in part a result of the movement of jobs from urban centers in America to urban centers in Asia, a shift that disrupted and upended the economic fabric of rural communities on the outskirts of major cities in China, India, Malaysia, and Indonesia while shaking up the lives of Black men and women in Chicago’s south and west side neighborhoods. For many Chicago families, moving to the south suburbs was a way to find jobs, safe neighborhoods, and schools with strong academic promise.
As Black families moved south and further west, White families fled even further south and west, leaving their homes and their careers, largely in policing and teaching, out of a perception that their communities would suffer some untold wrath from their new Black neighbors. Some Black families were able to make the migration work. As the south and west suburban communities changed, the necessary support systems found in large cities were not present in the suburbs. Their absence deeply impacted families, particularly young families. The project discussed in this chapter attempted to support these young families, primarily headed by young Black parenting and pregnant mothers, whose families moved to the south suburbs. They are the first generation of Black children born during the exodus to the south suburbs. They have learned to navigate the systems available to them, much in the way some of their parents did. The project is designed to help them see the systems that drove their families to the south suburbs and to use different tools to navigate communities existing within persistent disinvestment, White flight, and scarcity. This project took place in a small town that is still working to establish adequate support systems for its largely Black and low-income population: Dolton, IL.
How I (Dr. Somerville) Came Into This Work
My family was part of the mass migration to the south suburbs during the 1990s. Like many others, our family looked for a place that would offer a “good life” for children and a place with peace of mind for ourselves. We found it in South Holland, a neighboring suburb just east of Dolton. I’ve spent most of my waking days working, studying, organizing, and engaging in coordinated responses to oppressive systems affecting Black people and people of color in Chicago, leaving far less time to explore the ecosystem of support for families in my own community. This project presented a unique opportunity to change that. Through volunteering and some consulting work, I’ve come to know and learn more about the nonprofit organizations in the south suburbs. Many operate as storefronts, have offices in bank buildings as a result of community benefit agreements, or operate within churches. Very few grantmaking foundations fund outside of the city limits of Chicago, which means that many of the programs in the south suburbs are funded through fee-for-service contracts with the State of Illinois or large grants that fund federal programs within the local area. Healthcare Consortium of Illinois (HCI) is one of the federally funded operations in the suburbs that accompanied families to the south suburbs in the 1990s.
South Suburban Landscape: Community Needs
HCI is located in Dolton, IL, a village township located in the south suburbs of Cook County. Demographics of Dolton (2010) include the following:
• 88% of the population are African American
• 41% of the households have children under the age of 18; and
• 29% of the households earn less than $25,000 annually.
Like many south suburban Cook County communities, Dolton experienced a significant population shift during the 1990s, which resulted in a large exodus of White, middle-income families and an influx of Black middle- and lower-income families around the same time. During this shift, many families came to the south suburbs seeking employment and housing opportunities. The challenges of a tight job market, poor transportation options, and poor social service infrastructure made it difficult for many young, under-resourced families to transition well into communities designed to accommodate single-family, working- and middle-income households with high degrees of self-sufficiency. According to data on local municipalities collected by the Metropolitan Planning Council of Chicago, low-income, single-parent families in Dolton spend 46% of their income on housing costs and 30% of their income on transportation costs. For middle- and moderate-income families, the percentages were significantly lower, at 29% and 35%, respectively. Most residents of Dolton, regardless of their income, find employment in Chicago (39.4%). Unemployment is also much higher in Dolton, at 24%, compared to the rest of Cook County (10.7%) and the 7-county region (9.5%). HCI’s Healthy Start families are in the eye of the storm of poverty, unemployment, and the overburden of housing and transportation costs (Encyclopedia Chicago History).
Residents and local leaders living together and working on issues impacting Black children and families in the south suburbs have relied on time-honored institutions within Black communities to address the gaps in resources in the absence of multi-million dollar foundations, corporate foundations, city budgets for supportive services, and a small pool of wealthy individual donors. Many churches have active community outreach ministries. Volunteerism is strong among churches and retirees. Schools and small community centers work with nonprofits and unincorporated organizations seeking collaborators to carry out services, provide space for programming, or distribute goods and services to the broader community. The web of connections among service providers is intact. Individuals who work for a nonprofit generally know who’s who. There is a “small town” community feel that makes connecting across organizations and municipalities less difficult than in larger cities, where there is competition for funding at every turn.
Community Partner: Healthcare Consortium of Illinois
Healthcare Consortium of Illinois (HCI) is an organization located in Dolton, IL, made up of partners committed to developing and maintaining targeted, community-based, integrated health and human service delivery systems which increase the well-being of individuals, families, and communities throughout Illinois by means of advocacy, awareness, and action. HCI operates the Healthy Start Program, a federally funded system of services that promotes family-based education strategies intended to lead to positive health outcomes for pregnant and parenting mothers and their families. The Healthy Start Program serves neighborhoods on Chicago’s southeast side and communities within south suburban Cook County. This project focused service delivery on the needs of clients served by HCI’s Healthy Start Southeast Chicago Program.
Development of the Journeys Curriculum
The engagement began in the fall of 2018 with a goal of identifying how best to bring the knowledge and experiences of Black pregnant and parenting mothers into training programs for staff and into curriculum designed to support families in reaching the point of thriving. Caseworkers, doulas, lactation consultants, and program managers from the Healthy Start Southeast Chicago Program worked with clients to learn about ways to make sure that curriculum was more than just a way to change behavior, but also acknowledged the systems that are often hostile to clients. They realized that asking clients to constantly acquiesce to unjust systems was indeed unjust and did nothing to improve those systems for their clients. Healthy Start program staff hoped to identify key issues and solutions based on their knowledge of the populations they serve and responsiveness to their presenting issues. At an initial meeting held at the HCI offices with the Healthy Start team, the team shared their current challenges and concerns so that together we could develop a meaningful, useful, and relevant project that would best address those concerns. One key issue identified was that the existing curriculum and staff training focused on behavior modification, leaving unaddressed many systemic issues that clients discussed with staff. The concerns were:
• Case management strategies employed often focused on goal setting and implementing strategies for mobility, which are very often hard to obtain or achieve; and
• Barriers to success were often related to untreated trauma: childhood trauma, rape, prostitution, and intergenerational family dysfunction.
Healthy Start professionals had existing training in several different curricula over the years, all evidence-based, as the Healthy Start Program is a federally funded program. The curriculum choices at their disposal, however, were not designed to address what staff believed were the root causes that keep their clients from setting and achieving their program goals. They decided that a curriculum that helped to address this gap in their program practices would be the most meaningful and relevant project to focus on. The staff discussed the curriculum options they used and how those options are integral to their work as advocates. A curriculum development chart (see Table 1) was developed and presented at a follow-up meeting to analyze the curriculum options with the team. The chart featured underlying treatment goals, curriculum outcomes based on what has been written in peer-reviewed literature, what works for the team, and what is missing. The staff had the final say in how this chart defined the treatment goals and curriculum outcomes. The staff team also determined what they found most useful in the existing curricula and what they wanted to see in the curriculum to be developed. Table 1 below shows the Curriculum Development Chart.
Table 1 – Curriculum Development Chart

Mothers and Babies
• Underlying treatment goal: Psychoeducation regarding depression; inner and outer reality management
• Curriculum outcomes: Stress management; use of coping strategies
• What works: Decreases the effects of depressive episodes; improves self-regulating behavior
• What is missing: The context for inner and outer reality management that would help young mothers define their lives as beyond coping. A trauma-informed approach is not emphasized.

Parents as Teachers
• Underlying treatment goal: To improve the health and early childhood education of children 0-5 years; to improve parental engagement
• Curriculum outcomes: Optimal early development of children
• What works: Mothers practice the use of tools for self-regulation; children are well monitored for health and development improvements
• What is missing: No acknowledgement of the relationships between trauma experienced by mothers and outcomes. No capacity developed for parents to understand past and current trauma and contextualize their current reality.

Proposed Journeys Curriculum
• Underlying treatment goal: Treatment of past trauma and addressing current trauma; psychoeducation; goal setting; placing experiences within the context of structural oppression and system navigation
• Proposed curriculum outcomes: Improved self-regulation; change in perception of the oppressive systems clients navigate, as reflected in the goals they set for themselves
• What is missing (proposed): Child development outcomes are not included as part of the curriculum.
Workshop Processes
The workshopping process involved setting time with the staff to discuss our collective experiences with the current curriculum targeting pregnant and parenting Black mothers. The staff identified the absence of a trauma-informed approach as a clear focus. The team consisted of the program director, 3 case managers, and 2 doulas. The program director and 2 case managers had recently gone through a trauma certification training together a few months prior and shared ideas on how to incorporate our knowledge of trauma-informed practice, both in terms of institutional responses and clinical responses. Staff members discussed the importance of holding space to acknowledge trauma in order to facilitate healing and the use of tools to address trauma. Such tools would be important for addressing trauma and introducing trauma-informed practice. The entire team acknowledged that they were unclear about the extent to which unaddressed trauma impacted their case management strategies with their clients. Putting together a curriculum with a trauma-informed approach (trauma-informed care) would help them to determine if addressing trauma would indeed have a positive outcome for their work with their clients. The opportunity to acknowledge the impact of intergenerational trauma and the legacy of white supremacy would bring a contextual reality to the curriculum that could inform their work with clients in more meaningful ways (Minnesota State Health Information).
Healthy Start staff were considered knowledgeable and experts in relation to their clients and the community. They were being consulted regarding the problem and issues affecting their clients and the solution that would best address it. The curriculum was designed to emphasize a strengths-based model that reflects aspects of psychological empowerment theory.
Empowerment Theory
There are many aspects of empowerment theory that can be applied to understanding an empowered setting. It is important to emphasize psychological empowerment in the lived experience of clients and Healthy Start staff. The curriculum draws from the four components of psychological empowerment: the emotional, cognitive, relational, and behavioral components. Healthy Start staff saw many of their clients as not having control over their lives, due to all of the systems they must navigate for their daily survival. The curriculum was designed to facilitate the possibility for client participants to experience control and self-efficacy over their lives and relationships with their children. Their self-perception as competent and in control is considered to make a significant difference in how clients are able to make a life for themselves, despite the limitations that come with teenage motherhood.
Intervention Strategies
As a community psychologist, I proposed a set of intervention strategies in response to the goals discussed in prior meetings and what the team felt was missing and needed to be integrated into the curriculum under development. I conducted research and drew from a bank of information that I had been exposed to or was familiar with through our shared training in the treatment of trauma. I also took into consideration experiences discussed with HCI staff that would be important to include as part of the curriculum: incorporation of movement (many of the group-based sessions we reviewed together only allowed for everyone to sit for an hour or more at a time, which everyone felt was not wholly conducive to their clients), incorporation of rapport- and connection-building among participants, and integration of aspects of African-centric practices that can support positive identity development.
The intervention strategies introduced to the HCI team were broken down into 3 pillars of intervention: therapeutic/healing, growth/change, and ritualistic/confronting complacency/waking up the spirit (see Table 2). Each strategy was discussed in order to connect the theories to the outcomes identified in earlier conversations. The chart was introduced to begin discussing the process by which the goals and outcomes would be achieved and to help connect the dots between theory, practice, and what HCI staff felt would best support their current work with their clients. I endeavored to ensure that everyone involved in the decision-making process could see the connection between the “big picture” ideas we explored when looking at different curricula and our thoughts about them, in order to carve out more specific goals for this curriculum. This part of the journey brought up many feelings for all of us, reflecting on our own experiences in our own healing journeys. We realized that this was not only for clients but could also benefit HCI staff as well. Table 2 below shows the Interventions Strategy Chart.
Table 2 – Interventions Strategy Chart

Liberation Education (Paulo Freire; bell hooks)
• Intervention goal (growth/change): Engage through praxis and the use of critical thinking
• Intervention outcome: Participate in their own life story while healing

Re-Evaluation Counseling (Harvey Jackins)
• Intervention goal (growth/change): To undermine the effects of traumatic events and to affirm and evolve the psyche
• Intervention outcome: Use a different perspective to commit to their own ideas about change

Rites of Passage (Indigenous African traditions)
• Intervention goal (ritualistic/confronting complacency/waking up the spirit and therapeutic healing): Restore harmony, justice, balance, and order
• Intervention outcome: Begin to recognize their true selves and to nurture and evolve beyond the constraints around them

Radical Group Work – Circle Work (Kay Pranis; Jennifer Ball)
• Intervention goal: Mark and create sacred space to invoke ceremony, mutual exchange, and healing
• Intervention outcome: Create a grounding process to ensure the greatest access to healing and participation

Narrative Therapy (Dubi)
• Intervention goal (therapeutic healing): To use storytelling as a means to address past trauma
• Intervention outcome: Use expression to release all that has caged their ability to heal from trauma
The curriculum also aligns strongly with Bronfenbrenner’s ecological systems theory. The curriculum aims to facilitate a process for clients to see the connections among their experiences of family, community institutions, neighbors, and the macrosystems that support them. The curriculum helps to identify the barriers to healthy family relationships and the need to examine the systems that both impede and support child and maternal health. The curriculum builds upon HCI’s integrative framework and supports each of its case managers as they work with clients to support their goal-setting towards educational attainment, employment, and the emotional health of Black mothers and their children. The use of systems thinking was designed to help HCI staff and clients improve outcomes for mothers and their families.
The Role of Trust
The team members’ trust in my capacity to take their ideas and shape them into larger constructs that would inform the direction of the actual curriculum development was a bit daunting. I also felt that our conversations about what we would like to see were specific enough that I had a clear road map of how to further crystallize the ideas into an actionable plan and process. I use the term “we” very loosely, as my role during this part of the process was to listen, listen some more, ask clarifying questions, and present the ideas as I received them in order to receive feedback – and then start the process over again.
I did choose to engage HCI staff with my own experiences as well, as co-creators of this work, which is our community psychology approach, particularly when we shared how we each felt about the trauma certification program we experienced together. Thus, this process was largely reflective of their ideas and priorities for the curriculum. The development of trust was about trusting me and the process I provided for the development of the curriculum, but also, it was a process for HCI team members to trust themselves. As many team members have built a career in their roles as case managers, doulas, and in a leadership role of a case management team, they have been charged with implementing curriculum created by researchers/outsiders who generally don’t consult with case managers or program directors, and certainly not doulas, when developing evidence-based curriculum. This was a new role for the team. As we continued to discuss ideas and plans, the ability to see the process move from an interest they shared to the possibility of having a direct impact on the families they served became more real.
The Journeys Curriculum
The curriculum became known as the Journeys Curriculum, a psychoeducation format to address previous trauma while also aligning with current case management strategies for each client/resident to support their individual goal setting. The curriculum was designed to be modular to allow for flexibility in implementation, depending on time constraints, client needs, and team member observations of clients. The curriculum could be delivered over 6-12 weeks, 1-2 hours per session. The curriculum incorporated centering, diaphragmatic breathing, and the introduction of more techniques over time as the group evolved. Diaphragmatic breathing is a mind-body practice used throughout the curriculum (at the beginning and end of each session) to reduce stress and has been studied for additional benefits. Self-regulation techniques were also shared, along with knowledge of how and why such self-regulation helps the body and mind. Each person visits their own family history through stories about themselves and their families. This storytelling is interwoven throughout the curriculum.
The content of the curriculum allowed participants to center their own stories, thereby having the opportunity to relive their journey with support and to gain conscious awareness of the contextual realities that informed their past, that of their family members and ancestors, and that which impacts their relationship with their children. The stories served as the foundation upon which they viewed themselves and the systems they navigate, as well as their relationships and how those have changed over time through the influence of external realities. Participants began to critically examine their relationships and goal setting, and how systems impact their relationship to both. They used a mapping exercise to visualize the systems in their lives and their connections to their children to determine whether they are navigating systems or whether systems are navigating their lives. The curriculum also explored participants’ relationships to their bodies – as mothers and as sexual partners, through the trauma and exploitation they have experienced, and through their earliest memories of their awareness of their bodies. The curriculum also made space for taking back power from traumatic memories and experiences, connecting them to practices that acknowledge pain from past experiences, and incorporating self-regulation exercises to support healing and continued psycho-social-emotional development. After exposure to various trauma-informed techniques, participants identified which ones they planned to continue to use after the group ended. Affirmation exercises were also used throughout and at the end of each module, along with bonding exercises that help to reaffirm and reconnect the group members to one another.
Connections of the Curriculum
Once the co-creation of the curriculum was complete, the staff requested that we go through the curriculum together before working with their clients. They immediately saw the connections between their own experiences and those of their clients, as many of them are also Black women, some with origin stories similar to their clients’. Staff members were also moved by the opportunity to grow their knowledge and incorporation of trauma-informed practice into their own work with clients. They saw the modular nature of the curriculum as having a particular benefit in supporting flexibility in use with clients. The techniques also offered a unique opportunity to support staff and to address untreated trauma that they themselves had experienced. The curriculum has made an impact on the ways staff members see themselves in relation to their clients, has made room for their capacity to engage in this work for their clients, and has offered a road to healing for staff who are often tasked with supporting clients even as their own needs to heal from trauma and past experiences remain unresolved.
We were excited to begin the Journeys Curriculum. Unfortunately, the project with Healthy Start was sidelined when a number of staff left the agency during the planned implementation period. The project has since been taken up by a residency program for Black pregnant and parenting women in Harvey, IL, a municipality neighboring Dolton, IL. That engagement process is still underway, delayed by mandated restrictions related to COVID-19, and there are still plans in place to implement the curriculum.
Takeaways From the Project
This process began with a goal of impacting clients of the Healthy Start Southeast Chicago Program; how the Journeys Curriculum might support staff was not clear at the beginning. The opportunity to see the project through to the end and evaluate the curriculum based on the client experience at Healthy Start was not realized. But this change in focus offered the opportunity to shine a light on an often overlooked topic in nonprofit settings: who heals the healers? Who provides support for the staff members of organizations working against considerable odds, navigating hostile systems, and experiencing the long-standing effects of untreated trauma as they help their clients to strive and thrive? This question has sat with me for months since this engagement ended. When I work with new organizations interested in introducing this curriculum to their clients, I will think about how to bring the process that informed this curriculum to new collaborating partners, rather than focusing solely on the curriculum previously developed. I learned from the staff at Healthy Start that the organizations I collaborate with may benefit from supporting healing for themselves as well as their clients. I have since held several workshops focusing on the untreated and unresolved trauma of staff working with clients who are navigating the same choppy waters. What I have found in these workshops is that, when given the opportunity to lift up and name trauma, many staff may differ from clients in their education, salary, role, and protective factors, but otherwise share many experiences in common. The role of community psychologists often requires us to be flexible and to remain open to the dynamics of communities, which are never static. This brings us to a related discussion on setting creation.
Sarason's (1972) initial work on the creation of settings, and his reflective essays on the topic over the years, bear mentioning here. Although his work focused largely on education and educational settings, I see the concepts he discussed in numerous essays and books as relevant to nonprofit settings. As Sarason explained, one of the challenges faced by organizations leading change efforts is that organizational leaders often aim to make change without changing how the organization functions. Leaders are socialized to set up new initiatives without questioning how the initiative is positioned to avoid the pitfalls of previous ones, or what fail-safe measures are in place to address unexpected shifts that shape what is deemed successful. He makes the point in a reflective essay that personal motivation toward a goal often overshadows the need to fully appreciate and examine the process that informs it. As I look at my own process on this project, I can acknowledge moments when I could have used my knowledge of Sarason's theory to engage with the process differently. I was personally excited to develop a trauma-informed curriculum with a diverse team willing to co-create the project, but I did not question how introducing a new program into an established setting would avoid the same obstacles that hindered previous attempts to launch a new initiative. Seeing myself as an outsider with insider knowledge and experiences, I did not fully appreciate the limitations of the setting to adapt to new ways of engaging in work without any paradigmatic shift to support it. As you are learning, community psychologists are collectively learning to move to the "inside" of communities and to see through a different lens than an "outsider" would.
The Journey Ahead
Our work as community psychologists is always fluid. Thus, I am working to address some of the areas that did not go well with this project. Having a contingency plan for implementation when key staff changes occur is one area that will require better attention. I worked with a team but had only one project lead at a time to support the organizing process with staff and clients. Each time a project lead left HCI, the project lost momentum, and eventually we were unable to complete the implementation of the curriculum with clients.
Clear timelines were set for each aspect of the months-long planning process and the weeks-long training and implementation process, without a clear appreciation for how this planning would impact staff work schedules. This is a common concern with the implementation of new programs that are not funded with dedicated staff. Programs developed under these circumstances engage staff who spend paid hours away from funded projects to participate in the process. This made it difficult for team members who were eager to participate and interested in the project to continue while managing competing priorities, particularly as funding for their programs was under threat from a much more conservative Trump administration seeking to end such programs.
The work to refine the process of developing a trauma-informed curriculum for organizations serving Black pregnant and parenting mothers continues with Lakeside Incorporated, which serves clients living on Chicago's south side and south suburbs. The process involves revising the materials that support the development process, incorporating more materials that invite questioning, training in trauma-informed care, and the use of liberation education. Another important takeaway is that, during the initial creation, I was not aware of the ways staff would find the curriculum helpful and useful to their own healing process. That has since changed: all work of this nature must begin with the 'healer.' I use that awareness to raise different opportunities for questioning for staff, supporting a dialogue that focuses on how they see themselves on their own healing journeys with trauma. Research and further exploration of liberation-based healing has also provided more insights and potential tools that can be incorporated into the curriculum, particularly the work of
Conclusion
As with any work in community psychology or other related fields, adaptability and flexibility are imperative; the very complex nature of communities and people's lives demands it. This case story is just as important as case stories in which the original vision was carried out successfully. The goal is for the reader to understand that the work of community psychology practice is not set in stone, nor does it exist outside of the changes that occur in normal community living. This is what makes community psychology exciting: it moves with the changes of life and is never antithetical to the struggles of citizens navigating society to take back their sense of agency or engage in co-liberation!
From Theory to Practice: Reflections and Questions
• Dr. Somerville (2021) shared “the opportunity to see the project through to the end and evaluate the curriculum based on the client experience at Healthy Start was not realized”. What are examples of ways a deficit-based approach might explain this occurrence? What are additional ways outside of what has been identified by Dr. Somerville that the curriculum might still be used?
• Foundational to community psychology work are adaptability and flexibility. Share with your classmates or others a time when you had to demonstrate adaptability, and describe the results.
• We know that community psychologists do not have the cure for social issues, but instead are co-creators for envisioning alternate settings. Think through a challenge in a community you are aware of. What might an alternate setting look like for this community challenge?
Before we begin talking about culture and psychology, it is important to have a basic understanding of the field of psychology. Broadly speaking, psychology is the science of behavior and mind, including conscious and unconscious phenomena, as well as feeling and thought. Psychologists, the practitioners and researchers in the field, explore the role that cognitive processes (thinking) have on individual and social behavior. Psychologists also explore the physiological and biological processes (e.g., neurotransmitters, the brain, and the nervous system) that underlie the thinking and behavior of individuals.
There are four main goals of psychology:
Describe
Psychologists describe the behavior of humans and other animals in order to improve our understanding of the behavior and to get a sense of what can be considered normal and abnormal. Psychological researchers use many different methods to describe behavior, including naturalistic observation, case studies, and surveys. After describing behavior, it is easier for psychologists to understand and explain it.
Explain
This goal involves determining the causes of behavior. Psychologists try to understand why a person acts or reacts in a certain way and then they try to identify if there are other factors that may produce the behavior (e.g., something that happens before or after a behavior). Using experimental designs, psychologists establish theories which will help explain the same behavior in different situations and contexts.
Predict
The third goal of psychology is to predict what behavior will come next, how a person will behave, or when a behavior will happen in the future. Predicting behavior is difficult unless the behavior has already been studied, which is why describing and explaining behavior must happen first. A psychologist may be able to predict a behavior by looking for a pattern across past instances or examples of that behavior. Prediction is essential if psychologists want to change or modify harmful or dysfunctional behaviors, or to promote and encourage positive, prosocial behaviors among individuals.
Control
Finally, and perhaps most importantly, psychology strives to change, influence, or control behavior to make positive and lasting changes in people's lives. It is important to note that if a psychologist tries to influence, shape, modify, or control someone's behavior without asking permission or getting consent, it is considered unethical. As noted earlier, the ultimate goal of psychology is to benefit individuals and society, but we do this while respecting the rights of others.
Psychology has been described as a "hub science," meaning that many other disciplines draw on its research. Medicine, for example, draws on psychological research mainly through the fields of neurology and psychiatry, and the social sciences commonly draw directly from sub-disciplines within psychology like social psychology and developmental psychology. The field of psychology is about understanding and solving problems in many areas of human activity, and as a discipline psychology ultimately aims to benefit society.
1.02: Cultural WEIRDos
Despite its ultimate aim to benefit society, the psychological aspects of culture have historically been overlooked because many elements of culture cannot be directly observed. For example, the way that gender roles are learned is a cultural process, as is the way that people think about their own sense of duty toward their family members. There has also been an overrepresentation of research conducted using human subjects from Western, educated, industrialized, rich, and democratic (WEIRD) nations. Findings from psychological research utilizing primarily WEIRD populations are often labeled as universal theories that explain psychological phenomena but are inaccurately, and inappropriately, applied to other cultures.
Recent research findings revealing that cultures differ in many areas, such as logical reasoning and social values, have become increasingly difficult to ignore. For example, many studies have shown that Americans, Canadians, and western Europeans rely on analytical reasoning strategies, which separate objects from their contexts to explain and predict behavior. Social psychologists refer to the fundamental attribution error, the tendency to explain people's behavior in terms of internal, inherent personality traits rather than external, situational considerations (e.g., attributing an instance of angry behavior to an angry personality). Outside WEIRD cultures, however, this phenomenon is less prominent, as many non-WEIRD populations tend to pay more attention to the context in which behavior occurs. Asians tend to reason holistically, for example by considering people's behavior in terms of their situation; someone's anger might be viewed as simply the result of an irritating day (Jones, 2010; Nisbett et al., 2005). Yet many long-standing theories of how humans think rely on the prominence of analytical thought (Henrich, 2010).
By studying only WEIRD populations, psychologists fail to account for a substantial amount of the diversity of the global population. Applying findings from WEIRD populations to other populations can lead to inaccurate psychological theories and may hinder psychologists' ability to isolate fundamental cultural characteristics.
A major goal of cultural psychology is to have many and varied cultures contribute to basic psychological theories in order to correct these theories so that they become more relevant to the predictions, descriptions, and explanations of all human behaviors, not just Western ones ( Shweder & Levine, 1984).
1.03: Introduction to Cultural Psychology
Cultural psychology is an interdisciplinary study of how cultures reflect and shape the mind and behavior of their members (Heine, 2011). The main position of cultural psychology is that mind and culture are inseparable, meaning that people are shaped by their culture and their culture is also shaped by them (Fiske, Kitayama, Markus, & Nisbett, 1998). Shweder (1991) expanded, "Cultural psychology is the study of the way cultural traditions and social practices regulate, express, and transform the human psyche, resulting less in psychic unity for humankind than in ethnic divergences in mind, self, and emotion." Incorporating a cultural perspective in psychological research helps to ensure that the knowledge we learn is more accurate and descriptive of all people.
The four goals of psychology can also be effectively applied to study cultural psychology by describing, explaining, predicting, and controlling (influencing) behavior across cultures. Cultural psychology research informs several fields within psychology, including social psychology, developmental psychology, and cognitive psychology.
Cultural psychology is often confused with cross-cultural psychology, but they are not the same thing. Cross-cultural psychology uses culture to test the universality of psychological processes rather than to determine how cultural practices shape psychological processes. For example, a cross-cultural psychologist would ask whether Jean Piaget's stages of development (e.g., sensorimotor, preoperational, concrete operational, and formal operational) are universal (the same) across all cultures. A cultural psychologist would ask how the social practices of a particular set of cultures shape the development of cognitive processes in different ways (Markus & Kitayama, 2003).
Despite its contributions to the field of psychology, there have been criticisms of cultural psychology, including cultural stereotyping and methodological issues. There has been an abundance of research exploring the cultural differences between East Asians and North Americans in areas of cognitive psychology (e.g., attention, perception, cognition) and social psychology (e.g., self and identity). Some psychologists have argued that this research is based on cultural stereotyping (Turiel, 2002) and minimizes the role of the individual (McNulty, 2004).
Additionally, self-report data is one of the easiest, least expensive, and most accessible methods for mass data collection, especially when conducting research in cultural psychology (Kitayama et al., 2002; Masuda & Nisbett, 2001). Relying on self-report data for cross-cultural comparisons of attitudes and values can lead to relatively unstable and ultimately misleading data and interpretations. We discuss this in greater detail in Chapter 3.
We have spent a lot of time talking about culture without really defining it and to complicate matters more, there are many definitions of culture and it is used in different ways by different people. When someone says, “My company has a competitive culture,” does it mean the same thing as when another person says, “I’m taking my children to the museum so they can get some culture”? For purposes of this module we are going to define culture as patterns of learned and shared behavior that are cumulative and transmitted across generations.
Patterns: There are systematic and predictable ways of behavior or thinking across members of a culture. Patterns emerge from adapting, sharing, and storing cultural information. Patterns can be both similar and different across cultures. For example, in both Canada and India it is considered polite to bring a small gift to a host's home. In Canada, it is more common to bring a bottle of wine and for the gift to be opened right away. In India, by contrast, it is more common to bring sweets, and often the gift is set aside to be opened later.
Sharing: Culture is the product of people sharing with one another. Humans cooperate and share knowledge and skills with other members of their networks. The ways they share, and the content of what they share, helps make up culture. Older adults, for instance, remember a time when long-distance friendships were maintained through letters that arrived in the mail every few months. Contemporary youth culture accomplishes the same goal through the use of instant text messages on smartphones.
Learned: Behaviors, values, norms are acquired through a process known as enculturation that begins with parents and caregivers, because they are the primary influence on young children. Caregivers teach kids, both directly and by example, about how to behave and how the world works. They encourage children to be polite, reminding them, for instance, to say “Thank you.” They teach kids how to dress in a way that is appropriate for the culture.
Culture teaches us what behaviors and emotions are appropriate or expected in different situations. In some societies, it is considered appropriate to conceal anger. Instead of expressing their feelings outright, people purse their lips, furrow their brows, and say little. In other cultures, however, it is appropriate to express anger. In these places, people are more likely to bare their teeth, furrow their brows, point or gesture, and yell (Matsumoto, Yoo , & Chung, 2010).
Members of a culture also engage in rituals which are used to teach people what is important. For example, young people who are interested in becoming Buddhist monks often have to endure rituals that help them shed feelings of specialness or superiority—feelings that run counter to Buddhist doctrine. To do this, they might be required to wash their teacher’s feet, scrub toilets, or perform other menial tasks. Similarly, many Jewish adolescents go through the process of bar and bat mitzvah . This is a ceremonial reading from scripture that requires the study of Hebrew and, when completed, signals that the youth is ready for full participation in public worship. These examples help to illustrate the concept of enculturation.
Cumulative: Cultural knowledge is information that is "stored," and this learning grows across generations. We understand more about the world today than we did 200 years ago, but that doesn't mean the culture from long ago has been erased. For instance, members of the Haida culture, a First Nations people in British Columbia, Canada, are able to profit from both ancient and modern experiences. They might employ traditional fishing practices and wisdom stories while also using modern technologies and services.
Transmission: Cultural transmission is the passing of new knowledge and traditions of culture from one generation to the next, as well as across other cultures. In everyday life, the most common way cultural norms are transmitted is within each individual's home life. Each family has its own, distinct culture under the big picture of each given society and/or nation. Within every family, there are traditions that are kept alive. The way each family acts and communicates with others, and an overall view of life, are passed down. Parents teach their kids every day how to behave and act by their actions alone. Outside of the family, culture can be transmitted at various social institutions: places of worship, schools, and even shopping centers are places where enculturation happens and culture is transmitted.
Understanding culture as a learned pattern of thoughts and behaviors is interesting for several reasons. First, it highlights the ways groups can come into conflict with one another. Members of different cultures simply learn different ways of behaving. Teenagers today interact with technologies, like smartphones, using a different set of rules than people who are in their 40s, 50s, or 60s. Older adults might find texting in the middle of a face-to-face conversation rude, while younger people often do not.
These differences can sometimes become politicized and a source of tension between groups. One example of this is Muslim women who wear a hijab, or headscarf. Non-Muslims do not follow this practice, so occasional misunderstandings arise about the appropriateness of the tradition. Second, understanding that culture is learned is important because it means that people can come to appreciate patterns of behavior that are different from their own. Finally, understanding that culture is learned can be helpful in developing self-awareness. For instance, people from the United States might not even be aware of the fact that their attitudes about public nudity are influenced by their cultural learning. While women often go topless on beaches in Europe, and women living a traditional tribal existence in places like the South Pacific also go topless, it is illegal for women to do so in parts of the United States.
These cultural norms for modesty that are reflected in government laws and policies also enter the discourse on social issues such as the appropriateness of breastfeeding in public. Understanding that your preferences are, in many cases, the products of cultural learning might empower you to revise them if doing so will lead to a better life for you or others.
Humans use culture to adapt to and transform the world they live in, and you should think of the word culture as a conceptual tool rather than as a uniform, static definition. Culture changes through interactions with individuals, media, and technology, just to name a few. Culture generally changes for one of two reasons: selective transmission or to meet changing needs. This means that when a village or culture is met with new challenges, for example a loss of a food source, they must change the way they live. It could also include forced relocation from ancestral domains due to external or internal forces. For example, in the United States tens of thousands of Native Americans were forced to migrate from their ancestral lands to reservations established by the United States government so it could acquire lands rich with natural resources. The forced migration resulted in death, disease, and many cultural changes for the Native Americans as they adjusted to a new ecology and way of life.
An etic perspective refers to a psychological construct or process that is universal, or true across all cultures, and it is closely associated with cross-cultural psychology. Recall our earlier example of child development and Piaget: an etic perspective seeks to compare developmental stages across cultures for similarities.
Cultural universals are psychological processes that exist in every human culture and includes attributes such as values and modes of behavior. These are often the areas of focus and study in psychology. Some examples of cultural universals in psychology are:
• Language and cognition
• Group membership
• Ritual
• Emotions
The idea that specific aspects of culture are common to all human cultures is contrary to the emic perspective, which focuses on cultural differences and culturally specific processes that shape thinking and behavior. Research using an emic perspective is often considered an 'insider's' perspective, but it can be biased if the participant or researcher is a member of the culture they are studying. A participant-researcher may fail to consider how the culture and its practices might be perceived by others, and valuable information might be left out.
1.06: Products of Culture
In cultural psychology, material culture refers to the objects or belongings of a group including food, fashion, architecture or physical structures. These objects reflect the historical, geographic, and social conditions of the culture. For instance, the clothes that you are wearing right now might tell researchers of the future about the fashions of today.
Nonmaterial culture (subjective), by contrast, consists of the ideas, attitudes, and beliefs of a society.
Norms are things that are considered normal, appropriate, or ordinary for a particular group of people and guide members on how they should behave in a given context. In Western cultures wearing dark clothing and appearing solemn are normative behaviors at a funeral. In certain cultures, they reflect the values of respect and support of friends and family.
Values are related to the norms of a culture, but they are more global and abstract than norms. Norms are rules for behavior in specific situations, while values identify what should be judged as good or evil. Flying the national flag on a holiday is a norm, but it exhibits patriotism, which is a value.
Beliefs are the way people think the universe operates. Beliefs can be religious or secular, and they can refer to any aspect of life. For instance, many people in the United States believe that hard work is the key to success, while in other countries many believe that success is determined by fate.
Norms, values, and beliefs are all deeply interconnected. Together, they provide a way to understand culture.
1.07: Hofstede's Cultural Dimensions
Hofstede’s cultural values provide a framework that describes the effects of culture on the values of its members, and how these values relate to behavior. Hofstede’s work is a major resource in fields like cross-cultural psychology, international management, and cross-cultural communication.
Hofstede conducted a large survey (1967-1973) that examined value differences across the divisions of IBM, a multinational corporation. Data were collected from 117,000 employees from 50 countries across 3 regions. Using factor analysis, a statistical method, Hofstede initially identified four value dimensions (Individualism/Collectivism, Power Distance, Uncertainty Avoidance, and Masculinity/Femininity). Additional research that used a Chinese-developed instrument identified a fifth dimension, Long Term/Short Term orientation (Bond, 1991), and a replication conducted across 93 separate countries confirmed the existence of the five dimensions and identified a sixth known as Indulgence/Restraint (Minkov, 2010). The five values are discussed in detail below.
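To make the statistical step concrete, the sketch below runs a factor analysis on simulated survey responses. Everything here is illustrative: the items, loadings, and two-dimension setup are invented for the example and are not Hofstede's actual questionnaire or results; the point is only to show how a small number of latent value dimensions can be recovered from many correlated survey items.

```python
# A minimal, hypothetical sketch of dimension extraction from survey data.
# The items and latent structure below are invented for illustration.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents = 1000

# Two hypothetical latent values per respondent drive six observed items.
latent = rng.normal(size=(n_respondents, 2))
true_loadings = np.array([
    [0.9, 0.0],   # items 1-3 mostly reflect the first latent value
    [0.8, 0.1],
    [0.7, 0.0],
    [0.0, 0.9],   # items 4-6 mostly reflect the second latent value
    [0.1, 0.8],
    [0.0, 0.7],
])
responses = latent @ true_loadings.T + 0.3 * rng.normal(size=(n_respondents, 6))

# Factor analysis looks for the few underlying dimensions that explain
# the correlations among the many observed items.
fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(responses)
print(np.round(fa.components_, 2))  # rows = recovered factors, columns = items
```

The recovered loadings group the items into two clusters, mirroring in miniature how Hofstede's analysis grouped many questionnaire items into a handful of value dimensions.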
Masculinity and Femininity (task orientation/person orientation) refers to the distribution of emotional roles between the genders. Masculine cultures value competitiveness, assertiveness, material success, ambition, and power. Feminine cultures place more value on relationships, quality of life, and concern for marginalized groups (e.g., homeless people, persons with disabilities, refugees). In masculine cultures, differences in gender roles are very dramatic and much less fluid than those in feminine cultures, where women and men share values that emphasize modesty and caring. Masculine cultures are also more likely to have strong opinions about what constitutes men's work versus women's work, while societies low in masculinity permit much greater overlap in the social and work roles of men and women.
Uncertainty Avoidance (UA) addresses a society’s tolerance for uncertainty and ambiguity. It reflects the extent to which members of a society attempt to cope with anxiety by minimizing uncertainty. Another, more simplified, way to think about UA is how threatening change is to a culture. People in cultures with high UA tend to be more emotional, try to minimize the unknown and unusual circumstances and proceed with carefully planned steps and rules, laws and regulations. Low UA cultures accept and feel comfortable in unstructured situations or changeable environments and try to have as few rules as possible. People in these cultures tend to be more tolerant of change. Students from countries with low uncertainty avoidance don’t mind it when a teacher says, “I don’t know.”
Power Distance (strength of social hierarchy) refers to the extent to which the less powerful members of organizations and institutions (like a family) accept and expect that power is distributed unequally. There is a certain degree of inequality in all societies, notes Hofstede; however, there is relatively more equality in some societies than in others. Individuals in societies that exhibit a high degree of power distance accept hierarchies in which everyone has a place, without the need for justification. Societies with low power distance seek to have an equal distribution of power. Cultures that endorse low power distance expect and accept relations that are more consultative or democratic; we call this egalitarian.
Countries with lower power distance index (PDI) values tend to be more egalitarian. For instance, there is more equality between parents and children, with parents more likely to accept it if children argue with them, or "talk back," to use a common expression. In the workplace, bosses are more likely to ask employees for input, and in fact, subordinates expect to be consulted. On the other hand, in countries with high power distance, parents expect children to obey without questioning. People of higher status may expect obvious displays of respect from subordinates. In the workplace, superiors and subordinates are not likely to see each other as equals, and it is assumed that bosses will make decisions without consulting employees. In general, status is more important in high power distance countries.
Individualism and Collectivism refers to the degree to which individuals are integrated into groups. Individualistic societies stress personal achievement and individual rights, and focus on personal needs and those of the immediate family. In individualistic societies, people choose their own affiliations and groups and move between different groups. On the other hand, collectivistic societies put more emphasis on the importance of relationships and loyalty. Individuals in collectivist societies belong to fewer groups and are defined more by their membership in particular groups. Communication is more direct in individualistic societies but more indirect in collectivistic societies.
Long Term (LT) and Short Term (ST) orientation describes a society's time horizon: the degree to which a culture encourages delaying gratification of its members' material, social, and emotional needs. LT cultures place more importance on the future and on pragmatic values, and are oriented toward rewards such as persistence, thrift, saving, and the capacity for adaptation. ST values are related to the past and the present (not the future), with emphasis on immediate needs, quick results, and unrestrained spending, often in response to social or ecological pressure.
The cultural value dimensions identified by Hofstede are useful ways to think about culture and to study cultural psychology; however, Hofstede's theory has also been seriously questioned. Most of the criticism has been directed at the methodology of the study, beginning with the original instrument. The questionnaire was not originally designed to measure culture but rather workplace satisfaction (Orr & Hauser, 2008), and many of the conclusions are based on a small number of responses (McSweeney, 2002). Although 117,000 questionnaires were administered, the results from 40 countries were used, and only six countries had more than 1,000 respondents. Critics also question the representativeness of the original sample.
The study was conducted using employees of a multinational corporation (IBM) who were highly educated, mostly male, and performed what we call 'white collar' work (McSweeney, 2002). Hofstede's theory has also been criticized for promoting a largely static view of culture (Hampden-Turner & Trompenaars, 1997; Orr & Hauser, 2008) that does not respond to changes or to the influences of other cultures. It is hard to deny that the world has changed in dramatic ways since Hofstede's research began.
Material and nonmaterial aspects of culture can vary subtly from region to region. As people travel, moving from different regions to entirely different parts of the world, certain material and nonmaterial aspects of culture become dramatically unfamiliar. As we interact with cultures other than our own, we become more aware of our own culture, which might otherwise be invisible to us, and of the differences and commonalities between our culture and others.
Ethnocentrism is the tendency to look at the world primarily from the perspective of one’s own culture. Part of ethnocentrism is the belief that one’s own race, ethnic or cultural group is the most important or that some or all aspects of its culture are superior to those of other groups. Some people will simply call it cultural ignorance.
Ethnocentrism often leads to incorrect assumptions about others’ behavior based on your own norms, values, and beliefs. In extreme cases, a group of individuals may see another culture as wrong or immoral and because of this may try to convert, sometimes forcibly, the group to their own ways of living. War and genocide could be the devastating result if a group is unwilling to change their ways of living or cultural practices.
Ethnocentrism may not, in some circumstances, be avoidable. We often have involuntary reactions toward another person's or culture's practices or beliefs, but these reactions do not have to result in horrible events such as genocide or war. In order to avoid conflict over cultural practices and beliefs, we must all try to be more culturally relative.
Cultural relativism is the principle of regarding and valuing the practices of a culture from the point of view of that culture and of avoiding hasty judgments. Cultural relativism tries to counter ethnocentrism by promoting the understanding of cultural practices that are unfamiliar to other cultures, such as eating insects, genocide, or genital cutting. Take, for example, the common practice of same-sex friends in India walking in public while holding hands. This is a common behavior and a sign of connectedness between two people. In England, by contrast, holding hands is largely limited to romantically involved couples, and often suggests a sexual relationship. These are simply two different ways of understanding the meaning of holding hands. Someone who does not take a relativistic view might be tempted to see their own understanding of this behavior as superior and, perhaps, the foreign practice as being immoral.
Despite the fact that cultural relativism promotes the appreciation of cultural differences, it can also be problematic. At its most extreme, cultural relativism leaves no room for criticism of other cultures, even if certain cultural practices are horrific or harmful. Many practices have drawn criticism over the years. In Madagascar, for example, the famadihana funeral tradition includes bringing bodies out from tombs once every seven years, wrapping them in cloth, and dancing with them. Some people view this practice as disrespectful to the body of the deceased person. Today, a debate rages about the ritual cutting of the genitals of girls in several Middle Eastern and African cultures. To a lesser extent, this same debate arises around the circumcision of baby boys in Western hospitals. When considering harmful cultural traditions, it can be patronizing to use cultural relativism as an excuse for avoiding debate. To assume that people from other cultures are neither mature enough nor responsible enough to consider criticism from the outside is demeaning.
The concept of cross-cultural relationship is the idea that people from different cultures can have relationships that acknowledge, respect and begin to understand each other's diverse lives. People with different backgrounds can help each other see possibilities that they never thought were there because of limitations, or cultural proscriptions, posed by their own traditions. Becoming aware of these new possibilities will ultimately change the people who are exposed to the new ideas. This cross-cultural relationship provides hope that new opportunities will be discovered, but at the same time it is threatening. The threat is that once the relationship occurs, one can no longer claim that any single culture is the absolute truth.
Culture refers to patterns of learned and shared behavior that are cumulative and transmitted across generations. Historically, the role of culture has been overlooked in the field of psychology, and a majority of psychological research has focused on Western, Educated, Industrialized, Rich, and Democratic (WEIRD) cultures. Cultural psychology has emerged as a specialty within the field to increase awareness of culture in shaping thinking and behavior. Etic and emic are cultural perspectives through which we can view psychological phenomena, including non-material culture like values, attitudes, and beliefs. Stereotyping and ethnocentric bias can occur when we view other cultures from our own perspective, which often results in a misunderstanding or disparagement of unfamiliar cultures.
1.10: Vocabulary
Culture is defined as patterns of learned behavior that are shared, cumulative and transmitted across generations and groups.
Cultural psychology is an interdisciplinary study of how cultures reflect and shape the thoughts, attitudes and behaviors of its members.
Cross-cultural psychology uses culture to test whether some psychological processes are universal rather than determining how cultural practices shape psychological processes.
Cultural relativism is the principle of regarding and valuing the practices of a culture from the point of view of that culture and of avoiding hasty judgments.
Cultural universals are psychological processes that exist in every human culture and includes attributes such as values and modes of behavior.
Emic perspective focuses on cultural differences and culturally specific processes that shape thinking and behavior.
Ethnocentrism is the tendency to look at the world primarily from the perspective of one’s own culture.
Etic perspective refers to a psychological construct or process that is universal, or true across all cultures
Goals of psychology
• Description is the first goal of psychology intended to identify “what” is happening when a behavior takes place including context, frequency, intensity, and duration.
• Explanation is the second goal of psychology intended to address “why” a behavior is taking place. The association between related factors and the behavior is exploratory not correlational or causal.
• Prediction is the third goal of psychology intended to assess the likelihood (i.e., correlational probability) that a behavior will take place again or not.
• Control is the fourth goal of psychology intended to address how behavior can be changed. This goal includes a cause-effect association between an intervention and a behavioral change.
Hofstede’s cultural values provide a framework that describes the effects of culture on the values of its members, and how these values relate to behavior.
• Masculinity and Femininity refers to the distribution of emotional roles between the genders.
• Uncertainty Avoidance refers to a society’s tolerance for uncertainty and ambiguity.
• Power Distance is the extent to which the less powerful members of organizations and institutions (like a family) accept and expect that power is distributed unequally.
• Individualistic and Collectivist refers to the degree to which individuals are integrated into groups and their community.
• Long Term and Short Term describes a society’s time horizon; the degree to which cultures encourage delaying gratification or material, social, emotional needs of the members.
Material culture refers to the objects or belongings of a group including food, fashion, architecture or physical structures
Nonmaterial culture (subjective) consists of the ideas, attitudes, and beliefs of a society.
Psychology is the scientific study of behavior and mental processes.
WEIRD is an acronym that stands for demographic factors that represent the population that has been traditionally included in research and development of psychological theory. This population has the following characteristics: Western, Educated, Industrialized, Rich, and Democratic cultures.
Aristotle was the first to provide evidence of social learning in bird songs, and Charles Darwin was the first to suggest what became known as social learning in explaining the transmission of an adaptive behavior pattern seen in a population of honey bees. Social learning happens when behaviors are acquired through observation or are taught by other members of a social group (e.g., caregivers, siblings) or by social institutions (e.g., schools, places of worship). Social learning among humans is important because it means that we can avoid costly and time-consuming trial and error while multiplying the power of individual learning (Boyd & Richerson, 2005). Our collective brain power makes it possible for certain behaviors to become more adaptive and spread among groups.
The phrase animal culture was first proposed by Japanese primatologists who discovered socially transmitted food behaviors among Japanese monkeys on Koshima Island in the 1940s. The scientists observed a female monkey dunk a piece of potato in the ocean to wash it, a behavior that gradually spread to other members of the troop.
Whale songs. Male humpback whales produce various songs over their lifetime, which are learned from other males in the population. Males in a population conform to produce the same mating song, consisting of a highly stereotyped vocal display involved in mate attraction. Researchers were able to record a series of songs and identified the cultural transmission of these songs across geographic distances (Western and Central South Pacific Ocean) over 11 years (Garland et al., 2013; Garland, Rendell, Lamoni, Poole and Noad, 2017).
Dolphin Sponges. A community of bottlenose dolphins in Western Australia use conical sponges as tools to find food (foraging). During “sponging,” dolphins break off a sponge and wear it over the rostrum (snout) while foraging on the seafloor (Smolker, et al., 1997; Mann et al., 2008). Scientists think that the dolphins use the sponges for protection while foraging. Researchers, using genetic analyses, found that all ‘spongers’ are descendants of a single matriline (mother to daughter) which suggests cultural transmission of the use of sponges, as tools, within a specific population (Mann and Sargeant, 2003).
Chimpanzee Tools. Chimpanzees also use tools for foraging, but different types of tools are associated with specific populations. This means that not all chimpanzees make or use the same tools for the same purpose (see Whiten articles for summary). For example, one troop of chimpanzees plunges sticks into termite nests to gather food and another troop uses bark or leaves as a kind of scoop to forage for termites. There is a documented instance of chimpanzees in the Democratic Republic of Congo creating a tool that is like a paint brush or bottle washer that results in more successful foraging.
2.02: Uniquely Human
Although we humans are not the only species to exhibit culture, we depend on it in a way that no other species does and no other species demonstrates the cultural virtuosity and flexibility of human beings. In the animal world, transmitting innovations (combining of two or more separate elements into entirely new tools or practices) among peers and between generations of the same group occurs frequently but not necessarily between separate groups of the same species. Moreover, cultural innovation does not seem to occur among non-human species but it is a hallmark of human cultural development.
The first thing to emphasize is that humans are not born with culture the way we are born with brown eyes, black hair, or freckles. We are born into culture, and we learn it by living in human social groups. In this way culture is something that is transmitted from one generation to the next; this is how we become 'enculturated.' Using archaeological data, it has been estimated that the human capacity for cultural learning emerged somewhere between 500,000 and 170,000 years ago (Lind et al., 2003), although some researchers have argued that the predisposition for social cognition, which facilitates social learning, extends farther back in time, to when we split from an ancient ancestor (Henrich, 2016; Tomasello, 1999). Other scientists argue that convincing evidence for human culture only appears within the last 100,000 years (Tattersall, 2015).
Regardless of the time period or mechanism, most researchers across disciplines accept that changes, namely in the areas of cognition and cooperation, led to cultural adaptations and cultural learning among humans.
Describing and explaining which elements of culture are uniquely human is complex, cross-disciplinary and controversial. As this is an introductory text we will limit our discussion to three areas where there is broad agreement on the uniqueness of human culture as it relates to psychology: cognition, cooperation and cumulative learning.
Cognition
It has been argued that cognitive abilities like learning, attending, and memory underlie human and cultural evolution (Heine, 2016; Henrich, 2016; Tomasello, 1999) because these abilities make humans better at social learning, which, as an adaptation, led to cultural learning. Social learning is best described as learning that occurs in the presence of another person, and there are two main forms of social learning: emulative and imitative learning (Heine, 2016; Tomasello & Rakoczy, 2003). Emulative learning focuses on the environment, process, and outcomes related to a specific event that is observed. It is an individual style of learning even though another person has modelled the behavior. Humans and nonhuman primates engage in emulative learning. Imitative learning is considered to be uniquely human (Tomasello & Rakoczy, 2003) and occurs through the process of modeling and demonstrating behavior with an understanding of the goal of the behavior. Imitative learning includes intention and reflection on the behaviors, as well as an understanding of the perspective of the person who is performing the behavior. Humans also imitate behavior to fit in and not just to learn, something that has not been observed in nonhuman primates (Tomasello & Carpenter, 2007).
To illustrate the difference between these two types of social learning, Nagell, Olguin, and Tomasello (1993) designed an experiment that required toddlers and chimpanzees to use a tool (a rake) to retrieve an object that they really wanted (a toy for the children and food for the chimpanzees) that was out of their reach. Each group was shown how to use the tool to retrieve the object, but there was a slight variation to the experiment. One group was shown how to use the rake teeth down, which was effective but not efficient. The other group was shown how to use the rake teeth up, which was both effective and efficient. The toddlers used imitative learning and copied the behavior they had observed, either teeth up or teeth down. The chimpanzees used the rake teeth up, regardless of what they had observed, and were more successful in reaching the food. The chimpanzees used emulative learning, which was the most effective and efficient way to learn and solve problems. Additional cognitive research has confirmed that on some measures of thinking apes were smarter than human adults (Martin, Bhui, Bossaerts, Matsuzawa, & Camerer, 2014) and children (Herrmann et al., 2007).
In a series of cognitive experiments, chimpanzees, orangutans, and 2-year-old toddlers were compared on measures of physical problem solving and social problem solving. Toddlers were used in the experiment because human adults would have performed significantly better than the apes across all cognitive measures (we call this a ceiling effect). Results revealed that on measures of physical problem solving, apes and toddlers performed about the same.
On social problem-solving tasks that required the participants to engage in social learning, toddlers outperformed the apes; there was really no competition. Results from comparative studies like these suggest that humans are not intellectually superior (at least when compared to apes) except in the area of social learning, which includes elements of imitative learning. Attending to events, modeling the behaviors of others, learning from others, and storing the knowledge for later use and problem solving are central to imitative learning and essential for cultural learning and cumulative culture.
Much of the research examined so far has compared human toddlers to nonhuman primates; however, there is a growing body of developmental literature that confirms the importance of social learning. Cross-cultural research in the area of developmental psychology has demonstrated that human infants selectively attend to several important cues like prestige (who does it best), sex and ethnicity (who sounds and looks like me), and familiarity (similarity in background). The research has also demonstrated that learning these cues seems to happen at about the same time in human development, and in about the same order, across cultures. Cues and social learning are not just for infants. Researchers examined undergraduate student performance and cues related to sex and ethnicity. After controlling for other variables, results showed that students who received instruction from faculty of the same sex, ethnicity, or race were less likely to drop out and had better grades (Hoffman & Oreopoulos, 2009; Fairlie, Hoffman & Oreopoulos, 2011). These studies suggest that interpreting, being motivated by, and understanding cues has significant implications for early childhood development, as well as for the later adoption of adult roles.
Relatively recent autism research has provided an opportunity to explore the role of cues and attending even further. This research suggests that children with autism who miss out on these stages of attending have significant difficulty with social cues (Tomasello, Kruger, & Ratner, 1993). Additionally, children with autism have great difficulty sharing emotional states or understanding the intentions of others (Tomasello et al., 2004). By sharing intentions, humans are able to experience events and perspectives together at a level not seen among our closest animal relatives (Tomasello et al., 2004). Shared intentionality is a cognitive process by which we see others as intentional agents. It encompasses interaction, commitment to a goal, and cooperation with others to achieve the goal, which are necessary elements for cultural learning and cultural adaptations (Heine, 2016; Tomasello et al., 2004).
Cooperation
The ability to work together toward common goals is required for the survival of any group. There is strong evidence that nonhuman primates (our genetic cousins) cooperate, but that cooperation is limited to kin or partners, with few documented cases of cooperation with members of other groups (e.g., strangers) (Melis and Semmann, 2010). Cooperating with strangers to complete complex goals appears to be a uniquely human behavior, and there are several explanations for this phenomenon, including cognitive skills, protection from conflict, and the development of social norms.
Cognition
As discussed in the previous section, humans have a unique ability to engage in social learning (Boyd, Richerson & Henrich, 2011; Herrmann et al., 2007; Tomasello, 1999), as well as other psychological advantages, including memory, which helps us to track who has helped us and whom we have helped (Hauser et al., 2009; Melis and Semmann, 2010). Perhaps most importantly, we are able to transfer all of this information to others in our group, which means that as an individual you might gain a reputation for being a helper or a reputation for being a loafer (more about this in Chapter 11).
Protection
Cooperation may also have emerged as a result of external pressures (e.g., intergroup conflict, climate change, competition), which facilitated the formation of large groups. Bowles and colleagues (2013) suggest that competition with other groups brought about social changes, and groups who were better at cooperating were more likely to survive. Additionally, our ancient ancestors were more likely to be prey than predator, and being part of a large group offered some protection from predators (Hart & Sussman, 2009; Henrich, 2016). Dunbar (1993) has proposed that because early humans began living in large communities, language (a cognitive product) developed. He suggests that humans required complex communication to maintain social cohesion and unity among group members.
Norms
As groups became larger, humans established social rules or norms (Richerson & Boyd, 2008). Henrich (2016) argues that social norms within groups emerged because of our unique cognitive abilities for social learning (e.g., learning from someone else). By observing a model of appropriate behavior, humans learn what behavior is acceptable and what is unacceptable, and individuals who do not follow these social rules are often sanctioned (punished). Cooperation has spread across populations because of social norms that sanction (punish) intergroup conflict and promote fair treatment of group members. Research with infants less than a year old appears to support this argument.
Hamlin and colleagues used a puppet morality play in experiments with infants and toddlers and found that children preferred characters who helped others reach a goal (prosocial behaviors) and avoided characters who were harmful or who got in the way of others reaching a goal. As early as 3 months of age, humans are evaluating the behaviors of others, assigning positive value to helpful, cooperative behaviors (Hamlin et al., 2007; Hamlin & Wynn, 2011) and negative value to harmful or selfish behaviors.
Prosocial behaviors, and the social norms that reward cooperation and helping, become automatic over time and are reflected in everyday choices that we make as adults. For example, Rand and colleagues used the Public Goods Game, an economic experiment, to examine cooperation and competition (Rand, 2016). In the game, participants decide how much to contribute to the public good, and if all participants contribute, the payoff is greater for the entire group.
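To see the payoff structure behind this, here is a minimal sketch of the standard linear public goods game. The endowment and multiplier values are illustrative assumptions, not the parameters used in Rand and colleagues' studies.

```python
# A sketch of the standard linear public goods game payoff rule:
# each player keeps whatever they do not contribute, and the contributed
# pool is multiplied and then shared equally among all players.
def public_goods_payoffs(contributions, endowment=10.0, multiplier=2.0):
    n = len(contributions)
    shared = multiplier * sum(contributions) / n
    return [endowment - c + shared for c in contributions]

# Everyone contributing fully beats everyone keeping everything...
print(public_goods_payoffs([10, 10, 10, 10]))  # [20.0, 20.0, 20.0, 20.0]
print(public_goods_payoffs([0, 0, 0, 0]))      # [10.0, 10.0, 10.0, 10.0]
# ...but a lone free-rider does better still, which is why sustained
# cooperation in this game needs explaining.
print(public_goods_payoffs([0, 10, 10, 10]))   # [25.0, 15.0, 15.0, 15.0]
```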
Early findings revealed that participants who made their contributions faster gave more to the public good (greater cooperation). These results were consistent across several replications (Cone & Rand, 2014); when forced to make a quick decision, participants cooperated more than when asked to reflect on their decision. It seems that under certain circumstances, social contexts, and social norms, ‘going with your gut’ leads to increased cooperation (Henrich, 2016).
Cumulative Learning
We read earlier in the chapter that animals show cultural transmission, but only humans seem to have the capacity for cumulative cultural changes that result in behaviors that no single person could have learned individually through a lifetime of trial and error. We call this cumulative cultural learning, which refers to human collective brain power (Tomasello & Moll, 2010), or a set of sophisticated skills that we possess (Henrich, 2016), which allow us to create practices, behaviors, norms, artifacts (things) and institutions that are retained by group members and transmitted across generations and to other groups.
Humans can use this collective brain power for novel problem solving in order to adapt to changing environments and social conditions. Boyd, Richerson and Henrich (2011a) go even further and suggest that culture is part of our human biology, because our brains and our bodies have been shaped and influenced across thousands of years in ways that promote the accumulation and transmission of knowledge.
Many researchers suggest that cumulative cultural evolution results from a ratchet effect that began when humans developed the cognitive infrastructure to understand that others have intentions, allowing people to engage in coordinated efforts to achieve complex and specialized tasks (Tennie, Call, & Tomasello, 2009). The ratchet effect suggests that cultural adaptations and innovations are accumulated (become part of a larger library of knowledge) and then expanded upon and refined across generations.
There are historical examples of cultures that have lost accumulated culture. Henrich (2004) details the experiences of Tasmanian islanders and uses a mathematical model to explain how cultural learning can be lost. History and social context are important here: humans arrived in Tasmania, most likely from Australia, about 34,000 years ago by crossing a land bridge that was later covered by rising ocean levels. Archeological evidence suggests that the early Tasmanians (roughly 70,000 inhabitants) had a sophisticated toolkit with hundreds of tools and a set of skills for hunting different animals. When eighteenth-century explorers arrived in Tasmania, the population had declined, the toolkit had only 24 tools, and the diet had significantly less variety than that of the early Tasmanians. Henrich argues that this loss of knowledge happened because Tasmanians did not attend to the best models or teachers (or there were fewer skilled people), and the products that were created were imperfect. Over time, skills and knowledge eroded, resulting in a loss of accumulated culture.

There are also examples of cultural loss in North America among native populations; however, these examples are not the result of natural selection or ecological pressures. In most cases, cultural loss was brought about through external forces, including colonization, subjugation and the enforcement of policies limiting the transmission of cultural values, practices, rituals and language (UNESCO).
It is generally accepted that innovation and the accumulation of cultural learning occur at a faster rate in larger groups with greater interconnection. These ideas were tested in a laboratory experiment that used college students and a computer game (Derex, Beugin, Godelle & Raymond, 2013). Students were shown a complicated fishing net and then asked to design the same net using a computer. Students were assigned to groups that ranged from 2 to 16 members, and each player had 15 attempts to design the net. Students earned points based on the quality of their work. To mimic social learning, students were able to select a model created by other group members. The researchers found that members of larger groups were more likely to complete the fishing net accurately with fewer trials, and they assert that this was because larger groups offered more models to learn from. This lab experiment, along with scientific observations, reveals that in a large group it is more likely that someone will come up with a great idea that can be maintained and improved by the group (Heine, 2016), and the learning process moves more quickly.
Summary
Comparative animal studies and developmental research are generally seen as supporting the idea that human brains come preprogrammed with a host of cognitive abilities that have helped us to adapt and survive (Henrich, 2016). Some researchers also believe that human cognitive ability evolved genetically so that we would become better learners, individually, as well as better at learning from others by figuring out what people (e.g., a teacher, expert or model) want us to do or know (Henrich, 2016). This has important implications for our interactions with others, including cooperative behaviors. Prosocial behaviors are learned through the cognitive processes we discussed earlier, like attending and imitating, as well as through social learning. Social norms that reward cooperation and helping become automatic over time and contribute to large-group cohesion and unity. Large groups with high interconnection are less likely to experience loss of accumulated knowledge and more likely to innovate and adapt to ecological stress and other selective pressures.
Modern humans are genetically very similar (genotype) but exhibit very different physical characteristics (phenotype). It is generally accepted that genetic and cultural diversity is geographically and ecologically structured, which means that people from particular regions resemble each other more than they resemble people from other regions. As modern humans migrated to geographic and climatic regions that differed from the lands of their ancestors, they met new environmental challenges. Group variation arose because different groups encountered different environments and conditions (Boyd and Richerson, 2011b). These new environments led to genetic changes that enhanced survival for group members, which were then transmitted to offspring across generations. Moreover, these new, often harsh environments forced humans to create new ways of coping, learning, living and raising their offspring that also enhanced survival.
Environmental variance refers to differences among groups that arise because their environments differ. Stressors can be abiotic (e.g., climate, UV radiation or high altitude), biotic (e.g., disease), or social (e.g., war and psychological stress). Evidence is growing that environmental stressors (or pressures) can drive genetic variation (i.e., changes in genes). You are probably familiar with at least a couple of genetic adaptations brought about by environmental pressures.
Population migration to high altitude has altered red blood cells to accommodate the reduced oxygen levels found there (Beall, 2004). Malaria is a biotic example of an environmental stressor. Malaria is a disease carried by mosquitoes that affects millions of people each year. A change in red blood cell shape (sickle or semicircle) is a genetic adaptation found in populations from tropical climates that protects individuals from malaria, but in other environments the adaptation can be quite harmful. Sickle cell anemia is a disease that has emerged because the once advantageous adaptation causes joint inflammation and pain when expressed in other environments.
As mentioned earlier, groups living in new environments invented new tools and new ways of doing things to adapt to ecological conditions and pressures. Cultural variance refers to different behaviors among groups as a result of different ways of learning, coping and living. Cultural adaptations can occur at any time and may be as simple as putting on a coat when it is cold or as complicated as engineering, building, and installing a heating system in a building. Consider contemporary hunter-gatherer societies in the Arctic and the Kalahari. These groups inhabit hostile environments separated by thousands of miles, yet they have not developed markedly distinct genetic adaptations to these environments. They are successful because of cultural adaptations to their unique habitats.
In addition to environmental variations and ecological pressures, cultural adaptations may be shaped by access to arable land, sustainable strategies (e.g., fishing, hunting, agriculture) and sources of food; in other words, by the resources available to the population. For example, the people of India revere cows and believe that eating a cow is a terrible act, which might seem strange to people in other countries. Cows are considered sacred and are viewed as more than just animals because they provide milk, a precious resource. To ensure that milk is always available, cows must be kept alive and well cared for. The cow is a food source, even if not in the way that, say, an American or European would view it. The sacredness of the cow was a cultural adaptation by the Indian people that protects an important, renewable resource.
Another example of a cultural adaptation, and of the use of local resources, is building a shelter or a home. In the southwest United States, before air conditioning (and even with air conditioning), homes were built to withstand the hot, dry climate. They were made from bricks of abundant local earth and had few windows, which kept heat in during the winter and out during the summer. Flat-roof construction catches the rainfall that is precious and scarce in the Southwest. To those in other parts of the world it is just a structure, but to the people of the Southwest it meant survival.
Adaptations may be environmental or cultural and are likely the result of differences in ecology, resources and people (Creanza, Kolodny, & Feldman, 2017). Van de Vliert (2011) examined these components and their impact on ingroup favoritism (preferences for people who are similar to us that result in disproportionate shares of resources). Using data from almost 180 countries, Van de Vliert found that ingroup favoritism was highest in cultures with the lowest incomes and the harshest, most demanding climates (e.g., extreme heat or cold) and lowest in cultures with high national income and demanding climates. Ecological stress and scarce resources created social norms that favor some people over others. By examining ecology, national wealth and behaviors collectively, we can see the relationship between these factors and cultural adaptations.
Cultural similarities can be explained by the adaptations of different groups to similar environmental conditions, and cultural variations can be explained by changing environmental conditions. Because environmental changes were not predictable, cultures changed in many different directions. Cultures that were once similar could become dissimilar under selective pressures and ecological distress, and the opposite could also happen: cultures that were once different could become similar. The continual development of culturally transmitted knowledge and skills enables people to thrive in new environments.
Environmental pressures, such as natural disasters, flooding, and limited land to farm, are only one of many potential forces that can lead to cultural adaptations. Others include technological innovation (e.g., the printing press, electricity) and contact with other cultures (immigration or colonization), which may promote or inhibit changes in cultural practices. Through cultural transmission, goods and services may be exchanged between two cultures, as well as values, languages, and behavior patterns. This section will review three main methods of cultural transmission.
Innovation
Earlier in the chapter we learned that large groups with a high degree of interconnectivity are more innovative and less likely to lose knowledge. Innovation includes combining existing information to create something new. In a large group or population, it is highly likely that one person will make a new discovery that will be adopted or shared with the group but only if the innovation is useful (Heine, 2016). Consider the transitions from foraging to agriculture or more recent transitions from email to text messages – innovations that have been adopted, shared and transmitted across the globe.
Diffusion
Diffusion is the spread of material and nonmaterial culture and relates to the integration between and within cultures. We learned earlier that material culture refers to the tangible objects or belongings of a group. For example, middle-class Americans can fly overseas and return with a new appreciation of Thai noodles or Italian gelato. The diffusion of nonmaterial culture (norms, values and beliefs) has accelerated as access to television and the Internet has brought the lifestyles and values portrayed in sitcoms into homes around the globe. Twitter feeds from public demonstrations in one nation have encouraged political protesters in other countries. When this kind of diffusion occurs, material objects and ideas from one culture are introduced into another. Ideas that are easy to communicate are more likely to spread, and emotional messages spread more quickly. If an idea or a message challenges our assumptions or expectations but still seems reasonable, it is also more likely to be shared and communicated (known as a minimally counterintuitive idea).
Acculturation
Acculturation is the process of social, psychological, and cultural change that happens when cultures come into contact with one another and blend. Acculturation can be experienced at the level of a group (e.g., war, political domination, colonization) and at the level of the individual (e.g., dating after divorce). The process of acculturation may be very distressing or only mildly uncomfortable, but it is a normal part of our adaptation to things that are new to us. Some of the most noticeable effects of acculturation include changes in food, clothing, and language. We discuss acculturation and culture shock more fully in Chapter 13.
2.05: Summary
Humans have a unique set of skills that enable us to more readily innovate, adopt, adapt and improve. Underlying these skills seems to be a cognitive infrastructure that promotes learning, teaching and perspective taking. These cognitive abilities predispose humans to sociality, cooperation and collaboration with individuals who are outside of our kin network, a trait that makes us unique among animals and separates us from our nearest genetic cousins, the nonhuman primates.
2.06: Vocabulary
Acculturation is the process of social, psychological, and cultural change that happens when cultures blend.
Adaptation refers to a feature or a behavior that helps a living thing survive and function better in its environment; a genetic adaptation refers to changes in physiological processes and genetics as a result of environmental or cultural variance.
Cooperation is the ability to work together toward common goals; animals cooperate with kin or other group members, but humans appear to be the only species that cooperates with strangers.
Cultural learning requires elements of social learning and encompasses other unique cognitive abilities like shared intentionality and perspective taking; it is the collective learning of a culture that facilitates innovation, improvement and transmission across groups.
Cultural variance refers to different behaviors among groups as a result of different learning.
Cumulative learning refers to human collective brain power or a set of sophisticated skills that allows humans to create practices, behaviors, norms, artifacts (things) and institutions that are retained by group members and transmitted across generations and to other groups.
Diffusion is the spread of material and nonmaterial culture and relates to the integration between cultures and within cultures.
Emulative learning focuses on the environment, process and outcomes related to a specific event that is observed.
Environmental variance refers to differences among groups because the environment is different.
Imitative learning occurs through the process of modeling and demonstrating behavior with an understanding of the goal of the behavior.
Innovation is a new idea, method, behavior or tool.
Ratchet effect suggests that cultural adaptations and innovations are accumulated (become part of a larger library of knowledge) and then expanded upon and refined across generations.
Social learning occurs when behaviors are acquired through observation or are taught by other members of a social group (e.g., caregivers, siblings) or social institutions (e.g., schools, places of worship).
Shared intentionality is a cognitive process by which we see others as intentional agents; it encompasses interactions, commitment to a goal, and cooperation with others to achieve the goal.
Both cultural and cross-cultural studies have their own advantages and disadvantages. Interestingly, researchers can learn a lot from cultural similarities and cultural differences; both require comparisons across cultures. For example, Diener and Oishi (2000) were interested in exploring the relationship between money and happiness. They were specifically interested in cross-cultural differences in levels of life satisfaction between people from different cultures. To examine this question, they used international surveys that asked all participants the exact same question, such as “All things considered, how satisfied are you with your life as a whole these days?” and used a standard scale for answers; in this case one that asked people to use a 1-10 scale to respond. They also collected data on average income levels in each nation, and adjusted these for local differences in how many goods and services money can buy.
The Diener research team (2000) discovered that, across more than 40 nations, there was a tendency for money to be associated with higher life satisfaction. People from richer countries such as Denmark, Switzerland and Canada had relatively high satisfaction while their counterparts from poorer countries such as India and Belarus had lower levels. There were some interesting exceptions, however. People from Japan—a wealthy nation—reported lower satisfaction than did their peers in nations with similar wealth. In addition, people from Brazil—a poorer nation—had unusually high scores compared to their income counterparts. The researchers tried to explain these differences and one proposed explanation was culture.
Cross-cultural (method) validation is another type of cross-cultural study, one that establishes whether assessments (e.g., surveys, tests, standard scales) are valid and reliable when used across cultures. Cross-cultural validation studies evaluate the equivalence of psychological measures across cultures; instruments used across cultures should be equivalent. Measurement equivalence refers to similarity in conceptual meaning and empirical method between cultures. Bias, on the other hand, refers to differences that do not have exactly the same meaning within and across cultures.
Two essential features of any instrument or standard scale are validity and reliability.
• Validity of an instrument is another way of saying accuracy. Validity asks whether the instrument (or test) measures what it is supposed to measure.
• Reliability of an instrument is another way of saying consistency of the results or consistency of the instrument.
Instruments, surveys and interview questionnaires created in the United States often show strong reliability and validity when tested in the United States (i.e., they have been validated), but these measures frequently do not perform well in other cultures. Common validation issues include problems with language (i.e., translation issues) and assumptions that the topic area is the same across cultures (e.g., is anxiety the same everywhere?).
3.02: Research Issues in Cultural Psychology
Research methods are the elements used in a psychological investigation (experiment) to describe and explain psychological phenomena and constructs. Research methods can also be used to predict and control for issues through objective and systematic analysis. Information for psychological research, sometimes called data, can be collected from different sources, such as human participants (e.g., surveys, interviews), animal studies (e.g., learning and behavior) and archival sources (e.g., tweets and social media posts). Research is carried out through experiments, observation, analysis and comparison.
When conducting research within a culture (an indigenous study) or across cultures (a cross-cultural study), many things can go wrong that make collecting, analyzing and interpreting data difficult. This section will review four common methodological issues in cultural research (He, 2010).
• Sampling Bias
• Procedural Bias
• Instrument Bias
• Interpretation Issues
In the United States and other Western countries, it is common to recruit university undergraduate students to participate in psychological research studies. Using samples of convenience from this very thin slice of humanity presents a problem when trying to generalize to the larger public and across cultures. Aside from over-representing young, middle-class Caucasians, college students may also be more compliant and more susceptible to attitude change, have less stable personality traits and interpersonal relationships, and possess stronger cognitive skills than samples reflecting a wider range of age and experience (Peterson & Merunka, 2014; Visser, Krosnick, & Lavrakas, 2000).
Put simply, these traditional samples (college students) may not be sufficiently representative of the broader population. Furthermore, considering that 96% of participants in psychology studies come from western, educated, industrialized, rich, and democratic countries (so-called WEIRD cultures; Henrich, Heine, & Norenzayan, 2010), and that the majority of these are also psychology students, the question of non-representativeness becomes even more serious.
When studying a basic cognitive process (e.g., working memory) or an aspect of social behavior that appears to be fairly universal (e.g., cooperation), a non-representative sample may not be a big deal. Over time, however, research has repeatedly demonstrated the important role that individual differences (e.g., personality traits and cognitive abilities) and culture (e.g., individualism vs. collectivism) play in shaping social behavior.
For instance, even if we only consider a tiny sample of research on aggression, we know that narcissists are more likely to respond to criticism with aggression (Bushman & Baumeister, 1998); conservatives, who have a low tolerance for uncertainty, are more likely to prefer aggressive actions against those considered to be “outsiders” (de Zavala et al., 2010); countries where men hold the bulk of power in society have higher rates of physical aggression directed against female partners (Archer, 2006); and males from the southern part of the United States are more likely to react with aggression following an insult (Cohen et al., 1996).
When conducting research across cultures, it is important to ensure that samples from different cultures are equivalent in order to maintain the validity of the research study (Harzing et al., 2013; Matsumoto & Juang, 2013). A sample of middle-school students in the United States asked about their online shopping experiences may not be representative of middle-school students in Kenya. Even when trying to control for demographic differences, there are some experiences that cannot be separated from culture (Matsumoto & Juang, 2013). For example, being Catholic in the United States does not have the same meaning as being Catholic in Japan or Brazil. Researchers must consider the experiences of the sample in addition to basic demographic information.
3.04: Procedural Bias
Another type of methodological bias is procedural bias, which is sometimes referred to as administration bias. This type of bias is related to the study conditions, including the setting and how the instruments are administered across cultures (He, 2010). The interaction between the research participant and the interviewer is another source of procedural bias that can interfere with cultural comparisons.
Setting
Where the study is conducted can have a major influence on how the data are collected, analyzed and later interpreted. Settings can be small (e.g., a home or community center) or large (e.g., countries or regions) and can influence how a survey is administered or how participants respond. In a large cross-cultural health study, Steels and colleagues (2014) found that the postal system in Vietnam was unreliable, which demanded a major, and unexpected, change in survey methodology; the researchers were forced to use more participants from urban areas than rural areas as a result. Harzing and Reiche (2013) found that their online survey was blocked in China due to the internet censoring practices of the Chinese government, but with minor changes it was later made available for administration.
Instrument Administration
In addition to the setting, how the data are collected (e.g., paper-and-pencil versus online survey) may influence levels of social desirability and response rates. Dwight and Feigelson (2000) completed a meta-analysis of computerized testing and socially desirable responding and found that impression management (one dimension of social desirability) was lower in online assessment. The impact was small, but it has broad implications for how results are interpreted and compared across cultural groups when testing occurs online.
Harzing and Reiche (2013) found that paper/pencil surveys were overwhelmingly preferred by their participants, a sample of international human resource managers, and had much higher response rates when compared to the online survey. It is important to note that online survey response rates were likely higher in Japan and Korea largely because of difficulties in photocopying and mailing paper versions of the survey.
Interviewer and Interviewee Issues
The interviewer effect can easily occur when there are communication problems between interviewers and interviewees, especially when they have different first languages and cultural backgrounds (van de Vijver and Tanzer, 2003). Interviewers who are not familiar with cultural norms and values may unintentionally offend participants or colleagues or compromise the integrity of the study.
An example of the interviewer effect was summarized by Davis and Silver (2003). The researchers found that when answering questions regarding political knowledge, African American respondents got fewer answers right when interviewed by a European American interviewer than by an African American interviewer. Administration conditions that can lead to bias should be taken into consideration before beginning the research and researchers should exercise caution when interpreting and generalizing results.
Using a translator is not a guarantee that interviewer bias will be reduced. Translators may unintentionally change the intent of a question or item by omitting, revising or reducing content. These language changes can alter the intent or nuance of a survey item (Berman, 2011), which in turn alters the answer provided by the participant.
A final type of method bias is called instrument bias. Despite its name, it has little to do with the instrument, survey or test itself; rather, it refers to the participant’s experience and familiarity with test taking. Two main types of instrument bias are discussed in cross-cultural research (He, 2012): familiarity with the type of test (e.g., cognitive versus educational) and familiarity with response methods (e.g., multiple choice or rating scales).
Demetriou and colleagues (2005) describe an example of familiarity with test type in a comparison of Chinese and Greek children on visual-spatial tasks. The researchers found that Chinese children outperformed Greek children, not because of cultural differences in visual-spatial ability, but because learning to write (in all cultures) requires practice and writing in the Chinese language is itself a highly visual-spatial task.
An example of how instrument bias can be reduced comes from a study that included Zambian and British children (Serpell, 1979). The children were asked to reproduce a pattern using several different response methods, including paper-and-pencil, plasticine, configurations of hand positions, and iron wire. The British children scored significantly higher with the paper-and-pencil method, while the Zambian children scored higher when iron wire was used (Serpell, 1979). These results make sense within their cultural contexts: paper-and-pencil testing is a common experience in formal, Western education systems, and making models with iron wire was a popular pastime among Zambian children. By using different response methods (i.e., paper/pencil, iron wire), the researchers were able to separate performance from bias related to response methods.
Another issue related to instrument bias is response bias, which is the systematic tendency to respond in a certain way to items or questions. Many things may lead to response bias, including how survey questions are phrased, the demeanor of the researcher, or the desire of the participant to be a good participant and provide “the right” answers. There are three common types of response bias:
Socially desirable responding (SDR) is the tendency to respond in a way that makes you look good. Studies that examine sensitive topics (e.g., sexuality, sexual behaviors, and mental health) or behaviors that violate social norms (e.g., fetishes, binge drinking, smoking and drug use) are particularly susceptible to SDR.
Acquiescence bias is the tendency to agree rather than disagree with items on a questionnaire. It can also mean agreeing with statements when you are unsure or in doubt. Studies have consistently shown that acquiescence response bias occurs more frequently among participants of low socioeconomic status and among those from collectivistic cultures (Harzing, 2006; Smith & Fischer, 2008). Additionally, work by Ross and Mirowsky (1984) found that Mexicans were more likely to engage in acquiescence and socially desirable responding than European Americans on a survey about mental health.
Extreme response bias is the tendency to use the ends of the scale (all high or all low values) regardless of what the item is asking or measuring. A demonstration of extreme response bias can be found in the work of Hui and Triandis (1989). These authors found that Hispanics tended to choose extremes on a five-point rating scale more often than did European Americans, although no significant cross-cultural differences were found for 10-point scales.
3.06: Interpretation Bias
One problem with cross-cultural studies is that they are vulnerable to ethnocentric bias. This means that the researcher who designs the study might be influenced by personal biases that could affect research outcomes, without even being aware of it. For example, a study on happiness across cultures might investigate the ways that personal freedom is associated with feeling a sense of purpose in life. The researcher might assume that when people are free to choose their own work and leisure, they are more likely to pick options they care deeply about. Unfortunately, this researcher might overlook the fact that in much of the world it is considered important to sacrifice some personal freedom in order to fulfill one’s duty to the group (Triandis, 1995). Because of the danger of this type of bias, cultural psychologists must continue to improve their methodology.
Another problem with cross-cultural studies is that they are susceptible to the cultural attribution fallacy. This happens when the researcher concludes that there are real cultural differences between groups without any actual support for this conclusion. Yoo (2013) explains that if a researcher concludes that two countries differ on a psychological construct because one country is an individualistic (I) culture and the other is a collectivistic (C) culture, without connecting the observed differences to IC, then the researcher has committed a cultural attribution fallacy.
3.07: Summary
As an immensely social species, we affect and influence each other in many ways, particularly through our interactions and cultural expectations, both conscious and unconscious. Cultural psychology examines our thoughts, feelings, and behaviors, including those we are unaware of or ashamed of, both across and within cultures. The desire to study these topics carefully and precisely, together with advances in technology, has led to the development of many creative techniques that allow researchers to explore the mechanics of how we relate to one another.
Research, whether indigenous or cross-cultural, must be conducted ethically. The American Psychological Association (APA) has created a set of common ethical principles and shared standards to guide the professional and scientific responsibilities of psychologists. A major goal of the principles and code of conduct is to educate professionals in psychology, students, colleagues, patients and members of the public about the ethical standards of the field. The ethical principles include:
• Beneficence and non-maleficence
• Fidelity & responsibility
• Integrity
• Justice
• Respect for people’s rights and dignity
Psychological researchers agree that good research is ethical and is guided by a basic respect for human dignity and safety. Unfortunately, this has not always been the case. One notable example of unethical research in the United States was the Tuskegee syphilis study conducted by the US Public Health Service from 1932 to 1972 (Reverby, 2009).
The participants in this study were poor African American men in the vicinity of Tuskegee, Alabama, who were told that they were being treated for “bad blood.” Although they were given some free medical care, they were not told they had syphilis and were not treated for the disease. Instead, they were observed to see how the disease would develop and progress in untreated patients. Even after the use of penicillin became the standard treatment for syphilis in the 1940s, these men continued to be denied treatment and were not given an opportunity to leave the study. The study was eventually discontinued after details were shared with the general public, but its negative consequences persist. Shaver et al. (2000) found that African Americans had less trust in researchers and were less likely to participate in research as a result of the Tuskegee experiment.
Today, any experiment that involves human subjects is governed by extensive, strict guidelines designed to ensure that the experiment does not result in harm to the participants. Any research institution in the United States that receives federal support for research involving human participants must have access to an institutional review board (IRB). The IRB is a committee of individuals often made up of members of the institution’s administration, scientists, and community members.
The purpose of the IRB is to review research proposals that involve human participants. Psychologists must receive prior approval from the IRB before beginning any experiment. Among the most important principles protecting human subjects are:
Informed consent states that people should know when they are involved in research and understand what will happen to them during the study (at least in general terms that do not give away the hypothesis). Researchers conducting cultural or cross-cultural research should consider the possibility that informed consent may not be a custom of the culture and that the concept may not be understood by participants.
Voluntary participation is the choice to participate, along with the freedom to withdraw from the study at any time. Certain kinds of methods—such as naturalistic observation in public spaces, or archival research based on public records—do not require obtaining informed consent. Some cultures may not recognize individual and autonomous choice to participate and involvement may be mandatory. Researchers should be aware of how participation, voluntary or mandatory, may influence participant self-disclosure and responding.
Privacy encompasses two broad concepts, anonymity and confidentiality. Researchers also may not identify individual participants in their research reports. Typically, psychologists and researchers report only group means and other statistics. With online data collection becoming increasingly popular, researchers also have to be mindful that they follow local data privacy laws, collect only the data that they really need (e.g., avoiding including unnecessary questions in surveys), strictly restrict access to the raw data, and have a plan to securely destroy the data after it is no longer needed. Researchers should not assume that the same protections of data privacy extend across cultures.
Risks and benefits are key elements of research ethics: people who agree to participate in psychological studies should be exposed to risk only if they fully understand the risks and only if the likely benefits clearly outweigh those risks. Researchers wishing to investigate implicit prejudice using the Implicit Association Test (IAT) need to consider the consequences of providing feedback to participants about their unconscious biases. Similarly, any manipulations that could potentially provoke serious emotional reactions (such as the culture of honor study described earlier) or relatively permanent changes in people’s beliefs or behaviors (e.g., attitudes towards vaccination) need to be carefully reviewed by the IRB.
Deception refers to the need of some research to deceive participants (e.g., using a cover story) to prevent participants from modifying their behavior in unnatural ways, especially in laboratory or field experiments. In these instances, researchers may hide the true nature of the study.
For example, when Asch recruited participants for his experiments on conformity, he described it as a study of visual spatial skills. Deception is typically only permitted (a) when the benefits of the study outweigh the risks, (b) participants are not reasonably expected to be harmed, (c) the research question cannot be answered without the use of deception, and (d) participants are informed about the deception as soon as possible, usually through debriefing. Deception studies may be approved in one culture or country, but researchers should not assume that deception studies are permissible or will be approved in all cultures.
Debriefing is the process of informing research participants as soon as possible of the purpose of the study, revealing any deceptions, and correcting any misconceptions they might have as a result of participating. Debriefing also involves minimizing harm that might have occurred. For example, an experiment examining the effects of sad moods on charitable behavior might involve inducing a sad mood in participants by having them think sad thoughts, watch a sad video, or listen to sad music. Debriefing would therefore be the time to return participants’ moods to normal by having them think happy thoughts, watch a happy video, or listen to happy music.
3.09: Other Issues in Research
Many psychological processes are considered sensitive topics within specific cultural, social and political contexts. Sensitive topics are anything considered private, stigmatizing or sacred to the participants or culture (McCosker, Barnard and Gerber, 2001). Western cultures tend to be more open about discussing sensitive areas, and researchers should be aware of potential methodological issues and possible consequences for participants who agree to take part in the study.
Sex, sexuality, sexual orientation and HIV/AIDS status are topics that may limit participation; more importantly, participants may face imprisonment or sanctions (formal and informal) for acknowledging thoughts or behaviors that run contrary to cultural norms. Researchers must be aware of the cultural, social and political contexts of the cultures under study in order to maintain the safety of participants while also maintaining ethical research practices. Additionally, researchers should not assume that data privacy and data protections extend to all individuals across all cultures.
3.10: Vocabulary
Acquiescence bias is the tendency to agree rather than disagree with items on a questionnaire. It can also mean agreeing with statements when you are unsure or in doubt
Bias refers to differences that do not have exactly the same meaning within and across cultures.
Cross-cultural studies are those that use standard forms of measurement, such as Likert scales, to compare people from different cultures and identify their differences
Cross-cultural (method) validation is another type of cross-cultural study that establishes whether assessments (e.g., surveys, tests, standard scales) are valid and reliable when used across cultures.
Cultural attribution fallacy happens when the researcher concludes that there are real cultural differences between groups without any actual support for this conclusion.
Debriefing is the process of informing research participants as soon as possible of the purpose of the study
Deception refers to the need of some research to deceive participants (e.g., using a cover story) to prevent participants from modifying their behavior in unnatural ways
Equivalence refers to similarity in conceptual meaning and empirical method between cultures
Extreme response bias is the tendency to use the ends of the scale (all high or all low values) regardless of what the item is asking or measuring.
Indigenous (ethnographic) studies are those in which the scientist spends time observing a culture and conducting interviews.
Informed consent states that people should know when they are involved in research, and understand what will happen to them during the study
Validity of an instrument is another way of saying accuracy. Validity asks whether the instrument (or test) measures what it is supposed to measure
Reliability of an instrument is another way of saying consistency of the results or consistency of the instrument.
Socially desirable responding (SDR) is the tendency to respond in a way that makes you look good.
Voluntary participation is the choice to participate, along with the freedom to withdraw from the study at any time.
It is important to understand that culture is learned. People aren’t born using chopsticks or being good at soccer simply because they have a genetic predisposition for it. They learn to excel at these activities because they are born in countries like Argentina, where playing soccer is an important part of daily life, or in countries like Taiwan, where chopsticks are the primary eating utensils.
So, how are such cultural behaviors learned? It turns out that cultural skills and knowledge are learned in much the same way a person might learn to do algebra or knit. They are acquired through a combination of explicit teaching and implicit learning, by observing and copying.
Cultural teaching can take many forms. As discussed in Chapter 2, social learning occurs when behavior is taught or modeled to another. In child development, cultural shaping begins with caretakers and their young. Caregivers teach kids, both directly and by example, about how to behave and how the world works. They encourage children to be polite, reminding them, for instance, to say “Thank you.” They teach kids how to dress in a way that is appropriate for the culture. They introduce children to religious beliefs and the rituals that go with them. They even teach children how to think and feel! This uniquely human form of learning, where the cultural tools for success are passed from one generation to another, is what is known as enculturation.
Enculturation and the Brain
The topic of genetic influences versus environmental influences on human development is often referred to as the “nature-nurture debate.” It would be satisfying to be able to say that nature–nurture studies have given us conclusive and complete evidence about where traits come from, with some traits clearly resulting from genetics and others almost entirely from environmental factors, such as childrearing practices and personal will; but that is not the case. Instead, everything has turned out to have some footing in genetics. The message is clear: you can’t leave genes out of the equation. Keep in mind, though, that no behavioral traits are completely inherited, so you can’t leave the environment out altogether.
Cultural neuroscience is a field of research that focuses on the interrelation between a human’s cultural environment and neurobiological systems. Our brain interacts with, and in essence learns from, our environment beginning at the moment of conception. Neural patterns form and are continually shaped and reshaped through feedback and interaction with our world. Each person has periods of brain development (typically in early childhood) during which this neurological wiring is acquired more smoothly and quickly than at any other point in development.
Research by Kitayama and Uskul (2011), along with others, has found evidence that developmental windows for pathway wiring in the brain also exist for enculturation. In other words, cultural neuroscience investigates the way our brain is wired for cultural practices, values, and traditions through our early childhood experiences. These pathways form naturally in the brain and are then reinforced through feedback and repetition. Kitayama and Salvador (2017) write that “culture is embrained.”
4.02: Enculturation Agents
Think back to an emotional event you experienced as a child. How did your parents react to you? Did your parents get frustrated or criticize you? Did they act patiently and provide support and guidance? Did your parents provide lots of rules for you or let you make decisions on your own? Why do you think your parents behaved the way they did? Enculturation agents are individuals and institutions that play a role in shaping individual adaptations to a specific culture to better ensure growth and effectiveness.
Parents and caretakers are a primary enculturation agent for their young. Psychologists have attempted to answer questions about the influences on parents and understand why parents behave the way they do. Because parents are critical to a child’s development, a great deal of research has been focused on the impact that parents have on children. Parenting is a complex process in which parents and children influence one another. There are many reasons that parents behave the way they do.
The multiple influences on parenting are still being explored. Both caretakers and their children bring unique personality traits, characteristics, and habits to the parent-child dynamic that ultimately impacts the child’s development. Culture also influences parenting behaviors in fundamental ways. Although promoting the development of skills necessary to function effectively in one’s community is a universal goal of parenting, the specific skills necessary vary widely from culture to culture. Parents have different goals for their children that partially depend on their culture (Tamis-LeMonda et al., 2008).
Differences in caretaking reflect differences in parenting goals, values, resources, and experiences. As previously stated, culture is learned. Regardless of the specific choices parents make, caretakers play a pivotal role in exposing a child to early cultural learning. In fact, many researchers believe that parents and caretakers serve as the single most important enculturation agent in any child’s life.
While some parenting priorities are culturally universal (parents are expected to play a role in nurturing and raising their young), many more childrearing values and habits are culture-specific. Culture-specific influences on caretaking choices can be subtle or overt, and promote a narrative of what parents “ought” to do in order to successfully raise their children. For example, American parents are encouraged to enculturate a sense of independence and assertiveness in children, while Japanese parents prioritize self-control, emotional maturity, and interdependence (Bornstein, 2012). Undoubtedly, every society places expectations on caretakers as enculturation agents to raise their young in ways that promote culture-specific goals and expectations. This chapter will focus on four areas of child development:
• Temperament
• Attachment
• Parenting Styles
• Cognition
In psychology, temperament broadly refers to consistent individual differences in behavior that are biologically based and relatively independent of learning, values and attitudes. Thomas, Chess, Birch, Hertzig and Korn began the classic New York Longitudinal Study of infant temperament in the early 1950s (Thomas, Chess & Birch, 1968). The study focused on how temperamental qualities influence adjustment throughout life and identified nine specific behaviors: activity level, regularity of sleeping and eating patterns, initial reaction, adaptability, intensity of emotion, mood, distractibility, persistence and attention span, and sensory sensitivity. Behavior on each of these traits falls on a continuum, and a child who leans toward the high or low end of a scale could be a cause for concern.
Redundancies among the categories have been found, and a reduced list is normally used by psychologists today. Thomas, Chess, Birch, Hertzig and Korn (Thomas & Chess, 1977) found that many babies could be categorized into one of three groups:
• Easy
• Difficult
• Slow-to-warm-up
Their research showed that easy babies readily adapt to new experiences, generally display positive moods and emotions, and have regular eating and sleeping patterns. Difficult babies tend to be very emotional, irritable and fussy, and cry a lot. They also tend to have irregular eating and sleeping patterns. Slow-to-warm-up babies have a low activity level and tend to withdraw from new situations and people. They are slow to adapt to new experiences, but accept them after repeated exposure.
Not all children can be placed in one of these groups; approximately 65% of children fit one of the patterns. Of that 65%, 40% fit the easy pattern, 10% fell into the difficult pattern, and 15% were slow to warm up. Each category has its own strengths and weaknesses, and no category is superior to another. An important aspect of the research of Thomas and Chess (1977) relates to the interaction of child temperament with caretaker personality and parenting style. They proposed that a “match” between a child’s temperament and the care the child receives enhances the healthy development of self-regulation and the child’s sense of self. This important balance is known as goodness-of-fit.
Temperament and Culture
Thomas, Chess, Birch, Hertzig and Korn found that these broad patterns of temperamental qualities are remarkably stable through childhood and that these traits are found in children across all cultures. Thomas and Chess also studied temperament and environment. One sample consisted of white middle-class families with high educational status and the other of Puerto Rican working-class families. They found several differences: parents of middle-class children were more likely to report behavior problems before the age of nine, and the children had sleep problems. This may be because these children start preschool between the ages of three and four.
De Vries (1974) followed Masai (a tribe in East Africa) infants and mothers for a number of years during a period of famine. The researcher found that Masai infants who were more demanding were more likely to survive during periods of ecological stress than infants who were more docile. De Vries suggested that infants who were more aggressive and demanding, or in temperament terms more difficult, were more likely to be fed and to have their needs met than docile infants who might have been easier to ignore. The findings from these cross-cultural studies of temperament demonstrate that the interaction between ecology, temperament and culture can impact an individual.
Some of the most rewarding experiences in people’s lives involve the development and maintenance of close relationships. Attachment refers to a deep and enduring emotional bond that connects one person to another across time and space. For example, some of the greatest sources of joy involve falling in love, starting a family, being reunited with distant loved ones, and sharing experiences with close others. Not surprisingly, some of the most painful experiences in people’s lives involve the disruption of important social bonds, such as separation from a spouse, losing a parent, or being abandoned by a loved one. Why do close relationships play such a profound role in human experience? Attachment theory is one approach to understanding the nature of close relationships.
Attachment theory was originally developed in the 1940s by John Bowlby, a British psychoanalyst who was attempting to understand the intense distress experienced by infants who had been separated from their parents. Bowlby (1969) observed that infants would go to extraordinary lengths to prevent separation from their parents or to reestablish proximity to a missing parent. For example, he noted that children who had been separated from their parents would often cry, call for their parents, refuse to eat or play, and stand at the door in desperate anticipation of their parents’ return. Drawing on evolutionary theory, Bowlby (1969) argued that these behaviors are adaptive responses to separation from a primary attachment figure—a caregiver who provides support, protection, and care.
It was not until his colleague, Mary Ainsworth, began to systematically study infant–parent separations that a formal understanding of these individual differences emerged. Ainsworth and her students developed a technique called the strange situation, a laboratory task for studying infant–parent attachment (Ainsworth, Blehar, Waters, & Wall, 1978). In the strange situation, 12-month-old infants and their parents are brought to the laboratory and, over a period of approximately 20 minutes, are systematically separated from and reunited with one another. In the strange situation, most children (about 60%) behave in the way implied by Bowlby’s theory. Specifically, they become upset when the parent leaves the room, but, when he or she returns, they actively seek the parent and are easily comforted by him or her. Children who exhibit this pattern of behavior are often called secure.
Other children (about 20% or less) are ill at ease initially and, upon separation, become extremely distressed. Importantly, when reunited with their parents, these children have a difficult time being soothed and often exhibit conflicting behaviors that suggest they want to be comforted, but that they also want to “punish” the parent for leaving. These children are often called anxious-resistant. The third pattern of attachment that Ainsworth and her colleagues documented is often labeled avoidant. Avoidant children (about 20%) do not consistently behave as if they are stressed by the separation but, upon reunion, actively avoid seeking contact with their parent, sometimes turning their attention to play objects on the laboratory floor.
Ainsworth’s work was important for at least three reasons. First, she provided one of the first empirical demonstrations of how attachment behavior is organized in unfamiliar contexts. Second, she provided the first empirical taxonomy of individual differences in infant attachment patterns. According to her research, at least three types of children exist: those who are secure in their relationship with their parents, those who are anxious-resistant, and those who are anxious-avoidant. Finally, she demonstrated that these individual differences were correlated with infant–parent interactions in the home during the first year of life. Children who appear secure in the strange situation, for example, tend to have parents who are responsive to their needs. Children who appear insecure in the strange situation (i.e., anxious-resistant or avoidant) often have parents who are insensitive to their needs, or inconsistent or rejecting in the care they provide.
Attachment and Culture
In the years that have followed Ainsworth’s ground-breaking research, researchers have investigated a variety of factors that may help determine whether children develop secure or insecure relationships with their primary attachment figures. As mentioned above, one of the key determinants of attachment patterns is the history of sensitive and responsive interactions between the caregiver and the child. In short, when the child is uncertain or stressed, the ability of the caregiver to provide support to the child is critical for his or her psychological development. It is assumed that such supportive interactions help the child learn to regulate his or her emotions, give the child the confidence to explore the environment, and provide the child with a safe haven during stressful circumstances.
A number of longitudinal studies are emerging that demonstrate prospective associations between early attachment experiences and adult attachment styles and/or interpersonal functioning in adulthood. For example, Fraley, Roisman, Booth-LaForce, Owen, and Holland (2013) found in a sample of more than 700 individuals studied from infancy to adulthood that maternal sensitivity across development prospectively predicted security at age 18.
Simpson, Collins, Tran, and Haydon (2007) found that attachment security, assessed in infancy in the strange situation, predicted peer competence in grades 1 to 3, which, in turn, predicted the quality of friendship relationships at age 16, which, in turn, predicted the expression of positive and negative emotions in adult romantic relationships at ages 20 to 23.
It is easy to come away from such findings with the mistaken assumption that early experiences “determine” later outcomes. To be clear, attachment theorists assume that the relationship between early experiences and subsequent outcomes is probabilistic, not deterministic. What this means is that having supportive and responsive experiences with caregivers early in life may set the stage for positive social development, but that doesn’t mean that attachment patterns are set in stone. Even if an individual has far from optimal caretaker experiences in early life, attachment theory suggests that it is possible for that individual to develop well-functioning adult relationships through a number of corrective experiences, including relationships with siblings, other family members, teachers, and close friends.
Security is best viewed as a culmination of a person’s attachment history rather than a reflection of his or her early experiences alone. Those early experiences are considered important not because they determine a person’s fate, but because they provide the foundation for later experiences.
It is essential to note that the attachment theory work of Bowlby and Ainsworth focused on Westernized caretaking ideals in their determination of healthy, secure attachment. As previously discussed, what is considered “ideal” in Westernized culture is not necessarily prioritized in other cultures. Sensitivity and caution are required in determining whether observed attachment patterns are adaptive within the context of the child's environment.
As children mature, parent-child relationships naturally change. Preschool and grade-school children are more capable, have their own preferences, and sometimes refuse or seek to compromise with parental expectations. This can lead to greater parent-child conflict, and how conflict is managed by parents further shapes the quality of parent-child relationships. So, what can parents do to nurture a healthy self-concept?
Diana Baumrind (1971, 1991) thinks parenting style may be a factor. The way we parent is an important factor in a child’s socioemotional growth. Baumrind developed and refined a theory describing parenting styles based on two aspects of parenting that are found to be extremely important:
• Parental responsiveness, which refers to the degree to which the parent responds to the child's needs.
• Parental demandingness, which refers to the extent to which the parent expects more mature and responsible behavior from the child.
Using these two dimensions, she recognized three different parenting styles:
Authoritarian (Too Hard): the authoritarian parenting style is characterized by high demandingness with low responsiveness. The authoritarian parent is rigid, harsh, and demanding. Abusive parents usually fall in this category (although Baumrind is careful to emphasize that not all authoritarian parents are abusive).
Permissive (Too Soft): this parenting style is characterized by low demandingness with high responsiveness. The permissive parent is overly responsive to the child’s demands, seldom enforcing consistent rules. The “spoiled” child often has permissive parents.
Authoritative (Just Right): this parenting style is characterized by high demandingness with high responsiveness. The authoritative parent is firm but not rigid, willing to make an exception when the situation warrants. The authoritative parent is responsive to the child's needs but not indulgent. Baumrind makes it clear that she favors the authoritative style.
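To see how the two dimensions combine into the styles described above, here is a minimal sketch in Python. It is only an illustration of the classification in this section; the function name and the simple "high"/"low" labels are assumptions made for the example, and the fourth possible combination (low demandingness with low responsiveness) is not one of the styles covered in this chapter.

```python
def parenting_style(demandingness, responsiveness):
    """Map Baumrind's two dimensions (each "high" or "low") to the styles above."""
    if demandingness == "high" and responsiveness == "low":
        return "authoritarian (too hard)"
    if demandingness == "low" and responsiveness == "high":
        return "permissive (too soft)"
    if demandingness == "high" and responsiveness == "high":
        return "authoritative (just right)"
    return "not covered in this chapter"

print(parenting_style("high", "high"))  # authoritative (just right)
print(parenting_style("high", "low"))   # authoritarian (too hard)
```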
Parenting Styles and Culture
Of these parenting styles, the authoritative style is the one that is most encouraged in modern American society. American children raised by authoritative parents tend to have high self-esteem and social skills; however, effective parenting styles vary as a function of culture and, as Small (1999) points out, the authoritative style is not necessarily preferred or appropriate in all cultures. In contrast to authoritative parents, authoritarian parents probably would not relax bedtime rules during a vacation because they consider the rules to be set, and they expect obedience. This style can create anxious, withdrawn, and unhappy kids. It is important to point out that authoritarian parenting is as beneficial as the authoritative style in some ethnic groups (Russell, Crockett, & Chao, 2010). For instance, first-generation Chinese American children raised by authoritarian parents did just as well in school as their peers who were raised by authoritative parents (Russell et al., 2010). Not surprisingly, children raised by permissive parents tend to lack self-discipline, and the permissive parenting style is negatively associated with grades (Dornbusch, Ritter, Leiderman, Roberts, & Fraleigh, 1987).
The permissive style may also contribute to other risky behaviors such as alcohol abuse (Bahr & Hoffman, 2010), risky sexual behavior especially among female children (Donenberg, Wilson, Emerson, & Bryant, 2002), and increased display of disruptive behaviors by male children (Parent et al., 2011). There are, however, some positive outcomes associated with children raised by permissive parents: they tend to have higher self-esteem, better social skills, and report lower levels of depression (Darling, 1999).
By the time you reach adulthood you have learned a few things about how the world works. You know, for instance, that you can’t walk through walls or leap into the tops of trees. You know that although you cannot see your car keys they’ve got to be around here someplace. What’s more, you know that if you want to communicate complex ideas like ordering a triple-shot soy vanilla latte with chocolate sprinkles it’s better to use words with meanings attached to them rather than simply gesturing and grunting. People accumulate all this useful knowledge through the process of cognitive development, which involves a multitude of factors, both inherent and learned.
Stage theories of development, such as Piaget’s stage theory, focus on whether children progress through qualitatively different stages of development. Sociocultural theories, such as that of Lev Vygotsky, emphasize how other people and the attitudes, values, and beliefs of the surrounding culture, influence children’s development.
Swiss psychologist Jean Piaget proposed that children’s thinking progresses through a series of four discrete stages. By stages he meant periods during which children reasoned in the same way about many superficially different problems, with the stages occurring in a fixed order and the thinking within different stages differing in fundamental ways. The four stages that Piaget hypothesized were the sensorimotor stage (birth to 2 years), the preoperational reasoning stage (2 to 6 or 7 years), the concrete operational reasoning stage (6 or 7 to 11 or 12 years), and the formal operational reasoning stage (11 or 12 years and throughout the rest of life).
During the sensorimotor stage, children's thinking is largely realized through their perceptions of the world and their physical interactions with it. Their mental representations are very limited. Consider Piaget's object permanence task, which is one of his most famous problems. If an infant younger than 9 months of age is playing with a favorite toy, and another person removes the toy from view, for example by putting it under an opaque cover and not letting the infant immediately reach for it, the infant is very likely to make no effort to retrieve it and to show no emotional distress (Piaget, 1954). This is not due to their being uninterested in the toy or unable to reach for it; if the same toy is put under a clear cover, infants below 9 months readily retrieve it (Munakata, McClelland, Johnson, & Siegler, 1997). Instead, Piaget claimed that infants less than 9 months do not yet understand that objects continue to exist when out of sight; this understanding is called object permanence.
During the preoperational stage, according to Piaget, children can solve not only this simple problem (which they actually can solve after 9 months) but show a wide variety of other symbolic-representation capabilities, such as those involved in drawing and using language. However, such 2- to 7-year-olds tend to focus on a single dimension, even when solving problems would require them to consider multiple dimensions. This is evident in Piaget’s (1952) conservation problems. For example, if a glass of water is poured into a taller, thinner glass, children below age 7 generally say that there now is more water than before. Similarly, if a clay ball is reshaped into a long, thin sausage, they claim that there is now more clay, and if a row of coins is spread out, they claim that there are now more coins. In all cases, the children are focusing on one dimension, while ignoring the changes in other dimensions (for example, the greater width of the glass and the clay ball).
Children overcome this tendency to focus on a single dimension during the concrete operations stage, and think logically in most situations. However, according to Piaget, they still cannot think in systematic scientific ways, even when such thinking would be useful. Thus, if asked to find out which variables influence the period that a pendulum takes to complete its arc, and given weights that they can attach to strings in order to do experiments with the pendulum to find out, most children younger than age 12 perform biased experiments from which no conclusion can be drawn, and then conclude that whatever they originally believed is correct. For example, if a boy believed that weight was the only variable that mattered, he might put the heaviest weight on the shortest string and push it the hardest, and then conclude that just as he thought, weight is the only variable that matters (Inhelder & Piaget, 1958).
Finally, in the formal operations period, children attain the reasoning power of mature adults, which allows them to solve the pendulum problem and a wide range of other problems. The formal operations stage tends not to occur without exposure to formal education in scientific reasoning, and appears to be largely or completely absent from some societies that do not provide this type of education.
Cognitive Development and Culture
Although Piaget’s theory has been very influential, it has not gone unchallenged. Recent research indicates that cognitive development is considerably more continuous than Piaget claimed. For example, Diamond (1985) found that on the object permanence task described above, infants show earlier knowledge if the waiting period is shorter. At age 6 months, they retrieve the hidden object if the wait is no longer than 2 seconds; at 7 months, they retrieve it if the wait is no longer than 4 seconds; and so on. Even earlier, at 3 or 4 months, infants show surprise in the form of longer looking times if objects suddenly appear to vanish with no obvious cause (Baillargeon, 1987).
Similarly, children's specific experiences can greatly influence when developmental changes occur. Children of pottery makers in Mexican villages, for example, know that reshaping clay does not change the amount of clay at much younger ages than children who do not have similar experiences (Price-Williams, Gordon, & Ramirez, 1969). In a study of tribal children (the Inuit of Canada, the Baoul of Africa, and the Aranda of Australia), researchers found differences in the ages at which children reached certain stages and acquired certain skills (Dasen, 1975). About 50% of the Inuit children solved a visual spatial task by the age of 7, and 50% of the Aranda children solved the same task by the age of 9; however, 50% of the Baoul children did not solve the task until the age of 12. On a conservation task, the pattern of skill acquisition reversed. The differences seem related to the living environments of the children – the Baoul children lived in permanent settlements, while the Inuit and Aranda tribes are nomadic. The demands of daily life shape cognitive development, and different societies value and reward different skills and behaviors.
A main figure whose ideas contradicted Piaget's was the Russian psychologist Lev Vygotsky. Vygotsky stressed the importance of a child's cultural background as an influence on the stages of development. Because different cultures stress different social interactions, this challenged Piaget's theory that the hierarchy of learning development had to unfold in a fixed succession. Vygotsky introduced the term zone of proximal development to describe the range of tasks that a child cannot yet accomplish alone but can accomplish with guidance from a more skilled partner.
Overall, Piaget’s theories are widely recognized as making key contributions to the field of child development and helped pave the way for further empirical study. Cross-cultural testing has challenged many of his ideas, but the overall hierarchy of stages and sub-stages in cognitive development appears to be universal. Timing, ages, and capabilities during each stage appear to vary according to cultural context and enculturation patterns.
Ecological Systems Theory
Contextual characteristics, such as the neighborhood, school, and social networks, also affect enculturation, even though these settings don't always include both the child and the parent (Bronfenbrenner, 1989). For example, Latina mothers who perceived the neighborhood as more dangerous showed less warmth with their children, perhaps because of the greater stress associated with living in a threatening environment (Gonzales et al., 2011). Urie Bronfenbrenner was a Russian-born American developmental psychologist who is known for his ecological systems theory of child development. His scientific work and his assistance to the United States government helped in the formation of the Head Start program in 1965.
Bronfenbrenner’s research and his theory was key in changing the perspective of developmental psychology by calling attention to the large number of environmental and societal influences on child development. Bronfenbrenner saw the process of human development as being shaped by the interaction between an individual and his or her environment. The specific path of development was a result of the influences of a person’s surroundings, such as their parents, friends, school, work, culture, and so on.
According to Melvin L. Kohn, a sociologist from Johns Hopkins University, Bronfenbrenner was critical in making social scientists realize that, “…interpersonal relationships, even [at] the smallest level of the parent-child relationship, did not exist in a social vacuum but were embedded in the larger social structures of community, society, economics and politics.”
Peer and Sibling Relationships
Parent-child relationships are not the only significant relationships in a child’s life. Peer relationships are also important. Social interaction with another child who is similar in age, skills, and knowledge provokes the development of many social skills that are valuable for the rest of life (Bukowski, Buhrmester, & Underwood, 2011). In peer relationships, children learn how to initiate and maintain social interactions with other children. They learn skills for managing conflict, such as turn-taking, compromise, and bargaining. Play also involves the mutual, sometimes complex, coordination of goals, actions, and understanding. For example, as infants, children get their first encounter with sharing (of each other’s toys); during pretend play as preschoolers they create narratives together, choose roles, and collaborate to act out their stories; and in primary school, they may join a sports team, learning to work together and support each other emotionally and strategically toward a common goal. Through these experiences, children develop friendships that provide additional sources of security and support to those provided by their parents.
Peer relationships can be challenging as well as supportive (Rubin, Coplan, Chen, Bowker, & McDonald, 2011). Being accepted by other children is an important source of affirmation and self-esteem, but peer rejection can foreshadow later behavior problems (especially when children are rejected due to aggressive behavior). With increasing age, children confront the challenges of bullying, peer victimization, and managing conformity pressures. Social comparison with peers is an important means by which children evaluate their skills, knowledge, and personal qualities, but it may cause them to feel that they do not measure up well against others. Also, with the approach of adolescence, peer relationships become focused on psychological intimacy, involving personal disclosure, vulnerability, and loyalty (or its betrayal), which significantly affects a child's outlook on the world. Each of these aspects of peer relationships requires developing very different social and emotional skills than those that emerge in parent-child relationships. They also illustrate the many ways that peer relationships influence the growth of personality and self-concept.
Education
As previously stated, caretakers serve as the primary enculturation agents of their young in any given society. If parents are the most important enculturation agents, education is likely the second most important source of enculturation for any child.
Education is the process of facilitating learning, or the acquisition of knowledge, skills, values, beliefs, and habits. Educational methods include storytelling, discussion, teaching, training, and directed research. Education frequently takes place under the guidance of educators, but learners may also educate themselves. Education can take place in formal or informal settings and any experience that has a formative effect on the way one thinks, feels, or acts may be considered educational.
Culture, education, and society are all interconnected concepts that work to enculturate a child. Education models are based on the ideals and principles of a society, and its educated citizens then go on to become influencers of that society's culture. Researchers Markus and Kitayama (2010) refer to the systemic influence of culture and self as mutual constitution, and point out that individuals are simultaneously being shaped by culture while also influencing it. Regardless of the method of education, schooling serves as both a reflection of the priorities and values of a society and a means of enculturating its young to contribute to that culture. For example, core American values of competition, choice, and independence can be seen in the way we structure our formal education system: parents and students often expect to play a role in choosing curriculum and classes, and competition for academic and athletic status is expected within schools. Similarly, in the United States the government dictates larger educational goals and resources, while state and local districts pass mandates based on the democratic wishes of the larger society. In contrast to Westernized systems of education, many East Asian education models are based on uniform standards of academic rigor, student conformity, and respect for authority.
4.08: Summary
The process of human development and enculturation is complex. Our caretakers and method of schooling serve as two of the most important enculturation agents during childhood. Differences in childrearing choices, traditions, and expectations reflect differences in values and priorities. Developmental factors of goodness-of-fit, attachment, parenting styles, and cognition work to shape a child's physical and psychological health in culturally diverse ways. There are universal and biological factors (temperament and intelligence), as well as culturally specific factors, that influence our relationships in adulthood. Many of the theories discussed in the chapter are rooted in a Western paradigm of what is “best” and “appropriate.” It is important to identify situations and contexts where we might react in ethnocentric ways to parenting choices and styles.
4.09: Vocabulary
Attachment refers to a deep and enduring emotional bond that connects one person to another across time and space.
Authoritarian parenting style is characterized by high demandingness with low responsiveness
Authoritative parenting style is characterized by high demandingness with high responsiveness
Concrete operations stage is the stage in which children overcome the tendency to focus on a single dimension and think logically in most situations, but cannot yet think in systematic scientific ways
Cultural neuroscience is a field of research that focuses on the interrelation between a human’s cultural environment and neurobiological systems
Enculturation describes the uniquely human form of learning that is taught by one generation to another.
Enculturation agents are individuals and institutions that serve a role in shaping individual adaptions to a specific culture to better ensure growth and effectiveness
Formal operations period is the period in which children attain the reasoning power of mature adults
Goodness-of-fit refers to the interaction of child temperament with caretaker personality and parenting style
Neuroplasticity is the ability of the brain to change throughout an individual’s life.
Parental responsiveness refers to the degree to which the parent responds to the child's needs.
Parental demandingness is the extent to which the parent expects more mature and responsible behavior from the child.
Permissive parenting style is characterized by low demandingness with high responsiveness.
Preoperational stage is the stage in which children show a wide variety of symbolic-representation capabilities, such as drawing and using language, but tend to focus on a single dimension when solving problems
Sensitive Period of Development describes a window of opportunity where experiences have a greater impact on certain areas of brain development.
Sensorimotor stage children’s thinking is largely realized through their perceptions of the world and their physical interactions with it
Strange situation is a laboratory task for studying infant–parent attachment
Temperament broadly refers to consistent individual differences in behavior that are biologically based and are relatively independent of learning
Before we can understand how the brain reconstructs our world using mental schemas it’s critical to learn how information from the world is first sensed and perceived.
Sensation
The physical process during which our sensory organs (e.g., eyes, ears, nose among others) respond to external stimuli is called sensation. Sensation happens when you eat noodles or feel the wind on your face or hear a car horn honking in the distance. During sensation, our sense organs are engaging in transduction, the conversion of one form of energy into another. For example, physical energy such as light or a sound wave is converted into a form of electrical energy that the brain can understand.
Perception
After our brain receives the electrical signals we make sense of all this stimulation and begin to appreciate the complex world around us. This psychological process, making sense of the stimuli, is called perception. It is during this process that you are able to identify a gas leak in your home, recognize the color orange, or connect a song that reminds you of a specific afternoon spent with friends. Perception is the process of interpreting and organizing the information that we received from our senses.
Our experience influences how our brain processes information. You have tasted food that you like and food that you don't like. There are some bands you enjoy and others you can't stand. When you eat something new or hear a new band, you process those stimuli using bottom-up processing. This is when we build up to perception from the individual pieces.
Sometimes stimuli we’ve experienced in our past will influence how we process new ones. This is called top-down processing. The best way to illustrate these two concepts is with our ability to read.
Read the following quote out loud (the original figure showed a short phrase printed inside a triangle, with the word “the” appearing twice in a row):
Notice anything odd while you were reading the text in the triangle? Did you notice the second “the”? If not, it’s likely because you were reading this from a top-down approach. Having a second “the” doesn’t make sense. We know this and our brain knows this and doesn’t expect there to be a second one, so we tend to skip right over it. In other words, your past experience has changed the way you perceive the writing in the triangle. Someone who is just learning to read is using a bottom-up approach by carefully attending to each piece and would be less likely to make this error.
5.02: Attention
Attention is the behavioral and cognitive process of selectively concentrating on one thing in our environment while ignoring other distractions. Attention is a limited resource. This means that your brain can only devote attention to a limited number of stimuli (things in the environment). Despite what you may believe, we are terrible multi-taskers. Research shows that when multitasking, people make more mistakes or perform their tasks more slowly. Each task increases cognitive load (the amount of information our brain has to process) and our attention must be divided among all of the tasks to perform them. This is why it takes more time to finish something when we are multitasking.
Many aspects of attention have been studied in the field of psychology. In some respects, we define different types of attention by the nature of the task used to study it. For example, a crucial issue in World War II was how long an individual could remain highly alert and accurate while watching a radar screen for enemy planes, and this problem led psychologists to study how attention works under such conditions. Research found that when watching for a rare event, it is easy to allow concentration to lag. This continues to be a challenge today for TSA agents, charged with looking at images of the contents of your carry-on luggage in search of knives, guns, or shampoo bottles larger than 3 oz.
Culture can also influence and shape how we attend to the world around us. Masuda and Nisbett (2001) asked American and Japanese students to describe what they saw in images of underwater scenes. They found that while both groups talked about the most salient objects (the fish, which were brightly colored and swimming around), the Japanese students also tended to talk and remember more about the objects in the background (they remembered the frog and the plants as well as the fish).
North Americans and Western Europeans in these types of studies were more likely to pay attention to salient and central parts of the pictures, while Japanese, Chinese, and South Koreans were more likely to consider the context as a whole. The researchers described this as holistic perception and analytic perception.
Holistic Perception: A pattern or perception characterized by processing information as a whole. This pattern makes it more likely to pay attention to relationships among all elements. Holistic perception promotes holistic cognition: a tendency to understand the gist, the big idea, or the general meaning. Eastern medicine is traditionally holistic; it emphasizes health in general terms as the result of the connection and balance between mind, body, and spirit.
Analytic Perception: A pattern of perception characterized by processing information as a sum of the parts. Analytic perception promotes analytic thinking: a tendency to understand the parts and details of a system. This pattern makes it more likely to pay attention and remember salient, central, and individual elements. Western medicine is traditionally analytic; it emphasizes specialized subdisciplines and it focuses on individual symptoms and body parts.
How we attend and perceive our world has implications for how we evaluate and explain the world around us, including the actions of others. We will talk more about this concept, known as attributions, later in the chapter.
The culture that we live in has a significant impact on the way we think about and perceive our social worlds, so it is not surprising that people in different cultures would think about people and things somewhat differently. Social cognitions are the way we think about others, pay attention to social information, and use the information in our lives (consciously or unconsciously). In this section we will review several types of social cognitions including schemas, attributions, confirmation bias and the fundamental attribution error. We will also revisit analytic perception and holistic perception that we learned earlier in this chapter.
5.04: Schema
Through the process of cognitive development, we accumulate a lot of knowledge and this knowledge is stored in the form of schemas, which are knowledge representations that include information about a person, group, or situation. Because they represent our past experience, and because past experience is useful for prediction, our schemas influence our expectations about future events and people.
When a schema is activated it brings to mind other related information. This process is usually unconscious, or happens outside of our awareness. Through schema activation, judgments are formed based on internal assumptions (bias) in addition to information actually available in the environment. When a schema is more accessible it can be activated more quickly and used in a particular situation. For example, if there is one female in a group of seven males, female gender schemas may be more accessible and influence the group’s thinking and behavior toward the female group member. Watching a scary movie late at night might increase the accessibility of frightening schemas, increasing the likelihood that a person will perceive shadows and background noises as potential threats.
Once they have developed, schemas influence our subsequent learning, such that the new people and situations we encounter are interpreted and understood in terms of our existing knowledge (Piaget & Inhelder, 1962; Taylor & Crocker, 1981). When existing schemas change on the basis of new information, we call the process accommodation. In other cases, however, we engage in assimilation, a process in which we interpret new, conflicting information so that it fits our existing knowledge, thus reducing the likelihood of schema change. You may remember these concepts from Chapter 4 when we learned about Piaget's theory of cognitive development.
Psychologists have become increasingly interested in the influence of culture on social cognition and schemas. Although people of all cultures use schemas to understand the world, the content of our schemas has been found to differ for individuals based on their cultural upbringing. For example, one study interviewed a Scottish settler and a Bantu herdsman from Swaziland and compared their schemas about cattle. Because cattle are essential to the lifestyle of the Bantu people, the Bantu herdsman's schemas for cattle were far more extensive than the schemas of the Scottish settler. The Bantu herdsman was able to distinguish his cattle from dozens of others, while the Scottish settler was not.
One outcome of assimilation that shapes our schemas is confirmation bias, the tendency for people to seek out and favor information that confirms their expectations and beliefs, which in turn helps to explain the often self-fulfilling nature of our schemas. The confirmation bias has been shown to occur in many contexts and groups, although there is some evidence of cultural differences in its extent and prevalence. Kastenmuller and colleagues (2010), for instance, found that the bias was stronger among people with individualist (e.g., the United States, Canada, and Australia) versus collectivist (e.g., Japan, China, Taiwan, Korea, and India, among others) cultural backgrounds. The researchers argued that this partly stemmed from collectivist cultures placing greater importance on being self-critical, which is less compatible with seeking out confirming as opposed to disconfirming evidence.
Psychologists who study social cognition believe that behavior is the product of the situation (e.g., role, culture, other people around) and the person (e.g., temperament, personality, health, motivation). Attributions are beliefs that a person develops to explain human behaviors, characteristics, and situations. This means that we try to explain or draw conclusions about the causes of our own behavior and others' behavior. Internal attributions are dispositional (e.g., traits, abilities, feelings), and external attributions are situational (e.g., things in the environment). Our attributions are frequently biased. One way that our attributions may be biased is that we are often too quick to attribute the behavior of other people to something personal about them rather than to something about their situation. This is a classic example of the general human tendency to underestimate how important the social situation really is in determining behavior. The fundamental attribution error (FAE) is the tendency to overestimate the degree to which the characteristics of an individual are the cause of an event, and to underestimate the involvement of situational factors. The FAE is considered to be universal, but cultural differences may explain how and when it occurs.
Attributions and Culture
On average, people from individualistic cultures tend to focus their internal attributions more on the individual person, whereas, people from collectivistic cultures tend to focus more on the situation (Ji, Peng, & Nisbett, 2000; Lewis, Goto, & Kong, 2008; Maddux & Yuki, 2006). Miller (1984) asked children and adults in both India (a collectivistic culture) and the United States (an individualist culture) to indicate the causes of negative actions by other people. Although the younger children (ages 8 and 11) did not differ, the older children (age 15) and the adults did. Americans made more dispositional attributions, whereas Indians made more situational attributions for the same behavior.
Morris and his colleagues (Hong, Morris, Chiu, & Benet-Martínez, 2000) investigated the role of culture on person perception in a different way, by focusing on people who are bicultural (i.e., who have knowledge about two different cultures). In their research, they used high school students living in Hong Kong. Although traditional Chinese values are emphasized in Hong Kong, because Hong Kong was a British-administered territory for more than a century, the students there are also enculturated with Western social beliefs and values.
Morris and his colleagues first randomly assigned the students to one of three priming conditions. Participants in the American culture priming condition saw pictures of American icons (such as the U.S. Capitol building and the American flag) and then wrote 10 sentences about American culture. Participants in the Chinese culture priming condition saw eight Chinese icons (such as a Chinese dragon and the Great Wall of China) and then wrote 10 sentences about Chinese culture. Finally, participants in the control condition saw pictures of natural landscapes and wrote 10 sentences about the landscapes.
Then participants in all conditions read a story about an overweight boy who was advised by a physician not to eat food with high sugar content. One day, he and his friends went to a buffet dinner where a delicious-looking cake was offered. Despite its high sugar content, he ate it. After reading the story, the participants were asked to indicate the extent to which the boy’s weight problem was caused by his personality (personal attribution) or by the situation (situational attribution). The students who had been primed with symbols about American culture gave relatively less weight to situational (rather than personal) factors in comparison with students who had been primed with symbols of Chinese culture.
In still another test of cultural differences in person perception, Kim and Markus (1999) analyzed the statements made by athletes and by the news media regarding the winners of medals in the 2000 and 2002 Olympic Games. They found that athletes in China described themselves more in terms of the situation (they talked about the importance of their coaches, their managers, and the spectators in helping them to do well), whereas American athletes (can you guess?) focused on themselves, emphasizing their own strength, determination, and focus.
Most people tend to use the same basic perception processes, but given the cultural differences in group interconnectedness (individualistic versus collectivist), as well as differences in attending (analytic versus holistic), it should come as no surprise that people who live in collectivistic cultures tend to show the fundamental attribution error less often than those from individualistic cultures, particularly when the situational causes of behavior are made salient (Choi, Nisbett, & Norenzayan, 1999). Biased attributions can lead to negative stereotyping and discrimination, but being more aware of these cross-cultural differences in attribution may reduce cultural misunderstandings and misinterpretations of behavior.
Memory is a single term that reflects a number of different abilities: holding information briefly while working with it (working memory), remembering episodes of one's life, and retaining general knowledge of facts about the world, among other types. Memory involves three processes:
• Encoding information – attending to information and relating it to past learning
• Storing – maintaining information over time
• Retrieving – accessing the information when you need it
The information processing model of memory is a useful way to represent how information from the world is integrated with the knowledge networks of information that already exist in our minds.
Sensory Memory is the part of the memory system in which information is translated from physical energy into neural signals. This is part of the encoding process. We receive information from our environment and we must perceive it and attend to it before it can move to our working memory.
Short-Term Memory (working memory) is the part of the memory system in which information can be temporarily stored in the present state of awareness. This type of memory is limited to 7 items of capacity and 7 to 30 seconds of duration on average.
Long-Term Memory is the part of the memory system in which information can be permanently stored for an extended period of time. It has a large to unlimited capacity and a duration that may last from minutes to a lifetime.
Semantic Memory is the type of long-term memory about general facts, ideas, or concepts that are not associated to emotions and personal experience.
Episodic Memory is a type of long-term memory about events taking place at a specific time and place in a person’s life. This memory is contextualized (i.e., where, who, when, why) in relation to events and what they mean emotionally to an individual.
Memory failures can occur at any stage, leading to forgetting or to having false memories. The key to improving one’s memory is to improve processes of encoding and to use techniques that guarantee effective retrieval. Good encoding techniques include relating new information to what one already knows, forming mental images, and creating associations among information that needs to be remembered. The key to good retrieval is developing effective cues that will lead the person back to the encoded information. Classic mnemonic systems can greatly improve one’s memory abilities.
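As a rough way to picture how the three processes and the memory stores described above fit together, here is a short sketch in Python. It is a loose programming analogy rather than a model psychologists actually use; the class name, the idea of a retrieval "cue" key, and the choice to cap working memory at 7 items (the approximate capacity noted above) are illustrative assumptions.

```python
from collections import deque

class MemorySketch:
    """A loose analogy for the information processing model described above."""

    def __init__(self, capacity=7):
        self.working_memory = deque(maxlen=capacity)  # oldest items are displaced when full
        self.long_term_memory = {}                    # retrieval cue -> stored information

    def encode(self, stimulus):
        """Attend to a stimulus and place it in working memory."""
        self.working_memory.append(stimulus)

    def store(self, cue, stimulus):
        """Rehearse/elaborate an item in working memory so it enters long-term memory."""
        if stimulus in self.working_memory:
            self.long_term_memory[cue] = stimulus

    def retrieve(self, cue):
        """Use a retrieval cue to access stored information (None if no effective cue)."""
        return self.long_term_memory.get(cue)

memory = MemorySketch()
memory.encode("professor's name: Dr. Rivera")
memory.store(cue="intro psych professor", stimulus="professor's name: Dr. Rivera")
print(memory.retrieve("intro psych professor"))  # -> "professor's name: Dr. Rivera"
print(memory.retrieve("chemistry professor"))    # -> None (forgetting: no matching cue)
```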
Memory and Culture
It should be obvious, after learning about episodic memory, that many of our memories are personal and unique to us, but cultural psychologists and researchers have found that the average age of first memories varies by up to two years between different cultures. Researchers believe that enculturation and cultural values influence childhood memories. For example, the way parents and other adults discuss, or don't discuss, the events in children's lives influences the way the children will later remember those events.
Mullen (1994) found that Asian and Asian-American undergraduates' first memories, on average, happened six months later than the Caucasian students' memories. These results were repeated in a sample of native Korean participants, only this time the differences were even larger: the difference between Caucasian participants and native Korean participants was almost 16 months. Hayne (2000) also found that Asian adults' first memories were later than Caucasians', but Maori adults' (the native population of New Zealand) memories reached even further back, to around age three. These results do not mean that Caucasians or Maoris have better memories than Asians, but rather that people have the types of memories they need to get along well in the world they inhabit – memories exist within a cultural context. For example, Maori culture is focused on personal history and stories to a greater degree than American or Asian cultures. Differences in memory could also be explained by the values of individualistic and collectivist cultures. Individualistic cultures tend to be independently oriented, with an emphasis on standing out and being unique. Interpersonal harmony and making the group work is the emphasis of collectivist cultures, and the way in which people connect to each other is less often through sharing memories of personal events. In some cultures, personal memory isn't nearly as important as it is to people from individualistic cultures.
5.07: Thinking and Intelligence
The way we represent the world influences the degree of success we experience in our lives. For example, if we represent yellow traffic lights as the time to hit the accelerator, then the world might give us tickets, scares, or accidents. If we represent our diet as a way to maximize refined sugar intake, then we might wind up experiencing heart disease. Mental representations and intelligence go hand in hand. Some mental representations are more intelligent, because they are more adaptive and support outcomes such as well-being, safety, and success. In this section we are going to cover other elements of thinking like categorization, memory, and intelligence and how culture shapes these processes.
The information we sense and perceive is continuously organized and reorganized into concepts that belong to categories. Most concepts cannot be strictly defined but are organized around the best examples or prototypes, which have the properties most common in the category or might be considered the ideal example of a category.
Concepts are at the core of intelligent behavior. We expect people to be able to know what to do in new situations and when confronting new objects. If you go into a new classroom and see chairs, a blackboard, a projector, and a screen, you know what these things are and how they will be used. You’ll sit on one of the chairs and expect the instructor to write on the blackboard or project something onto the screen. You’ll do this even if you have never seen any of these particular objects before, because you have concepts of classrooms, chairs, projectors, and so forth that tell you what they are and what you’re supposed to do with them.
Objects fall into many different categories, but there is usually a hierarchy to help us organize our mental representations.
• A concept at the superordinate level of categories is at the top of a taxonomy and it has a high degree of generality (e.g., animal, fruit).
• A concept at the basic level of categories is found at the generic level, which captures the most salient differences between categories (e.g., dog, apple).
• A concept at the subordinate level of categories is highly specific and has little generality (e.g., Labrador retriever, Gala apple).
Brown (1958) noted that children use basic level categories when first learning language and superordinates are especially difficult for children to fully acquire. People are faster at identifying objects as members of basic-level categories (Rosch et al., 1976). Recent research suggests that there are different ways to learn and represent concepts and that they are accomplished by different neural systems. Using our earlier example of classroom, if someone tells you a new fact about the projector, like it uses a halogen bulb, you are likely to extend this fact to other projectors you encounter. In short, concepts allow you to extend what you have learned about a limited number of objects to a potentially infinite set of events and possibilities.
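To make the superordinate–basic–subordinate hierarchy concrete, here is a minimal sketch that represents a taxonomy as nested groupings. It is only an illustration of the structure described above; the nested-dictionary representation and the extra examples (canary, Valencia orange) are assumptions added for the sake of the example.

```python
# A minimal sketch of a category taxonomy, using the examples from the text.
# Superordinate level -> basic level -> subordinate level.
taxonomy = {
    "animal": {                          # superordinate: very general
        "dog": ["Labrador retriever"],   # basic level -> subordinate examples
        "bird": ["canary"],
    },
    "fruit": {
        "apple": ["Gala"],
        "orange": ["Valencia"],
    },
}

def level_of(concept):
    """Return the hierarchical level of a concept, if it appears in the taxonomy."""
    for superordinate, basics in taxonomy.items():
        if concept == superordinate:
            return "superordinate"
        for basic, subordinates in basics.items():
            if concept == basic:
                return "basic"
            if concept in subordinates:
                return "subordinate"
    return "unknown"

print(level_of("fruit"))               # superordinate
print(level_of("apple"))               # basic
print(level_of("Labrador retriever"))  # subordinate
```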
Categorization and Culture
There are some universal categories, like emotions, facial expressions, shape, and color, but culture can shape how we organize information. Chiu (1972) was the first to examine cultural differences in categorization, using Chinese and American children. Participants were presented with three pictures (e.g., a tire, a car, and a bus), and were asked to group the two pictures they thought best belonged together. Participants were also asked to explain their choices (e.g., “Because they are both large”). Results showed that the Chinese children had a greater tendency to categorize by identifying relationships among the pictures, but American children were more likely to categorize by identifying similarities among pictures.
Later research reported no cultural differences in categorization between Western and East Asian participants; however, among similarity categorizations, East Asian participants were more likely to base decisions on holistic aspects of the images, and Western participants were more likely to base decisions on individual components of the images (Norenzayan, Smith, Jun Kim, & Nisbett, 2002). Cultural differences in categorizing were also found by Unsworth, Sears, and Pexman (2005) across three experiments; however, when the experimental task was timed, there were differences in category selection. These results suggest that the nature (timed or untimed) of the categorization task determines the extent to which cultural differences are observed.
The results of these categorization studies seem to support the differences in thinking between individualist and collectivist cultures. Western cultures are more individualist and engage in more analytic thinking, and East Asian cultures engage in more holistic thinking (Choi, Nisbett, & Smith, 1997; Masuda & Nisbett, 2001; Nisbett et al., 2001; Peng & Nisbett, 1999). You might remember from the earlier section that holistic thought is characterized by a focus on context and environmental factors, so categorizing by relationships can be explained by referencing how objects relate to their environment. Analytic thought is characterized by the separation of an object from its context, so categorizing by similarity means that objects can be separated into different groups. A major limitation of these studies is their emphasis on East Asian (specifically Chinese) and Western participants. There have been no within-culture replications using participants from other, non-Asian collectivist cultures.
Psychologists have long debated how to best conceptualize and measure intelligence (Sternberg, 2003). These questions include how many types of intelligence there are, the role of nature versus nurture in intelligence, how intelligence is represented in the brain, and the meaning of group differences in intelligence. The concept of intelligence relates to abstract thinking, which includes our abilities to acquire knowledge, to reason abstractly, to adapt to novel situations, and to benefit from instruction and experience (Gottfredson, 1997; Sternberg, 2003). The brain processes underlying intelligence are not completely understood, but current research has focused on four potential factors:
• Brain size
• Sensory ability
• Speed and efficiency of neural transmission
• Working memory capacity
There is some truth to the idea that smarter people have bigger brains. Studies that have measured brain volume using neuroimaging techniques find that larger brain size is correlated with intelligence (McDaniel, 2005), and intelligence has also been found to be correlated with the number of neurons in the brain and with the thickness of the cortex (Haier, 2004; Shaw et al., 2006). It is important to remember that these correlational findings do not mean that having more brain volume causes higher intelligence. It is possible that growing up in a stimulating environment that rewards thinking and learning may lead to greater brain growth (Garlick, 2003), and it is also possible that a third variable, such as better nutrition, causes both brain volume and intelligence.
There is some evidence that brains of more intelligent people operate more efficiently than the brains of people with less intelligence. Haier, Siegel, Tang, and Abel (1992) analyzed data showing that people who were more intelligent showed less brain activity than those with lower intelligence when they worked on a task. Researchers suggested that more intelligent brains need to use less capacity. Brains of more intelligent people also seem to operate faster than the brains of those who are less intelligent. Research has found that the speed with which people can perform simple tasks, like determining which of two lines is longer or quickly pressing one of eight buttons that is lighted, was predictive of intelligence (Deary, Der, & Ford, 2001). Intelligence scores also correlate at about r = .5 with measures of working memory (Ackerman, Beier, & Boyle, 2005), and working memory is now used as a measure of intelligence on many tests.
Research using twin and adoption studies found that intelligence has both genetic and environmental causes (Neisser et al., 1996; Plomin, DeFries, Craig, & McGuffin, 2003). It appears that 40% – 80% of the variability (difference) in intelligence is due to genetics (Plomin & Spinath, 2004). The intelligence of identical twins correlates very highly at r = .86, which is much higher than the scores of fraternal twins who are less genetically similar (r = .60). Correlations between the intelligence of parents and their biological children (r = .42) is significantly higher than the correlation between parents and adopted children (r = .19). The intelligence of very young children (less than 3 years old) does not predict adult intelligence but by age 7 intelligence scores (as measured by a standard test) remain very stable in adulthood (Deary, Whiteman, Starr, Whalley, & Fox, 2004).
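To see how the twin correlations reported above can be related to the 40%–80% heritability estimate, one common back-of-the-envelope approach is Falconer's formula, which doubles the difference between the identical-twin and fraternal-twin correlations. The formula is not introduced in this chapter, so treat the sketch below as an illustration of the arithmetic rather than as the method used in the cited studies.

```python
# Correlations reported in the text
r_identical = 0.86   # identical (monozygotic) twins
r_fraternal = 0.60   # fraternal (dizygotic) twins

# Falconer's rough estimate of heritability: h^2 = 2 * (r_MZ - r_DZ)
heritability = 2 * (r_identical - r_fraternal)
print(f"Estimated heritability: {heritability:.2f}")  # 0.52, i.e., about 52% of the variance

# Squaring a correlation gives the proportion of variance shared.
r_parent_child = 0.42
print(f"Variance shared by parents and biological children: {r_parent_child**2:.2f}")  # about 0.18
```

The resulting estimate of roughly 52% falls within the 40%–80% range quoted above, which is one way to see that the twin correlations and the heritability figure tell a consistent story.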
There is also strong evidence for the role of nurture, which indicates that individuals are not born with fixed, unchangeable levels of intelligence. Twins raised together in the same home have more similar intelligence scores than do twins who are raised in different homes, and fraternal twins have more similar intelligence scores than do non-twin siblings, which is likely due to the fact that they are treated more similarly than are siblings. Additionally, intelligence becomes more stable as we get older which provides evidence that early environmental experiences matter more than later ones.
Environmental factors also explain a substantial proportion of the variance in intelligence, and social and economic deprivation can adversely affect intelligence. Children from households in poverty have lower intelligence scores than children from households with more resources, even when other factors such as education, race, and parenting are controlled (Brooks-Gunn & Duncan, 1997). Poverty may contribute to diets that undernourish the brain or lack appropriate vitamins. Poor children are more likely to be exposed to toxins such as lead in drinking water, dust, or paint chips (Bellinger & Needleman, 2003). Both of these factors can slow brain development and reduce intelligence.
Intelligence is improved by education, and the number of years a person has spent in school correlates at about r = .6 with intelligence (Ceci, 1991). A word of caution is needed when interpreting this result: the correlation may be due to the fact that people with higher intelligence scores enjoy taking classes more than people with low intelligence scores, and may be more likely to stay in school. However, children's intelligence scores tend to drop significantly during summer vacations (Huttenlocher, Levine, & Vevea, 1998), a finding that suggests a causal effect of education on intelligence. A longer school year, as is used in Europe and East Asia, may be beneficial for maintaining intelligence scores for school-aged children.
5.10: One or Many
As we learned earlier, intelligence is associated with the brain; it includes abstract thinking, adapting to new situations, and the ability to benefit from instruction and experience (Gottfredson, 1997; Sternberg, 2003), and it is largely determined by genetics. Psychologist Charles Spearman (1863–1945) hypothesized that there must be a single underlying construct that links these concepts, abilities, and skills together. He called this construct the general intelligence factor (g), and there is strong empirical support for a single dimension to intelligence. Other psychologists believe that instead of a single factor, intelligence is a collection of distinct abilities. Raymond Cattell proposed a theory of intelligence that divided general intelligence into two components: crystallized intelligence and fluid intelligence (Cattell, 1963).
Crystallized intelligence is characterized as acquired knowledge and the ability to retrieve it. When you learn, remember, and recall information, you are using crystallized intelligence. You use crystallized intelligence all the time in your coursework by demonstrating that you have mastered the information covered in the course.
Fluid intelligence encompasses the ability to see complex relationships and solve problems. Navigating your way home after being detoured onto an unfamiliar route because of road construction would draw upon your fluid intelligence. Fluid intelligence helps you tackle complex, abstract challenges in your daily life, whereas crystallized intelligence helps you overcome concrete, straightforward problems (Cattell, 1963).
Robert Sternberg developed another theory of intelligence, which he titled the triarchic theory of intelligence because he proposed that intelligence comprises three parts (Sternberg, 1988): creative, analytical, and practical intelligence (CAP).
• Creative intelligence is marked by inventing or imagining a solution to a problem or situation. Creativity in this realm can include finding a novel solution to an unexpected problem or producing a beautiful work of art or a well-developed short story.
• Analytical intelligence is closely aligned with academic problem solving and computations. Sternberg says that analytical intelligence is demonstrated by an ability to analyze, evaluate, judge, compare, and contrast. For example, in a science course such as anatomy, you must study the processes by which the body uses various minerals in different human systems. In developing an understanding of this topic, you are using analytical intelligence.
• Practical intelligence is sometimes compared to “street smarts.” Being practical means you find solutions that work in your everyday life by applying knowledge based on your experiences.
Multiple Intelligences Theory was developed by Howard Gardner and asserts that everybody possesses at least eight distinct types of intelligence. Among these eight intelligences, a person typically excels in some and falters in others (Gardner, 1983). Gardner's theory is relatively new and needs additional research to establish empirical support. At the same time, his ideas challenge the traditional view of intelligence by including a wider variety of abilities, although creating a test to measure all of Gardner's intelligences has proven extremely difficult (Furnham, 2009; Gardner & Moran, 2006; Klein, 1997).
5.11: Intelligence and Culture
Intelligence can also have different meanings and values in different cultures. If you live on a small island, where most people get their food by fishing from boats, it would be important to know how to fish and how to repair a boat. If you were an exceptional angler, your peers would probably consider you intelligent. If you were also skilled at repairing boats, your intelligence might be known across the whole island. In Irish families, hospitality and telling an entertaining story are marks of the culture. If you are a skilled storyteller, other members of Irish culture are likely to consider you intelligent. Some cultures place a high value on working together as a collective. In these cultures, the importance of the group supersedes the importance of individual achievement. When you visit such a culture, how well you relate to the values of that culture exemplifies your cultural intelligence, sometimes referred to as cultural competence.
5.12: Intelligence Tests
Reliable intelligence testing began in the early 1900s with the researchers Alfred Binet and Théodore Simon, who were asked by the French government to develop an intelligence test for children in order to determine which ones might have difficulty in school. The test included many verbally based tasks. American researchers soon realized the value of such testing, and Lewis Terman, a Stanford professor, modified Binet's work by standardizing the administration of the test and testing thousands of children of different ages in the United States to establish an average score for each age group. The resulting Stanford-Binet is a measure of general intelligence made up of a wide variety of tasks, including vocabulary, memory for pictures, and naming of familiar objects, and it is used primarily with children.
Later, David Wechsler created an adult intelligence test, the Wechsler Adult Intelligence Scale (WAIS), which is the most widely used intelligence test for adults (Watkins, Campbell, Nieberding, & Hallmark, 1995). The current version of the WAIS consists of 15 different tasks, including working memory, arithmetic ability, spatial ability, and general knowledge about the world. These tasks each measure a dimension of intelligence and provide psychologists with four domain scores: verbal, perceptual, working memory, and processing speed. The WAIS is highly correlated with the Stanford-Binet, as well as with criteria of academic and life success, including college grades, measures of work performance, and occupational level. It also shows significant correlations with measures of everyday functioning among individuals with intellectual disabilities.
5.13: Summary
We learned earlier in the chapter that the brain is central to sensing and perceiving our world, but culture is at the heart of thinking. Culture shapes how we perceive information, evaluate it, and use it in our daily lives.
We organize the world into networks of information that are stored and used to interpret new experiences. This knowledge can be represented as hierarchical concepts with superordinate, basic, and subordinate categories. We hold information in short-term memory and process it using networks of information in long-term memory, some drawn from episodic experiences and some from more formal semantic knowledge. Intelligence is among the oldest and longest-studied topics in all of psychology, and the development of assessments to measure this concept is at the core of the development of psychological science itself. The way we perceive, remember, and think about the world we live in is influenced by our culture.
5.14: Vocabulary
Accommodation is a cognitive adaptation that occurs when schemas must change because new information is presented.
Analytic Perception: A pattern of perception characterized by processing information as a sum of the parts. Analytic perception promotes analytic thinking.
Assimilation is a cognitive process that occurs when we change information in order to make it fit within our schema; when conflicting information is handled this way, the schema is less likely to change.
Attention is the process of filtering information from sensation into perception and cognition.
Basic level categories are found at the generic level, which contains the most salient differences (e.g., dog, apple).
Bottom-up perception occurs when we build up to perception from the individual pieces.
Categories are formed when concepts are ranked at subordinate, basic, and superordinate levels.
Concepts refer to information that is later organized and categorized.
Confirmation bias is the tendency to seek out information that favors or confirms existing beliefs and expectations; it is an outcome of assimilation.
Crystallized intelligence is characterized as acquired knowledge and the ability to retrieve it.
Culture-Fair Test is an approach to measure intelligence that, in theory, intends to test intelligence in an equally fair way across all cultural groups. Fairness indicates a lack of bias in the assessment, interpretation, and use of data obtained from these measurements.
Dispositional attribution is an explanation of people’s behavior as a result of internal factors that reside within.
Encoding is the input of information into the memory system.
Episodic memory is a type of long-term memory about events taking place at a specific time and place in a person’s life.
Fluid intelligence encompasses the ability to see complex relationships and solve problems.
Fundamental Attribution Error (FAE) is a bias that leads individuals to incorrectly label others by attributing their behavior to fixed, negative personal flaws; this error makes individuals underestimate the role of external factors.
G-Factor is the notion that intelligence is a singular underlying cognitive aptitude or intellectual ability that is representative of a person’s general intellectual potential.
Holistic Perception: A pattern of perception characterized by processing information as a whole. This pattern makes it more likely to pay attention to relationships among all elements.
Intelligence Quotient (IQ) is a total score that is derived from a standardized test of intelligence. Historically, an IQ score was calculated by dividing a person's mental age (MA) by the person's chronological age (CA) and multiplying by 100 to avoid decimals. Thus, the formula: IQ = (MA/CA) × 100 (a brief worked example appears at the end of this glossary).
Long-Term memory is the part of the memory system in which information can be permanently stored for an extended period of time. It has a large to unlimited capacity and a duration that may last from minutes to a lifetime.
Memory is a system or process that stores what we learn for future use; the term refers to many different abilities.
Multiple intelligences is the notion that there is not a singular underlying general intelligence; according to this theory, all people vary in their levels of strength across a diverse group of specific domains that expand beyond cognitive domains.
Perception is the process of organizing or interpreting sensory information into awareness.
Perceptual Illusions are subjective misinterpretations of sensory stimuli relative to their objective nature.
Schemas are knowledge representations that include information about people, groups, or situations; when activated, schemas are useful for making predictions and decisions.
Semantic memory is the type of long-term memory about general facts, ideas, or concepts that are not associated with emotions and personal experience.
Sensation is the process that allows energy from the world to be translated as neural signals through the five senses: vision, hearing, taste, smell, and touch.
Sensory memory is the storage of brief sensory events, such as sights, sounds, and tastes.
Short-Term memory is the part of the memory system in which information can be temporarily stored in the present state of awareness. This type of memory is limited to 7 items of capacity and 7 to 30 seconds of duration on average.
Storage is the creation of a permanent record of information.
Superordinate level of categories is at the top of a taxonomy and has a high degree of generality (e.g., animal, fruit).
Subordinate level of categories is the most specific level and has little generality (e.g., Labrador retriever, Gala).
Top-down processing occurs when something that we’ve experienced in our past influences how we process new experiences.
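As noted in the Intelligence Quotient (IQ) entry above, here is a brief worked example of the historical ratio formula (the numbers are illustrative only and are not drawn from the original text): a child with a mental age of 10 and a chronological age of 8 would score

\[
IQ = \frac{MA}{CA} \times 100 = \frac{10}{8} \times 100 = 125
\]

Modern tests such as the Stanford-Binet and the WAIS no longer compute this ratio; they assign deviation scores based on how a person's performance compares with age-based norms.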
6.01: Sex and Gender
Sex refers to biological, physical, and physiological differences between males and females, including both primary sex characteristics (the reproductive system) and secondary characteristics such as height and muscularity, as well as genetic differences (e.g., chromosomes). Male sexual and reproductive organs include the penis and testes; female sexual and reproductive organs include the clitoris, vagina, and ovaries. Biological males have XY chromosomes and biological females have XX chromosomes, but biological sex is not as easily defined or determined as you might expect. For example, does the presence of more than one X mean that an XXY person is female, or does the presence of a Y mean that the XXY person is male? The existence of sex variations fundamentally challenges the notion of a binary biological sex.
In humans, intersex individuals make up about two percent of the world's population, more than 150 million people (Blackless et al., 2000). Intersex describes variation in sex characteristics, such as chromosomes, gonads (testes and ovaries), sex hormones, or genitals (penis, clitoris, vulva). The term can be misleading because it suggests that people have complete sets of male or female reproductive systems, but this is not always the case. There are dozens of intersex conditions, such as Androgen Insensitivity Syndrome and Turner's syndrome (Lee et al., 2006).
In our example, having one Y and more than one X chromosome is called Klinefelter syndrome. Some people have genitalia considered ambiguous, meaning that they cannot easily be identified as either male (penis) or female (clitoris). Fausto-Sterling (2000) argues that the decision to label someone male or female is a social decision and that biological sex is too complex to fit within a binary sex system. Nevertheless, because assigning a sex identity is a fundamental cultural priority, doctors typically decide on a sex for intersex babies within 24 hours of birth. Sometimes this decision involves surgery, which can have long-term psychological consequences (Fausto-Sterling, 2000).
Gender
Gender is a term that refers to social or cultural distinctions and roles associated with being male or female. Gender is not determined by biology in any simple way. At an early age, we begin learning cultural norms for what is considered masculine (trait of a male) and feminine (trait of a female). Gender is conveyed and signaled to others through clothing and hairstyle, or mannerisms like tone of voice, physical bearing, and facial expression. For example, children in the United States may associate long hair, fingernail polish or dresses with femininity. Later in life, as adults, we often conform to these norms by behaving in gender specific ways: men build houses and women bake cookies (Marshall, 1989; Money et al., 1955; Weinraub et al., 1984). It is important to remember that behaviors and traits associated with masculinity and femininity are culturally defined. For example, in American culture, it is considered feminine to wear a dress or skirt; however, in many Middle Eastern, Asian, and African cultures, dresses or skirts (often referred to as sarongs, robes, or gowns) are worn by males and are considered masculine. The kilt worn by a Scottish male does not make him appear feminine in his culture.
Our understanding of gender begins very early in life, often before we are born. In Western cultures, expecting parents are asked whether they are having a girl or a boy, and judgments are immediately made about the child: boys will be active and their presents will be blue, while girls will be delicate and their presents will be pink. In some Asian and Muslim cultures, a male child is valued more favorably than a female child (Matsumoto & Juang, 2013), and female fetuses may be aborted or female infants abandoned.
By their first birthday, children already distinguish faces by gender, and between 3 and 6 years of age they develop strong and rigid gender stereotypes. Gender stereotyping involves overgeneralizing about the attitudes, traits, or behavior patterns of women or men. Stereotypes can refer to play (e.g., boys play with trucks, and girls play with dolls), traits (e.g., boys are strong, and girls like to cry), and occupations (e.g., men are doctors and women are nurses). These stereotypes stay rigid until children reach about age 8 or 9, when they develop cognitive abilities that allow them to be more flexible in their thinking about others.
6.02: Stereotypes and Gender Roles
Many of our gender stereotypes are strong because we emphasize gender so much in culture (Bigler & Liben, 2007). For example, children learn at a young age that there are distinct expectations for boys and girls. Gender roles refer to the roles or behaviors learned by a person as appropriate to their gender, as determined by dominant cultural norms. Cross-cultural studies reveal that children are aware of gender roles by age two or three and can label others' gender and sort objects into gender categories. At four or five, most children are firmly entrenched in culturally appropriate gender roles (Kane, 1996). When children do not conform to the appropriate gender role for their culture, they may face negative sanctions such as being criticized, bullied, marginalized, or rejected by their peers. A girl who wishes to take karate class instead of dance lessons may be called a "tomboy" and face difficulty gaining acceptance from both male and female peer groups (Ready, 2001). Boys, especially, are subject to intense ridicule for gender nonconformity (Coltrane and Adams, 2008; Kimmel, 2000).
By the time we are adults, our gender roles are a stable part of our personalities, and we usually hold many gender stereotypes. Men tend to outnumber women in professions such as law enforcement, the military, and politics. Women tend to outnumber men in care-related occupations such as child care, health care, and social work. These occupational roles are examples of typical Western male and female behavior, derived from our culture’s traditions. Adherence to these occupational gender roles demonstrates fulfillment of social expectations but may not necessarily reflect personal preference (Diamond, 2002).
Gender stereotypes are not unique to American culture. Williams and Best (1982) conducted several cross-cultural explorations of gender stereotypes using data collected from 30 cultures. There was a high degree of agreement on stereotypes across all cultures, which led the researchers to conclude that gender stereotypes may be universal. Additional research found that males tend to be associated with stronger and more active characteristics than females (Best, 2001); however, recent research argues that culture shapes how some gender stereotypes are perceived. Researchers found that across cultures, individualistic traits were viewed as more masculine; however, collectivist cultures rated masculine traits as collectivist and not individualist (Cuddy et al., 2015). These findings provide support that gender stereotypes may be moderated by cultural values.
There are two major psychological theories that partially explain how children form their own gender roles after they learn to differentiate based on gender. Gender schema theory argues that children are active learners who essentially socialize themselves, actively organizing others' behavior, activities, and attributes into gender categories known as schemas. These schemas then affect what children notice and remember later. People of all ages are more likely to remember schema-consistent behaviors and attributes than schema-inconsistent ones. So, people are more likely to remember men, and forget women, who are firefighters. They also misremember schema-inconsistent information: if research participants are shown pictures of someone standing at the stove, they are more likely to remember the person to be cooking if depicted as a woman, and to be repairing the stove if depicted as a man. By retaining mainly schema-consistent information, gender schemas strengthen more and more over time.
A second theory that attempts to explain the formation of gender roles in children is social learning theory, which argues that gender roles are learned through reinforcement, punishment, and modeling. Children are rewarded and reinforced for behaving in accordance with gender roles and punished for breaking them. In addition, social learning theory argues that children learn many of their gender roles by modeling the behavior of adults and older children and, in doing so, develop ideas about what behaviors are appropriate for each gender. Social learning theory has less support than gender schema theory, but research shows that parents do reinforce gender-appropriate play and often reinforce cultural gender norms.
6.03: Gender Enculturation Agents
Regardless of theory, the observation, organization, and learning of gender occur through four major agents of socialization: family, education, peers, and media. Each agent reinforces gender roles by creating and maintaining normative expectations for gender-specific behavior. Exposure also occurs through secondary agents such as religion and the workplace.
Family
Family is the first agent of socialization and enculturation. There is considerable evidence that parents socialize sons and daughters differently. A meta-analysis of research from the United States and Canada found that parents most frequently treated sons and daughters differently by encouraging gender-stereotypical activities (Lytton & Romney, 1991). Fathers, more than mothers, are particularly likely to encourage gender-stereotypical play, especially in sons. Parents also talk to their children differently based on stereotypes. For example, parents talk about numbers and counting twice as often with sons as with daughters (Chang, Sandhofer, & Brown, 2011) and talk to sons in more detail about science than they do with daughters. Parents are also much more likely to discuss emotions with their daughters than with their sons.
Girls may be asked to fold laundry, cook meals or perform duties that require neatness and care. It has been found that fathers are firmer in their expectations for gender conformity than are mothers, and their expectations are stronger for sons than they are for daughters (Kimmel, 2000). This is true in many types of activities, including preference of toys, play styles, discipline, chores, and personal achievements. As a result, boys tend to be particularly attuned to their father’s disapproval when engaging in an activity that might be considered feminine, like dancing or singing (Coltrane and Adams, 2008).
It should be noted that parental socialization and normative expectations vary along lines of social class, race, and ethnicity. Research in the United States has shown that African American families, for instance, are more likely than Caucasians to model an egalitarian role structure for their children (Staples and Boulin Johnson, 2004). Even when parents set gender equality as a goal, there may be underlying indications of inequality. For example, when dividing up household chores, boys may be asked to take out the garbage, take care of the yard or perform other tasks that require strength or toughness.
Peers
As noted earlier, peer socializations can also serve to reinforce gender norms of a culture. Children learn at a very young age that there are different expectations for boys and girls. When children do not conform to the appropriate gender role, they may experience negative consequences like criticism, bullying or rejection by their peers. Boys and young men are more likely to experience intense, negative peer responses when they do not follow traditional gender norms (Coltrane and Adams, 2008; Kimmel, 2000).
Education
The reinforcement of gender roles and stereotypes continues once a child reaches school age. Studies suggest that gender socialization still occurs in schools today, perhaps in less obvious forms (Lips, 2004). Teachers may not even realize that they are acting in ways that reproduce gender-differentiated behavior patterns, but any time students are asked to arrange their seats or line up according to gender, teachers are reinforcing the idea that boys and girls should be treated differently (Thorne, 1993). Even as early as kindergarten, schools subtly convey messages to girls indicating that they are less intelligent or less important than boys.
For example, in a study involving teacher responses to male and female students, data indicated that teachers praised male students far more than they praised female students. Additionally, teachers interrupted girls more and provided boys with more opportunities to expand on their ideas (Sadker & Sadker, 1994). Schools often reinforce the polarization of gender by positioning girls and boys in competitive arrangements – like a “battle of the sexes” competition.
Media
In television and movies, women tend to have less significant roles and are often portrayed as wives or mothers. When women are given a lead role, they are often one of two extremes: a wholesome, saint-like figure or a malevolent, hypersexual figure (Etaugh and Bridges, 2003). Weisbuch and Ambady (2009) demonstrated that nonverbal behavior on television can communicate culturally shared attitudes and bias about women and ideal body images. Television commercials and other forms of advertising also reinforce inequality and gender-based stereotypes. Women are almost exclusively present in ads promoting cooking, cleaning, or child care-related products (Davis, 1993). Think about the last time you saw a man star in a dishwasher or laundry detergent commercial. In general, women are underrepresented in roles that involve leadership, intelligence, or emotional stability. In mainstream advertising, however, themes intermingling violence and sexuality are quite common (Kilbourne, 2000).
Gender inequality is pervasive in children's movies (Smith, 2008). Research indicates that of the 101 top-grossing children's movies released between 1990 and 2005, three out of four characters (75%) were male, and only seven films (7%) came close to having a gender-balanced cast.
6.04: Gender Differences
Differences between males and females can be based on (a) actual gender differences (i.e., men and women are actually different in some abilities), (b) gender roles (i.e., differences in how men and women are supposed to act), or (c) gender stereotypes (i.e., differences in how we think men and women are). Sometimes gender stereotypes and gender roles reflect actual gender differences, but sometimes they do not.
In terms of language skills, girls develop language earlier and know more words than boys; however, this does not translate into long-term differences. Girls are also more likely than boys to offer praise, to agree with the person they are talking to, and to elaborate on the other person's comments. Boys, in contrast, are more likely than girls to assert their opinions and offer criticisms (Leaper & Smith, 2004). In terms of temperament, boys are slightly less able to suppress inappropriate responses and slightly more likely to blurt things out than girls (Else-Quest, Hyde, Goldsmith, & Van Hulle, 2006).
With respect to aggression, boys exhibit higher rates of unprovoked physical aggression than girls, but no difference in provoked aggression (Hyde, 2005). Some of the biggest differences involve the play styles of children. Boys frequently play organized rough-and-tumble games in large groups, while girls often play fewer physical activities in much smaller groups (Maccoby, 1998). There are also differences in the rates of depression, with girls much more likely than boys to be depressed after puberty. After puberty, girls are also more likely to be unhappy with their bodies than boys.
There is considerable variability between individual males and females. Also, even when there are average group differences, the actual size of most of these differences is quite small. This means that knowing someone's gender does not help much in predicting his or her actual traits.
6.05: Gender Roles and Culture
Hofstede's (2001) research revealed that on the Masculinity versus Femininity dimension (MAS), cultures high in masculinity reported distinct gender roles, held moralistic views of sexuality, and encouraged passive roles for women. Additionally, these cultures discourage premarital sex for women but have no such restrictions for men. The cultures with the highest masculinity scores were Japan, Italy, Austria, and Venezuela. Cultures low in masculinity (high in femininity) had gender roles that were more likely to overlap and encouraged more active roles for women, and sex before marriage was seen as acceptable for both women and men in these cultures. The four countries scoring lowest in masculinity were Norway, Denmark, the Netherlands, and Sweden. The United States is slightly more masculine than feminine on this dimension; however, these aspects of high masculinity are balanced by a need for individuality.
6.06: Gender Identity
Generally, our psychological sense of being male or female, our gender identity, corresponds to our biological sex; people for whom this is true are described as cisgender. This is not the case for everyone. Transgender individuals' gender identities do not correspond with their birth sexes. Transgender males, assigned the sex female at birth, have such a strong emotional and psychological connection to the forms of masculinity in their society that they identify their gender as male. A parallel connection to femininity exists for transgender females.
A binary or dichotomous view of gender (masculine or feminine) is specific to some cultures, like the United States, but it is not universal. In some cultures there are additional gender variations resulting in more than two gender categories. For example, Samoan culture accepts what they refer to as a third gender. Fa’afafine, which translates as “the way of the woman,” is a term used to describe individuals who are born biologically male but embody both masculine and feminine traits. Fa’afafines are considered an important part of Samoan culture. In Thailand, you can be male, female, or kathoey (Tangmunkongvorakul, Banwell, Carmichael, Utomo, & Sleigh, 2010) and in Pakistan, India, Nepal, and Bangladesh transgender females are referred to as hijras, recognized by their governments as a third gender (Pasquesoone, 2014).
Because gender is so deeply ingrained culturally, it is difficult to determine the prevalence of transgender identity in society. Rates of transgender individuals vary widely around the world (see Table 1) and are shaped by social norms and cultural values. Transgender individuals who wish to alter their bodies through medical interventions such as surgery and hormone therapy, so that their physical being is better aligned with their gender identity, are called transsexual. Not all transgender individuals choose to alter their bodies; many maintain their original anatomy but may present themselves to society as the opposite gender.
There is no single, conclusive explanation for why people are transgender. Some hypotheses suggest biological factors such as genetics or prenatal hormone levels, as well as social and cultural factors such as childhood and adulthood experiences. Most experts believe that all of these factors contribute to a person's gender identity (American Psychological Association, 2008). Unfortunately, transgender and transsexual individuals frequently experience discrimination based on their gender identity and are twice as likely as non-transgender individuals to experience assault or discrimination. Transgender individuals are also one and a half times more likely to experience intimidation (National Coalition of Anti-Violence Programs, 2010) and to be the victims of violent crime.
6.07: Sexuality and Sexual Orientation
Sex and gender are important aspects of a person's identity; however, they do not tell us about a person's sexual orientation or sexuality (Rule & Ambady, 2008). Sexuality refers to the way people experience and express sexual feelings. Sexual attraction is part of human sexuality, and sexual orientation refers to enduring patterns of sexual attraction, which is typically divided into four categories:
• Heterosexuality is the attraction to individuals of the opposite sex;
• Homosexuality is the attraction to individuals of one’s own sex;
• Bisexuality is the attraction to individuals of either sex; and
• Asexuality is no attraction to either sex.
Heterosexuals and homosexuals are informally referred to as "straight" and "gay," respectively. North America is a heteronormative society, meaning it supports heterosexuality as the norm. While the majority of people identify as heterosexual, there is a sizable population of people in North America who identify as either homosexual or bisexual. Research suggests that somewhere between 3% and 10% of the population identifies as homosexual (Kinsey, Pomeroy, & Martin, 1948; LeVay, 1996; Pillard & Bailey, 1995) and indicates that sexual orientation is not a choice but rather a relatively stable characteristic of a person that cannot be changed.
Research has consistently demonstrated that there are no differences in the family backgrounds and experiences of heterosexuals and homosexuals (Bell, Weinberg, & Hammersmith, 1981; Ross & Arrindell, 1988). Genetic and biological mechanisms have also been proposed, and the balance of evidence suggests that sexual orientation has an underlying biological component. Over the past 25 years, research has identified genetics (Bailey & Pillard, 1991; Hamer, Hu, Magnuson, Hu, & Pattatucci, 1993; Rodriguez-Larralde & Paradisi, 2009) and brain structure and function (Allen & Gorski, 1992; Byne et al., 2001; Hu et al., 2008; LeVay, 1991; Ponseti et al., 2006; Rahman & Wilson, 2003a; Swaab & Hofman, 1990) as biological explanations for sexual orientation.
According to current scientific understanding, individuals are usually aware of their sexual orientation between middle childhood and early adolescence (American Psychological Association, 2008). They do not have to participate in sexual activity to be aware of these emotional, romantic, and physical attractions; people can be celibate and still recognize their sexual orientation. Alfred Kinsey was among the first to conceptualize sexuality as a continuum rather than a strict dichotomy of gay or straight. To capture this continuum, Kinsey created a rating scale that ranges from 0 (exclusively heterosexual) to 6 (exclusively homosexual).