Goals and Objectives:
In this chapter, we will do the following:
1. Construct two key measures of industry concentration
2. Identify the key characteristics of monopolistic competition in neoclassical theory
3. Explain how monopolistically competitive firms behave in the short run and long run
4. Describe the social consequences of monopolistic competition
5. Outline the key characteristics of oligopoly in neoclassical theory
6. Explore several neoclassical oligopoly models
7. Investigate the game theoretic concepts of dominant equilibrium and Nash equilibrium
8. Examine a Post-Keynesian markup pricing model
Two Measures of Industry Concentration
In this chapter we will explore imperfectly competitive markets. These market structures fall in between the neoclassical ideal of perfect competition and its antithesis of pure monopoly. The number of firms in an industry is a major determinant of a market’s location along the market power spectrum. Markets with many firms (other things equal) are more competitive and are considered to have less concentrated industries. Markets with fewer firms (other things equal) are less competitive and are considered to have more concentrated industries.
In chapter 8, it was pointed out that imperfectly competitive market structures include monopolistic competition and oligopoly (see Figure 8.2). Monopolistically competitive markets have relatively low degrees of industry concentration whereas oligopolistic markets have relatively high degrees of industry concentration. Because the market power spectrum is continuous, we require a measure of industry concentration that allows us to identify the degrees of market power in different markets relative to one another. In this section, we will introduce two different measures of industry concentration that many economists use to compare different markets.
Before we construct these measures, we must introduce a very important definition that will serve as the building block for our measures of industry concentration. A firm's market share (Si) refers to the percentage of the market's total sales revenue for which the ith largest firm is responsible during a given period. For example, in a specific industry, S2 refers to the second largest firm's market share. Suppose that the second largest firm's TR is $10,000 per month (TR2) and the total revenue earned in the entire market is $100,000 per month (TRM). In that case, S2 is calculated as follows:
$S_{2}=\frac{TR_{2}}{TR_{M}} \cdot 100=\frac{10,000}{100,000} \cdot 100=10$
In this case, the second largest firm has 10% of the market. Note that the final result is written as 10 rather than 10%; the percent sign is left implicit. This convention matters because the concentration measures constructed below require market shares to be entered as plain numbers, without percent signs, to yield correct values, as we shall soon see.
The first measure of industry concentration that we will introduce is referred to as the four-firm concentration ratio. Simply put, it is the combined market share of an industry’s four largest firms. It is defined as follows:
$\text{Four-Firm Concentration Ratio}=S_{1}+S_{2}+S_{3}+S_{4}$
For example, suppose that an industry consists of the following market shares: 10, 20, 30, 25, and 15. Notice that these market shares sum to 100. Therefore, this industry contains exactly five firms. The four-firm concentration ratio in this case is 90 (= 30+25+20+15) because the smallest firm is excluded. This market appears to be highly concentrated because the four largest firms control 90% of the market.
The four-firm ratio appears to provide a sound measure of industry concentration. The larger it is, the more concentrated the industry is. The smaller it is, the less concentrated the industry is. A problem arises, however, when we consider examples of a specific sort. For example, consider two industries in which the market shares are 50 and 50 in Industry A and 25, 25, 25, and 25 in Industry B. In each case, the four-firm concentration ratio is 100. This result suggests that the two industries are equally concentrated. Our intuition, however, tells us that the industry with only two firms is significantly more concentrated than the industry with four firms. The four-firm ratio appears to be a deficient measure in the sense that it does not make distinctions between different industries that are as precise as we would prefer.
For this reason, we now introduce a second measure of industry concentration that is free from the defects of the four-firm ratio. The second measure of industry concentration is called the Herfindahl-Hirschman Index (HHI). It is named after the American economist Orris Herfindahl and the German economist Albert Hirschman. This measure is defined as follows for n firms in an industry:
$HHI=S_{1}^2+S_{2}^2+S_{3}^2+...+S_{n}^2$
The reader should notice two key differences between the four-firm ratio and the HHI. Unlike the four-firm ratio, the HHI includes every market share in the industry, not just the market shares of the largest four firms. Additionally, each market share is squared before the terms are added together. To understand why these changes have been made, let's return to our example involving Industries A and B from earlier. The HHI in Industry A (HHIA) and in Industry B (HHIB) are calculated as follows:
$HHI_{A}=50^2+50^2=5000$
$HHI_{B}=25^2+25^2+25^2+25^2=2500$
Whereas the four-firm ratio suggests that the two industries are equally concentrated, the HHI indicates that Industry A is twice as concentrated as Industry B. This result is much more consistent with our intuition and is an important reason to rely on the HHI, particularly when one is trying to make careful distinctions between levels of industry concentration. It should now be clear as well why the HHI includes the squaring of the market shares. The squaring technique allows the larger market shares to count much more heavily in the overall index than the smaller market shares. The inclusion of all market shares also allows us to obtain a more precise measure.
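These calculations are easy to automate. The short sketch below simply recomputes the four-firm concentration ratio and the HHI for Industries A and B and for the five-firm example above; the function and variable names are ours, not part of the text.

```python
# Market shares are entered as whole numbers (e.g., 25 for 25%), as noted above.

def four_firm_ratio(shares):
    """Sum of the four largest market shares in the industry."""
    return sum(sorted(shares, reverse=True)[:4])

def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares."""
    return sum(s ** 2 for s in shares)

industry_a = [50, 50]
industry_b = [25, 25, 25, 25]
five_firm_example = [10, 20, 30, 25, 15]

for name, shares in [("Industry A", industry_a),
                     ("Industry B", industry_b),
                     ("Five-firm example", five_firm_example)]:
    print(name, "four-firm ratio:", four_firm_ratio(shares), "HHI:", hhi(shares))

# Industry A: four-firm ratio 100, HHI 5000; Industry B: four-firm ratio 100, HHI 2500.
# The five-firm example: four-firm ratio 90, HHI 2250.
```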
The HHI may also take on a much larger range of values than the four-firm ratio. That is:
$10,000\geq HHI>0$
In the extreme case in which a single firm has a market share of 100, the HHI equals 10,000 (= $100^2$). This case is the case of pure monopoly, and it should be noted that this situation is the only one that leads to a HHI of 10,000. The four-firm ratio, however, is 100 just like we found for Industries A and B. The lower extreme for the HHI is just above zero. Clearly, the only way to have a HHI of 0 is for all firms to have zero market shares, which means that the market simply does not exist. Therefore, for any existing markets, the smallest HHI is just above zero. Such a market would be very close to being perfectly competitive due to the huge number of very small firms. It is important not to focus on the specific value of the HHI. The index value by itself means rather little. What is important is the index value of one industry relative to another. The higher the index value is, the more concentrated is the industry.
The Antitrust Division of the United States Justice Department considers a HHI below 1,500 to be an unconcentrated industry, a HHI between 1,500 and 2,500 to be a moderate degree of concentration, and a HHI above 2,500 to be a high degree of concentration.[1]
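A hypothetical helper function can encode these Justice Department thresholds. The guidelines quoted above do not say how the boundary values (exactly 1,500 or 2,500) are treated, so the cutoffs below are one reasonable reading; the example values are HHIs reported in the census data listed next.

```python
def doj_concentration_category(hhi_value):
    """Classify an industry using the Antitrust Division's HHI thresholds.

    Boundary handling (exactly 1,500 or 2,500) is an assumption, not spelled
    out in the guidelines quoted in the text.
    """
    if hhi_value < 1500:
        return "unconcentrated"
    elif hhi_value <= 2500:
        return "moderately concentrated"
    else:
        return "highly concentrated"

for hhi_value in (636.6, 2208.9, 2905.9):
    print(hhi_value, "->", doj_concentration_category(hhi_value))
# 636.6 -> unconcentrated, 2208.9 -> moderately concentrated, 2905.9 -> highly concentrated
```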
Using data from the Economic Census provided by the United States Census Bureau,[2] we consider a few examples of unconcentrated industries shown below:
• In 2002, the four-firm concentration ratio in animal food manufacturing was 39.3 and the HHI for the 50 largest firms was 636.6.[3]
• The 2002 four-firm ratio in flour milling and malt manufacturing was 38.2 and the HHI for the 50 largest firms was 524.4.
• The 2002 four-firm ratio in sugar manufacturing was 52.8 and the HHI for the 50 largest firms was 855.5.
A few examples of moderately concentrated industries are below:
• In 2002, the four-firm concentration ratio in beet sugar manufacturing was 85.3 and the HHI for the 20 largest firms was 2,208.9.
• The 2002 four-firm ratio in cookie and cracker manufacturing was 70.9 and the HHI for the 50 largest firms was 1,901.2.
• The 2002 four-firm ratio in tortilla manufacturing was 59.3 and the HHI for the 50 largest firms was 2,355.4.
A few examples of highly concentrated industries are below:
• In 2002, the four-firm concentration ratio in tobacco stemming and redrying was 84.2 and the HHI for the 20 largest firms was 2,905.9.
• The 2002 four-firm ratio in house slipper manufacturing was 94.3 and the HHI for the 20 largest firms was 2,943.5.
• The 2002 four-firm ratio in glass container manufacturing was 87.1 and the HHI for the 50 largest firms was 2,548.1.
Notice that tobacco stemming and redrying appears less concentrated than glass container manufacturing when we compare the four-firm ratios for the two industries, but tobacco stemming and redrying appears more concentrated than glass container manufacturing when we consider the HHI for the two industries. The two measures sometimes lead to contradictory results, but as we have seen, the HHI leads to more precise results than the four-firm ratio. Nevertheless, the four-firm ratio has a greater intuitive meaning and so it is often considered as well when investigating the degree of concentration in an industry. Now that we have a method of measuring industry concentration, we can explore the characteristics of the imperfectly competitive market structures that fall in between the extremes of perfect competition and pure monopoly.
Characteristics of Monopolistic Competition
Monopolistically competitive markets have three major features, two of which suggest a competitive market structure and one of which suggests a more monopolistic market structure. The two characteristics that contribute to the competitive nature of this market structure are the relatively large number of sellers and the absence of barriers to entry and exit in the long run. The characteristic that contributes to the monopolistic nature of this market structure is the tendency of all firms in the industry to produce products with slight differences that they promote aggressively through advertising. It is important to recognize that the differences need not be real or actual differences. If the consumer perceives a difference to exist across products in an industry, we may regard the market structure as monopolistically competitive. For example, doctors may tell patients that no significant differences exist across different types of aspirin, but the companies that produce aspirin promote their specific brands with great vigor. If consumers believe the differences exist, then each firm will possess some degree of market power. Other examples of monopolistically competitive markets include the markets for restaurant meals and auto repairs. Sometimes the factor that differentiates one product from another is nothing more than the location of the seller.
One good example of a monopolistically competitive market is the market for celebrity scents. As of 2009, firms were releasing at least 500 different kinds of celebrity perfume and cologne each year with department stores selling millions of bottles. Slight product differentiation is the source of each firm’s market power in this market. Firms have developed celebrity perfumes for Jennifer Lopez, Gwen Stefani, and Sarah Jessica Parker. Rapper 50 Cent also has a fragrance called Power by 50. Fans that waited at Macy’s in midtown Manhattan to buy a bottle of the new cologne had the good luck to have their photograph taken with the famous rapper. One fan exclaimed, “It’s him, it’s not the perfume. I can’t explain it, it’s like an energy you carry.”[4] Clearly, the uniqueness of the product often stems as much from the perceptions of the consumer as it does from the characteristics of the product itself.
Because each firm in a monopolistically competitive market has some market power, each firm faces a downward sloping demand curve for its specific product as shown in Figure 10.1.
The reader should notice in Figure 10.3 that the demand curve facing the firm is just tangent to the ATC curve above the MR = MC intersection. At this point, price and ATC are the same and so economic profits are equal to zero in this case. How does this long run adjustment to zero economic profits occur? Two possible scenarios exist:
1. Economic profits exist in the short run, leading to the following:
• Firms enter the industry in the long run.
• The firms in the industry lose market share to the entering firms causing the individual demand curves that they face to shift leftward.
• This adjustment continues until P = ATC and economic profits are equal to zero.
2. Economic losses exist in the short run, leading to the following:
• Firms exit the industry in the long run.
• The remaining firms in the industry gain market share causing the individual demand curves that they face to shift rightward.
• This adjustment continues until P = ATC and economic profits are equal to zero.
The consequences of monopolistic competition are relatively straightforward. First, monopolistically competitive markets lead to inefficiency. Markup pricing exists (P > MC), which monopolistically competitive firms achieve by restricting their production levels below those that would prevail in a perfectly competitive market. Furthermore, least-cost production is not achieved because ATC exceeds the minimum ATC. In Figure 10.3, ATC* is higher than the minimum ATC that exists at the intersection between MC and ATC. The monopolistically competitive firm thus operates at less than optimal capacity. The conclusion that monopolistically competitive firms have a chronic problem of under-utilization of plant capacity is referred to as the excess capacity theorem of monopolistic competition.
Recall that the more unique the product is (i.e., the more differentiated it is), the steeper the demand curve will be because fewer close substitutes exist for the product. Similarly, if less product differentiation exists, then the demand curve is flatter because more close substitutes exist. In the extreme case that zero product differentiation exists, the market is once again perfectly competitive and the demand curve is perfectly horizontal. In that case, the MR = MC intersection will occur at the minimum point on the ATC curve. Hence, the problem of inefficiency in a monopolistically competitive market worsens as the products become increasingly differentiated. The increasing amount of differentiation gives individual firms more market power, which they use to restrict output further to raise prices and economic profits.
On the other hand, most consumers value some product differentiation. For example, if all firms in the shoe industry produced only one style of shoe, then the market would be perfectly competitive. A larger number of shoes would be produced at a lower per unit cost and price. Would our society suffer a different kind of loss, however, due to everyone wearing the same style of shoe? It would seem so. We would lose the ability to express our uniqueness in this way. Most of us would be willing to pay something extra for that opportunity of self-expression. How much more we are willing to pay for product variety, however, is highly debatable, and our answers will surely vary depending on the specific good in question. A tradeoff thus exists between product variety and economic efficiency.
Characteristics of Oligopoly
Oligopolistic markets have several defining characteristics. These are markets dominated by a few large firms. Oligopolistic markets may have just two or three firms or even nine or ten firms. The specific number of firms is not rigidly defined. Again, the degree of concentration in an industry is measured along a continuum. For decades, the U.S. automobile market was dominated by the "Big Three" (i.e., General Motors, Ford, and Chrysler). In the beer industry, the "Big Five" in the early 1990s included Anheuser-Busch, Miller, Stroh, Coors, and Heileman. Oligopolies have also existed in the markets for breakfast cereal, steel, and military weaponry.
Also required for a market to be oligopolistic is that each firm has considerable market power, which is the result of barriers to entry that discourage competition. Such entry barriers might include patents, or large startup costs that generate economies of scale. The final characteristic of oligopoly markets is that the firms in the industry behave strategically when setting prices and output levels. That is, each firm sets its price while considering the prices and likely reactions of its competitors. This characteristic sharply distinguishes this market structure from the other market structures. In a perfectly competitive market structure, each firm takes the price as given and ignores the actions of competitors. In a purely monopolistic market, no competitors exist. In a monopolistically competitive market, firms might strive to differentiate their products, but at a point in time, the prices they charge are entirely determined by the demand segments they face and the production costs they incur.
The element of strategic interaction makes it very difficult to model oligopoly behavior. As a result, no single oligopoly model is dominant within neoclassical economics. We will consider several models as well as explore how a special subfield of economics called game theory may shed light on the behavior of oligopolistic firms.
A Simple Duopoly Model
Sometimes oligopoly markets only contain two firms. These special cases of oligopoly markets are called duopoly markets. Suppose that you and your competitor are the only two sellers in the market for baseball cards. Your supply curve and your competitor’s supply curve are shown in Figure 10.4.[5]
The supply price refers to the minimum price that these sellers are willing and able to charge for the corresponding output level. Initially, your supply price and your competitor's supply price are P1 = $2.00 per baseball card and your output levels are each q1. Now assume that you and your competitor think strategically about each other's price. Let's assume that pricing takes place in a series of rounds with each seller reacting to the other seller's price every other round. Further suppose that you and your competitor each decide that you will always cut the price per card by $0.50 below the price set by the competition. Finally, assume that your competitor is the first to act. Your competitor cuts the price to P2 = $1.50, undercutting you by $0.50. Since you follow a similar strategy, you decide to cut your price to P3 = $1.00, thus cutting your price in half and undercutting your competitor by $0.50. In response, your competitor cuts price to undercut your new price and so sets a price of P4 = $0.50. Your stubborn adherence to this cutthroat strategy leads you to reduce your price to zero and your competitor follows. If the two sellers adhere strictly to these pricing rules, a zero price/zero quantity equilibrium is the result since quantities supplied decline with the price cuts.
The problem with this model, of course, is that actual oligopoly markets do manage to persist without price wars completely undermining production. For one thing, firms have production costs and so they may not be willing to cut prices so low that they suffer extensive losses. Also, firms are able to grab a larger market share when they undercut competitors and that factor is not taken into account in Figure 10.4. Therefore, we seem to require models that can explain how oligopoly firms may coexist while accounting for the actions of their competitors and earning positive economic profits. It is to these models that we now turn.
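The round-by-round logic of this price war can be traced with a short calculation. The sketch below is a minimal illustration of the undercutting rule described above: the $2.00 starting price and the $0.50 cut come from the example, while the function and variable names are ours.

```python
# Minimal sketch of the duopoly price war described above.
# Each round, each seller undercuts the rival's latest price by $0.50,
# and neither price can fall below zero.

def price_war(start_price=2.00, cut=0.50):
    history = [("start", start_price, start_price)]
    you, rival = start_price, start_price
    round_num = 1
    while you > 0 or rival > 0:
        rival = max(0.0, you - cut)   # competitor moves first
        you = max(0.0, rival - cut)   # you respond with your own cut
        history.append((f"round {round_num}", you, rival))
        round_num += 1
    return history

for label, your_price, rival_price in price_war():
    print(f"{label}: your price ${your_price:.2f}, rival's price ${rival_price:.2f}")
# Both prices are driven to zero, the zero price/zero quantity outcome in the text.
```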
The Kinked Demand Model of Non-Collusive Oligopoly
In 1939, the Marxian economist Paul Sweezy introduced what has become known as the kinked demand curve model of non-collusive oligopoly.[6] It is rare that a Marxian economist develops a model that becomes widely taught by neoclassical economists. The kinked demand curve model is an important example of how ideas developed within one school of economic thought sometimes spill over into other schools of economic thought. Certainly, not all neoclassical economists accept the validity of this model's conclusions, but it has been incorporated into most mainstream economics textbooks.
The purpose of the model is to explain how oligopoly firms manage to establish stable prices in the marketplace without relying on collusion. That is, price wars are somehow avoided in these markets even though the firms never coordinate with one another to explicitly fix prices. Because the purpose is to explain the rigidity or "stickiness" of the price rather than the level of the price itself, the model assumes that one of the oligopolistic firms is already charging a specific price P1 and producing a specific quantity q1 as shown in Figure 10.5.
Given the original price of P1, we need to determine the shape of the demand curve facing this firm. To derive it, we ask what will happen if this firm raises its price above P1. Because we are assuming that firms think strategically, it is reasonable to expect that when this firm raises its price, the other firms will not follow with a price increase. By not following with a price increase, they can capture a larger share of the market. As a result, when this firm raises its price, its customers will react by purchasing the product from the firm's competitors. That is, consumers will be very responsive to the price increase and the quantity demanded will fall significantly. In other words, the demand curve facing the firm is very elastic above the price of P1.
On the other hand, suppose this firm reduces its price below P1. Because we are assuming that firms think strategically, it is reasonable to expect that when this firm cuts its price, the other firms will follow with a price cut. By following with a price cut, the other firms can prevent this firm from capturing a larger share of the market at their expense. As a result, when this firm cuts its price, it will not be able to expand its sales much at all. That is, consumers will be very unresponsive to the price cut and the quantity demanded will increase only a small amount as new customers enter the market. In other words, the demand curve facing the firm is very inelastic below the price of P1. Because the demand curve is relatively flat above P1 and relatively steep below P1, the demand curve facing the firm has a kink at the price P1.
The next step is to determine the firm's marginal revenue (MR) curve. We have seen that a downward sloping demand curve has a corresponding MR curve that declines more quickly than demand. Again, the reason is that when the firm cuts the price to sell another unit, it must cut the price of all units sold. As a result, revenue rises by less than the price of the additional unit. In the case of a kinked demand curve, we essentially observe two downward sloping demand curves joined at the kink. Each section then has a downward sloping MR curve that corresponds to it. Figure 10.6 shows the MR curve for an oligopolistic firm facing a kinked demand curve. Because the demand curve is much flatter above the kink than below it, the MR curve has a vertical gap at the output level q1. Any MC curve that intersects MR within this gap leads the firm to the same profit-maximizing output q1 and price P1, which is why moderate changes in production costs leave the price unchanged.
It should be noted that price changes are still possible in the kinked demand model. A rise in price might occur, for example, if MC rises a great deal. A rise in MC to MC3 will lead to a reduction in output below q1 and a rise in price above P1. A drop in MC to MC4 will lead to an increase in output above q1 and a decrease in price below P1. The point is not that prices never change in oligopolistic markets but only that they are less likely to change than in other market structures when production costs change. If you think about Coke and Pepsi, for example, these two firms do not collude. If they fixed prices, it would be illegal. Still, relatively stable and similar prices emerge in the market for soda. The author encountered another interesting case a few years ago at Wendy's. A tomato shortage had caused the price of tomatoes to rise significantly. As a result, Wendy's asked each customer who ordered a hamburger whether they would be willing to forego the slice of tomato that typically is included on a Wendy's hamburger. In this case, the firm's MC rose, and it made a special effort to enlist the help of customers in keeping MC down, all to avoid a price increase that might cost it market share if its competitors refused to follow the price increase.
The kinked demand curve model of oligopoly is useful for thinking about how stable prices may emerge in oligopolistic markets. It is subject to the criticism, however, that it only explains the stability of the product price and not its magnitude.
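To see the price rigidity result concretely, the sketch below works through a hypothetical kinked demand curve made of two linear segments. The intercepts and slopes are invented for illustration; they are not the curves in Figure 10.5 or 10.6.

```python
# Hypothetical kinked demand curve (illustrative numbers only):
#   elastic segment (above the kink):   P = 100 - 0.5*Q   for Q <= 40
#   inelastic segment (below the kink): P = 140 - 1.5*Q   for Q >= 40
# Both segments pass through the kink at (Q, P) = (40, 80).
# For a linear inverse demand P = a - b*Q, marginal revenue is MR = a - 2*b*Q.

kink_q, kink_p = 40, 80
mr_upper_at_kink = 100 - 2 * 0.5 * kink_q   # = 60, end of the MR segment for the elastic branch
mr_lower_at_kink = 140 - 2 * 1.5 * kink_q   # = 20, start of the MR segment for the inelastic branch

print(f"At q1 = {kink_q}, MR jumps from {mr_upper_at_kink} down to {mr_lower_at_kink}.")

# Any marginal cost curve that crosses MR inside this vertical gap leads to the
# same profit-maximizing output and price, so moderate cost changes leave price unchanged.
for mc in (25, 40, 55):
    print(f"MC = {mc} lies in the gap: output stays at {kink_q}, price stays at {kink_p}")
```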
For that reason, neoclassical economists have several other ways of thinking about oligopolistic behavior.
Two Additional Models: Collusive Oligopoly and Price Leadership
The kinked demand model of non-collusive oligopoly assumes that the firms do not explicitly agree on price. Each firm chooses its price and output independently even though each carefully considers what its competitors are doing and will do in response to a price change. If firms do collude, then essentially the firms will act as a pure monopoly. Such a collusive oligopoly is referred to as a cartel. Quite simply, a cartel is a group of firms that behaves as a single firm with respect to certain decisions, which might include the product price, the output level, or the market share for each cartel member. The most famous example of a cartel is probably the Organization of Petroleum Exporting Countries or OPEC. This international oil cartel, made up primarily of Middle Eastern oil producers, has manipulated oil prices at different times since it was first formed in the 1960s.
The major difference between a cartel and a purely monopolistic firm is that a cartel is much less likely to remain intact over time for several reasons.[7] First, cartel members are often located in different geographic regions. The individual demands that they face and the production costs they incur may therefore be significantly different, which makes it difficult for the firms to agree on an appropriate product price. Second, cartel agreements tend to be weaker when a larger number of firms belong to the cartel. That is, it is easier to reach and maintain a pricing agreement among three firms than it is among eight firms. Third, the temptation to cheat on the pricing arrangement is immense because a firm can capture a greater market share by secretly undercutting the other cartel members. When all firms begin to cheat, the cartel falls apart. Such temptations are greatest during periods of economic recession when falling demand leads to desperate attempts to attract new customers. Fourth, cartels are often slow to form because the firms in the market fear that a high cartel price will attract new competition in the market. Finally, the antitrust laws discussed in the last chapter serve as a major barrier to cartel formation in the United States. Firms accused of price fixing face federal prosecution, steep fines, and even criminal punishment for their efforts to rig markets in their favor.
In other markets, market-sharing agreements may emerge in which one large firm acts as the price leader and many small competitors follow along by setting the same price as the leader. The price leader ends up sharing the market with many small firms by means of an implicit agreement. For example, U.S. Steel acted as the price leader in the steel industry during the early twentieth century. U.S. Steel would publish price catalogs periodically in which it would announce its new prices. Smaller steel companies would inevitably follow along. When the steel companies deviated from this policy during the panic of 1907, U.S. Steel retaliated with even steeper price cuts that severely punished the smaller steel companies. With this implicit threat in place, the steel companies quickly learned that they must continue to follow the lead of U.S. Steel in the pricing of steel products. In the U.S. automobile industry in the mid-twentieth century, a similar pattern arose with General Motors serving as the price leader.
The president of GM would announce new automobile prices in a speech and the other automakers would fall into line with similar prices. Neoclassical economists use a price leadership model to investigate how the established price, market demand, and the cost structures of the different firms collectively determine how the market is divided between the price leader and the smaller competitors.
Figure 10.7 provides a graph that represents the price leadership model for (let's say) the automobile market.[8] This graph is a bit involved and so we will discuss each piece before bringing all of them together to provide an explanation of how the market is divided when a price leader exists. First consider the market demand for automobiles (Dm). It is a downward sloping curve consistent with the law of demand. The small firms' supply curve, on the other hand, is upward sloping consistent with the law of supply. As the price increases, the quantity supplied of the small firms rises because the higher price covers the higher marginal cost of production. If no other firms exist in this market, then the market will be perfectly competitive and the equilibrium price will be the competitive price of P2 where the supply of the small firms and market demand intersect.
Another firm does exist in this market, however, and it is the price leader. We wish to know what the demand curve is that faces the price leader in this market. We can derive the demand curve facing the price leader (DL) by asking how much of the market demand remains at each price after the small firms have selected their quantity supplied. For example, at a price of P2 the small firms produce enough to satisfy the entire market demand. Nothing remains then for the price leader and so the quantity demanded of the price leader's product is zero at a price of P2. This point is one point on the demand curve facing the price leader. On the other hand, at a price of P1, the small firms produce nothing. The quantity demanded facing the price leader in that case is the entire quantity demanded in the market. This point is a second point on the demand curve facing the price leader. As the price declines from P2 to P1, the small firms' quantity supplied falls continuously causing the price leader to acquire an increasing amount of the market demand until the price leader captures the entire market. The demand curve facing the price leader is, therefore, a downward sloping demand curve connecting the two points just mentioned.
Once we have determined the demand curve facing the price leader, it is a short step to obtain the MR curve for the price leader (MRL). It is also downward sloping and falls more quickly than the demand curve for the very same reasons provided previously whenever we have derived the MR curve from a downward sloping demand curve. Additionally, the MC curve for the price leader (MCL) is upward sloping due to the rising marginal cost of production, which stems from diminishing returns to labor. We now have all the component parts, which we can bring together to complete the price leadership model. The price leader considers the situation and aims to maximize its economic profit. It sets MRL equal to MCL, as any profit-maximizing firm does, and produces QL*. To sell this amount of output, it must set the price up on the demand curve facing the price leader at P*. If the price leader sets the price up on the market demand curve then the price leader will not be able to sell QL* and the firm's economic profit will not be maximized.
The small firms follow the price leader and set the same price of P*. The quantity supplied of the small firms shows up in two different ways in this graph. At a price of P*, we can look at the small firms' supply curve to see that the small firms will supply QS* at that price. Equivalently, the small firms produce enough to satisfy the market demand that remains after the price leader has set its output. Because QT* is the quantity demanded in the entire market at a price of P*, QS* must also equal the difference between QT* and QL*. That is, the small firms produce what is left over of the market demand after the price leader has taken its share of the market. We can express this result symbolically as follows:
$Q_{S}^*=Q_{T}^*-Q_{L}^*$
The market is thus divided between the price leader and the small firms. Clearly, the price leader could capture the entire market by setting the price all the way down at P1. The price leader has no incentive to do so, however, because it would produce far more output than is profit-maximizing. If the firm produced at that output level, MC would far exceed MR. Contrary to what one might expect, it is profit-maximizing for the price leader to share the market with the smaller firms.
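A small numerical sketch can make this division of the market concrete. The demand, supply, and cost functions below are invented for illustration (they are not the curves in Figure 10.7); the sketch simply follows the steps described above: subtract the small firms' supply from market demand to get the leader's residual demand, let the leader pick the price that maximizes its own profit, and let the small firms supply the remainder at that price.

```python
# Illustrative price leadership model with made-up linear functions.
# Market demand:        Q_T(P) = 100 - P
# Small firms' supply:  Q_S(P) = 2*(P - 20) for P >= 20, else 0
# Leader's cost: constant marginal cost of 10 per unit.

def market_demand(p):
    return max(0.0, 100 - p)

def small_firm_supply(p):
    return max(0.0, 2 * (p - 20))

def leader_residual_demand(p):
    # Whatever market demand remains after the small firms have chosen their supply.
    return max(0.0, market_demand(p) - small_firm_supply(p))

def leader_profit(p, mc=10):
    return (p - mc) * leader_residual_demand(p)

# Grid search over candidate prices for the leader's profit-maximizing price P*.
best_p = max((p / 100 for p in range(0, 10001)), key=leader_profit)
q_l = leader_residual_demand(best_p)
q_s = small_firm_supply(best_p)
q_t = market_demand(best_p)

print(f"P* = {best_p:.2f}, Q_L* = {q_l:.1f}, Q_S* = {q_s:.1f}, Q_T* = {q_t:.1f}")
print(f"Check: Q_S* = Q_T* - Q_L* -> {q_t - q_l:.1f}")
# With these illustrative numbers, P* comes out near 28.33, with the leader serving
# about 55 units and the small firms about 17, so the market is shared as in the text.
```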
In general, oligopoly markets tend to be inefficient. In the kinked demand model, the cartel model (i.e., the pure monopoly model), and the price leadership model, the firms set price above MC. All such firms restrict output to maximize their economic profits. On the other hand, the large economic profits of oligopolistic firms enable them to invest in new production technologies that can enhance efficiency. Whether and to what extent these investments offset or cancel out the inefficiencies of oligopolistic firms is a question that may vary by industry.[9]
A Detour into Game Theory
Neoclassical economists also use an analytical method referred to as game theory to investigate the competitive interaction of oligopolistic firms. Game theory has been applied to many other topics in economics as well as other fields, including biology and political science. It is an analytical method that makes possible the study of strategic interaction between individual agents or players. In this section, we will discuss simultaneous games, in which all players simultaneously choose their strategies or courses of action.
The first game that we will consider has only two players: Player A and Player B. Each player has two possible strategies from which to choose. That is, a player may set a high price or a low price. Figure 10.8 shows a payoff matrix with four possible outcomes. If both players set a high price, then outcome A is the result. If both players set a low price, then outcome D is the result. If Player A sets a high price and Player B sets a low price, then outcome B is the result. Finally, if Player A sets a low price and Player B sets a high price, then outcome C is the result. Each outcome has a set of payoffs associated with it, which may be interpreted as the economic profits that each player receives if that outcome occurs. If the payoff is negative, then it may be interpreted as an economic loss. Our purpose is to determine which of the four outcomes will occur when the players simultaneously choose their strategies. The reader may be tempted to select outcome A because both players receive the highest payoffs when this outcome occurs. Although in this case outcome A is the correct outcome, this approach frequently does not lead to the correct outcome.
The following approach may be used to arrive at the correct outcome:
1. If Player A sets a high price, then Player B sets a high price (since 200 > 75).
2. If Player A sets a low price, then Player B sets a high price (since 75 > -100).
In this case, we say that a high price is a dominant strategy for Player B, meaning that the best response of Player B is always to select the same strategy regardless of what Player A chooses to do. We now investigate Player A's best responses to Player B's possible strategies.
1. If Player B sets a high price, then Player A sets a high price (since 200 > -100).
2. If Player B sets a low price, then Player A sets a high price (since -50 > -100).
In this case, Player A also has a dominant strategy, which is to always set a high price. Both players, therefore, have dominant strategies. Because both players will choose to set a high price, the outcome of the game is outcome A, and each player receives a payoff of 200. When the equilibrium outcome of the game results from both players having dominant strategies, the outcome is called a dominant equilibrium. Outcome A is, therefore, a dominant equilibrium.
Consider the second game represented in Figure 10.9. We will follow the same procedure that we used with the first game to determine the outcome of the game.
1. If Player A sets a high price, then Player B sets a low price (since 135 > 90).
2. If Player A sets a low price, then Player B sets a low price (since 35 > 18).
In this case, Player B has a dominant strategy to always set a low price regardless of the strategy that Player A chooses. Furthermore:
1. If Player B sets a high price, then Player A sets a low price (since 145 > 90).
2. If Player B sets a low price, then Player A sets a low price (since 35 > 19).
In this case, Player A also has a dominant strategy to always set a low price regardless of the strategy that Player B chooses. Since both players opt to set a low price, the outcome of the game is outcome D, and each player receives a payoff of 35. Because both players have dominant strategies, the outcome of the game is a dominant equilibrium.
The outcome of this game is especially interesting because each player receives a lower payoff than he or she would have received had both players set high prices. Essentially, what happens here is that a price war leads to a suboptimal outcome for both players. The reader might conclude that collusion would prevent this problem from arising. If each player agrees to charge a high price, then outcome A can be achieved. Is that true? Suppose the two players agree to charge a high price. Each player knows that its best response to the other player charging a high price is to charge a low price. Player A, for example, will enjoy a payoff of 145 rather than 90 by undercutting Player B. Similarly, Player B will enjoy a payoff of 135 rather than 90 when undercutting Player A. In other words, both players have an incentive to cheat on their pricing agreement. Since both players aim to maximize their payoffs, the pricing agreement falls apart. Game theory thus sheds light on the reason that cartel agreements are difficult to maintain except for short periods. The incentive to cheat is simply too great.
Consider a third game represented in Figure 10.10. Again, we will follow our method of determining the outcome of the game.
1. If Player A sets a high price, then Player B sets a low price (since 275 > 190).
2. If Player A sets a low price, then Player B sets a high price (since 30 > 15).
We see here that Player B does not have a dominant strategy since Player B's best response depends upon Player A's choice of strategy. Furthermore:
1. If Player B sets a high price, then Player A sets a low price (since 135 > 100).
2. If Player B sets a low price, then Player A sets a low price (since 10 > 5).
Player A clearly has a dominant strategy to always set a low price regardless of which strategy Player B chooses. Still, in this case, the outcome is not immediately obvious since Player B does not have a dominant strategy. Nevertheless, we can arrive at the outcome if we assume that each player is able to anticipate the decision-making process of the other player. For example, if Player B can see that Player A will always set a low price no matter what Player B chooses, then Player B will choose her best response to Player A setting a low price. In this case, Player B will set a high price because setting a high price is Player B's best response to Player A setting a low price. Outcome C is the result of this game.
Because an outcome is called a dominant equilibrium only when both players have dominant strategies, outcome C is not a dominant equilibrium. Nevertheless, the outcome is a Nash equilibrium, a concept named after John Nash, who developed it in the 1950s and was awarded the Nobel Prize in Economics in 1994 for this work. A Nash equilibrium exists when every player has chosen her best response given the strategies of the other players. Outcome C (A low and B high) is a Nash equilibrium because Player A's best response is to always set a low price. Similarly, Player B's best response to Player A's low price is a high price. Clearly, not every Nash equilibrium is a dominant equilibrium as this example shows, but is every dominant equilibrium a Nash equilibrium? The answer is yes. A dominant equilibrium exists whenever both players have selected their dominant strategies and these strategies are always their best responses to whatever the other players have chosen. Figure 10.11 shows the relationship between the set of all possible Nash equilibria and the set of all possible dominant equilibria.
Consider now a fourth game. Again, we will follow our method to determine the outcome of the game.
1. If Player A sets a high price, then Player B sets a high price (since 100 > 0).
2. If Player A sets a low price, then Player B sets a low price (since 200 > 0).
In this case, Player B does not have a dominant strategy. Furthermore:
1. If Player B sets a high price, then Player A sets a high price (since 200 > 0).
2. If Player B sets a low price, then Player A sets a low price (since 100 > 0).
Player A also lacks a dominant strategy. It will not be possible to determine the outcome of the game as we did in the third game because neither player can be sure what the other player will do. Nevertheless, it is possible to identify two Nash equilibria in this game. Notice that if Player A sets a high price, then Player B's best response is to set a high price. Similarly, if Player B sets a high price, then Player A's best response is to also set a high price. Therefore, when each player sets a high price, each player has chosen her best response to the other player. Outcome A is thus a Nash equilibrium. It should also be noticed that if Player A sets a low price, then Player B's best response is to set a low price. Similarly, if Player B sets a low price, then Player A's best response is to set a low price. When each player sets a low price, then, each player has chosen her best response. Outcome D is thus also a Nash equilibrium. This game, therefore, has two Nash equilibria. Outcomes B and C are not Nash equilibria because neither player chooses her best response for those outcomes. Games with multiple Nash equilibria are thus possible.
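The best-response bookkeeping in these games is mechanical enough to automate. The sketch below checks every outcome of a two-player, two-strategy game to see whether each player is best-responding to the other, which is exactly the Nash equilibrium test described above. The payoff matrices themselves are not reproduced in this text, so the entries below are simply the values implied by the inequalities quoted for the first game and for the game with two Nash equilibria.

```python
# Payoffs stored as {(A's strategy, B's strategy): (A's payoff, B's payoff)}.
# The numbers below are those implied by the comparisons quoted in the text.

game_one = {                      # both players have a dominant strategy: High
    ("High", "High"): (200, 200), ("High", "Low"): (-50, 75),
    ("Low", "High"): (-100, 75),  ("Low", "Low"): (-100, -100),
}
coordination_game = {             # the game with two Nash equilibria
    ("High", "High"): (200, 100), ("High", "Low"): (0, 0),
    ("Low", "High"): (0, 0),      ("Low", "Low"): (100, 200),
}

def nash_equilibria(game):
    """Return every outcome in which each player is best-responding to the other."""
    strategies = ("High", "Low")
    equilibria = []
    for a, b in game:
        a_payoff, b_payoff = game[(a, b)]
        a_best = all(a_payoff >= game[(alt, b)][0] for alt in strategies)
        b_best = all(b_payoff >= game[(a, alt)][1] for alt in strategies)
        if a_best and b_best:
            equilibria.append((a, b))
    return equilibria

print(nash_equilibria(game_one))           # [('High', 'High')]  -> outcome A
print(nash_equilibria(coordination_game))  # [('High', 'High'), ('Low', 'Low')]
```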
The final game we will consider also has a surprising outcome. Consider the fifth game represented in Figure 10.13. Again, we will follow our method to determine the outcome of the game.
1. If Player A sets a high price, then Player B sets a high price (since 200 > 110).
2. If Player A sets a low price, then Player B sets a low price (since 10 > -30).
In this case, Player B does not have a dominant strategy since her best response depends upon Player A's choice of strategy. Furthermore:
1. If Player B sets a high price, then Player A sets a low price (since 150 > 100).
2. If Player B sets a low price, then Player A sets a high price (since 20 > 10).
Again, neither player has a dominant strategy. Therefore, the outcome cannot be determined as in the third game. Can we identify any Nash equilibria though? Let's see. If Player A sets a high price, then Player B will set a high price, but if Player B sets a high price, then Player A sets a low price. When considering this analysis of Figure 10.13, we fail to find the agreement we found when looking at the fourth game. Similarly, if Player A sets a low price, then Player B sets a low price, but if Player B sets a low price, then Player A sets a high price. Again, considering this analysis of Figure 10.13, we fail to find the agreement we found in the fourth game. Hence, no Nash equilibrium exists in this game, at least as long as each player must commit to a single strategy with certainty. An alternative method of verifying that no Nash equilibrium exists is to consider each of the four outcomes in the game. For each outcome, we should ask whether each player has chosen her best response to the other player. If we answer in the negative for either player, then that outcome cannot be a Nash equilibrium. As the reader can see, these games have many possible outcomes!
A Post-Keynesian Markup Pricing Model
Post-Keynesian economists have a different model of oligopolistic behavior than the neoclassical models that we have explored in this chapter. As explained in Chapter 1, post-Keynesian economics is a complex body of thought that derives mostly from the ideas of John Maynard Keynes but which also contains elements of classical political economy, Marxian economics, Sraffian economics, and neoclassical economics. In this chapter, we will explore the post-Keynesian theory of pricing in oligopolistic markets in such a way that we see how some of these elements are brought together to provide a unique explanation of firm behavior.
Post-Keynesian economists distinguish between two types of markets that they refer to as "flexprice" markets and "fixprice" markets.[10] The flexprice markets are intensely competitive, and prices are determined according to the laws of supply and demand.[11] The fixprice markets, on the other hand, are oligopolistic and have prices determined as the sum of the "normal cost of production" and a markup for profit.[12] Whereas flexprice markets tend to include markets for agricultural commodities and raw materials, fixprice markets include markets for finished, manufactured commodities.[13] For post-Keynesian theorists, the latter type of market is the most important for understanding modern capitalist economies.[14]
The key issue in fixprice markets is the determination of the profit markup.[15] Post-Keynesian economists argue that the profit markup is established according to firms' need for internal financing of new investment projects out of current profits.[16] The aim of these firms is to maximize sales revenue or market share rather than short run profit.[17] That is, if firms anticipate greater future demand for their products, then they will increase the profit markup. The larger markup will generate larger profits, which can be used to expand production plant capacity. Similarly, firms will reduce the profit markup if they anticipate a reduction in future demand for their products. The reduction will allow them to reduce the level of investment and avoid creating unnecessary additional plant capacity.[18] The change in the markup will cause the price to change in these markets. The only other factor that can cause the price to change in these markets is a change in the normal level of production cost.[19]
Why then are these markets referred to as fixprice markets? Because short run changes in current demand do not affect the product price. Instead, firms will expand production (or output) to meet short run surges in demand and will reduce output in response to short run drops in demand. Firms are assumed to possess some excess production capacity that allows them to make these short run adjustments. In flexprice markets, on the other hand, short run changes in demand do affect product prices, as explained in Chapter 3 in the context of the neoclassical supply and demand model.
It is possible to represent the post-Keynesian theory of oligopolistic pricing more precisely. Let's assume that the price is determined as the sum of short run unit cost (ATC) and the per unit profit markup as shown below:
$P=ATC+\text{markup}$
In this case, the ATC may be interpreted as the normal cost of production in the short run.[20] Figure 10.14 depicts the post-Keynesian markup pricing model. Suppose the firm initially charges a price of $1.00 per unit and produces an output of 100 units. The ATC is $0.80 per unit in this case, so the markup is $0.20 per unit. Notice that the firm has a small degree of excess capacity because it has not quite reached the lowest point on its ATC curve. The firm's total profit in this case is $20 (= $0.20 per unit times 100 units).
If a short run reduction in demand occurs, then the firm will maintain the price of $1.00 per unit but will reduce output from 100 units to 90 units. The firm keeps the price the same because this market is a fixprice market and short run changes in demand do not affect price. Post-Keynesian economists explain the reluctance to change price as stemming from firms' fear that they might "spoil the market by damaging customer goodwill."[21] The firm experiences an increase in excess plant capacity because it has reduced its output. Due to the failure to fully utilize its plant capacity and the resulting loss of gains from specialization, the per unit cost increases to $0.90 per unit. The markup thus falls to $0.10 per unit and profits decline to $9.00 (= $0.10 per unit times 90 units).
Suppose next that the firm expects an increase in the future demand for its product. The managers believe that plant capacity must be increased to meet the larger future demand. As a result, the firm increases its profit markup from $0.10 per unit to $0.20 per unit. This change will allow the firm to earn enough additional profit to finance the plant expansion using internal funds. The firm thus increases its price to $1.10, and its profits now increase to $18 (= $0.20 per unit times 90 units). If a short run increase in demand then restores current demand to its previous level, the firm's profits increase to $30 (= $0.30 per unit times 100 units). The firm is now charging a price of $1.10 per unit and incurring unit costs of only $0.80 per unit, generating a per unit profit of $0.30 per unit on these 100 units.
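The arithmetic of the four situations just described can be summarized in a short calculation. The sketch below simply recomputes the prices and profits reported above from the markup pricing rule P = ATC + markup; the scenario labels and function names are ours.

```python
# Markup pricing rule from the text: price = ATC + per-unit markup,
# and total profit = markup * output. The numbers match the example above.

def markup_price(atc, markup):
    return atc + markup

scenarios = [
    # (label,                                   output, ATC,  markup)
    ("initial situation",                          100, 0.80, 0.20),
    ("short run fall in demand",                    90, 0.90, 0.10),
    ("markup raised to fund investment",            90, 0.90, 0.20),
    ("demand restored (price held at $1.10)",      100, 0.80, 0.30),
]

for label, q, atc, markup in scenarios:
    price = markup_price(atc, markup)
    profit = markup * q
    print(f"{label}: P = ${price:.2f}, profit = ${profit:.2f}")
# Output reproduces the $20, $9, $18, and $30 profit figures in the text.
```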
This analysis shows how oligopolistic firms may alter their production levels in the short run while maintaining constant prices in the face of changing current demand. At the same time, it shows how prices may be changed to support investment projects that will help satisfy higher expected future demand. The analysis also shows how profits vary at a microeconomic level as current demand rises and falls. This pattern of rising and falling profits with changing demand conditions is perfectly consistent with the post-Keynesian analysis of the business cycle that we will discuss in Chapter 14. The consistency of post-Keynesian microeconomic analysis and macroeconomic analysis is a point in its favor given how much difficulty neoclassical economists have faced with their efforts to create a synthesis of microeconomic and macroeconomic theory.
The model also shows how post-Keynesian economics is a blend of different bodies of thought. The belief that markets often fail to clear automatically is a central feature of Keynes's perspective. The suggestion that prices are determined primarily by production cost is consistent with classical economics. The assumption that labor specialization influences per unit cost is a feature of neoclassical cost analysis. Finally, the assumption that different rates of profit in different industries (stemming from long run changes in demand) lead to capital expansions in high-profit industries and capital contractions in low-profit industries is certainly a feature of Marxian economics. We will learn more aspects of post-Keynesian theory when we turn to macroeconomics.
Following the Economic News [22]
The New Zealand Herald recently reported that the major oil companies in New Zealand operate in an oligopolistic market structure.
According to the newspaper, the oil companies have been gouging the public with "loyalty schemes, occasional discounting, sundry competitions and giveaways to disguise the fact that they actually hate competing directly on price." As the kinked demand model of oligopoly implies, oligopolistic firms do not want to compete based on price because price cuts lead to price wars and price hikes lead to lost market share (since competitors do not follow price hikes). As the newspaper comments, firms in an oligopolistic market all lose as the consumer gains in a "genuine price war." Therefore, oligopolists have an incentive to leave price relatively unchanged and instead compete using other tactics (at best) or only offer the illusion of competition (at worst).
The newspaper also explains how the oligopolists in the oil industry keep competitors out of the market. The three largest oil companies in the country "control the supply chain infrastructure for the wholesale market for fuel" making it "logistically difficult for new firms, such as Gull, to enter the fuel market." Barriers to entry are an important factor in maintaining the market power of the oligopolists, and this control of the supply chain infrastructure constitutes one such barrier to entry.
The newspaper explains that the oil market is not the only oligopolistic market in New Zealand. It mentions "banks, building suppliers, phone companies, electricity companies and supermarket chains" as additional examples. To confuse consumers who might be price conscious, "pricing plans for cellphone data or electricity are often a confusing maze of packages with all sorts of bundling and differing terms and conditions." The newspaper explains that this policy minimizes direct price competition between firms. In a clever twist on Adam Smith's invisible hand, the author of this article likens the invisible appendage in this case to a "gnarled, grasping claw," indicating that the social benefits are not as mutual and widespread as users of Smith's metaphor would often have us believe. In this scenario, the greater benefits flow to the firms, and it is the consumers who lose.
Summary of Key Points
1. The four-firm concentration ratio and the Herfindahl-Hirschman Index (HHI) are two measures of industry concentration, but the HHI is the more precise measure.
2. Monopolistically competitive markets have low degrees of industry concentration and each firm has a small amount of market power, which derives from product differentiation.
3. Monopolistically competitive firms may experience short run profits or losses, but in the long run economic profits are zero due to the lack of entry barriers.
4. Oligopolistic markets have high degrees of industry concentration, and each firm behaves in a strategic fashion with respect to its rivals.
5. The kinked demand curve model explains price rigidity in non-collusive oligopoly markets.
6. Cartels arise in collusive oligopoly markets and function much like pure monopolies, but such agreements are difficult to maintain for a variety of reasons.
7. The price leadership model is used to explain how markets are divided between a large, dominant firm and many small competitors.
8. The outcome of a game is a dominant equilibrium when both players have dominant strategies (i.e., they always choose the same strategy as their best response).
9. The outcome of a game is a Nash equilibrium when each player offers her best response given the other player's choice of strategy.
10. In post-Keynesian pricing theory, oligopolistic firms adjust output in response to short run changes in demand and only change price when normal costs change or when the markup is adjusted to ensure sufficient funds for investment projects.
List of Key Terms
Monopolistic Competition
Oligopoly
Market share
Four-firm concentration ratio
Herfindahl-Hirschman Index (HHI)
Unconcentrated
Moderate degree of concentration
High degree of concentration
Excess capacity theorem of monopolistic competition
Game theory
Duopoly market
Supply price
Kinked demand curve model of non-collusive oligopoly
Cartel
Price leadership model
Players
Simultaneous games
Strategies
Payoff matrix
Outcomes
Payoffs
Dominant strategy
Best response
Dominant equilibrium
Suboptimal outcome
Nash equilibrium
Flexprice markets
Fixprice markets
Markup
Problems for Review
1. Suppose in an industry with annual sales of $8 million, Firm 1 has annual sales of $1.4 million. What is Firm 1's annual market share (S1)?
2. Suppose an industry is composed of five firms with the following market shares: 17, 25, 8, 27, and 23.
• Calculate the industry’s four-firm concentration ratio.
• Calculate the Herfindahl-Hirschman index for the industry.
• Would the United States Department of Justice designate the industry as unconcentrated, moderately concentrated, or highly concentrated?
3. Suppose the price leader sets the profit-maximizing price as represented in Figure 10.15. How much output does the dominant firm produce and how much output do the small firms produce?
4. Consider the game in Figure 10.16:
• Which players have dominant strategies in the game?
• What is the outcome of the game?
• Is the outcome of the game a dominant equilibrium?
• Is it a Nash equilibrium?
5. Consider the game in Figure 10.17:
• Which players have dominant strategies in the game?
• What is the outcome of the game?
• Is the outcome of the game a dominant equilibrium?
• Is it a Nash equilibrium?
1. Antitrust Division. “Horizontal Merger Guidelines (08/19/2010): 5.3 Market Concentration.” The United States Department of Justice. Web. Updated June 25, 2015. Accessed on May 11, 2018. https://www.justice.gov/atr/horizontal-merger-guidelines-08192010#5c
2. All the figures for unconcentrated, moderately concentrated, and highly concentrated industries are from the United States Census Bureau. “Manufacturing: Subject Series – Concentration Ratios: Share of Value Added Accounted for by the 4, 8, 20, and 50 Largest Companies for Industries: 2002.” 2002 Economic Census of the United States. Web. Accessed on May 11, 2018.
3. For practical purposes, economists frequently calculate the HHI using the top 50 or top 20 firms in the industry even though technically all firms’ market shares should be included in the calculation.
4. Reed, Brian. “Money in a Bottle: The Celebrity Scent Business.” National Public Radio. November 6, 2009.
5. Samuelson and Nordhaus (2001), p. 212, present this case in a rather different way but arrive at the same result.
6. See Sweezy, Paul. "Demand Under Conditions of Oligopoly." Journal of Political Economy. Vol. 47, 1939. pp. 568-573, which is also cited in Keat et al.'s (2013), p. 367, summary of the kinked demand model of oligopoly.
7. Similar lists of factors that influence cartel formation are provided in many neoclassical textbooks. See Keat et al. (2013), pp. 391-392, and McConnell and Brue (2008), pp. 459-460.
8. See Varian (1999), p. 476, for a similar graphical representation of the price leadership model.
9. McConnell and Brue (2008), p. 464, emphasize this point.
10. Kenyon (1978), p. 34.
11. Ibid. p. 34.
12. Ibid. p. 34.
13. Ibid. pp. 34-35.
14. Ibid. p. 35.
15. Ibid. p. 41.
16. Ibid. pp. 38-39.
17. Ibid. pp. 37-38.
18. Ibid. pp. 39-40.
19. Ibid. p. 40.
20. The model in this section is based on the model presented in Snowdon, et al. (1994), pp. 370-372.
21. Snowdon et al. (1994), p. 372.
22. “Invisible Hand of Market an Ugly Claw.” The New Zealand Herald. 22 Aug. 2019. Opinion: p. A026.
Goals and Objectives:
In this chapter, we will do the following:
1. Describe the neoclassical theory of the market for labor
2. Explore the neoclassical theory of monopsonistic labor markets
3. Consider whether neoclassical monopsony theory represents a theory of exploitation
4. Analyze the case of bilateral monopoly in the labor market
5. Investigate the Marxian theory of the market for labor-power
6. Explain how changes in the character of production influence prices in Marxian theory
Up until this chapter, the focus has been almost exclusively on markets for goods and services sold to the final consumer. Factor markets (also referred to as input markets or resource markets) include the markets for labor, capital, and land. As the reader might expect, different schools of economic thought possess different theories of how these markets function. In this chapter, we will concentrate on the market for labor. We will take a close look at how the labor market operates from a microeconomic perspective, according to neoclassical economists and Marxian economists.
The Neoclassical Theory of the Demand for Labor
In neoclassical economic theory, product markets determine product prices and quantities exchanged. Similarly, neoclassical economists argue that labor markets determine wage rates and employment levels. The theory is essentially a story of supply and demand, much like the one we discussed regarding product markets. A sophisticated analysis underlies this story of supply and demand. This underlying story is developed at length in this section.
We begin with the assumption that the market supply of labor is upward sloping. That is, it is assumed that as the wage rate increases, the quantity supplied of labor rises as well, other factors held constant. Furthermore, it is assumed that the labor market is perfectly competitive such that each employer takes the market wage as given and so is a wage-taker. In other words, no single employer has any power to influence the wage that is paid. In this case, the labor supply curve facing the perfectly competitive firm is completely horizontal. This situation is depicted in Figure 11.1.
If an employer reduces the wage paid below the market wage even by a small amount, the quantity supplied of labor will fall to zero. That is, all workers will seek jobs from other employers who are offering the market wage. Similarly, the smallest increase in the wage above the market wage will lead to a sharp (infinite) increase in the quantity supplied of labor. In other words, the wage elasticity of labor supply facing a single employer is infinite in the case of a perfectly competitive labor market.
To understand how much labor an employer will hire to maximize its economic profit, we need to explore the implications of this wage-taking behavior for the firm’s production costs. One concept that is important for this purpose is total resource cost (TRC). The TRC is the total cost of purchasing labor or the total wage bill. It is defined more precisely as follows:
$TRC=wL$
The relationship between the horizontal labor supply curve facing an employer and the employer’s TRC curve is shown in Figure 11.2.
As Figure 11.2 shows clearly, the TRC grows continuously as more labor is hired due to the constant wage rate that must be paid to each worker.
Furthermore, we can define average resource cost (ARC) as the average cost per worker hired. That is, if we were to spread out the total labor cost over the number of workers hired, then we would have the ARC. The ARC is defined precisely as follows, which it turns out can be reduced to the wage:
$ARC=\frac{TRC}{L}=\frac{wL}{L}=w$
This result should not be surprising. It simply means that, on average, the cost of a unit of labor is the wage. Because the wage is given, it follows that the wage must be the average cost of this resource.
Additionally, we can define the marginal resource cost (MRC) as the additional resource cost incurred when an additional unit of labor is hired. Because the TRC grows by a constant amount equal to the wage rate with the purchase of each additional unit of labor, the MRC is the wage rate. It can be defined more exactly as follows:
$MRC=\frac{\Delta TRC}{\Delta L}=w$
The reader should notice that the MRC is equal to the slope of the TRC curve. The slope of the TRC curve, of course, is equal to the wage rate. Finally, it should be noted that because the ARC and the MRC are both equal to the wage rate, the ARC and MRC curves will be identical to the horizontal labor supply curve facing an employer. Figure 11.3 adds the ARC and MRC curves to a graph of the horizontal labor supply curve facing one employer.
Table 11.1 provides a numerical example that includes calculations of TRC, ARC, and MRC.
As expected, the MRC and ARC are equal to the wage rate, and the TRC grows continuously with employment due to the constant wage rate.
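The relationships in Table 11.1 are easy to verify computationally. The following is a minimal Python sketch; the $10 wage and the range of employment levels are illustrative assumptions rather than the values in Table 11.1.

```python
# Minimal sketch: resource cost measures for a wage-taking (perfectly competitive) employer.
# The $10 wage and the employment levels are illustrative assumptions, not the data in Table 11.1.

def resource_costs(wage, max_labor):
    """Return (L, TRC, ARC, MRC) for each employment level at a constant wage."""
    rows = []
    prev_trc = 0.0
    for L in range(1, max_labor + 1):
        trc = wage * L          # TRC = wL
        arc = trc / L           # ARC = TRC / L = w
        mrc = trc - prev_trc    # MRC = change in TRC from one more unit of labor = w
        rows.append((L, trc, arc, mrc))
        prev_trc = trc
    return rows

for L, trc, arc, mrc in resource_costs(wage=10.0, max_labor=5):
    print(f"L={L}  TRC={trc:.2f}  ARC={arc:.2f}  MRC={mrc:.2f}")
```

As expected, the printed ARC and MRC columns both equal the wage at every employment level, while the TRC column grows by the wage with each additional unit of labor.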
The employer must consider the effect on revenue of hiring additional labor as well as the effect on cost. As a result, we need to introduce a new concept that neoclassical economists refer to as the marginal revenue product (MRP) of labor. To understand this concept, we need to return to the total product (TP) and marginal product (MP) curves that were first introduced in chapter 7. Figure 11.4 shows the graphs of the total product and marginal product curves.
The reader should recall that marginal product is simply the slope of the total product curve. In the short run, MP rises due to specialization and division of labor at low employment levels and then falls due to diminishing returns to labor at higher employment levels. The MRP refers to the additional revenue earned from the purchase of an additional unit of labor, which can be defined as follows:
$MRP=\frac{\Delta TR}{\Delta L}$
The MRP can be further expanded as follows:
$MRP=\frac{\Delta TR}{\Delta L}=\frac{\Delta TR}{\Delta Q} \cdot \frac{\Delta Q}{\Delta L}=MR \cdot MP$
In other words, the MRP is the mathematical product of the firm’s marginal revenue and marginal product of labor. Finally, we learned in Chapter 8 that a perfectly competitive firm’s marginal revenue is equal to the given market price. If we assume that this firm is a perfectly competitive producer, then the MRP can be written as follows:
$MRP=MR \cdot MP=P \cdot MP$
This result is very intuitive. The MRP is the additional revenue that a firm earns from hiring another unit of labor. When the additional unit of labor is purchased, it will produce some additional output. This additional output is the marginal product (MP). The additional output is then sold at the given market price (P) in a perfectly competitive market. The additional revenue generated is the MRP.
A closely related concept is the average revenue product (ARP) of labor. It is simply the total revenue per worker hired, which may be defined as follows:
$ARP=\frac{TR}{L}$
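To make the MRP and ARP calculations concrete before turning to Figure 11.5, here is a minimal Python sketch. It assumes the same $2 product price used in the example, but the total product schedule is a made-up illustration, not the data behind the figure.

```python
# Minimal sketch: marginal and average revenue product for a perfectly competitive firm.
# The $2 price matches the example in the text; the total product schedule is a
# hypothetical illustration, not the data behind Figure 11.5.

price = 2.0                              # given market price (the firm is a price-taker)
total_product = [0, 5, 12, 18, 22, 24]   # TP at L = 0, 1, 2, 3, 4, 5 (assumed)

for L in range(1, len(total_product)):
    mp = total_product[L] - total_product[L - 1]   # marginal product of the Lth unit of labor
    mrp = price * mp                               # MRP = P x MP
    arp = price * total_product[L] / L             # ARP = TR / L = P x TP / L
    print(f"L={L}  MP={mp}  MRP={mrp:.2f}  ARP={arp:.2f}")
```

With this schedule, the MRP rises while specialization raises the marginal product and then falls once diminishing returns set in, just as the text describes.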
Figure 11.5 shows the calculation of the MRP and the ARP for a perfectly competitive firm that faces a constant market price of $2 per unit. As Figure 11.5 shows, the MRP rises and then falls as employment rises. Because the MRP equals the product price times the marginal product of labor and the product price is constant, the MRP will rise due to the specialization of labor, but then it must fall because of the fall in marginal product. That is, the MRP falls due to diminishing returns to labor. A similar argument explains the shape of the ARP curve. Figure 11.6 provides a graph of the MRP and ARP curves.

To maximize economic profit, the employer hires labor up to the point where the MRP equals the MRC, and because the MRC of a wage-taking employer is simply the market wage, the employer hires up to the point where MRP = w. It is easy to see that as the wage rate falls, the quantity of labor demanded increases. In other words, the labor demand curve is downward sloping. Furthermore, because the quantity of labor demanded is determined at points of intersection between the wage rate and the MRP, the MRP curve is the perfectly competitive employer's labor demand curve.

A second condition is necessary to ensure that economic profits are maximized in the short run. Specifically, it must be proven that the employer cannot earn a larger economic profit by shutting down in the short run. To prove this point, we return to the shut-down rule that we learned when we first discussed how a perfectly competitive firm maximizes economic profits in Chapter 8. In Chapter 8, it was shown that a perfectly competitive firm should only operate when price is at least as great as average variable cost (P ≥ AVC). We may derive a similar shut-down rule for a perfectly competitive employer as follows:

$P\geq AVC$

$P \cdot Q \geq AVC \cdot Q$

$TR \geq TVC$

$\frac{TR}{L} \geq \frac{TVC}{L}$

$\frac{TR}{L} \geq \frac{wL}{L}$

$ARP \geq w$

In other words, the wage paid must be less than or equal to the ARP. Otherwise, the employer should shut down in the short run. Therefore, when we trace out the labor demand curve, the only relevant portion of the MRP curve is the part that lies below the ARP curve, as shown in Figure 11.8. Because the MRP curve intersects the ARP curve at the maximum point on the ARP curve, the highest wage rate at which a positive amount of labor is demanded is the maximum ARP. If the wage rate rises above this point, then the employer will shut down and demand no labor.

Neoclassical economists assert that the demand for labor (or any input) is a derived demand. That is, labor demand is derived from the demand for the product that the labor produces. For example, the demand for engineers depends on the demand for new construction. If firms invest in the construction of more bridges, dams, and skyscrapers, then they will need to hire more engineers. As we learned in Chapter 3, if the demand for a product rises, then the market price will increase, other factors held constant. The rise in the market price will cause the marginal revenue product of labor to increase because the output that workers produce can now be sold at a higher price. This change causes an outward shift of the MRP curve. Because the MRP curve is the employer's labor demand curve, the labor demand curve shifts outward. Therefore, a rise in the demand for a product causes a rise in the demand for the labor that produces it. More generally, we can identify two changes that can shift the MRP curve and thus the labor demand curve. Recalling that the MRP is equal to the product price times the MP of labor, we can identify these two changes as follows:
1. Any factor that raises the price of the product will increase the MRP of labor and thus the demand for labor.
2. A change in production technology, or any other change that increases the marginal product of labor, will increase the MRP of labor and thus the demand for labor.

Now that we have derived the labor demand curve for a perfectly competitive employer, it is a short step to obtain the demand curve for the entire labor market. We simply use horizontal summation to aggregate the individual labor demand (or MRP) curves of many different employers. The downward sloping labor market demand curve that results from this aggregation process is shown in Figure 11.9.

The Neoclassical Theory of the Supply of Labor

Neoclassical theorists have also developed a theory of labor supply. According to this theory, individual workers allocate their available time between working time and leisure time to maximize utility. In this theory, work is regarded as undesirable for its own sake, but it provides wage income that can be used to purchase commodities. Leisure time, on the other hand, is generally regarded as desirable. Because wage income is desired to acquire consumer goods, the wage rate represents the opportunity cost of one hour of leisure time. That is, by choosing to enjoy an hour of leisure time, a worker sacrifices the wage that could be earned. We can use the modern theory of utility maximization to represent the problem facing the individual worker.[1]

To begin, we consider the time constraint that the worker faces and represent this constraint in much the same way that we represented the budget constraint facing a consumer in Chapter 6. In this theory, the worker has a total amount of time (T) available each day for either work or leisure. T will generally be less than 24 hours because the worker is unavailable for either work or leisure during sleeping hours. The hours spent working (h) and the hours of leisure time (l) add up to the total time available in the day as shown below:

$T=h+l$

Furthermore, the daily income (Y) is equal to the wage rate (w) times the number of hours spent working (h) as shown below.

$Y=wh$

If we rearrange the above equation such that h = Y/w, then we can rewrite the total amount of time available in the day in the following way:

$T=\frac{Y}{w}+l$

Solving this equation for Y, we obtain the following result:

$Y=wT-wl$

It should be noted that w and T are given constants in this equation and Y and l are the only variables. If we graph this result as shown in Figure 11.10, then we obtain a clearer picture of the income/leisure tradeoff facing the individual worker.

Just as the individual consumer's preferences for goods may be represented using indifference curves, the individual worker's preferences for income and leisure may be represented using indifference curves as shown in Figure 11.11. In Chapter 6, we learned that the downward slope of an indifference curve indicates that the consumer is willing to trade off one good for another. Similarly, the downward slope of the indifference curve represented in Figure 11.11 indicates that the worker is willing to trade off income for leisure and vice versa. We also learned in Chapter 6 that the slope of the indifference curve is called the marginal rate of substitution (MRS) and that this slope becomes flatter as the individual moves along the indifference curve.
The reason for the change in the slope is that as the worker obtains more leisure, her willingness to trade off additional income for an additional hour of leisure decreases. This diminishing marginal rate of substitution is somewhat like diminishing marginal utility. As explained in Chapter 6, however, diminishing MRS depends entirely on an ordinal notion of utility. We can also rewrite the MRS as the negative ratio of the marginal utilities of leisure and income. Because the worker's utility remains the same all along the indifference curve, we can write the following equation:

$\Delta TU=MU_{Y} \cdot \Delta Y+MU_{l} \cdot \Delta l = 0$

This equation states that the change in total utility as the worker moves along an indifference curve is equal to the product of the marginal utility of income (MUY) and the change in income (ΔY) plus the product of the marginal utility of leisure (MUl) and the change in leisure (Δl). The entire sum is equal to zero because total utility remains constant along the indifference curve. Solving for the MRS generates the following result:

$MRS=\frac{\Delta Y}{\Delta l}=-\frac{MU_{l}}{MU_{Y}}$

As the worker moves to the right along the indifference curve, the amount of l that is chosen rises and the amount of Y that is chosen declines. As a result, the marginal utility of leisure declines relative to the marginal utility of income, implying diminishing MRS.

We can now represent the utility maximizing choice of the worker. In Figure 11.12, the worker maximizes utility at point A by choosing the amount of leisure (l*) and hours of work (h*) that yields an amount of income, Y*. At point A, the indifference curve passing through point A is tangent to the time line representing the worker's time constraint. Because the slopes of these curves must be the same, the following condition must hold:

$MRS=-\frac{MU_{l}}{MU_{Y}}=-w$

It is now possible to derive the individual worker's labor supply curve using the utility maximizing framework that we have developed. Figure 11.13 shows what happens when the wage rate increases. In Figure 11.13, the vertical intercept increases because the maximum possible income is now higher. Similarly, the slope increases (in absolute value) because the opportunity cost of leisure has increased with the higher wage rate. Because leisure has become more expensive to consume, the worker chooses to reduce the amount chosen from l1 to l2. The amount of work chosen correspondingly increases from h1 to h2. The quantity supplied of labor thus rises with the wage rate, which implies the upward sloping labor supply curve shown in the graph on the right in Figure 11.13.

On the other hand, it is possible that the worker will stop responding in this manner to the rise in the wage once the wage reaches a very high level. Suppose that the increase in the wage leads the worker to feel richer overall. As a result, the worker purchases more consumer goods but also decides to "purchase" more leisure time by working less. This situation is represented in Figure 11.14. In Figure 11.14, the wage rises from w1 to w2, and the worker cuts back on leisure time (from l1 to l2) as leisure becomes costlier. Similarly, the hours worked increase from h1 to h2 as before. Once the wage rises to w3, however, the worker increases leisure time from l2 to l3. A corresponding reduction in hours worked from h2 to h3 occurs.
This drop in the number of hours worked as the wage increases is represented in the graph on the right in Figure 11.14 as a backward bending labor supply curve.

Our final step is to aggregate the individual labor supply curves of every worker in the labor market. As before, we can use horizontal summation to obtain the labor market supply curve. Figure 11.15 shows two possible examples of the labor market supply curve. In the graph on the left in Figure 11.15, the labor market supply curve has the usual upward slope that we expect of a supply curve. As the wage rises, workers reduce their leisure time (which is more expensive) and work more to take advantage of the higher wage. The tendency to consume less leisure as the wage rises (other factors held constant) is referred to as the substitution effect in this context in the sense that the worker substitutes away from something that has become relatively more expensive to consume. In the graph on the right, however, workers respond to higher wages by eventually working less and consuming more of all goods, including leisure. The tendency to purchase more of all goods as one's income rises (other factors held constant) is referred to as the income effect. That is, the worker experiences a rise in real income and so decides to purchase more of everything. Although both effects are typically at work, whether an upward sloping or backward bending supply curve emerges depends on which effect is the stronger of the two. If the substitution effect dominates, then the labor supply curve will be upward sloping. If the income effect dominates, then the labor supply curve will be backward bending.

The Neoclassical Theory of Labor Market Equilibrium

Now that we have developed both the supply and demand sides of the labor market, we can bring them together to show how neoclassical economists explain the movement to equilibrium in these markets. Figure 11.16 shows two possible labor markets. In the graph on the left, a single equilibrium outcome occurs. The labor market is cleared of shortages as wages increase, and it is cleared of surpluses as wages decrease. Eventually, the market reaches an equilibrium wage rate and employment level. The market will remain at this point unless it is disturbed by a change in an external variable.

In the graph on the right, two different equilibrium outcomes are possible due to the backward bending supply curve, which causes a second intersection with the labor market demand curve. The lower equilibrium at w1 and L1 is the same as the one represented in the graph on the left. It is a stable equilibrium in the sense that a slightly higher or lower wage will lead to a surplus or shortage that will push the wage back in the direction of the equilibrium outcome. The upper equilibrium at w2 and L2, on the other hand, is rather different. If the wage falls below w2 by a small amount, then it will continue to fall due to the surplus that exists. Similarly, if the wage rises above w2 by a small amount, then it will continue to rise due to the shortage that exists. Because the wage tends to move further away from the equilibrium when pushed in either direction by a small amount, the equilibrium is an unstable equilibrium. The presence of an unstable equilibrium creates a risk of considerable market instability.

We have yet to mention the ideological significance of the neoclassical theory of the labor market.
The neoclassical model of a perfectly competitive labor market reaches the conclusion that each worker is paid according to that worker's contribution to production. Earlier in this chapter, it was shown that a perfectly competitive employer achieves maximum economic profits when the MRP is equal to the wage rate (the MRC). This conclusion means that when the labor market reaches equilibrium, each worker will receive a wage that is equal to the worker's contribution to the firm's revenue. From a purely ideological perspective, this result is a very powerful one. It means that workers are not exploited as Marxian economists assert. They draw from the social product an amount that is exactly equal to what they contribute. The marginal productivity theory of income distribution is implicitly a theory of distributive justice. That is, people receive what they deserve to receive, and what they deserve to receive stems from their productive contributions.

The theory has been criticized for a variety of reasons. One objection is that inequality may have its own undesirable social and economic consequences and that payment according to marginal revenue product might lead to extreme levels of inequality. A second objection is that the relationship between social classes (e.g., workers and capitalists) plays no role in the analysis as it does in Marxian economics. Due to the assumption of perfect competition, no employer or worker has any market power. The fact that some own the means of production while others lack means of production is given no significance in the model. Finally, the assumption of perfect competition in the labor market is one that opponents of the theory have sharply criticized. As we will see in the next section, when the assumption of perfect competition is dropped, the door to a neoclassical theory of exploitation is suddenly thrown open.

A Neoclassical Theory of Exploitation?

If we drop the assumption of perfect competition in the labor market, then how will the neoclassical analysis of the labor market change? In this section, we consider the case of imperfectly competitive labor markets. Specifically, we will consider the case of a single employer, also referred to as a monopsony employer. A monopsony exists in a market when only a single buyer exists. In the labor market, the employers are on the buyers' side of the market. Therefore, a monopsonistic labor market is a market with only a single buyer of labor.

Pure monopsonies, just like pure monopolies, are not very common, but sometimes firms approach monopsony status in certain markets. For example, Wal-Mart has been accused of acting as a monopsonist in certain markets in which it buys goods from suppliers. In those cases, Wal-Mart is by far the largest, or may be the only, buyer of a product from its suppliers. In the defense industry, the U.S. government may be the only purchaser of advanced weaponry from firms that produce such products. Monopsony employers, on the other hand, have existed in company towns, such as those dominated by General Motors in Flint, Michigan, or Carnegie Steel in Homestead, Pennsylvania. In company towns, the people may have limited mobility, and so they either work for the dominant employer, or they do not work at all.

As with any neoclassical model, we will start by identifying the model's main assumptions. We will assume that a single firm exists that is the sole buyer of labor. Furthermore, it is assumed that workers cannot easily move to a new location.
Because of these conditions, the monopsonist has the power to set the market wage. That is, the monopsonist has market power, much like the monopolist possessed market power (i.e., the power to set the market price of its product). Unlike the perfectly competitive employer who faces a horizontal labor supply curve, the monopsonist faces an upward sloping labor supply curve, as shown in Figure 11.17.

Suppose, for example, that when the wage rises from $11 per unit to $12 per unit, the quantity of labor supplied increases from 3 to 4 units. The MRC may be calculated by dividing the change in TRC by the change in L as follows:

$MRC=\frac{\Delta TRC}{\Delta L}=\frac{(12)(4)-(11)(3)}{4-3}=\frac{48-33}{1}=\$15\;per\;unit\;of\;labor$

It is possible to obtain this result in another way that is more intuitive. When the wage rate is increased from $11 to $12 per unit, an additional worker enters the market. How much does this increase in the wage add to cost? The additional worker is paid $12, but the reader should notice that the three workers, who were receiving $11 each, now receive $1 raises. Therefore, the total resource cost rises by $12 plus $3, or $15. This manner of proceeding is helpful in terms of understanding why the addition to total resource cost exceeds the wage paid. As the reader can observe in Figure 11.19, the MRC of $15 is above the wage of $12. In general, the MRC will exceed the wage because when the wage rises to encourage another worker to enter the market, the TRC rises both because of the wage paid to the newly hired worker and because each of the existing workers must receive a wage increase. The reader might notice the similarity between this analysis and the analysis of pure monopoly. In the case of pure monopoly, MR falls faster than price because when the price is cut to sell another unit, the price must also be cut on all the other units previously sold at the higher price.

Because the MRC exceeds the wage, the MRC curve will rise more quickly than the labor market supply curve facing the firm. Therefore, we obtain the result shown in Figure 11.20. Table 11.2 provides an example to demonstrate how to calculate TRC, ARC, and MRC when only given information about the labor market supply curve facing the monopsony employer.

We have now fully developed the cost structure of the monopsonist and can proceed to the profit-maximizing choice of the firm. Figure 11.21 shows how the monopsony employer will set the wage to maximize its economic profit. It is also possible to compare the monopsonistic equilibrium outcome and the perfectly competitive outcome using the graph in Figure 11.21. Earlier in this chapter, it was explained that the perfectly competitive equilibrium outcome in the labor market occurs where supply and demand intersect. If we assume that the MRP curve of the monopsony employer would be the same as the sum of the MRP curves of many perfectly competitive employers (if this market were perfectly competitive), then we can find the perfectly competitive equilibrium at the intersection of the labor market supply curve and the MRP curve. That is, the MRP curve represents the labor market demand curve, and so its intersection with the labor market supply curve represents the competitive equilibrium. In the perfectly competitive equilibrium, wc represents the equilibrium wage rate and Qc represents the quantity of labor that would be hired at that wage.
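The monopsonist's cost schedule can be reproduced with a short calculation. The following Python sketch assumes a linear supply schedule, w = 8 + L, chosen only because it passes through the $11/3-unit and $12/4-unit points used above; it is not necessarily the schedule shown in Table 11.2.

```python
# Minimal sketch: cost measures for a monopsony employer that must raise the wage to
# attract additional labor. The supply schedule w(L) = 8 + L is an assumption chosen to
# match the $11/3-unit and $12/4-unit points in the text, not necessarily Table 11.2.

def supply_wage(L):
    """Wage required to attract L units of labor (hypothetical linear schedule)."""
    return 8 + L

prev_trc = 0.0
for L in range(1, 7):
    w = supply_wage(L)
    trc = w * L              # every unit of labor must be paid the new, higher wage
    arc = trc / L            # ARC equals the supply-curve wage
    mrc = trc - prev_trc     # MRC = new hire's wage plus the raises paid to existing workers
    print(f"L={L}  w={w:.2f}  TRC={trc:.2f}  ARC={arc:.2f}  MRC={mrc:.2f}")
    prev_trc = trc
```

At L = 4 the sketch reproduces the MRC of $15 calculated above: the fourth worker is paid $12, and the three existing workers each receive a $1 raise.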
Because w* is less than wc, it is easy to see that the monopsony firm reduces the wage to a level that is below what would be paid in a perfectly competitive labor market. Furthermore, because Q* is less than Qc, it is also easy to see that the monopsony firm reduces overall employment below what would exist in a perfectly competitive labor market. The reduction of employment below the perfectly competitive level represents a loss of efficiency brought on by the monopsonist's pursuit of maximum economic profits.

The Economic Consequences of Labor Union Activity

In Chapter 3, the concept of a price floor was introduced. A price floor establishes a legal minimum price in a market. The price is permitted to rise above a price floor, but the price cannot fall below the price floor. Industrial unions are organizations that attempt to organize all the workers in an industry and then negotiate industry-wide wage floors for their members. Working hours and working conditions are other key points for negotiation. Craft unions have similar aims, but they only organize the workers who share a common skill or trade, such as carpentry, ironworking, or masonry. Unions possess market power on the sellers' side of the labor market. That is, a union is (sometimes) the sole seller of labor in a market, just as the monopsonist is the sole buyer on the buyers' side of the market. A price floor that a labor union might negotiate is just a minimum wage. If the labor market is perfectly competitive, then this situation can be represented as in Figure 11.22.

Figure 11.23 shows the monopsony outcome where the wage is wm and employment is Qm. It also shows the perfectly competitive labor market outcome where the wage is wc and employment is Qc. Let's assume, however, that a union negotiates a wage of w* with this monopsony employer. In this case, the labor market supply curve becomes perfectly horizontal for every employment level up to the original supply curve. The reason is that the workers who would have entered the labor market at lower wage rates previously are now paid the union wage. Once we reach the original supply curve, however, the wage must rise to encourage more workers to enter the market. The supply curve thus has a kink in it at Q*.

To obtain the MRC curve, it is necessary to use the information given on the supply curve. When the labor supply curve facing the firm is horizontal, as it is in the case of a perfectly competitive market, then the MRC curve is horizontal as well and identical to the supply curve. Therefore, the MRC will be the same as the labor market supply curve up to the kink. For employment levels beyond the kink, however, the MRC corresponding to the upward sloping supply curve applies. As a result, the MRC curve is horizontal up until the kink in the supply curve, then a vertical gap exists until we reach the upward sloping MRC curve, after which point the MRC curve becomes upward sloping as before.

To find the profit-maximizing outcome in the bilateral monopoly model, we only need to equate MRP and MRC. In Figure 11.23, the MRP curve intersects MRC somewhere in the gap in the MRC curve. This intersection gives us the profit-maximizing employment level of Q*. It also gives us the profit-maximizing wage. To call forth Q* amount of labor, the wage rate that must be set is w*, which is directly above Q* at the kink in the supply curve. What we observe in this case is that the wage rate is higher than what the monopsonist would set in the absence of a union.
The employment level is higher as well. Furthermore, the gap between the MRP and w is smaller, and so the degree of exploitation is lower. On the other hand, it is also the case that the wage rate is lower than the perfectly competitive wage, and the employment level is lower than the perfectly competitive employment level. Still, if the labor union could negotiate an even higher minimum wage, then it would approach or even match the perfectly competitive outcome. If the union negotiates a wage that is higher than wc, however, then unemployment will result as in the perfectly competitive model. The reader might try to verify this result graphically. In general, however, whether the wage that is negotiated is closer to the pure monopsony wage or closer to the perfectly competitive wage will depend on the relative bargaining strength of the monopsonist and the labor union. A relatively strong labor union will negotiate wages that are closer to the perfectly competitive result. A relatively weak labor union will negotiate wages that are closer to the pure monopsony result.

A situation of bilateral monopoly like the one we have been discussing occurred in 1892 in Homestead, Pennsylvania. In a very famous strike, the Amalgamated Association of Iron and Steel Workers struck against the Carnegie Steel Company. The craft union had a large degree of monopoly power at the time, and Carnegie Steel was the only major employer in the entire town. The union's goals were to negotiate a minimum wage and to establish a June expiration date (rather than a January expiration date) for the new three-year contract. The union wanted a summer rather than a winter expiration date because if a strike became necessary during contract negotiations, the workers could hold out much better in the summer than in the winter. In this case, the company and the union were not able to arrive at an easy solution. A bitter strike ensued involving a battle between striking steelworkers and Pinkerton guards. Eventually, the Pennsylvania Governor ordered the state guard to force an end to the strike. Abstract models can teach us a great deal, but they often cannot capture the intensity of real life struggles.[3]

The Marxian Theory of the Market for Labor-Power

Now that we have studied the neoclassical theory of the labor market in considerable detail, we can more easily contrast it with the Marxian theory of the market for labor-power. The reader should recall that the commodity that workers sell to capitalists is labor-power as opposed to labor. In Marxian theory, labor refers to the act of working itself, whereas labor-power refers to the ability of a worker to perform labor for a given amount of time, which is sold as a commodity. In Chapter 4, it was shown how the value of labor-power is determined in Marxian theory. Marx provided a formula for calculating the value of a day's labor-power. As the reader will recall, that calculation requires adding up the values of all the means of subsistence that a worker requires in the year to produce and reproduce her labor-power (according to a culturally determined norm) and then dividing that value by the number of days in the year. If the social estimation of what a worker requires for the production and reproduction of labor-power changes, then the value of labor-power will change as well. Additionally, if the values of the required means of subsistence change, then the value of labor-power may change as well.
The price of labor-power, which is what is paid for labor-power, may diverge from the value of labor-power at times. In the second part of this book, it is explained why the price of labor-power never diverges very much from the value of labor-power. Our primary interest in this chapter, however, is to understand how changes in the capitalist production process can be analyzed from a Marxian perspective. This discussion draws heavily upon Marx's treatment of the subject in chapter 17 of volume 1 of Capital. In the remainder of this chapter, we will consider how productivity changes are treated in Marxian theory. We will also consider two aspects of capitalist production that are not given much attention in neoclassical theory, namely changes in the length of the working day and changes in the intensity of the labor process. As will be shown, the value of labor-power has an important role to play in the analysis.

Changes in the Productivity of Labor

In neoclassical microeconomic theory, an increase in the price of a firm's product raises the marginal revenue product of labor and thus labor demand. The causal claim is that price increases lead to increases in marginal revenue productivity. In Marxian economic theory, on the other hand, the causal chain runs in the reverse direction. That is, a rise in labor productivity typically leads to a reduction in prices. To see why, we need to return to our working day diagrams from Chapter 4. Figure 11.24 shows the three ways that we may express the value produced in one day in a specific industry.

In this example, the capitalist advances constant capital (c) of $300 for means of production and variable capital (v) of $90 for labor-power. The worker works a 10-hour day. Given a monetary expression of labor time (MELT) of $15 per hour, the $90 of variable capital may be converted into 6 hours of necessary labor (NL). The remainder of the workday then consists of 4 hours of surplus labor (SL), which may be converted into $60 of surplus value (s) using the MELT. The constant capital of $300 may also be converted into its dead labor (DL) equivalent of 20 hours using the MELT. If we assume that the worker produces a total product (TP) of 225 lbs. of sugar during the workday, then we can also calculate the individual value of a pound of sugar. All we need to do is divide the total value of the day's product by the total product. That is, the price (= value) can be calculated as follows:

$p=\frac{c+v+s}{TP}=\frac{\$300+\$90+\$60}{225\;lbs.}=\frac{\$450}{225\;lbs.}=\$2\;per\;lb.$

By dividing c, v, and s by the price of a pound of sugar, we can calculate the dead product (DP) of 150 lbs., the necessary product (NP) of 45 lbs., and the surplus product (SP) of 30 lbs., respectively.

Now that we have reviewed these basic aspects of Marxian economics, we can consider the effects of a change in labor productivity. Unlike in the neoclassical theory we considered earlier in this chapter, it matters a great deal whether the productivity change occurs in an industry that produces means of subsistence for workers (so-called wage goods industries) or in other industries that produce goods that workers do not typically consume. Let's first assume that a productivity increase occurs in an industry that is not a wage goods industry. This situation is depicted in Figure 11.25, in which a 30% productivity increase is assumed.
What this change means is that a worker can transform 30% more means of production (as reflected in a 30% rise in constant capital) into 30% more finished product in the same 10-hour workday as previously. That is, it is assumed that the worker produces more in the same 10 hours while working at the same level of intensity as previously. Indeed, this change represents a pure productivity increase. In this example, the additional $90 of constant capital (∆c) is used to purchase means of production representing 6 hours of additional dead labor (∆DL). Similarly, the additional sugar produced may be considered an addition to the total product (∆TP) of 67.5 lbs., which is a 30% increase. The new value of labor-power and the newly created value are not affected at all in this case. The price of sugar, however, is affected, as can be observed in the following calculation:
$p=\frac{c+\Delta c+v+s}{TP+\Delta TP}=\frac{\$300+\$90+\$90+\$60}{225\;lbs.+67.5\;lbs.}=\frac{\$540}{292.5\;lbs.}\approx\$1.85\;per\;lb.$
By dividing each monetary magnitude in Figure 11.25 by this new price, we can calculate the new values for the surplus product, the necessary product, the dead product, and the change in the dead product, as shown in Figure 11.25. To carry out these calculations, the exact figure for the price was used. As a result, when we add together each product figure, we obtain the new total product for the day of 292.5 lbs., which represents a 30% increase in production. The price of sugar, therefore, falls when labor productivity rises. By contrast, we would expect a productivity decline to increase the price of sugar.
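The before-and-after prices in this example follow directly from the formula p = (c + v + s)/TP. The following is a minimal Python sketch using the figures from the text; the function name is ours.

```python
# Minimal sketch: unit value (price) in the Marxian example, p = (c + v + s) / TP,
# before and after the 30% productivity increase outside the wage goods sector.
# The dollar figures and quantities are those used in the text; the function name is ours.

def unit_value(c, v, s, total_product):
    """Value transferred from means of production plus new value created, per unit of output."""
    return (c + v + s) / total_product

# Baseline working day (Figure 11.24): c = $300, v = $90, s = $60, TP = 225 lbs.
p_before = unit_value(300, 90, 60, 225)

# 30% productivity increase (Figure 11.25): c and TP rise by 30%, v and s are unchanged.
p_after = unit_value(300 * 1.3, 90, 60, 225 * 1.3)

print(f"price before: ${p_before:.2f} per lb.")   # $2.00
print(f"price after:  ${p_after:.2f} per lb.")    # about $1.85
```

The same function also shows why the working-day and intensity cases examined below leave the price unchanged: when the value of the day's product and the physical product rise by the same proportion, the ratio is unaffected.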
The other possibility we should consider is a productivity change that occurs in a wage goods industry but not in the industry that we are considering. If productivity rises in a wage goods industry, then this change will have a direct impact on the value of labor-power. By reducing the value of the means of subsistence that the worker requires, the commodity labor-power becomes less valuable. If the price of labor-power falls in line with the drop in the value of labor-power, then this change will lead to a re-division of the workday in the industry that we are considering. This situation is depicted in Figure 11.26.
Figure 11.26 represents a situation in which the labor embodied in the required means of subsistence for the day falls to 5 hours of SNALT. With the necessary labor at 5 hours, the variable capital declines to $75 (given the MELT of $15/hour). In a similar fashion, the surplus labor rises from 4 hours to 5 hours (given the 10-hour workday), and the surplus value produced rises from $60 to $75. The constant capital advanced remains unaffected by this change in labor productivity in the wage goods sector. Because the total value of the day's product remains at the same level of $450 and the total amount of sugar produced remains unchanged at 225 lbs., the price of sugar is not affected at all.

In Figure 11.26, the fall in the value of labor-power simply leads to a change in the distribution of the new value created. Aside from that change, production levels in this industry remain the same. Notice that workers receive a smaller money wage, but they can purchase the same quantity of means of subsistence as previously. Their absolute standard of living remains the same. Capitalists are the sole beneficiaries of the productivity increase in this case. It is possible that a struggle may develop between capitalists and workers over the division of the new value created. If labor unions are relatively strong, then the price of labor-power might rise above its new value (but perhaps not as high as the previous value of labor-power). In that case, the workers enjoy a higher standard of living, as they can purchase more means of subsistence than previously. At the same time, the capitalists extract more surplus value from the workers, and workers become poorer relative to capitalists. This possibility is interesting because it reveals that Marx's theory is consistent with rising real standards of living for workers even as inequality worsens.

Of course, the one situation we have not considered is a productivity increase in a wage goods industry and the consequences of that change for the wage goods industry itself. This case would combine the two examples we have considered. That is, prices would fall in the wage goods industry, and a part of the new value created would be redistributed from workers to capitalists as the value of labor-power declines. Although it is possible, it is not necessary to create a diagram for this case since it would simply reproduce the results we have already obtained in the previous two cases.

Changes in the Length of the Working Day

The next change we need to consider is a change in the length of the working day. Unlike in neoclassical theory, where the worker decides how to allocate her time between work and leisure to maximize utility, in Marxian theory, capitalists tell workers what the length of the working day is. In the absence of a union, workers either accept those terms or they seek work elsewhere. Figure 11.27 represents a situation in which the working day has been extended.

In Figure 11.27, the working day is extended by 30%, or 3 hours. Because workers must have means of production with which to work, this extension necessitates a 30% increase of $90 in the amount of constant capital advanced. The total product produced in the day subsequently rises by 30%, or 67.5 lbs. The consequence of this increase in the length of the workday is an increase in the surplus value produced, but it has no effect on the price of sugar. The new (unchanged) price of sugar may be calculated as follows:
$p=\frac{c+\Delta c+v+s+\Delta s}{TP+\Delta TP}=\frac{\$300+\$90+\$90+\$60+\$45}{225\;lbs.+67.5\;lbs.}=\frac{\$585}{292.5\;lbs.}=\$2\;per\;lb.$
The new dead product of 45 lbs. may be calculated simply by dividing the new constant capital advanced of $90 by this price. The main consequence of an extension of the working day is an increase in the degree of exploitation. The value of labor-power is typically unaffected by such a change. Marx, however, did argue that the additional wear and tear that labor-power experiences due to this extension may increase the value of labor-power. That is, the means of subsistence necessary for the production and reproduction of labor-power each day may rise due to, for example, an increased need for medical care. Beyond a certain point, however, no increase in the means of subsistence can compensate for the deterioration of the worker's health due to endless drudgery. Additionally, if the value of labor-power remains unchanged even with a lengthening of the workday, it is possible that its price may increase above its value. That is, a struggle between workers and capitalists over the new value created might occur. Depending on the relative strength of the one versus the other, workers or capitalists may end up appropriating a larger portion of the newly created value as wages or surplus value, respectively.

Changes in the Intensity of Labor

The final change that we will consider is a change in the intensity of labor that occurs in a single industry but not across all industries simultaneously. For example, suppose that the intensity of the labor process increases above the social norm that exists in other industries. In this case, even with the same number of hours in the workday, the worker will create an even larger amount of new value than previously. The reason is that one hour of SNALT is not necessarily the same as one hour of clock time. If the intensity of labor rises above what is considered the social norm in a specific society, then one hour of clock time might be consistent with more than one hour of SNALT. This situation is depicted in Figure 11.28.

Figure 11.28 is almost identical to Figure 11.27, which depicted an increase in the length of the working day. The only difference is that the 3 hours of additional surplus labor do not occur because of an increase in the length of the workday. Instead, they are the result of 30% more work being performed within the span of the 10-hour workday. For this reason, the portion of the timeline that shows an extension of 3 hours is a dashed line rather than a solid line, as was the case in Figure 11.27. That is, the increase in labor intensity leads to the incorporation of more SNALT in the final product and a greater value of the final product, but these additions are like the 30% increase in dead labor and constant capital advanced in that they are not part of the working day proper. On the other hand, these changes do represent new value created, and in that sense, they are very different from the contribution that the additional constant capital makes to the final product. In this case, because the value of the final product and the physical product both rise by 30%, the price of sugar remains unchanged. This result is to be expected because the numerical changes are identical to those obtained from an extension of the workday. As in the case of an extension of the working day, the value of labor-power may rise due to its more rapid deterioration. Workers are not working longer hours, but they are working harder, which may impact their health.
The same limits to compensating workers with a higher wage that apply in the case of the extension of the working day should also be expected to apply in this case. As before, even if the value of labor-power does not rise, workers might push for an increase in the price of labor-power as they struggle to win a portion of the newly produced value that their more intense labor has made possible. The amount of new value created due to the intensification of the labor process is directly related to the extent of the divergence between the intensity of labor in this industry and the social norm. A general change in the intensity of labor across all industries, however, that alters the social norm will have no effect on the new value produced during a 10-hour workday. Such a change would instead act more like an increase in labor productivity in that more means of production will be transformed into final products and prices can be expected to fall.

Simple Labor versus Complex Labor

Throughout this entire discussion, it has been assumed that the labor that is being performed is of a very simple variety. That is, no special skill or training is required to perform this specific type of labor, which we will call simple labor. Of course, most types of labor require at least some basic training, and many types of labor require years of prior education and training if they are to be performed well. These more skilled types of labor we will refer to as complex labor.

The existence of complex labor appears to create a difficulty for Marxian economics. If one hour of simple labor (e.g., sweeping floors) creates the same amount of value as one hour of complex labor (e.g., surgical labor), then this theory appears to be flawed. Recall, however, that SNALT is not the same as clock time, and so it is possible that one hour of surgical labor might create 100 times as much value as one hour of unskilled labor. To understand how Marxian value theory can address these issues, let's consider a numerical example.

Suppose that a person goes to a technical school for four years and learns to produce a specialized commodity. The number of hours spent in school during these four years might be 8,320 hours, which may be calculated as follows:

$total\;hours\;of\;education=(4\;years)(\frac{52\;weeks}{year})(\frac{5\;days}{week})(\frac{8\;hours}{day})=8,320\;hours$

Suppose the worker then works for 40 years producing the specialized commodity. During this 40-year period, the number of hours worked may be calculated in a similar way:

$total\;hours\;of\;work=(40\;years)(\frac{52\;weeks}{year})(\frac{5\;days}{week})(\frac{8\;hours}{day})=83,200\;hours$

The total value created during the working life of this person may be expressed in SNALT as the sum of the hours spent in training plus the hours spent working. This calculation is as follows:

$Total\;value\;created=8,320\;hours+83,200\;hours=91,520\;hours$

Further suppose that the worker produces 9,152 use values during her entire working life. To keep the example simple, let's ignore the value of the means of production by assuming that the constant capital advanced is equal to zero.
We can use this information to calculate the value (or price) per unit of the commodity produced in terms of SNALT as follows:

$Price\;(in\;SNALT)=\frac{91,520\;hours}{9,152\;units}=10\;hours\;per\;unit$

If we assume a MELT of $6 per hour, then the price of the commodity will be $60 per unit (= $6/hour times 10 hours/use value), and the total value of the worker's lifetime product will be $549,120 (= $60/unit times 9,152 use values).
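A short calculation confirms these figures and can be reused for the unskilled worker discussed next. In the Python sketch below, the schooling years, working years, output, and MELT are the figures from the text; the function name and constants are ours.

```python
# Minimal sketch: lifetime value creation in the complex-labor example.
# The years, output, and MELT are the figures used in the text; the function name is ours.

HOURS_PER_YEAR = 52 * 5 * 8   # 52 weeks x 5 days x 8 hours = 2,080 hours

def lifetime_value(training_years, working_years, units_produced, melt):
    """Return (SNALT hours per unit, money price per unit, total money value of lifetime product)."""
    snalt_hours = (training_years + working_years) * HOURS_PER_YEAR
    hours_per_unit = snalt_hours / units_produced
    price = melt * hours_per_unit
    return hours_per_unit, price, price * units_produced

skilled = lifetime_value(training_years=4, working_years=40, units_produced=9152, melt=6)
unskilled = lifetime_value(training_years=0, working_years=40, units_produced=9152, melt=6)

print("skilled:   %.2f hours/unit  $%.2f per unit  $%.2f lifetime" % skilled)
print("unskilled: %.2f hours/unit  $%.2f per unit  $%.2f lifetime" % unskilled)
# The unskilled lifetime total of $499,200 is exact; the text's figure of about $499,150
# reflects rounding the unit price to $54.54 before multiplying.
```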
If we next consider an unskilled worker who works for 40 years performing simple labor and producing a similar, albeit unspecialized commodity, then we can see what contribution the first worker’s training makes to the production of value. Let’s assume that the unskilled worker also produces 9,152 units of the unspecialized commodity. Since the worker works for 40 years, she has performed 83,200 hours of work, just like the skilled worker. The value of each unit of the unspecialized commodity may be calculated in terms of SNALT as follows:
$Price\;(in\;SNALT)=\frac{83,200\;hours}{9,152\;units}\approx9.09\;hours\;per\;unit$
Using the same MELT of $6 per hour, the price of the unspecialized commodity will be about $54.54 (= $6/hour times 9.09 hours per use value), and the total value of the worker's lifetime product will be about $499,150 (≈ $54.54/unit times 9,152 use values), ignoring some rounding error here. This example shows rather clearly that the skilled worker produces a more valuable product than the unskilled worker. The difference in the value created occurs because the skilled worker creates more value in the same 40-year period. This enhanced value-creating potential is not the result of a more intense labor process or a longer working day. The superior ability of the skilled worker to create value exists because the hours the worker has spent acquiring specialized knowledge are labor hours that were necessary for the worker to produce and reproduce her labor-power. Just as work is required to produce the means of subsistence the worker needs to perform labor each day, work is also required to produce the knowledge that the worker uses to produce commodities each day. In summary, the value-creating potential of complex labor increases with the educational requirements of the specialized labor process that requires that special type of labor.

Following the Economic News [5]

The Asia News Monitor recently reported on a publication of the European Union Agency for Fundamental Rights (FRA) that relates to the subject of severe labor exploitation of migrant workers. The FRA report urges "European governments to do more to tackle severe labour exploitation in firms, factories and farms across the EU." According to the report, migrant workers have experienced exploitation in numerous industries that include "agriculture, construction, domestic work, hospitality, manufacturing and transport." As we learned in Chapter 4, migrant workers are often subjected to harsher forms of exploitation because they lack recourse to the legal system. The lack of access to legal solutions stems from lack of language skills, political rights, and financial resources. In the case of migrant workers in the EU, many find themselves in "concentration camp conditions." The report explains that migrant workers in the EU are paid very little, must repay debts to traffickers before they receive earnings, work long, 92-hour weeks, sleep in shipping containers, are beaten and verbally abused, are given no protective gear when working with dangerous chemicals, are coerced into drug trafficking, and are threatened with deportation.

The tremendous power that employers have over migrant workers allows employers to greatly increase the intensity of the labor process. The low pay and long hours raise the rate of exploitation of migrant workers, but the intensification of the labor process also leads to the creation of more value within the same working time. The result is an expansion of the surplus value produced, which raises the rate of exploitation as well. Fortunately, the FRA report ends with some positive steps that EU institutions and EU nations may take to address the problem of exploitation of migrant workers. Although the most extreme forms of labor exploitation might be halted, absent a revolutionary transformation of the economic system, the capitalist exploitation of wage workers cannot be entirely abolished.

Summary of Key Points
For a perfectly competitive employer, the total resource cost (TRC) curve grows continuously with employment, but the average resource cost (ARC) and the marginal resource cost (MRC) curves are identical to the labor supply curve facing the firm due to the constant wage rate. 2. The marginal revenue product (MRP) curve of a perfectly competitive employer may be calculated by multiplying the product price by the marginal product of labor. 3. The profit-maximizing rules (MRP = MRC and only operate when w ≤ ARP) leads to the conclusion that the MRP curve below the maximum ARP is the perfectly competitive employer’s labor demand curve. 4. A shift of the labor demand curve may result from either a change in the product price or a change in the marginal product of labor. 5. The individual worker’s labor supply curve is determined by utility maximization as the worker considers the tradeoff between income and leisure while faced with a time constraint. 6. When a backward bending labor market supply curve exists, both a stable equilibrium and an unstable equilibrium may exist. 7. In a monopsony labor market, the MRC rises more quickly than the ARC because the monopsonist must pay all workers a higher wage when an additional worker is hired. 8. In a monopsony labor market, economic exploitation exists because the MRP exceeds the wage paid, but in a bilateral monopoly labor market, the degree of exploitation depends on the relative strength of the employer and the union. 9. In Marxian theory, an increase in productivity in industries that produce workers’ means of subsistence lowers the prices of those commodities and increases the production of relative surplus value, but when the productivity increase occurs in other industries, it only causes a reduction in commodity prices. 10. In Marxian theory, an increase in the length of the working day increases the amount of absolute surplus value produced, but it leaves commodity prices unchanged. 11. In Marxian theory, an increase in the intensity of labor increases the amount of surplus value produced during a working day of a given length, but leaves commodity prices unchanged. 12. In Marxian theory, complex labor creates a larger amount of value in a specific period than simple labor in the same amount of time. List of Key Terms Factor markets (input markets or resource markets) Wage-taker Wage elasticity of labor supply Total resource cost (TRC) Average resource cost (ARC) Marginal resource cost (MRC) Marginal revenue product (MRP) Average revenue product (ARP) Labor market demand curve Derived demand Time constraint Income/leisure tradeoff Substitution effect Income effect Stable equilibrium Unstable equilibrium Marginal productivity theory of income distribution Monopsony Industrial union Craft union Bilateral monopoly Simple labor Complex labor Problems for Review 1.Complete the missing information for the perfectly competitive employer represented in the table below. Assume the product price is$2 per unit. Then determine the profit-maximizing employment level.
2. Suppose T = 20 and w = $3 per unit of labor. Derive the equation of the time constraint beginning with the fact that T = h+l and Y = wh. When utility is maximized, what will the slope of the indifference curve be that is just tangent to the time line?
3. Complete the missing information for the monopsony employer represented in the table below. Assume the product price is $2 per unit. Then determine the profit-maximizing employment level and wage rate.
4. Suppose the working day is 11 hours, the variable capital is $32, the constant capital is $124, and the MELT is $4 per hour of SNALT. Also, assume that 50 pounds of the product are produced in one day, and this sector does not produce wage commodities.
• What is the current price per pound of the product?
• Suppose labor productivity rises in the wage commodities sector causing the variable capital to fall to $24. What will happen to the surplus value, the necessary labor, and the product price as a result?
• Returning to the original conditions, suppose that a 20% increase in labor productivity occurs in this industry alone. What will happen to the product price, the surplus value, and the necessary labor in this case? Be sure to account for the change in the amount of use values produced and the change in the constant capital advanced.
• Returning to the original conditions, suppose that the working day is extended from 11 hours to 12 hours. What percentage increase in the length of the workday is this change? What will happen to the surplus value, the constant capital, and the product price as a result?
• Returning to the original conditions, suppose that the intensity of labor increases by 10%. This change is equivalent to how much of a change in the length of the workday? What is the new surplus value, the new constant capital, and the price of the commodity?
5. Suppose a worker spends 2 years in technical school. The training involves a 7-hour workday for 6 days each week during the 52 weeks in the year. The worker then works 8 hours per day and 7 days per week for 30 years. If the constant capital advanced during those 30 years equals $200,000 and the MELT is $9 per hour, then what is the total value produced? Also, if 80,000 use values are produced during the 30 years, then what is the value (price) of the product?
1. Prof. David Ruccio’s presentation of the neoclassical theory of labor supply in his introductory economics class at the University of Notre Dame in the early 2000s inspired the presentation in this section. I served as Prof. Ruccio’s teaching assistant at the time.
2. Chiang and Stone (2014), p. 305-306, represent an exception to the usual rule. They refer to the “monopsonistic exploitation of labor” and even include a box devoted to Marx’s critique of capitalism. They do not emphasize, however, that Marx’s condemnation of capitalism applies equally to intensely competitive market conditions. They refer to the term “exploitation” as loaded, which seems to imply that it should be used with caution. The caveat is not surprising. The authors are one step away from entering a competing discourse that neoclassical economists generally refuse to acknowledge.
3. For an excellent account of the Homestead strike, see Wolff, Leon (1965).
4. In this example, we have ignored the labor embodied in school supplies and equipment. The intensity of schooling is another difficult aspect of the problem, but it would need to be considered as well.
5. “European Union: Severe labour exploitation of migrant workers: FRA report calls for ‘zero’ tolerance of severe labour exploitation.” Asia News Monitor. Bangkok. 01 July 2019.
Goals and Objectives:
In this chapter, we will do the following:
1. Measure the amount of poverty in an economy
2. Explore the way that income and wealth inequality are measured
3. Analyze two ways of measuring the aggregate output of an economy
4. Examine two critiques of national income accounting
5. Define the labor force and the unemployment rate
6. Investigate the two primary methods of measuring the aggregate price level
7. Explain the meaning of the inflation rate
8. Inspect historical movements of the key macroeconomic variables over time
In Part II, we investigated many theories that are regarded as microeconomic theories because they concentrate on individual consumers, workers, savers, and business enterprises. In Part III, we turn our attention to macroeconomic theories that concentrate on much broader changes in the economy, including changes in the behavior of households, governments, industries, foreign nations, and social classes. These theories use different economic variables than microeconomic theories because the subject matter is so much broader. To understand these theories then, it is necessary first to discuss how macroeconomic variables are measured. This chapter thus concentrates entirely on the issue of macroeconomic measurement and will set the stage for all the theories that we explore in Part III. The chapter discusses how to measure poverty, income inequality, wealth inequality, aggregate output, the labor force, the unemployment rate, the aggregate price level, and the rate of inflation. After each macroeconomic variable is defined and the method of its measurement is described, its historical pattern is considered. The historical observations will also point us in the direction of interesting questions that can only be answered with the help of the theoretical frameworks that are developed in later chapters. Also in this chapter, we will consider two important critiques of national income accounting, which is important because it shows that disagreements within economics are not confined to the realm of theory but also arise around questions of measurement.
The Measurement of Poverty
The well-being of a nation depends on many factors. Neoclassical economists argue that people have unlimited wants. They do not draw a clear distinction between wants and needs. The lack of this distinction in neoclassical theory is one source of disagreement between neoclassical and heterodox economists. Heterodox economists sometimes argue that basic needs for food, clothing, medical care, and housing are fundamentally different from preferences for fine clothes, jewelry, and expensive works of art. It is not simply the strength of the preference, according to this heterodox view, but the nature of the preference that separates needs from wants.
Because this textbook takes heterodox approaches seriously, it will approach the subject of macroeconomic measurement in a way that sharply deviates from most neoclassical economics textbooks. Neoclassical economics textbooks generally begin the discussion of macroeconomic measurement with an explanation of how the total output of a nation is measured. Goods and services of all types are lumped together according to their market values and no effort is made to distinguish between goods and services that fulfill basic human needs and the goods and services that are desirable but not essential for human life. To take the heterodox perspective seriously then, this chapter acknowledges a distinction between basic needs and inessential wants. It does so by starting with poverty measurement as a measure of the well-being of a nation. That is, the welfare of a nation’s people is evaluated according to how well the population meets its basic needs.
The U.S. Census Bureau is the government body responsible for the measurement of poverty in the United States. It uses an official poverty measure and a supplemental poverty measure, each of which is based on “estimates of the level of income needed to cover basic needs.”[1] To calculate the official poverty rate, the U.S. Census Bureau calculated the amount of money that a household spent on food in 1963 and then tripled it, adjusting it for inflation in later years and for differences in family size, family composition, and age of the householder.[2] This amount of money income is called the poverty threshold. According to the U.S. Census Bureau, 48 different poverty thresholds exist because families differ so much in size and age composition.[3] In any case, the measure assumes that a household needs to spend a full 1/3 of its income on food, leaving 2/3 for all other expenses.
Once the poverty threshold is known, it is possible to determine whether a family lives in poverty. The U.S. Census Bureau calculates the Ratio of Income to Poverty by dividing total family income by the poverty threshold as follows:[4]
$Ratio\;of\;Income\;to\;Poverty=\frac{Total\;Family\;Income}{Poverty\;Threshold}$
The following definitions are used:
$Ratio\;of\;Income\;to\;Poverty<1\Rightarrow poverty$
$1 \leq Ratio\;of\;Income\;to\;Poverty \leq 1.24 \Rightarrow near\;poverty$
$Ratio\;of\;Income\;to\;Poverty \leq 0.50 \Rightarrow deep\;poverty$
In words, if the ratio of income to poverty is less than one, then the family is living in poverty because its income is below the poverty threshold. If the ratio of income to poverty is greater than or equal to one but less than 1.25, then the family is living at a near poverty level because its income has not reached 125% of the poverty threshold. Finally, if the ratio of income to poverty is less than or equal to 0.50, meaning that family income is no more than half of the poverty threshold, then the family is living in deep poverty.[5]
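To make the classification rule concrete, the short sketch below applies these cutoffs to a family's income and poverty threshold. It is only an illustration of the arithmetic; the function name and the sample figures (which anticipate the hypothetical family of five discussed next) are illustrative and are not part of the Census Bureau's methodology.

```python
def poverty_status(family_income, poverty_threshold):
    """Classify a family using the ratio of income to poverty."""
    ratio = family_income / poverty_threshold
    if ratio <= 0.50:
        return ratio, "deep poverty"
    elif ratio < 1:
        return ratio, "poverty"
    elif ratio < 1.25:
        return ratio, "near poverty"        # below 125% of the threshold
    else:
        return ratio, "above 125% of the poverty threshold"

# Hypothetical family of five with the 2016 threshold used in the example below
ratio, status = poverty_status(28_000, 29_360)
print(round(ratio, 4), status)   # 0.9537 poverty
```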
The U.S. Census Bureau also provides a helpful example to illustrate the calculation.[6] A similar example is provided below:
Suppose that a family of five earns $28,000 per year. The 2016 poverty threshold for a family of five was $29,360. The ratio of income to poverty in this case is $28,000/$29,360 = 0.9537. Because the ratio of income to poverty is less than one but greater than 0.50, the family is living in poverty although not in deep poverty. The U.S. Census Bureau also defines the income deficit (if negative) or the income surplus (if positive) as the difference between family income and the poverty threshold as follows:[7]
$Income\;deficit\;(or\;surplus)=Income-Threshold=28,000-29,360=-1,360$
In other words, the family of five would require $1,360 to meet the threshold and move from poverty to near poverty. Finally, the official poverty rate refers to the percentage of the population that lives below the poverty threshold. Over time, the U.S. official poverty rate has fluctuated as shown in Figure 12.1.

As Figure 12.1 shows, the U.S. poverty rate fell significantly during the economic expansion of the 1960s but rose during the recessions in the early 1980s and early 1990s. It also declined during the economic expansion of the 1990s but rose again after the 2001 recession and even more during the Great Recession. The poverty rate thus seems to follow a somewhat countercyclical movement, which means that it rises during recessions and falls during expansions.

The official poverty rate has been in use for a half century, but it has some serious shortcomings. The Institute for Research on Poverty at the University of Wisconsin-Madison has summarized the most common criticisms of the official poverty measure, a few of which are listed below:[8]

1. It only represents a headcount, but it does not measure “the depth of economic need.”
2. It omits taxes and medical expenses and does not include noncash income like food assistance.
3. It does not account for geographic differences in the cost of living throughout the U.S.

We can also add to this list the omission of many people such as those in prison or nursing homes, homeless people, and foster children under age 15.[9] Because of the problems with the official poverty measure, by 2008, New York City and other cities were developing their own poverty measures.[10] The official poverty rate has become increasingly irrelevant because, as Rebecca Blank explains, food prices have fallen significantly and housing and energy prices have risen.[11] The poverty threshold has become less meaningful as a result. Resolving these issues is of great importance because food stamp eligibility depends on it, and some federal block grants to states depend on state poverty rates.[12]

To address these issues, the U.S. Census Bureau introduced a supplemental poverty measure in 2011. The supplemental poverty measure offers “a more complex statistical understanding of poverty by including money income from all sources, including government programs, and an estimate of real household expenditures.”[13] The supplemental poverty measure is also linked to poverty thresholds, but the thresholds tend to be higher than the official poverty thresholds.[14] The new measure has other benefits, such as its ability to demonstrate the impact of specific safety net programs on poverty rates.[15] Nevertheless, as its name suggests, the supplemental poverty measure has not yet replaced the official poverty measure. Instead, it continues to be used as an additional tool for the measurement of poverty.

The Measurement of Income Inequality and Wealth Inequality

In neoclassical economic theory, a person’s well-being is asserted to depend only on his own consumption level with greater levels of consumption representing greater amounts of satisfaction or utility. Heterodox economists often criticize this way of thinking because it ignores the impact that unequal consumption levels may have on human well-being. This section is also committed to taking the heterodox perspective seriously and so will consider the two measures of well-being that are most relevant in this connection: measures of income inequality and wealth inequality.
The amount of inequality that exists in society directly affects human well-being. Those with lower incomes or less wealth experience envy and feel dissatisfied with what they have. Those feelings arise because others have more and those with more often enjoy putting it on display for others to see. Those with lower incomes or less wealth may devote a great deal of time and effort trying to acquire more. They may turn to illegal activities such as illegal drug sales or burglary to accumulate more and overcome such feelings. Depression and anxiety may also be a result of slipping behind others in the race to accumulate material possessions. To overcome these feelings, many people turn to shortcuts such as gambling and playing the lottery. Because such solutions rarely lead to lasting gains for people, the pressure to find a solution becomes that much greater.

On the other hand, those with high incomes or great wealth become the subjects of envy and are placed in a defensive position. They must devote effort to justifying their high incomes or great wealth. Economic theory may serve this end insofar as it provides theoretical explanations for the incomes and wealth levels that emerge in capitalist societies. Nevertheless, many with great incomes and wealth will put it on display so that it becomes an object of envy for others. Such displays are what Thorstein Veblen called conspicuous consumption and might include expensive artwork, mansions, boats, sportscars, jewelry, and vacations. Others with great income and wealth separate themselves from the rest of the population in gated communities or high-rise apartments. At all levels, the preoccupation with having more leads people to forget about other aspects of life such as family relationships, which often suffer because of the focus on material gain. The beauty of nature and the joy of hobbies are also forgotten as people seek ways to accumulate more wealth and to elevate themselves above their peers. Great wealth can also lead to the exploitation of labor-power from a Marxian perspective as capital is put in motion to produce surplus value.

Because income inequality and wealth inequality are so important to our economic well-being, it makes sense to explore the primary methods of measuring them. One method of measuring income inequality is to use a statistic called the quintile ratio. The quintile ratio is the ratio of the income of the top fifth of the population to the income of the bottom fifth of the population. The ratio ignores the middle 3/5 of the population, but it helps us to see just how much of a spread exists between the top income earners and the bottom income earners. The higher the quintile ratio, the higher is the degree of income inequality. For example, a quintile ratio of 5 implies that the top income earners have five times the income of the bottom income earners. If the quintile ratio rises to 6, then the top income earners have six times the income of the bottom income earners, and inequality has increased. Table 12.1 shows the quintile ratios for several countries in 2018.

A more complete picture of income inequality is provided by the Lorenz Curve, which plots the cumulative percentage of total income received against the cumulative percentage of the population, as in Figure 12.2. In Figure 12.2, the first 25% of the population holds 15% of the income. The first 50% of the population holds 20% of the income. The first 75% of the population holds 45% of the income. Finally, 100% of the population holds 100% of the income. The 45-degree line has a special role to play relative to the Lorenz Curve. The 45-degree line represents perfect equality. It shows that 25% of the population holds 25% of the income, 50% of the population holds 50% of the income, 75% of the population holds 75% of the income, and 100% of the population holds 100% of the income. Therefore, the further away from the 45-degree line the Lorenz Curve is, the more income inequality is implied.

It is possible to measure the extent of the deviation of the Lorenz Curve from the 45-degree line. Two areas have been marked in the graph: Area A and Area B. When Area A is larger and Area B is smaller, the Lorenz Curve is further from the 45-degree line, and more income inequality exists. When Area A is smaller and Area B is bigger, then the Lorenz Curve is closer to the 45-degree line and less income inequality exists. To measure the extent to which the Lorenz Curve deviates from the 45-degree line, economists use something called the Gini Coefficient. The Gini Coefficient is calculated as Area A divided by the sum of Areas A and B:

$Gini\;coefficient=\frac{Area\;A}{Area\;A+Area\;B}$

The extreme values of the Gini Coefficient are zero and one. When the Lorenz Curve coincides with the 45-degree line, Area A is equal to zero and so the Gini Coefficient is equal to zero, which indicates perfect income equality. When the Lorenz Curve perfectly coincides with the lower right angle, Area B is equal to zero and so the Gini Coefficient is equal to 1, which indicates perfect income inequality. Perfect income inequality means that one person has all the income and the rest of the population has zero income. In general, the Gini Coefficient will fall somewhere in between these extremes and is usually between 0.20 and 0.50. Extreme cases are a bit higher or lower. Table 12.2 shows estimates of the Gini Coefficient for several years for the United States. Table 12.2 shows clearly that the level of income inequality in the U.S. has worsened over time.

It is also possible to create a Lorenz Curve to represent the distribution of wealth and a Gini Coefficient to measure the extent of wealth inequality. In the U.S., the distribution of wealth has been much more unequal than the distribution of income. Figure 12.3 places Lorenz Curves representing the distribution of income and the distribution of wealth on the same graph. Because the wealth distribution Lorenz Curve is further from the 45-degree line than the income distribution Lorenz Curve, we can infer that the distribution of wealth is more unequal than the distribution of income. This inequality in the distribution of wealth is only expected to worsen. According to a new analysis that the House of Commons Library conducted, if the current pattern continues, then the top 1% of the global population will own 64% of global wealth by 2030.[16]
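As a rough numerical illustration, the sketch below approximates the Gini Coefficient for the hypothetical Lorenz Curve of Figure 12.2 by using trapezoids to estimate Area B. The function and the approximation method are illustrative only; official Gini estimates are computed from detailed income microdata rather than from a handful of Lorenz Curve points.

```python
def gini_from_lorenz(population_shares, income_shares):
    """Approximate the Gini Coefficient from cumulative Lorenz Curve points.

    Both arguments are cumulative fractions that end at 1; a starting point
    of (0, 0) is added automatically. Area B (under the Lorenz Curve) is
    estimated with the trapezoid rule, and Area A is 0.5 - Area B.
    """
    pop = [0.0] + list(population_shares)
    inc = [0.0] + list(income_shares)
    area_b = sum((pop[i] - pop[i - 1]) * (inc[i] + inc[i - 1]) / 2
                 for i in range(1, len(pop)))
    area_a = 0.5 - area_b
    return area_a / (area_a + area_b)   # equivalently, area_a / 0.5

# Cumulative shares read from the hypothetical Figure 12.2
print(round(gini_from_lorenz([0.25, 0.50, 0.75, 1.0],
                             [0.15, 0.20, 0.45, 1.0]), 3))   # 0.35
```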
The Measurement of Aggregate Output

We now turn to the primary measure of macroeconomic performance among neoclassical economists, which is a measure of the aggregate output of the economy that is called Gross Domestic Product (GDP). GDP is intended to give us a sense of the size of the economic pie of a nation. It is one of the major components of the National Income and Product Accounts (NIPA). The U.S. Bureau of Economic Analysis (BEA) within the U.S. Department of Commerce maintains the NIPA and publishes quarterly estimates of U.S. GDP. To be precise, GDP represents the total market value of all final goods and services produced in a year within the national boundaries of a nation. Final goods and services refer to goods and services that are sold for final consumption.
It should also be noted that GDP is a flow variable because it is measured per period such as a year. Figure 12.4 shows a production possibilities frontier (PPF) for a simple economy with just two final goods: apples and oranges. Figure 12.4 shows two combinations of apples and oranges that the economy produces in two different years. In 2001, it produces 400 apples and 300 oranges. In 2002, it produces 300 apples and 400 oranges. The reader should recall that the quantities of each good are measured in real terms (i.e., so many apples and so many oranges). It is not meaningful to add up all the apples and oranges in a year because they are qualitatively different goods. Even if we are satisfied adding together different types of fruit, if the goods produced in this economy included apples and automobiles, then adding these goods together would really make no sense. In general, the fact that differences exist among the units in which each good is measured prevents us from adding together the real quantities.

Neoclassical economists resolve this problem through the assignment of weights to each good, which makes possible their conversion into a common unit and their aggregation. The natural weights to use are the market prices of the goods. More valuable goods, like automobiles, will be assigned greater weights and less valuable goods, like apples, will be assigned smaller weights. Table 12.3 adds price information to our example of an economy that produces apples and oranges. If we multiply each real quantity of a good by the market price of the good, then we obtain a dollar value of that good for the year. We then add the dollar values of apples and oranges for that year and we obtain the aggregate output or GDP for this simple economy. This measure of aggregate output makes it possible for us to compare the size of the economic pie across two years. Since GDP has risen from $775 to $800, we conclude that GDP has risen from 2001 to 2002. Without the common metric that money provides, it would not be possible to draw any conclusions about the change in aggregate output between 2001 and 2002.

It is important to ask why economists limit the measurement of aggregate output to final goods and services. The values of intermediate goods, or goods that become part of other goods during production, are specifically excluded from the GDP calculation. The reason for the exclusion of the values of intermediate goods is that their inclusion would lead to a problem referred to as double counting. For example, suppose that a tire manufacturer purchases rubber from a supplier at a price of $150. The tire is then manufactured and sold for $250 to an automobile manufacturer, who uses it to produce an automobile. The automobile is then sold to a consumer for $20,000. The automobile is the final good in this scenario, and the rubber and tire are intermediate goods. Therefore, only the value of the automobile is counted as part of GDP and the values of the rubber and the tire are intentionally omitted from the calculation. Why? The reason is that the $20,000 price of the automobile includes the value of the tire, which includes the value of the rubber. The supplier has added $150 to the value of the rubber through its production process (assuming it is the first stage of production). The tire manufacturer then adds another $100 to the value of the rubber, which results in a tire worth $250. The automobile manufacturer then adds additional value to the tire because it is now a part of a finished automobile.
Let’s suppose that $400 is the value of the tire, which makes up part of the $20,000 sale price of the automobile. The automobile manufacturer has thus added another $150 of value to the tire. The reason for excluding the values of the rubber and the tire should be clear. The $400 tire, which is part of the sale price of the automobile, already includes the value of the rubber sold to the tire manufacturer and the value of the tire sold to the automobile manufacturer. If we count the value of the rubber and the value of the tire in the calculation of GDP, then we will be counting the value of the rubber three times and the value of the tire two times! To avoid multiple counting, we only add the value of the final good or service when calculating GDP. An alternative method is to add up the values added at each stage of production. In this case, we would add $150 for the value of the rubber, $100 for the value that the tire manufacturer adds, and $150 for the value that the automobile manufacturer adds to the tire due to the production of the finished automobile. Of course, we would then need to add the remaining $19,600 of value that the auto manufacturer adds with labor and other component parts to obtain the $20,000 contribution to GDP. Either of the two methods avoids double counting, and arguably results in a better approximation of the contribution of these goods to the national economic pie.
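As a quick check, summing the values added at each stage reproduces the sale price of the final good, which is why the two methods give the same contribution to GDP:

$\$150+\$100+\$150+\$19,600=\$20,000$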
In addition to the exclusion of the values of intermediate goods, several other exclusions apply to the calculation of GDP. National income accountants exclude government transfer payments. Government transfer payments include social security benefits and public assistance of all kinds. Because they do not represent a payment for a real good or service, they do not count in the calculation of GDP. National income accountants also exclude private transfer payments from the calculation of GDP. Private transfer payments include monetary gifts and charitable donations. When a donor makes a charitable donation, she typically receives a letter from the charity thanking her for the contribution. The letter also states that the organization did not grant any goods or services in exchange for the donation. Because the donation does not reflect any current production of goods or services, GDP should not include it. National income accountants also exclude sales and purchases of financial assets, such as stocks and bonds. It is possible to link stocks and bonds to production processes, but these assets only represent claims to the assets of a corporation and so are not included in GDP. Finally, used goods are also excluded from the GDP calculation because the current year GDP or the previous year’s GDP includes the values of the goods when sellers sold them the first time. For example, the sale of a 2014 Ford Escape in 2018 should not be included in the GDP for 2018 because the 2014 GDP already included it. The sale of used goods represents the redistribution of existing output rather than the production of new output. For that reason, the GDP calculation excludes the values of used goods.

From a conceptual perspective, two methods exist for thinking about the measurement of aggregate output. The two methods stem from what is an identity in neoclassical economics, namely that income and expenditure are always equal as shown in Figure 12.5. If I purchase that new 2014 Ford Escape for $20,000 in 2014, then we can think about my expenditure of $20,000, which is equal to the value of that final good. We can also think about it from the perspective of the dealer who receives an income of $20,000 when she sells the automobile. At a macroeconomic level, whether we add up all the expenditures on final goods or whether we add up all the incomes received from the sale of final goods, we should obtain the same measure of aggregate output. This result must hold true because of the income-expenditure identity.
To delve deeper into the expenditures approach to the measurement of GDP, national income accountants divide aggregate expenditures into four major categories: personal consumption expenditures (C), gross private domestic investment (I), government purchases (G), and net exports (Xn). If we add together these four values, we obtain GDP as follows:
$C+I+G+X_{n}=GDP$

Personal consumption expenditures include expenditures on durable goods, nondurable goods, and services. The consumption of durable goods typically requires more than three years, and includes such items as automobiles and household appliances. The consumption of nondurable goods typically requires less than three years, and includes such items as food, clothes, and fuel. Finally, services include nontangible commodities like legal services, medical services, and childcare.
Gross private domestic investment includes several types of expenditure as well. Expenditures on final capital goods that businesses incur are included in this category. When a business purchases a machine, for example, it is considered a final capital good because the machine does not become physically incorporated into another product. Its use in the production process is its final use. One potential complication here is that the value of a final capital good is gradually transferred to the value of the good that it is used to produce. As the machine depreciates, that value must pass to the value of the final product because it represents a cost of production. Later in this section, we will consider how national income accountants address this issue.
Residential fixed investment is another type of investment expenditure in the national income accounts. It refers to all expenditures incurred in the purchase of newly constructed homes. When homes are resold, they are not included in the GDP calculation because that would represent double counting. Previous home construction was already counted once when the homes were sold for the first time. The reader might find it odd that homes are considered an investment expenditure rather than a consumption expenditure. The reason is that investment expenditures are a positive contribution to the nation’s stock of capital and houses may be thought of as capital goods. In neoclassical theory, capital goods are goods used to produce other goods. In the case of housing, houses produce a flow of services over time. That is, a home creates a space for a person to live that can benefit that person for many years. The house thus contributes to the production of this service and so the house may be thought of as a capital good. Business fixed investment is another category of investment expenditure in the national income accounts. It refers to expenditures incurred in the construction of new factories, production plants, and office buildings. Business investments of this kind also make possible the production of other goods and services and so represent an increase in the nation’s capital stock.
Inventory investment is another type of investment expenditure in the national income accounts. It is calculated as changes in inventories in a year. Business inventories expand when firms have unsold goods at the end of a calendar year. They store these goods with the hope of selling the goods in the next year. Even though these goods are not sold to the public, they do represent new production of final goods and should be counted in GDP. Hence, national income accountants include new additions to inventories when calculating GDP. It is as if the businesses purchase the goods even though no money changes hands. On the other hand, some businesses will sell goods in the current year that were produced in a previous year and became part of their business inventories in that previous year. Because these goods were already counted as part of a previous year’s GDP since they represented additions to inventories at that time, these sales should be subtracted in the calculation of GDP. One might wonder why they need to be subtracted as opposed to simply ignored. They must be subtracted because when personal consumption expenditures are calculated, they include all goods and services sold to consumers, which might include goods that were produced in a previous year and became part of business inventories at that time. The subtraction at this stage allows national income accountants to remove them from the calculation of GDP. Inventory investment is thus calculated as follows:
$Inventory\;Investment=New\;additions\;to\;inventories-reductions\;in\;inventories$
It is possible for inventory investment to be positive if new additions to inventories outweigh reductions in inventories in a year. It might also be equal to zero if the additions and reductions perfectly balance. Finally, it might be negative, if reductions in inventories are so large that they exceed new additions to inventories. In the last case, negative inventory investment will cause GDP to be smaller.
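For instance, using purely hypothetical figures, if firms add $80 billion of newly produced goods to their inventories during the year while selling $50 billion of goods out of inventories produced in earlier years, then

$Inventory\;Investment=\$80\;billion-\$50\;billion=\$30\;billion$

and the $30 billion is counted in the current year’s GDP. Had the reductions instead been $90 billion, inventory investment would have been negative ($10 billion would be subtracted) and GDP would have been correspondingly smaller.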
At this point it might be helpful to consider how the nation’s stock of private capital changes over time.[17] The nation’s private capital stock refers to all the privately-owned machinery, homes, factories, office buildings, production plants, apartment buildings, etc. at a specific point in time. Gross private domestic investment causes the private capital stock to grow. At the same time, depreciation causes the private capital stock to contract. Depreciation refers to the gradual wearing out of capital over time due to use or lack of use. The relationship between gross investment and depreciation in a year determines the net impact on the private capital stock. Gross investment may be thought of as an inflow and depreciation may be thought of as an outflow relative to the private capital stock. Figure 12.6 shows these relationships.
Since gross investment and depreciation cause annual changes in the size of the private capital stock, they are flow variables. The private capital stock is obviously a stock variable because it is measured as of a point in time. If gross investment exceeds depreciation, then the private capital stock expands. That is, more is added to the private capital stock than is depleted. If gross investment is below depreciation, then the private capital stock contracts. That is, more of the private capital stock is wearing out than is being replaced. If gross investment and depreciation are equal, then the private capital stock remains constant for that year.
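A small hypothetical example may help fix the stock-flow distinction. Suppose the private capital stock is $60 trillion at the start of the year, gross private domestic investment is $3.5 trillion during the year, and depreciation is $3.0 trillion during the year. Then

$End\;of\;year\;private\;capital\;stock=\$60\;trillion+\$3.5\;trillion-\$3.0\;trillion=\$60.5\;trillion$

so the capital stock expands by $0.5 trillion. If depreciation had instead been $4.0 trillion, the capital stock would have contracted to $59.5 trillion.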
Returning to the components of aggregate expenditure that are used to measure GDP, the third type is government purchases of final goods and services. Federal, state, and local governments purchase consumer goods and services such as office supplies and computers for use in government buildings. Governments also make investment expenditures when they build new roads, bridges, schools, and office buildings. Because all these purchases represent purchases of final goods and services, we should include them in our calculation of GDP.
The final component of aggregate spending that is used to calculate GDP is net exports. Net exports are the difference between exports and imports (X – M). It is also referred to as the balance of trade or simply as the trade balance. It should be obvious why we add exports in the calculation of GDP. GDP is supposed to include the values of all domestically produced final goods and services in a year. When final goods and services are produced and exported, the expenditure that foreign buyers incur should be included in GDP. Imports of final goods and services, on the other hand, are produced outside the territorial boundaries of the nation and so should be included in the GDPs of foreign nations. The reader might wonder why we subtract imports in the calculation of GDP rather than simply ignoring them altogether. As with spending on goods produced in previous years, personal consumption expenditures might include spending on imported goods and services. No effort is made to exclude imported goods and services from that component of aggregate expenditure. Therefore, we subtract imports at this stage to ensure that they are not included in our GDP measure. Similarly, government purchases of final goods and services and business purchases of final capital goods might include purchases of imported goods. The subtraction of imports also allows us to exclude those values from our GDP calculation.
Because imports are subtracted in the calculation of net exports, it is possible for net exports to be negative. When a nation’s imports exceed its exports, then we say that a trade deficit exists and net exports are negative. When a nation’s exports exceed its imports, then we say that a trade surplus exists and net exports are positive. When the nation exports and imports the same amount, then we say that balanced trade exists and net exports are equal to zero.
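Before turning to the actual 2017 figures, the short sketch below assembles the expenditures approach from made-up component values (in billions of dollars). The numbers are placeholders chosen only so that the resulting shares are roughly like those described in the next paragraph; they are not the official NIPA data.

```python
# Hypothetical expenditure components, in billions of dollars
C = 12_900   # personal consumption expenditures
I = 2_800    # gross private domestic investment
G = 3_800    # government purchases
X = 2_300    # exports
M = 2_900    # imports

net_exports = X - M            # a negative value indicates a trade deficit
gdp = C + I + G + net_exports  # expenditures approach: C + I + G + Xn

for name, value in [("C", C), ("I", I), ("G", G), ("Xn", net_exports)]:
    print(f"{name}: {value:>7,} ({value / gdp:.1%} of GDP)")
print(f"GDP: {gdp:,}")
```

With these placeholder values, consumption comes out near 2/3 of GDP, government purchases near 20%, investment near 15%, and net exports a small negative share, broadly matching the proportions discussed below.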
Table 12.4 shows the numerical figures for each of the major components of aggregate expenditure in 2017.
The sum of personal consumption expenditures, gross private domestic investment, government purchases, and net exports is equal to GDP for that year. As Table 12.4 shows, GDP for 2017 was equal to $19.3906 trillion. In general, personal consumption expenditures tend to be about 2/3 (or about 66.67%) of GDP. This figure is frequently quoted in the economic news of the nation. It is often stated that consumer spending makes up 2/3 of the economy. When people make this claim, they have in mind 2/3 of GDP. Government purchases are usually about 20% of GDP. Investment spending is typically about 10-15% of GDP. Net exports tend to be the smallest component of aggregate spending at about 3% and have been negative in recent decades. The percentages shown in Table 12.4 are approximately at these levels. Also, the reader should notice that the U.S. trade deficit is reflected in the negative value of net exports.

We next turn to the income approach to the measurement of GDP. The income approach adds up all the different flows of income that result from the sale of final goods and services. The largest income flow is compensation for American employees (i.e., wages and salaries). That is, when goods and services are sold, part of the revenue is used to pay employees of businesses. Rental income for American landlords is another major income category. Part of the revenue from the sale of final goods and services goes to pay rent for properties that are used in production. Interest income for American moneylenders is a third major income category. When final goods and services are sold, part of the revenue must be used to pay interest on loans. Finally, profit income for American businesses represents a major income flow. Businesses appropriate part of the revenue from the sale of final goods and services as profits.

A portion of the profit income is for unincorporated businesses like sole proprietorships (one owner) and partnerships (multiple owners). Such businesses face unlimited liability. That is, if the business fails, then the owners’ personal assets must be used to pay business debts. The rest of the profit income consists of corporate profits or the profits of incorporated business enterprises. Corporations issue and sell stock to the public. These business enterprises enjoy limited liability. If the firm fails, only the corporation’s assets may be used to pay the firm’s debts. The losses for the owners will be limited to the amount of money capital they contributed to the business. Corporate profits are subject to federal and state corporate income taxes and so a portion will be paid to the federal and state governments. Another portion may be distributed as dividends to the shareholders, who are the owners of the corporations. Finally, a third portion of corporate profits might be reinvested in the business and constitute what are called retained earnings because they are neither paid as taxes nor distributed to owners.

Three additional income flows must be considered before we arrive at our measure of GDP. When final goods and services are sold, a part of the revenue must be used to pay taxes on the sale. Taxes on production and imports include taxes such as state sales taxes, excise taxes, and import tariffs. These income flows pass to federal and state governments. Another income flow is used to replace worn out capital.
When businesses sell final goods and services, they set aside a portion of the revenue to repair and replace capital goods that have been used in production. A fund that represents the depreciation of the capital stock is thus another income flow that must be included in the GDP calculation.

The final income flow that must be included in the GDP calculation is a measure that national income accountants refer to as net foreign factor income. To understand this measure, we must recall that all compensation for employees, rental income, interest income, profit income, and taxes on production and imports that were previously discussed flow to American citizens, businesses, and governments. National income is the sum of all these income flows as expressed below:

$U.S.\;National\;Income=U.S.\;employee\;compensation+U.S.\;rental\;income+U.S.\;interest\;income+U.S.\;profit\;income+U.S.\;taxes\;on\;production\;and\;imports$

That is, whether American workers and businesses are working and operating in the United States or in the rest of the world, their incomes are counted in these income measures. Because GDP measures all the income received within the territorial boundaries of the nation, to calculate U.S. GDP using the income approach, we must adjust aggregate income to account for the fact that some Americans are earning income abroad while some foreigners are earning income in the U.S. Figure 12.7 provides a diagram that shows why this adjustment must be made.

In Figure 12.7, the income that foreign citizens and businesses earn in the U.S. is denoted as F1, and the income that American citizens and businesses earn outside the U.S. is denoted as A2. Similarly, the income that American citizens and businesses earn in the U.S. is denoted as A1, and the income that foreign citizens and businesses earn outside the U.S. is denoted as F2. To calculate U.S. GDP, we need to subtract A2 and add F1 when starting with U.S. national income. This adjustment will allow us to calculate all the income earned within the geographical boundaries of the United States. We can now calculate net foreign factor income as follows:

$Net\;Foreign\;Factor\;Income=Income\;of\;Americans\;working\;abroad\;(A_{2})-Income\;of\;foreigners\;working\;in\;the\;U.S.\;(F_{1})$

To move us closer to the calculation of U.S. GDP, we need to subtract net foreign factor income from U.S. national income. Subtracting net foreign factor income will remove the income of Americans working abroad and add the income of foreigners working in the U.S. The final adjustment to national income is the addition of depreciation, which is an income flow that is not included in U.S. national income. U.S. GDP using the income approach may thus be calculated as follows:

$U.S.\;GDP=U.S.\;National\;Income-Net\;Foreign\;Factor\;Income+Depreciation$

Figure 12.7 also allows us to see that net foreign factor income is the difference between Gross Domestic Product (GDP) and Gross National Product (GNP). GNP was used more widely in the past in studies of aggregate output. It includes all the output produced and all the income earned by U.S. citizens whether working in the U.S. or abroad. In other words, GNP would equal A1+A2 in Figure 12.7, and so it would not include F1. GDP, on the other hand, would equal A1+F1 but it would not include A2.
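To see how the pieces fit together, consider a purely hypothetical set of figures (in billions of dollars): U.S. national income of 16,500, income of Americans working abroad of 900, income of foreigners working in the U.S. of 800, and depreciation of 3,000. Net foreign factor income is then 900 − 800 = 100, and

$U.S.\;GDP=16,500-100+3,000=19,400$

while GNP would be GDP plus net foreign factor income, or 19,500, since GNP adds back the income of Americans working abroad and removes the income that foreigners earn in the U.S.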
The shift of focus from GNP to GDP in the U.S. occurred in the 1990s and makes sense in an increasingly globalized world where location seems more important than citizenship when thinking about contributions to the total production of the economy.

The income and expenditures approaches to the measurement of GDP should lead to the same numerical result because the approaches are based on the income-expenditure identity. The project of adding all the expenditure in the economy and all the income earned is such a massive project, however, that errors are inevitable. Because the two calculations do not match in practice, national income accountants include an item called statistical discrepancy to ensure that the two calculations are the same after accounting for the errors that arise from data collection. Table 12.5 shows the figures for U.S. GDP in 2017 using the income approach. A value for statistical discrepancy has been included so that the GDP calculation is the same (aside from rounding error) as the calculation using the expenditures approach in Table 12.4. To summarize the two approaches to GDP measurement in a single diagram, consider Figure 12.8. Figure 12.8 shows how the four major categories of aggregate expenditure generate incomes for workers, landlords, savers, and businesses (profits and depreciation funds) working and operating in the U.S. as well as U.S. federal, state, and local governments.

National income accountants use additional measures of macroeconomic performance. Starting with GDP, we can work backwards, in a sense, to obtain these other measures. For example, if we subtract depreciation from GDP, we obtain a measure called Net Domestic Product (NDP), which is calculated as follows:

$NDP=GDP-Depreciation$

NDP shows us the value of the final goods and services produced in a year after we account for the depreciation of the capital stock. This measure addresses an issue raised earlier in this chapter regarding the possibility of double counting that results from the inclusion of final capital goods as an investment expenditure in the calculation of GDP. When a final capital good is used, it depreciates and adds value to the good that it is used to produce. If we count the full value of the final capital good in GDP and the value that it adds to final goods during the year, then double counting will occur. By subtracting depreciation, we eliminate the double counting problem. In 2017, U.S. NDP was calculated as follows:

$U.S.\;NDP\;in\;2017=GDP-Depreciation=\$16,356\;billion$

We can also reconstruct U.S. national income if we add net foreign factor income to U.S. NDP. This addition will add the incomes of Americans working abroad and subtract the incomes of foreigners working in the U.S.

$NI=NDP+Net\;Foreign\;Factor\;Income$

The GDP measure is a helpful way to think about aggregate output and aggregate income. Nevertheless, it is not a perfect measure. One of the problems with GDP is that it can change from year to year for reasons that do not seem to correspond with a change in the production of real final goods and services. For example, because prices are used as weights in the GDP measure, if all prices increase from one year to the next, then GDP will rise even if the real quantities produced have not changed. Let’s again consider a simple economy that only produces apples and oranges. Table 12.6 shows the quantities of each good and their prices for three different years. Nominal GDP is calculated by multiplying prices and quantities and summing them up as explained previously.
Nominal GDP refers to GDP measured in current year market prices and is the measure that we have been discussing all along. As Table 12.6 shows, nominal GDP rose between 2001 and 2003, but it rose for two reasons. One reason is the increase in real quantities of apples and oranges produced. The second reason is the rise in the prices of apples and oranges. If we want our measure of aggregate output to only capture increases in real quantities produced, then we have a problem. The problem is the result of changing prices and so the obvious solution is to fix the prices. Which prices should we use? We have market prices from three different years that we can use in our calculation. It really does not matter which set of prices we use if they are constant. Because the choice is arbitrary, we will designate one year as the base year and then use the base year prices to calculate GDP for any year. Table 12.7 shows how GDP is calculated using constant 2001 prices. GDP measured in constant base year prices is referred to as real GDP.

This simple model is referred to as the cobweb model.[19] More sophisticated versions of the cobweb model show the market price gradually approaching the equilibrium price as the adjustments become smaller and begin to approach their target. Assuming many or all individual markets experience such fluctuations, an aggregation of these output fluctuations will produce macroeconomic fluctuations with corresponding fluctuations in employment. Because the explanation of business fluctuations stemming from the cobweb model is rooted in errors made by producers, it may be considered a heterodox theory of economic cycles. As we will see, however, most orthodox and heterodox explanations of business cycles emphasize other factors. Later chapters delve into the sources of these different explanations.

In addition to real GDP, economists also frequently discuss the growth rate of real GDP. The real GDP growth rate is calculated as follows:

$Real\;GDP\;growth\;rate=\frac{Real\;GDP_{t}-Real\;GDP_{t-1}}{Real\;GDP_{t-1}}$

The calculation of the real GDP growth rate between year t-1 and year t divides the change in real GDP by the real GDP of the previous year. A positive rate of real GDP growth suggests that real GDP has increased since the previous year. A negative rate of real GDP growth suggests that real GDP has fallen since the previous year. A zero rate of real GDP growth suggests that real GDP has remained the same since the previous year.
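The short sketch below works through the mechanics of these calculations for a two-good economy like the one in Tables 12.6 and 12.7. The quantities repeat the apple-and-orange example above, and the 2001 prices are placeholders chosen so that GDP valued at 2001 prices reproduces the $775 and $800 figures mentioned earlier; the 2002 prices are simply assumed to have risen. The point is only to show how constant base year prices hold the weights fixed so that the growth rate reflects changes in real quantities alone.

```python
# Hypothetical (quantity, price) pairs for a two-good economy
data = {
    2001: {"apples": (400, 1.00), "oranges": (300, 1.25)},
    2002: {"apples": (300, 1.10), "oranges": (400, 1.30)},
}
BASE_YEAR = 2001

def nominal_gdp(year):
    # Value each year's quantities at that same year's prices
    return sum(q * p for q, p in data[year].values())

def real_gdp(year, base=BASE_YEAR):
    # Value each year's quantities at constant base year prices
    return sum(q * data[base][good][1] for good, (q, _) in data[year].items())

growth = (real_gdp(2002) - real_gdp(2001)) / real_gdp(2001)
print(nominal_gdp(2001), nominal_gdp(2002))  # 775.0 850.0
print(real_gdp(2001), real_gdp(2002))        # 775.0 800.0
print(f"{growth:.2%}")                        # real GDP growth of about 3.23%
```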
Another measure that economists use is per capita real GDP or real GDP per person. Per capita real GDP is calculated as follows:

$Per\;capita\;real\;GDP=\frac{Real\;GDP}{Population}$

Per capita real GDP is often considered to provide a rough measure of the standard of living in a nation. It measures real income or real output per person. The measure has a serious shortcoming, however, because it is only an average and does not tell us anything about an individual’s economic welfare. If everyone receives the same real income, then per capita real GDP tells us what that real income level is. If a great deal of income inequality exists, however, then some people will have real incomes that are far below the per capita real GDP. Other people will have real incomes that are far above the per capita real GDP. In other words, per capita real GDP tells us nothing about the distribution of income. If someone suggests that it is a rough measure of what individuals earn in real terms, then that suggestion can be very misleading. Nevertheless, a rise in per capita real GDP gives us a sense of how much the economy has expanded over time. Figure 12.11 shows how the per capita real output has increased dramatically from 1947 to 2017.

A similar measure that uses real GDP in its calculation is real GDP per worker, which measures the average labor productivity for each member of the labor force as follows:

$Real\;GDP\;per\;worker=\frac{Real\;GDP}{Labor\;Force}$

A higher level of real output per worker suggests higher average labor productivity. A lower level of real output per worker suggests lower average labor productivity. Again, this measure is subject to the same shortcoming in that it is only an average. Nevertheless, economists widely use both the labor productivity and per capita output measures. Economists also refer to the growth rates of real per capita income and real output per worker as measures of economic growth and productivity changes. A productivity growth slowdown began in the 1970s. The reasons for the productivity slowdown are hotly debated. Some explanations focus on the inflation that occurred in the 1970s while other explanations focus on the breakdown of the cooperative labor-management relations of the postwar period. The macroeconomic theories in the second part of this book offer explanations for such changes.

Heterodox Critiques of National Income Accounting

Many heterodox economists are sharply critical of national income accounting. This section concentrates on two major critiques of GDP as a measure of economic well-being. The first critique is one that feminist economists have developed to draw attention to the many contributions that women make to our economic well-being that have been excluded from the calculation of GDP. The second critique involves the assertion that human happiness depends on more than the amount of goods and services that are available for consumption. We consider each critique separately.

GDP only includes the market value of all final goods and services produced within the economy during a given year. Because it only includes market values, any production that never finds its way to the market is necessarily omitted from the calculation of GDP due to the lack of a market price. One type of production that never enters the market is household production. Historically, women have been the primary producers within the home of a huge variety of goods and services, including home-cooked meals, laundry services, cleaning services, childcare, care of elderly family members, care of pets, transportation for children and the elderly, gardening and landscaping, clothing, clothing repair, and grocery shopping. The list could easily be expanded. When people outside the home are hired to perform these services and produce these goods, the production is counted in GDP because it has a market value. When women perform these duties within the home and their families consume the goods and services, they are not counted in GDP. Consequently, an enormous amount of labor that women have performed during the past century has been completely overlooked in the national income accounts. It is the invisible nature of women’s unpaid work in the home that has been the motivation for sharp feminist critiques of GDP as a measure of economic welfare. Some early estimates of national income in Norway and other Scandinavian countries included the value of unpaid household labor.
For example, in Norway in 1943, the value of unpaid household labor was estimated to be 15% of national product.[20] As the United Nations prepared to introduce the first international standard for national accounts in 1953, however, goods and services derived from unpaid household work were excluded, which led Norway to eliminate it from its national accounts in 1950.[21] The primary method of accounting for goods and services that do not have market values is to use imputed values. The imputed value of a good or service is based on its likely value in the market if it was sold. It can be thought of as the opportunity cost of consuming the good or service when it could be sold. The practical way to handle this problem when considering unpaid labor in the home is to assume that “an hour of market work and an hour of nonmarket work have the same value.”[22] The market wage of a substitute household worker is then used in combination with the labor time needed to produce goods and services in the home.[23] The typical choice of yardstick to arrive at the estimates of the value of household labor is extra gross wages, which are before taxes and include employers’ social security contributions.[24] A rough estimate of the value of nonmarket household production is approximately half of GDP in the industrialized countries.[25] Given how massive this contribution is, the omission of unpaid household labor in the national income accounts grossly understates our national output of goods and services. At the same time, feminist economists recognize that many activities within the home have an intrinsic value that market values simply cannot capture.[26] To represent these intrinsic values, quality of life measures are used that include the “pursuit of good health, the acquisition of knowledge, the time devoted to fostering social relationships, [and] the hours spent in the company of relatives and friends.”[27] However we might measure the contribution of unpaid household labor, it is essential to recognize that it is women’s contribution that is mainly being overlooked in the national income accounts. In industrialized nations in recent decades, women spent about 2/3 of their total work time on unpaid nonmarket activities and 1/3 of their time on paid market activities whereas for men the shares have been reversed.[28] In developing nations, the difference between men’s and women’s shares is even greater.[29] For an alternative measure of macroeconomic well-being, we turn to the Himalayan Mountains where Bhutan’s primary measure is something called Gross National Happiness (GNH). GNH has become a guiding light of economic policymaking in Bhutan. Since Bhutan became a democracy in 2008, its Constitution has required its leaders “to consult the four pillars of Gross National Happiness – good governance, sustainable socioeconomic development, preservation and promotion of culture, and environmental conservation – when considering legislation.”[30] Bhutan’s rejection of GDP as a measure of economic progress goes back to 1971.[31] This Buddhist approach to economic well-being places emphasis on “the spiritual, physical, social and environmental health of its citizens and natural environment.”[32] To protect the natural environment, Bhutan has taken extraordinary measures. 
It is committed to remaining carbon neutral and to permanently maintaining 60% of its landmass under forest cover, which has included a ban on export logging.[33] The GNH concept has also influenced Bhutan’s system of education, which places heavy emphasis on environmental protection, recycling, and daily meditation.[34] King Jigme Singye Wangchuck, who ruled Bhutan until 2006, coined the GNH label decades ago.[35] It is worth noting that the concept was developed in Asia rather than in western nations where material possessions have long served as the measure of well-being. Nevertheless, the concept has caught on with western leaders. In 2011, the UN General Assembly “passed a resolution inviting member states to consider measures that could better capture ‘the pursuit of happiness’ in development,” which led to the first World Happiness Report of 2012.[36] The UN uses a variety of different variables to calculate a score for each country, which serves as its index of happiness. According to the 2018 World Happiness Report, Finland is the happiest nation on the planet and the U.S. has fallen to 18th place due to crises of obesity, substance abuse, and mental health problems.[37] Although GNH has not replaced GDP as the measure of greatest interest to most economists, it has drawn public attention to the possibility that our economic welfare depends on more than the amount of final goods and services our nation produces each year.

The Measurement of the Labor Force and the Unemployment Rate

We now turn to the measurement of the labor force, employment, and unemployment. Within the U.S. Department of Labor, the Bureau of Labor Statistics (BLS) is responsible for publishing the unemployment rate each month. To understand this calculation and related measures, consider Figure 12.12, which breaks down the population into its component parts.

Figure 12.12 shows that the total population may be divided into the civilian non-institutional population and those not in the civilian non-institutional population. Those in the civilian non-institutional population are at least 16 years old and are not in institutions such as mental hospitals or prisons. Those not in the civilian non-institutional population are under 16 years of age or are living in institutions. The civilian non-institutional population then may be divided into those in the labor force and those not in the labor force. Those in the labor force are non-institutionalized civilian workers who are willing and able to work. Those not in the labor force are non-institutionalized civilian workers who are not willing or are not able to work. For example, full-time students, retirees, stay-at-home parents, and disabled people are considered not in the labor force. They are of working age but are not willing or able to work at the current time. Of those willing and able to work, those with jobs are considered employed. Those without jobs who have tried to find work within the past four weeks are considered unemployed. If workers have become discouraged and have given up looking for work, then they are considered outside the labor force. Discouraged workers are thus not in the labor force. The four-week cutoff is completely arbitrary and shows how social values creep into the construction of macroeconomic variables like unemployment. That is, some people find this cutoff to be too short when counting people as unemployed. These people wish to count more people as unemployed.
Other people find this cutoff to be too long when counting people as unemployed. These people wish to count fewer people as unemployed. The normative content of the unemployment measure was discussed in detail in Chapter 1. Given these definitions, we can now construct two key labor force measures: 1) the labor force participation rate and 2) the unemployment rate. The labor force participation rate is the labor force divided by the civilian non-institutional population. It shows us the fraction of the civilian non-institutional population that is willing and able to work and is calculated as follows: $Labor\;Force\;Participation\;Rate=\frac{Labor\;Force}{Civilian\;Non-institutional\;Population}$ Figure 12.13 shows the pattern of the labor force participation rate in the United States from 1948 to 2018. Figure 12.13 shows that the labor force participation rate in the U.S. rose considerably from the 1960s to the 1990s but that it has declined quite significantly since the turn of the century. The unemployment rate is the percentage of the labor force that is unemployed. It is calculated as follows: $Unemployment\;Rate=\frac{Unemployed}{Labor\;Force}$ Figure 12.14 shows that the unemployment rate in the U.S. has fluctuated considerably during the past century but that it has not followed an obvious upward or downward trend. It should also be clear that some unemployment has always existed, and it never seems to approach zero. The U.S. labor force figures for December 2017 are below, and they have been used to calculate the unemployment rate and the labor force participation rate.[38] $Total\;Population=325,719,178$ $Civilian\;Non-institutional\;Population=256,109,000$ $Not\;in\;the\;Civilian\;Non-institutional\;Population=69,610,178$ $Labor\;Force=160,597,000$ $Not\;in\;Labor\;Force=95,512,000$ $Employed=154,021,000$ $Unemployed=6,576,000$ $Labor\;Force\;Participation\;Rate=62.7\%$ $Unemployment\;Rate=4.1\%$ Although the unemployment rate gives us some idea as to the percentage of workers who want jobs but are unable to find them, it has some shortcomings. As we have seen, it excludes discouraged workers and so it tends to understate the amount of unemployment. Figure 12.15 provides an historical look at the pattern of the special unemployment rate since 1994 in the U.S. The special unemployment rate includes both the officially unemployed and the discouraged workers. Figure 12.15 shows that the inclusion of discouraged workers in the measurement of the special unemployment rate causes the special unemployment rate to be significantly higher than the official unemployment rate at any given time. The unemployment rate is also based on a headcount and treats part-time workers as employed even if they would like to have full-time work, which understates the amount of unemployment in the economy. Finally, it might overstate the amount of unemployment if people indicate in the survey that they have tried within the past four weeks to find work because they believe that it will help them qualify for unemployment benefits. The causes of unemployment are the focus of later chapters because identifying them requires the use of theoretical frameworks. At this point, however, it is worth summarizing the neoclassical perspective on unemployment, which helps us grasp a major source of the theoretical disagreements that we will encounter in later chapters. Neoclassical economists argue that some unemployment is inevitable and perfectly acceptable in a market capitalist economy. 
This type of unemployment is called natural unemployment. Furthermore, it consists of two types of unemployment: structural unemployment and frictional unemployment. Structural unemployment occurs when workers lose their jobs due to shifts of consumer demand or technological changes. Such shifts are inevitable and necessary within capitalist economies, so the argument goes, because consumers are free to make choices and firms are free to introduce new technologies. When such changes occur, some industries decline as other industries expand. The result is that workers in contracting industries will become unemployed as they strive to find jobs in the expanding industries. Frictional unemployment occurs when new entrants into the labor force are looking for that first job or when they voluntarily decide to leave one employer and find another employer for whom to work. These increases in unemployment are considered unavoidable in an unregulated economic system where workers are free to make their own decisions about employment. Using these two measures of unemployment, we can define a natural rate of unemployment (NRU). $NRU=\frac{Structurally\;Unemployed+Frictionally\;Unemployed}{Labor\;Force}$ The NRU is the subject of debate among neoclassical economists. How much of the unemployment that we observe results from these factors? The U.S. Congressional Budget Office has estimated the NRU for different years. If we place the NRU on a graph of the official unemployment rate for the period 1948-2017, then we obtain Figure 12.16. Figure 12.16 shows that a significant amount of unemployment has existed beyond the NRU in recent decades. This amount of unemployment beyond the NRU is labeled cyclical unemployment. As Figure 12.16 shows, cyclical unemployment was especially high during the recessions of the early 1980s and during the Great Recession. It is argued to stem from avoidable factors. It rises and falls according to the phase of the business cycle. For neoclassical economists, it is the only source of concern when considering the unemployment problem. At times, the unemployment rate has fallen so low that it falls below the NRU. When unemployment falls to such low levels, it means that hiring is happening at such a furious pace that even the structurally unemployed and the frictionally unemployed are being hired. If the unemployment remains at this low level for a long enough period, then economists might revise their estimate of the NRU in a downward direction. Such revisions to the NRU have occurred in the past as Figure 12.16 indicates. Because of their division of unemployment into natural unemployment and cyclical unemployment, neoclassical economists mean something very peculiar when they refer to full employment. Full employment for neoclassical economists does not mean a zero rate of unemployment. When the economy reaches full employment, it means that cyclical unemployment is zero and the unemployment rate is equal to the NRU. In other words, unemployment exists but it is only structural unemployment and frictional unemployment. If the unemployment rate falls below the NRU as previously discussed, then the economy operates at a level beyond full employment. Finally, the level of real GDP corresponding to full employment is the potential GDP of the economy. If the economy is at the potential GDP, then it is operating on its production possibilities frontier (PPF). All resources are fully employed using the most efficient methods of production. 
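The labor force measures and the natural rate of unemployment can be illustrated with a short Python sketch. The December 2017 figures repeat the BLS data cited above; the split of the unemployed into structural and frictional components is hypothetical, since the NRU itself must be estimated.

```python
# December 2017 figures from the BLS data cited above (numbers of persons).
civilian_noninstitutional_population = 256_109_000
labor_force = 160_597_000
unemployed = 6_576_000

# Labor force participation rate and official unemployment rate.
participation_rate = labor_force / civilian_noninstitutional_population
unemployment_rate = unemployed / labor_force
print(f"Labor force participation rate: {participation_rate:.1%}")  # about 62.7%
print(f"Unemployment rate: {unemployment_rate:.1%}")                # about 4.1%

# Hypothetical split of the unemployed into structural and frictional components.
structurally_unemployed = 3_000_000
frictionally_unemployed = 2_000_000

# Natural rate of unemployment: only structural and frictional unemployment are counted.
natural_rate = (structurally_unemployed + frictionally_unemployed) / labor_force
cyclical_rate = unemployment_rate - natural_rate
print(f"Natural rate of unemployment: {natural_rate:.1%}")  # about 3.1%
print(f"Cyclical unemployment rate: {cyclical_rate:.1%}")   # about 1.0%
# At full employment, cyclical unemployment is zero and the two rates coincide.
```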
The Measurement of the Aggregate Price Level and the Inflation Rate

So many different goods and services exist within an economy that it is difficult to think about something called the general level of prices. Nevertheless, orthodox and heterodox economists devote a lot of attention to it. To measure the general price level, it is necessary to use what economists call a price index. A price index is a summary measure or statistic that is supposed to measure the general price level. When it changes from one period to the next, the change is supposed to capture changes in many different prices at once. Economists use many different price indices to measure the general price level. In this chapter, we will concentrate on two price indices that are the most widely used measures of the general price level.

The first measure of the general price level is the GDP deflator. The GDP deflator is calculated as the ratio of nominal GDP to real GDP as follows:

$GDP\;Deflator=\frac{Nominal\;GDP}{Real\;GDP}$

Table 12.8 shows how the GDP deflator is calculated for four years using 2001 as the base year. This ratio might seem like a strange measure of the general price level, but consider what causes a divergence between nominal GDP and real GDP. The only reason that nominal GDP exceeds real GDP in a specific year is that the price level has risen since the base year. If the GDP deflator is equal to 1.60 in 2002 and 2001 is the base year, then the implication is that the general price level is 160% of what it was in the base year. A GDP deflator of 2.20 in 2003 implies that the general price level is 220% of what it was in the base year. In general, a rise in the GDP deflator means that the general price level has risen.

Using the GDP deflator, it is possible to calculate the rate of inflation. The inflation rate measures the percentage change in the general price level from one year to the next. To calculate the inflation rate, a price index (P) is required. Using any price index, we can calculate the inflation rate in year t as follows:

$Inflation\;rate=\frac{P_{t}-P_{t-1}}{P_{t-1}}$

In words, if we divide the change in the general price level since the previous year by the previous year’s price level, then we obtain the percentage change in the general price level since the previous year. The inflation rate thus tells us how rapidly prices are rising. Since the GDP deflator is a price index, we can use it to calculate the inflation rate. For example, the inflation rate in 2003 would be calculated as follows using the information from Table 12.8.

$Inflation\;rate=\frac{P_{2003}-P_{2002}}{P_{2002}}=\frac{2.2-1.6}{1.6}=37.5\%$

Table 12.9 adds the inflation rates to the information in Table 12.8. In Table 12.9 the inflation rate for 2001 is shown as undefined. It is only because the GDP deflator for 2000 is not included in the table that we cannot calculate the inflation rate for 2001. The reader should notice that the inflation rate for 2004 is negative because the price level fell relative to 2003. Deflation is the name that economists use to describe a negative rate of inflation.

The reason that the GDP deflator is referred to as a deflator is that it makes it possible to deflate nominal magnitudes to obtain real magnitudes. For example, suppose that we know the nominal GDP in 2002 is $800 and the GDP deflator is 1.6. Using the definition of the GDP deflator, we can deflate the nominal GDP and solve for the real GDP in 2002 as follows:
$Real\;GDP\;in\;2002\;in\;2001\;dollars=\frac{Nominal\;GDP}{GDP\;Deflator}=\frac{\$800}{1.60}=\$500$
Using the deflator, we eliminate the impact of the rising price level to express GDP in real terms (i.e., measured in constant, base year dollars).
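The same arithmetic can be checked with a few lines of Python, using the numbers from the example above (a nominal GDP of $800 and a deflator of 1.60 in 2002, and a deflator of 2.20 in 2003).

```python
# GDP deflator values with 2001 as the base year (from the example in the text).
deflator_2002 = 1.60
deflator_2003 = 2.20

# Deflating nominal GDP: real GDP = nominal GDP / GDP deflator.
nominal_gdp_2002 = 800.0
real_gdp_2002 = nominal_gdp_2002 / deflator_2002                    # 500.0, in constant 2001 dollars

# Inflation rate in 2003: the percentage change in the deflator from 2002 to 2003.
inflation_2003 = (deflator_2003 - deflator_2002) / deflator_2002    # 0.375, i.e. 37.5%

print(f"Real GDP in 2002 (2001 dollars): {real_gdp_2002:.0f}")
print(f"Inflation rate in 2003: {inflation_2003:.1%}")
```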
A second measure of the general price level upon which economists rely heavily is the consumer price index (CPI). The Bureau of Labor Statistics (BLS) within the U.S. Department of Labor publishes the CPI each month. Unlike the GDP deflator which includes the prices of all final goods and services produced in the economy, the CPI only includes the prices of goods and services that a typical consumer purchases. In fact, it is based on the price of a typical consumer basket of goods and services. The information used to construct the CPI is derived from the Consumer Expenditure Survey, which is administered to thousands of families in the United States each year. This information helps the BLS determine which consumer goods and services American households purchase. The BLS then collects information on thousands of prices of goods and services each month to construct the CPI. The BLS uses eight major categories of expenditure to organize the items contained in the consumer basket that it uses, which include the following:[39]
1. Food and beverages (breakfast cereal, milk, coffee, chicken, wine, service meals and snacks)
2. Housing (rent of primary residence, owners’ equivalent rent, fuel oil, bedroom furniture)
3. Apparel (men’s shirts and sweaters, women’s dresses, jewelry)
4. Transportation (new vehicles, airline fares, gasoline, motor vehicle insurance)
5. Medical care (prescription drugs and medical supplies, physicians’ services, eyeglasses and eye care, hospital services)
6. Recreation (televisions, pets and pet products, sports equipment, admissions)
7. Education and communication (college tuition, postage, telephone services, computer software and accessories)
8. Other goods and services (tobacco and smoking products, haircuts and other personal services, funeral expenses)
Within each category, the BLS tracks the prices of hundreds of representative items and uses them to calculate the CPI. To see exactly how the CPI is calculated, let’s consider a simple example in which only two goods are included in the typical consumer’s basket of goods. Table 12.10 shows hypothetical price information for five years.
In Table 12.10, the typical consumer purchases 400 units of food and 750 units of fuel. Food and fuel are thus the only two goods in the consumer’s basket of goods. The base year is designated as 2007 and so the BLS has decided that these quantities of food and fuel were the quantities that a typical consumer purchased in 2007. It is important to notice that these quantities are then held constant for every other year. It is possible that a consumer might alter the mix of goods and services in her basket, but the BLS assumes a fixed basket for every year when constructing the CPI. The prices of food and fuel, on the other hand, change from year to year (or month to month in reality), and the BLS tracks these changes closely.
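Before walking through the arithmetic of Table 12.10 step by step, it may help to see the fixed-basket procedure expressed as a short Python sketch. The basket quantities match the example in the text, but the per-unit prices are hypothetical (the text specifies only the $12,000 value of the base-year basket, not the individual prices), and only a few illustrative years are shown. The paragraphs that follow work through the same calculation in detail.

```python
# Fixed consumer basket from the example: 400 units of food and 750 units of fuel.
basket = {"food": 400, "fuel": 750}

# Hypothetical per-unit prices; the 2007 (base-year) prices are chosen so that the basket costs $12,000.
prices = {
    2007: {"food": 15.00, "fuel": 8.00},   # basket value: 400*15 + 750*8 = 12,000
    2008: {"food": 16.00, "fuel": 9.00},   # basket value: 13,150
    2009: {"food": 17.00, "fuel": 10.00},  # basket value: 14,300
}

def basket_cost(year_prices):
    """Value of the fixed basket at a given year's prices."""
    return sum(quantity * year_prices[good] for good, quantity in basket.items())

base_cost = basket_cost(prices[2007])

# CPI for each year: cost of the basket at current-year prices / cost at base-year prices.
cpi = {year: basket_cost(year_prices) / base_cost for year, year_prices in prices.items()}
print(cpi)  # {2007: 1.0, 2008: about 1.096, 2009: about 1.192}
```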
The next step is to calculate the price of the consumer basket of goods and services in the base year. This calculation is carried out by simply multiplying the price in 2007 (the base year) times the quantity consumed in 2007 for each good and adding them up to obtain $12,000. Since the price of the basket in the base year is used to calculate the CPI in each year, the $12,000 has been included as a fixed value in a single column. We then calculate the price of the consumer basket in the current year by multiplying the fixed quantities and the current year prices of the goods and adding them. Because the prices in the current year change from year to year, the price of the basket in the current year changes too. The final step in the calculation of the CPI for each year is to divide the price of the basket in the current year by the price of the basket in the base year as follows:

$CPI=\frac{Price\;of\;basket\;in\;the\;current\;year}{Price\;of\;basket\;in\;the\;base\;year}$

Looking at how the CPI has changed from year to year shows us how the general price level rose from 2007 to 2010 and then fell in 2011. It is also possible to calculate the annual inflation rate, using the CPI values in the formula for the inflation rate provided earlier. The inflation rates have also been included in Table 12.10. The decline in the price level between 2010 and 2011 has caused the inflation rate to turn negative, which indicates deflation, as previously explained. Figure 12.17 shows how the CPI inflation rate has fluctuated during the past half century. Noteworthy periods include the double-digit inflation rates of the 1970s when the oil price shocks occurred and the low inflation rates of the 1990s when the economy expanded and the introduction of new technologies led to rising productivity.

The CPI is a helpful index of the general price level. The CPI inflation rate is used to adjust the poverty thresholds discussed earlier in this chapter. Nevertheless, its limitations have been widely recognized, even among mainstream economists.[40] The main limitation stems from the likely possibility that consumers will change the mix of goods and services they buy over time. For example, when the price of a good in the basket rises, consumers may substitute a lower-priced good outside the basket for the higher-priced good. Because the consumer basket remains fixed, however, the CPI will register a larger price increase than can be justified by the consumer’s purchasing behavior. Only a change in the consumer basket can rectify this so-called substitution bias, which causes increases in the CPI to overstate the increase in the cost of living. Another kind of bias is the so-called new goods bias. When new goods become available in the marketplace, consumers might purchase them even though they are not in the fixed basket of goods. Changes in the CPI will not reflect changes in the prices of these goods, which leads to an inaccurate measure of the cost of living. Finally, a quality change bias makes the CPI less reliable than it would otherwise be. When the quality of a good in the basket increases along with its price, the CPI will only register the price increase. It will not correct for the fact that the consumer acquires a higher quality product for her money. The result is a tendency for the CPI to overstate the rise in the price level.

Just like the GDP deflator, it is possible to use the CPI to deflate nominal dollar amounts into real dollar amounts.
To make such conversions, we simply divide the nominal dollar amounts by the CPI as follows:

$Real\;income=\frac{Nominal\;income}{CPI}$

For example, suppose that you have $1000 in the early 1980s that you place under your mattress at home.[41] You then pull out the money in 2002 and realize that your money is worth a lot less than it was back in the early 1980s. Why? The reason is that the prices of goods and services have increased considerably during that period, which has eroded the real value of the $1000. Figure 12.18 shows how the real value of the $1000 fell as the price level rose.
The base year in this example is 1982-1984. Actual CPI data were used for these calculations and so this decline in the real value of the $1000 is not hypothetical but actual. As Figure 12.18 shows, the real value of the $1000 in 2002 was equivalent to about $550 in 1982-1984 dollars. Therefore, it lost nearly half its value (i.e., its purchasing power). For this reason, people generally try to protect themselves from inflation. People who receive fixed incomes or people who put money under mattresses generally are harmed by inflation. This explains why workers organize themselves into unions to pressure employers for higher money wages and why Social Security recipients are protected with automatic cost of living adjustments to their benefit amounts.

Inflation may also harm moneylenders if they do not charge sufficient interest to compensate them for the rising price level. Consider how we might approximate the percentage change in the real purchasing power of a sum of money (M):

$\%\Delta \frac{M}{P} \approx \%\Delta M-\% \Delta P$

If a moneylender lends M to a borrower, then the percentage change in M/P is the real interest rate, where P represents the price level. The real interest rate is the percentage change in the purchasing power of a loan and represents the real interest that the lender receives from the borrower. It may be approximated as the difference between the percentage change in the nominal amount of the loan and the percentage change in the price level. Since the percentage change in the nominal amount of the loan is the nominal interest rate (i) and the percentage change in the price level is the inflation rate (π), we can approximate the real interest rate (r) in the following way:[42]

$r\approx i-\pi$

Therefore, the real interest rate that a lender will earn is equal to the nominal interest rate minus the inflation rate. For example, if a lender makes a loan and charges 10% interest but the inflation rate is 4%, then the real interest rate is only 6%. The nominal dollar amount of the loan grew by 10%, but the 4% rise in the price level caused the lender’s purchasing power to only rise by 6%. The equation can also be rearranged as follows:

$i\approx r+\pi$

This equation suggests that a moneylender can protect himself if he charges a nominal interest rate that is equal to the inflation rate plus whatever real return he wants to earn. In this way, interest rates can be set to compensate the lender for whatever inflation is expected to occur. Of course, the nominal interest rate and thus the real interest rate ultimately will be decided in the market for loanable funds, but inflation will influence how much lenders wish to lend at each nominal interest rate. The approximation of the real interest rate shows us that an unexpected surge in inflation can harm a lender, especially if the inflation rate rises above the nominal interest rate. In that case, the real interest rate will be negative and the lender will lose purchasing power between the time the loan is made and the time that it is repaid. Borrowers benefit in this case because they pay less in real terms for the loan. If inflation is unexpectedly low, however, then the lender will enjoy a benefit because the real interest rate will be higher than expected. Borrowers are harmed in this case because they pay more in real terms for the loan.
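Both uses of the CPI just described, deflating a nominal sum and approximating the real interest rate, can be sketched briefly in Python. The 10% nominal interest rate and 4% inflation rate repeat the example above; the CPI value of roughly 1.8 for 2002 (with a 1982-1984 base) is an approximation used only for illustration.

```python
# Deflating a nominal sum of money with the CPI.
nominal_amount = 1000.0
cpi_2002 = 1.8    # roughly the CPI in 2002 with a 1982-1984 base (illustrative approximation)
real_amount = nominal_amount / cpi_2002
print(f"Real value of $1000 in 1982-1984 dollars: about ${real_amount:.0f}")  # about $556

# Approximating the real interest rate: r is roughly i minus the inflation rate.
nominal_interest_rate = 0.10   # 10% nominal interest, as in the example above
inflation_rate = 0.04          # 4% inflation
real_interest_rate = nominal_interest_rate - inflation_rate
print(f"Approximate real interest rate: {real_interest_rate:.0%}")            # 6%
```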
Despite these issues, many economists do not worry too much about low or moderate amounts of inflation because inflation affects both nominal incomes and the price level. Therefore, real incomes are often left unaffected when inflation is relatively low. Extremely high rates of inflation called hyperinflation can cause economic turmoil, however, and are regarded as very damaging to a nation’s economic well-being. In those situations, money loses its value so rapidly that it ceases to effectively serve as a medium of exchange. Desperate for a stable measure of value in individual exchanges, people faced with hyperinflation often abandon the currency and resort to barter exchange.

Following the Economic News[43]

A recent news article in CE Noticias Financieras explains that the United States reached a new landmark in July 2019 with the longest economic expansion on record in that nation. This long economic expansion surpassed the long period of economic growth that began in the early 1990s and ended in 2001. The article explains that the usual standard for declaring recessions (i.e., two consecutive quarters of negative real GDP growth) has not been met since the Great Recession of 2007-2009, and so the U.S. economy has enjoyed 121 consecutive months of uninterrupted economic growth. Despite frequent claims that the U.S. economy is headed for recession, the article explains that the latest data indicate a continuation of positive real GDP growth during the first two quarters of 2019. Nevertheless, the growth of the economy has been slower than in previous long upswings. To demonstrate this point, the article contrasts the 1990s expansion in which an increase in U.S. GDP of more than 40% occurred with the current long expansion in which U.S. GDP has only risen about 20%. This long period of steady but slow economic growth has also included a sharp rise in economic inequality. The article cites Reuters, the news agency, which highlights the extremely high prices of art pieces and antiques as a reflection of the gains that the rich have enjoyed during the current expansion. Reuters also refers to an increase in mergers and acquisitions, and “purchases of luxury homes, sports teams, and yachts.” Citing UBS, the article also notes that the number of millionaires has risen from 267 in 2008 to 607 in 2018. A significant rise in wealth inequality should be reflected in a Lorenz curve that is bowed farther away from the 45-degree line of perfect equality and in a higher Gini coefficient. The data the article cites support this conclusion. According to the article, the richest 20% of the U.S. population possesses 88% of the nation’s wealth, which represents an increase above the level registered before the 2008 financial crisis. In a period of economic expansion, the benefits of economic growth may be shared very unevenly throughout the population, which may reinforce the class structure of a capitalist society.

Summary of Key Points

1. The official poverty rate measures the percentage of the population that lives below the poverty threshold.
2. The quintile ratio measures the ratio of the income of the top 20% of the population to the income of the bottom 20% of the population.
3. The Gini coefficient is a measure of income inequality (or wealth inequality) that ranges from 0 to 1 with 0 representing perfect income (wealth) equality and 1 representing perfect income (wealth) inequality.
4. Gross domestic product (GDP) measures the market value of all final goods and services produced within the geographic boundaries of a nation within a given year and excludes the values of intermediate goods, transfer payments, purchases of financial assets, and the values of used goods.
5. The expenditures approach to the measurement of GDP involves the sum of consumer spending, investment spending, government spending, and net export spending.
6. The income approach to the measurement of GDP involves the sum of U.S. employee compensation, U.S. rental income, U.S. interest income, U.S. profit income, taxes on production and imports, net foreign factor income, depreciation, and statistical discrepancy.
7. Nominal GDP measures aggregate output using current market prices whereas real GDP measures aggregate output using constant, base year market prices.
8. Feminist economists are critical of the GDP measure because it excludes unpaid household work, which women have mainly performed. The use of imputed values to estimate women’s unpaid contribution only addresses part of the problem due to the intrinsic value of much of this work that market values do not reflect.
9. Gross National Happiness (GNH) is an alternative measure of economic well-being that is based on spiritual, physical, social, and environmental health.
10. The official unemployment rate measures the percentage of the labor force that is unemployed. The labor force participation rate measures the percentage of the civilian non-institutional population that is in the labor force.
11. The GDP deflator is the ratio of nominal GDP to real GDP and serves as a measure of the general price level. It may also be used to calculate the inflation rate.
12. The Consumer Price Index (CPI) measures the value of a basket of consumer goods and may be used to estimate the general price level and to calculate the inflation rate.
13. The real interest rate measures the real cost of borrowing and is calculated as the difference between the nominal interest rate and the inflation rate.
List of Key Terms

Poverty threshold
Ratio of income to poverty
Poverty
Near poverty
Deep poverty
Income deficit
Income surplus
Official poverty rate
Countercyclical
Supplemental poverty measure
Conspicuous consumption
Quintile ratio
Lorenz curve
Gini coefficient
Perfect income equality
Perfect income inequality
Gross domestic product (GDP)
National Income and Product Accounts (NIPA)
Flow variable
Production possibilities frontier (PPF)
Intermediate goods
Multiple counting
Government transfer payments
Private transfer payments
Income-expenditure identity
Expenditures approach
Personal consumption expenditures
Durable goods
Nondurable goods
Services
Gross private domestic investment
Final capital goods
Residential fixed investment
Business fixed investment
Inventory investment
Private capital stock
Depreciation
Stock variable
Government purchases
Net exports
Trade balance
Trade deficit
Trade surplus
Balanced trade
Income approach
Compensation
Rental income
Interest income
Profit income
Unincorporated businesses
Sole proprietorships
Partnerships
Unlimited liability
Corporate profits
Limited liability
Dividends
Shareholders
Retained earnings
Taxes on production and imports
National income
Net foreign factor income
Gross National Product (GNP)
Statistical discrepancy
Net Domestic Product (NDP)
Nominal Gross Domestic Product (GDP)
Real Gross Domestic Product (GDP)
Economic growth
Business cycle
Recessions
Expansions
Peak
Trough
Overshooting
Cobweb model
Real GDP growth rate
Per capita real GDP
Real GDP per worker
Imputed value
Extra gross wages
Gross National Happiness (GNH)
Civilian non-institutional population
Not in the civilian non-institutional population
Labor force
Not in the labor force
Employed
Unemployed
Discouraged workers
Labor force participation rate
Unemployment rate
Special unemployment rate
Natural unemployment
Structural unemployment
Frictional unemployment
Cyclical unemployment
Full employment
Potential GDP
Price index
GDP deflator
Inflation rate
Deflation
Substitution bias
New goods bias
Quality change bias
Real interest rate
Nominal interest rate
Hyperinflation

Problems for Review

1. In 2017, the poverty threshold for a family of four people with two children under the age of 18 was $24,858. If a family of this size has an income of $23,500, then calculate the income surplus or deficit and the income to poverty ratio. What do these measures allow you to conclude about the poverty status of the family?
2. Suppose the Lorenz curve representing the income distribution in a country looks like the one shown in Figure 12.19. Calculate the Gini coefficient. If the Lorenz curve then shifts and becomes closer to the diagonal line, what should happen to the Gini coefficient and what would you conclude about the degree of income inequality in this country?
3. Use the information below to answer the questions:
• Households spend $2,800 on durable goods.
• The federal government spends $5000 on final services.
• U.S. employee compensation is $11,400.
• State and local governments spend $1,400 on final durable goods.
• New homes are constructed and sold for $1,500.
• Businesses invest in new durable capital goods, spending $1,800.
• U.S. rental income is $800.
• Businesses add $600 worth of goods to their inventories.
• Businesses sell $400 worth of goods from their inventories.
• U.S. interest income is $1,200.
• U.S. businesses sell $900 of durable goods to the rest of the world.
• U.S. profit income is $4,000.
• U.S. businesses purchase $800 of durable goods from the rest of the world.
• U.S. taxes on production and imports are $2,500.
• Households spend $1,500 on nondurable goods.
• Households spend $6,000 on services.
• New business construction amounts to $900.
• Depreciation of the capital stock is $1,200.
• Foreign citizens and businesses operating in the U.S. earn $700.
• American citizens and businesses operating abroad earn $600.
a. What is consumer spending?
b. What is investment spending?
c. What is government spending?
d. What are exports?
e. What are imports?
f. What are net exports?
g. Does a trade deficit or a trade surplus exist? How do you know?
h. What is GDP using the expenditures approach?
i. What is NDP?
j. What is GNP?
k. What is net foreign factor income?
l. What is GDP using the income approach?
4. Use the information in Table 12.11 to calculate nominal GDP, real GDP, and the real GDP growth rate.
5. Complete the rest of Table 12.11. Calculate the GDP deflator and the inflation rate.
6. Suppose the civilian non-institutional population is 275, the number of employed is 152, and the number of unemployed is 9.6. Calculate the unemployment rate. Calculate the labor force participation rate.
7. Use the information in Table 12.12 to calculate the CPI and the inflation rate.
8. If your nominal income is $2,000 per month in 2018 and the CPI is 1.40 with 2011 as the base year, then what is the real value of your monthly income in constant 2011 dollars?
9. If the nominal interest rate is 9% and the inflation rate is 6.2%, then what is the real interest rate?
1. Center for Poverty Research, University of California, Davis. “How is Poverty Measured in the United States?” Web. Accessed on April 13, 2018. https://poverty.ucdavis.edu/faq/how-poverty-measured-united-states
2. Ibid.
3. United States Census Bureau. “How the Census Bureau Measures Poverty.” Web. Accessed on April 13, 2018. https://www.census.gov/topics/income-poverty/poverty/guidance/poverty-measures.html
4. Ibid.
5. Institute for Research on Poverty. “How is Poverty Measured in the United States?” University of Wisconsin-Madison. 2016. Web. Accessed on April 13, 2018. www.irp.wisc.edu/faqs/faq2.htm
6. United States Census Bureau. “How the Census Bureau Measures Poverty.” Web. Accessed on April 13, 2018. https://www.census.gov/topics/income-poverty/poverty/guidance/poverty-measures.html
7. Ibid.
8. Institute for Research on Poverty. “How is Poverty Measured in the United States?” University of Wisconsin-Madison. 2016. Web. Accessed on April 13, 2018. www.irp.wisc.edu/faqs/faq2.htm
9. United States Census Bureau. “How the Census Bureau Measures Poverty.” Web. Accessed on April 13, 2018. https://www.census.gov/topics/income-poverty/poverty/guidance/poverty-measures.html
10. Blank, Rebecca M. “How We Measure Poverty.” Brookings. September 15, 2008. Web. Accessed on April 13, 2018. https://www.brookings.edu/opinions/how-we-measure-poverty/
11. Ibid.
12. Ibid.
13. Center for Poverty Research, University of California, Davis. “How is Poverty Measured in the United States?” Web. Accessed on April 13, 2018. https://poverty.ucdavis.edu/faq/how-poverty-measured-united-states
14. Institute for Research on Poverty. “How is Poverty Measured in the United States?” University of Wisconsin-Madison. 2016. Web. Accessed on April 13, 2018. www.irp.wisc.edu/faqs/faq2.htm
15. Center for Poverty Research, University of California, Davis. “How is Poverty Measured in the United States?” Web. Accessed on April 13, 2018. https://poverty.ucdavis.edu/faq/how-poverty-measured-united-states
16. Savage, Michael. “Richest 1% on Target to Own Two-Thirds of All Wealth by 2030.” The Guardian. U.S. Edition. April 7, 2018. Web. Accessed on April 14, 2018. https://www.theguardian.com/business/2018/apr/07/global-inequality-tipping-point-2030
17. McConnell and Brue (2008), p. 109, emphasize the relationship between gross investment, depreciation, and net investment when discussing the factors that influence the private capital stock.
18. U.S. Bureau of Economic Analysis, Real Gross Domestic Product [GDPCA], retrieved from FRED, Federal Reserve Bank of St. Louis; https://fred.stlouisfed.org/series/GDPCA, April 15, 2018.
19. Prof. Leo Navin first introduced me to a similar version of this model when I was a student in his macroeconomics principles class at Bowling Green State University in 1997.
20. Aslaksen (1999), p. 413.
21. Ibid., p. 413.
22. Ibid., p. 414.
23. Ibid., p. 415.
24. Ibid., p. 415.
25. Ibid., p. 415.
26. Ibid., p. 414.
27. Ibid., p. 414.
28. Ibid., p. 415.
29. Ibid., p. 415.
30. Schultz, Kai. “In Bhutan, Happiness Index as Gauge for Social Ills.” The New York Times. January 17, 2017. Web. Accessed on April 10, 2018. www.nytimes.com/2017/01/17/world/asia/bhutan-gross-national-happiness-indicator-.html
31. Kelly, Annie. “Gross national happiness in Bhutan: the big idea from a tiny state that could change the world.” The Guardian. U.S. Edition. December 1, 2012. Web. Accessed on April 10, 2018. https://www.theguardian.com/world/2012/dec/01/bhutan-wealth-happiness-counts
32. Ibid.
33. Ibid.
34. Ibid.
35. Schultz, Kai. “In Bhutan, Happiness Index as Gauge for Social Ills.” The New York Times. January 17, 2017. Web. Accessed on April 10, 2018. www.nytimes.com/2017/01/17/world/asia/bhutan-gross-national-happiness-indicator-.html
36. Ibid.
37. Collinson, Patrick. “Finland is the happiest country in the world, says UN report.” The Guardian. U.S. edition. March 14, 2018. Web. Accessed on April 10, 2018. https://www.theguardian.com/world/2018/mar/14/finland-happiest-country-world-un-report
38. Source: Bureau of Labor Statistics. U.S. Department of Labor. Labor Force Statistics from the Current Population Survey. The figures are seasonally adjusted. Data extracted on April 15, 2018. Only the total population figure was taken from the U.S. Census Bureau. Accessed on April 15, 2018.
39. Source: U.S. Bureau of Labor Statistics. Consumer Price Index. Frequently Asked Questions (FAQS).
40. These sources of bias are commonly cited in mainstream textbooks. See Hubbard and O’Brien (2019), pp. 681-682, and Bade and Parkin (2013), pp. 592-593.
41. McConnell and Brue (2008), p. 138, consider a similar example when describing the impact of inflation on real income.
42. See Mishkin (2006), p. 80, footnote 10, for a more exact expression.
43. ContentEngine LLC. “Rich in the US are the winners of the longest economic expansion ever.” CE Noticias Financieras, English ed. Miami. 02 July 2019. | textbooks/socialsci/Economics/Principles_of_Political_Economy_-_A_Pluralistic_Approach_to_Economic_Theory_(Saros)/03%3A_Principles_of_Macroeconomic_Theory/12%3A_Macroeconomic_Measurement.txt |
Goals and Objectives:
In this chapter, we will do the following:
1. Define Say’s Law of Markets and its role in classical economic theory
2. Describe how John Maynard Keynes created a revolution in macroeconomic theory
3. Analyze the Keynesian Cross model and the Keynesian multiplier effect
4. Build the aggregate demand/aggregate supply (AD/AS) model to explain the price level and level of aggregate output
5. Apply the AD/AS model to several historical cases from U.S. economic history
6. Identify the Neoclassical Synthesis Model and the Post-Keynesian critique of it
This chapter introduces the reader to macroeconomic theory. One major goal is to describe how the key macroeconomic variables from the previous chapter are determined using theoretical models. To really understand these theoretical models, however, it is helpful to explain how the field of macroeconomics developed in the first place. This chapter thus explains the way in which John Maynard Keynes led a revolution in economic thought in the 1930s. Keynes laid the groundwork for what became known as macroeconomics. To understand Keynes’s role in the history of economic thought, it is necessary to understand his critique of the classical theory of the aggregate economy and one of its key elements known as Say’s Law of Markets. Once Keynes’s role is made clear, it will be easier to understand his theory of output and employment. This theory is represented in the Keynesian Cross model and Keynes’s concept of the spending multiplier. We will also build the aggregate demand/aggregate supply (AD/AS) model so as to provide an explanation, not only for aggregate output and employment, but also for the general level of prices. We will then use the AD/AS model to understand actual changes in aggregate output and the price level that occurred during different periods in U.S. economic history.
Say’s Law and the Classical Theory
It might strike the reader as strange that economics is divided into two separate fields referred to as “microeconomics” and “macroeconomics.” This division has not always existed. During the eighteenth and early nineteenth centuries, the discipline was simply referred to as “political economy.” Economists generally regarded questions at the individual level and questions at the societal level as questions to be treated together. During the marginalist revolution of the late nineteenth century, however, some economists became very much preoccupied with questions of efficiency and individual optimization. Larger questions having to do with economic growth, the general level of prices, and overall employment became increasingly separate from this specialized study of atomistic behavior. In 1936, this bifurcation intensified when John Maynard Keynes published his famous book, The General Theory of Employment, Interest, and Money. Writing shortly after the onset of the Great Depression, Keynes offered a theory to explain how aggregate output, employment, and prices are determined. His explanation sharply contrasted with early neoclassical microeconomic theories of how individual markets function.
In his critique of early neoclassical microeconomic theory, Keynes referred to the theory as the “classical” theory. Keynes’s decision to do so stemmed in part from the fact that early neoclassical theory had a laissez-faire orientation that was very similar to the laissez-faire orientation of the classical theories of Adam Smith and David Ricardo. His decision also stemmed from his critique of one aspect of classical economics that was central to early neoclassical thinking, referred to as Say’s Law of Markets. According to Say’s Law of Markets (or Say’s Law, for short), every supply creates its own demand. That is, if a commodity is produced, it will generate enough factor income for the owners of land, labor, and capital to purchase the produced commodity. After all, the price of the commodity may be divided into its component parts of rent, wages, and profits. Hence, these incomes will be sufficient to realize the price of the commodity. The stunning implication of this simple argument at the level of the aggregate economy is that enough income will always exist to purchase the entire output of commodities. Therefore, no general gluts or periods of overproduction are possible. Using nothing but logic, one can conclude that major depressions should never occur. If they do occur, they must be caused by some interference with the free flow of commodities and money, and government interference is a likely culprit. Say’s Law thus supported the laissez-faire orientation of classical economics and early neoclassical economics later.
Another way to understand the classical theory of output and employment is to consider the way in which competition in the labor market will lead to a full employment equilibrium outcome, as depicted in Figure 13.1.[1]
If the market wage (w) is above the equilibrium wage (w*), then a surplus of labor or unemployment exists. Competition will force the market wage down until the market wage coincides with the equilibrium wage. The surplus will be eliminated and the quantity of labor demanded will equal the available labor force (Lf*). With the economy operating at full employment, the economy will be able to produce at the full employment level of GDP (GDPf*). The full employment GDP is shown in Figure 13.1 using the aggregate production function. The aggregate production function exhibits diminishing returns to labor, which assumes that all other factors of production (e.g., land and capital) are held constant. Of course, all available land and capital will also be fully employed, assuming that the markets for these resources have cleared as well. Given the available production technology and the fully employed resources, the economy will produce the potential GDP.
John Maynard Keynes’s General Theory of Employment, Interest, and Money
John Maynard Keynes was well-trained in neoclassical economic theory. In fact, Keynes’s critique of neoclassical theory was not intended to undermine it entirely. Instead, Keynes regarded that theory as applicable to a special case only, namely the case of an economy operating at full employment. That is, once the economy operates at full employment, then all of the efficiency conclusions of neoclassical theory apply once again. Keynes’s theory was intended to be a more general theory of how the aggregate economy functions, however, and so it offered a framework for thinking about periods during which the economy failed to achieve full employment. To make this argument, Keynes had to attack that central tenet of classical economic theory known as Say’s Law. The simple flow diagram in Figure 13.2 helps to illustrate Keynes’s argument.[2]
If a surplus of savings exists, then in the classical theory, the market rate of interest (i) will fall to the equilibrium level of i* and clear the loanable funds market. Aggregate saving will equal aggregate investment. If, however, other forces are acting on the rate of interest that prevent it from falling, then the surplus of savings will persist and not all savings will be invested. As a result, not all finished commodities will be sold due to insufficient investment demand. Alternatively, it might be that the market clearing level of the rate of interest (i*) does not occur at a market rate of interest that is greater than or equal to zero due to a very large supply of savings and a relatively low level of investment demand. In that case, the market rate of interest will not be able to fall enough to clear the market for loanable funds and the surplus of savings will persist. In this case as well, an excess supply of finished commodities or a glut will occur. The result will be falling production as firms scale back production, falling employment as they lay off workers, and falling prices due to the excess supplies of commodities. All these results were observed during the Great Depression, which is exactly what Keynes developed his theory of effective demand to explain with the hope of ending depressions by means of enlightened government economic policy.
The Consumption Function and the Saving Function
To understand the theory that Keynes developed, it is necessary to begin with the construction of some foundational elements. We first introduce the consumption function and show how it is represented graphically. The consumption function suggests that a positive relationship exists between the level of current consumption expenditures that households are planning and the current level of disposable income. That is, as households acquire more disposable income, their level of consumption rises, other factors held constant (ceteris paribus). Planned consumption thus depends on the disposable income of households. That is, C is a function of DI or C = f (DI). Alternatively, C is the dependent variable and DI is the independent variable, according to this theory. This relationship can be represented mathematically as follows:
$C=C_{0}+\frac{\Delta C}{\Delta DI}DI$
In the consumption function, C0 represents autonomous consumption. Autonomous consumption is the level of consumption expenditures that households choose even if their DI falls to zero (i.e., an amount that is independent of income). Such expenditures might be possible if households rely on their savings to finance consumer expenditures. Borrowing would be another way in which consumer expenditures might be positive even when DI equals zero. The consumption function also includes ΔC/ΔDI, which represents the marginal propensity to consume (mpc). The mpc refers to the additional consumption expenditures that households choose for each additional dollar of disposable income received. Figure 13.4 reveals that the level of autonomous consumption determines the vertical intercept of the consumption function in the graph. Similarly, the mpc represents the slope of the consumption function in the graph.
Because the mpc is assumed to be fixed at every level of DI, the slope is constant and the consumption curve graphs as a straight line. To consider a simple example, suppose that the consumption function is C = 200+0.75DI. In this case, even if DI is equal to zero, the households will spend $200 billion. Also, for every $1 of additional DI that the households receive, they will consume an additional $0.75.

It is worth noting that only a change in DI can cause a movement along the consumption curve in the graph whereas a change in autonomous consumption will shift the consumption curve up or down. Various factors can shift the consumption curve up or down. One example is a change in household wealth, which is distinct from disposable income. The reader might recall that DI is a flow variable. That is, it is measured per period of time (e.g., per year). Household wealth, on the other hand, is a stock variable and is measured at a point in time. If households experience a reduction in household wealth due to a recession that causes asset values (e.g., stock prices) to fall, then the consumption expenditures will fall at every level of disposable income and the consumption curve will shift in a downward direction. Alternatively, an economic boom that raises household wealth will cause the consumption curve to rise at every income level and will lead to an upward shift.

Just like we can represent the level of planned household consumption expenditures, we can also represent the level of planned household saving as the level of disposable income changes. The saving function suggests that a positive relationship exists between planned household saving and disposable income. That is, as households acquire more disposable income, their level of saving rises, other factors held constant (ceteris paribus). Planned saving thus depends on the disposable income of households. That is, S is a function of DI or S = f (DI). Alternatively, S is the dependent variable and DI is the independent variable, according to this theory. This relationship can be represented mathematically as follows:

$S=S_{0}+\frac{\Delta S}{\Delta DI}DI$

In the saving function, S0 represents autonomous saving. Autonomous saving is the level of saving that households choose even if their DI falls to zero (i.e., an amount that is independent of income). Such “saving” might occur if saving is negative. That is, households do not save but actually borrow or draw down past savings. The saving function also includes ΔS/ΔDI, which represents the marginal propensity to save (mps). The mps refers to the additional saving that households choose for each additional dollar of disposable income received. Figure 13.5 reveals that the level of autonomous saving determines the vertical intercept of the saving function in the graph. Similarly, the mps represents the slope of the saving function in the graph. Because the mps is assumed to be fixed at every level of DI, the slope is constant and the saving curve graphs as a straight line. To consider a simple example, suppose that the saving function is S = -200+0.25DI. In this case, even if DI is equal to zero, the households will save -$200 billion. Also, for every $1 of additional DI that the households receive, they will save an additional $0.25.
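The two numerical examples can be checked with a short Python sketch. The consumption and saving functions below come from the examples just given (in billions of dollars); the disposable income levels at which they are evaluated are hypothetical.

```python
# Consumption and saving functions from the examples in the text (billions of dollars).
def consumption(di):
    return 200 + 0.75 * di   # autonomous consumption of 200, mpc of 0.75

def saving(di):
    return -200 + 0.25 * di  # autonomous saving of -200, mps of 0.25

# Evaluate both functions at a few hypothetical levels of disposable income.
for di in (0, 400, 800, 1200):
    c, s = consumption(di), saving(di)
    print(f"DI = {di:4}: C = {c:7.2f}, S = {s:7.2f}, C + S = {c + s:7.2f}")
# Note that C + S returns DI at every income level, a relationship discussed shortly in the text.
```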
It is worth noting that only a change in DI can cause a movement along the saving curve in the graph whereas a change in autonomous saving will shift the saving curve up or down. Various factors can shift the saving curve up or down. A rise in household wealth will lead to a fall in saving (as consumption rises) and shift the saving curve downward. A fall in household wealth will lead to a rise in saving and shift the saving curve upward. As in the case of consumption, planned saving by households represents a flow variable.
It is worth taking a moment to reflect on the relationship between the vertical intercepts of the consumption and saving functions. At both intercepts, DI = 0, and because disposable income is either consumed or saved (DI = C + S), it follows that C + S = 0 when DI = 0. In other words, when DI = 0, C = -S, and so the vertical intercept of the saving curve is the negative of the vertical intercept of the consumption curve. Therefore, whenever the consumption curve shifts upward, the saving curve will shift downward, and vice versa. Figure 13.6 shows how a shift of the consumption curve will lead to an opposite change in the saving curve.
One should also consider the relationship between the mpc and the mps. Suppose we add the two measures together as follows:
$mpc+mps=\frac{\Delta C}{\Delta DI}+\frac{\Delta S}{\Delta DI}=\frac{\Delta C+\Delta S}{\Delta DI}=\frac{\Delta DI}{\Delta DI}=1$
In other words, the sum of the mpc and the mps is always equal to 1. This result is very intuitive. If the households receive $1 of additional DI and they consume $0.75 of it, then the remaining $0.25 must be saved.

Now that we understand the essential building blocks of Keynes’s theory, we need to consider a national income accounts identity that relates disposable income (DI) to consumption (C) and saving (S).

$DI=C+S$

We then consider the case in which S = 0 and thus C = DI. To represent this case graphically, we introduce a reference line that has a 45 degree angle relative to the horizontal axis as shown in Figure 13.7. At the break-even income level of $40 billion in the top graph, C = DI and thus S = 0. Therefore, the bottom graph shows the saving function crossing the DI axis at $40 billion, indicating a level of saving equal to zero at that level of DI. As we move to the right of the break-even income level in the top graph, we see that saving becomes positive and continues to grow as the gap between the two lines becomes larger. Therefore, in the bottom graph, the level of saving becomes positive and continues to grow when DI rises above $40 billion.
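More generally, the break-even level of income can be found by setting C = DI in the consumption function. As an illustration using the earlier hypothetical consumption function C = 200 + 0.75DI (a different function from the one underlying Figure 13.7), the break-even income would be:

$DI^{*}=\frac{C_{0}}{1-mpc}=\frac{200}{1-0.75}=800$

At a disposable income of $800 billion, the households would spend exactly their income and saving would equal zero.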
Let’s suppose we are given the following consumption function:
$C=20+0.75DI$
It turns out that it is possible to derive the saving function from this information. Recall that DI = C+S. Substituting the consumption function into this equation yields DI = 20+0.75DI+S. Solving for S yields S = DI – 0.75DI – 20, which may be simplified as follows:
$S=-20+0.25DI$
It should be clear that we can use a shortcut method to obtain the saving function from the consumption function. If we begin with the consumption function and negate the vertical intercept of 20, then we obtain the vertical intercept of -20 for the saving function. Furthermore, if we subtract the mpc of 0.75 from 1, then we obtain the mps of 0.25 since the two marginal propensities always add up to 1.
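The shortcut can also be expressed compactly in code. The following Python sketch (the function name and the final check are purely illustrative, not part of the text) derives the saving function from the consumption function C = 20 + 0.75DI and verifies that consumption and saving always sum to disposable income:

```python
def saving_from_consumption(c0, mpc):
    """Derive the saving function S = s0 + mps*DI from C = c0 + mpc*DI."""
    s0 = -c0          # autonomous saving is the negative of autonomous consumption
    mps = 1 - mpc     # the two marginal propensities always sum to one
    return s0, mps

c0, mpc = 20, 0.75    # values from the example in the text
s0, mps = saving_from_consumption(c0, mpc)
print(s0, mps)        # -20 0.25

# Check the identity DI = C + S at a few illustrative levels of disposable income
for di in [0, 40, 100, 200]:
    c = c0 + mpc * di
    s = s0 + mps * di
    assert abs((c + s) - di) < 1e-9
```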
The Keynesian Cross Model for a Private, Closed Economy
It should be clear that Keynes radically departed from the early neoclassical economic theory in which he was trained. In Keynes’s theory, aggregates, like households, business enterprises, and the government, take center stage. Individual economic agents do not play an important role in the theory. Also, because households and businesses tend to behave in a collective fashion (e.g., consuming more when DI rises or investing less when business expectations turn sour), mass psychology becomes the primary explanation for these behaviors rather than individual rationality and serves as an alternative conceptual point of entry.[4] For example, households as a group have a propensity to consume a certain additional amount whenever DI rises. Nevertheless, the unidirectional logic of neoclassical theory is preserved in Keynes’s theory.[5] That is, one variable affects another in a single causal direction only. For example, a rise in DI causes a rise in consumer spending, but not vice versa.
Based on this theoretical foundation, it is now possible to develop the basic Keynesian theory of output and employment. The Keynesian Cross model, which is also sometimes referred to as the aggregate expenditures model, uses these theoretical tools to explain the equilibrium levels of aggregate output and employment that emerge. To keep the model simple, we initially assume that the economy is private and closed. That is, only households and businesses exist. Because it is a purely private economy, no government exists. Because it is a closed economy, no foreign trade exists. The price level is also assumed to be constant and so the Keynesian Cross model does not explain prices, only output and employment. Finally, it is assumed that DI is equal to real GDP. In the national income accounts, the income that American households have for consumption and saving (DI) is not equal to real GDP due to the presence of depreciation, net foreign factor income, taxes, and transfer payments. With a closed, private economy without depreciation, no such adjustments need to be made, and DI is equal to real GDP.
To build the Keynesian Cross model, we need to explain the determination of planned investment spending. As explained previously, the level of planned investment spending is determined in the loanable funds market. Savers lend funds to borrowers at interest, and their competitive interaction determines a particular quantity of loanable funds exchanged. These loanable funds are invested in new production plants and equipment. If we assume that the level of planned investment spending is independent of the level of real GDP, then we can represent its determination in the loanable funds market as shown in Figure 13.10 below.
As the reader can see in the graph on the right, investment spending (I) is equal to I0 and is thus constant at all levels of real GDP (Y).
It is now possible to write two equations representing the two types of planned expenditures in this simple economy. The consumption function representing the planned consumption of households may be written as:
$C=C_{0}+\frac{\Delta C}{\Delta Y}Y$
The reader should notice that real GDP (Y) has replaced DI in the consumption function. The reason, of course, is the assumption that real GDP is equal to DI in this economy. The second equation indicates that investment spending (I) is constant, as previously noted.
$I=I_{0}$
If we combine these two types of planned spending, we can obtain a planned aggregate expenditures (A) function as follows:
$A=C+I=C_{0}+\frac{\Delta C}{\Delta Y}Y+I_{0}$
This planned aggregate expenditures function may be written as follows by rearranging the terms:
$A=(C_{0}+I_{0})+\frac{\Delta C}{\Delta Y}Y$
In this function, C0 + I0 represents autonomous spending. That is, households and businesses will select this level of planned spending regardless of the level of real GDP. The second portion, (ΔC/ΔY)Y, represents induced spending. Induced spending is planned aggregate spending that is directly related to the level of real GDP. If we place the planned consumer spending (C) curve and the planned aggregate expenditures (A) curve on the same graph, we obtain the graph shown in Figure 13.11.
At this point, planned aggregate spending equals real aggregate output and so all plans are perfectly satisfied by the level of production. The reason this level of real GDP is the equilibrium level can be understood by considering what will occur if the economy is not producing at this point. Suppose that the level of real output is above Y* in the graph. In that case, planned aggregate spending is below the level of real output as shown on the 45-degree line. That is, A < Y. Firms will be producing more than households and firms wish to purchase, and business inventories will build up as a result of the excess supply of commodities. The consequence will be a drop in the level of production as firms cut output, and real GDP will fall towards the equilibrium level. Conversely, if the level of real GDP is below Y* in the graph, then planned aggregate spending is above the level of real output as shown on the 45-degree line. That is, A > Y. Firms will be producing less than households and firms wish to purchase, and business inventories will be depleted as a result of the excess demand for commodities. The consequence will be a rise in the level of production as firms raise output, and real GDP will rise towards the equilibrium level.
It is also possible to calculate the equilibrium real GDP given a specific consumption function and level of investment. For example, suppose that the following two equations represent an economy:
$C=200+0.75Y$
$I=100$
Given this information, it is possible to obtain the planned aggregate expenditures function by simply adding the two equations together:
$A=300+0.75Y$
To obtain the equilibrium real GDP for this economy, we need to use the equilibrium condition:
$A=Y$
Plugging in the planned aggregate expenditures function and solving for Y, we obtain the equilibrium level of real GDP.
$300+0.75Y=Y$
$0.25Y=300$

$Y=1200$

Figure 13.13 represents the solution graphically.
The graph shows that the equilibrium level of real GDP occurs at the intersection of the reference line and the planned aggregate expenditures curve. Solving the two equations simultaneously yields the solution.
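The same answer can be computed directly. A minimal Python sketch (with illustrative names and the numbers from the example above) solves A = Y by using the fact that the equilibrium output equals autonomous spending divided by (1 − mpc):

```python
def equilibrium_gdp(autonomous, mpc):
    """Solve A = Y, where A = autonomous + mpc*Y, giving Y* = autonomous / (1 - mpc)."""
    return autonomous / (1 - mpc)

c0, i0, mpc = 200, 100, 0.75          # values from the example in the text
print(equilibrium_gdp(c0 + i0, mpc))  # 1200.0
```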
It turns out that it is possible to think about the determination of the equilibrium real GDP from another angle. That is, by using the saving function and considering the level of investment spending, it is possible to arrive at a different but related equilibrium condition. To understand this point, consider the equilibrium condition that we have been using up to this point:
$A=Y$
The reader should recall that planned aggregate spending (A) is the sum of planned consumer spending (C) and planned investment spending (I). Real GDP or real income (Y) is either consumed (C) or saved (S). If we break down each term in the equation into its component parts, we obtain the following:
$C+I=C+S$
It is easy to see that the level of consumer spending may be subtracted from both sides of this equation to yield a new equilibrium condition:
$I=S$
In equilibrium then, planned investment and saving must be equal. Figure 13.14 represents this solution graphically and relates it to the Keynesian Cross model that we have already discussed.
In Figure 13.14, if the level of real GDP is below the equilibrium level of Y*, then planned investment exceeds saving. With the injection larger than the leakage, the result is a rise in real GDP and a movement towards the equilibrium level. On the other hand, if the level of real GDP is above the equilibrium level of Y*, then saving exceeds planned investment. With the leakage larger than the injection, the result is a fall in real GDP and a movement towards the equilibrium level.
It is also possible to arrive at the answer algebraically using the same information we used when discussing the Keynesian Cross model. Because we know the shortcut method of deriving the saving function from the consumption function, we can write the saving function alongside the consumption function as follows:
$C=200+0.75Y$
$S=-200+0.25Y$
Given that I = 100, we use the new equilibrium condition as follows:
$I=S$
$100=-200+0.25Y$
We then solve for Y to obtain:
$Y=1200$
The reader will note that this equilibrium real GDP is the same as the one calculated in the Keynesian Cross model. Figure 13.16 shows the solution in a saving-investment graph.
It is now possible to clearly distinguish the Keynesian model of output and employment from the classical model. Figure 13.18 shows clearly that the level of planned aggregate expenditure determines the equilibrium level of GDP, which is likely to be below the full employment GDP (Yf) at a point in time.
Furthermore, as businesses adjust production in the movement to equilibrium, they also adjust their workforces. Employment, therefore, moves towards an equilibrium level of L* that corresponds to the equilibrium level of real GDP as shown on the production function. Employment also ends up below the full employment level of Lf. Hence, Keynes’s theory is one of unemployment equilibrium, and it reveals the case of full employment GDP to be a special case. That is, aggregate planned spending would need to be just high enough to produce the full employment GDP as the equilibrium GDP. Keynes could, therefore, argue that his theory was a more general theory than the classical theory of employment and output.
The Paradox of Thrift
An interesting application of the model allows us to understand what macroeconomists mean by the paradox of thrift. In Chapter 2 we learned that an increase in saving leads to greater capital accumulation and an expansion of production possibilities. When considering short run macroeconomic fluctuations, however, saving may lead to a very different result. Suppose that all households decide to increase saving at every level of GDP. In other words, autonomous saving rises. When this change occurs, the saving curve shifts upward as shown in Figure 13.19.
Initially, saving rises at Y1, but this increase in saving causes a discrepancy between saving and planned investment spending. Specifically, saving rises above planned investment spending. With the leakage of saving being higher than the injection of planned investment, real GDP begins to fall. The drop in real GDP causes a movement along the new saving curve. In other words, induced saving declines. The reduction in saving continues until it once again equals planned investment and the economy returns to equilibrium at Y2. The problem for the economy is that the attempt to save more has led to a recession (i.e., falling real GDP) rather than to economic growth, as suggested by the production possibilities model. Furthermore, and rather paradoxically, even though all households decided to save more, aggregate saving returns to the same level that previously existed.[6] The reason is that the recession has led to falling incomes, which reduces saving.
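A minimal numerical sketch (the figures are hypothetical and are not drawn from Figure 13.19) makes the paradox concrete. Suppose the saving function is S = -200 + 0.25Y and planned investment is fixed at I = 100. The initial equilibrium requires S = I:

$-200+0.25Y_{1}=100 \Rightarrow Y_{1}=1200$

If autonomous saving rises by 50, the new saving function is S = -150 + 0.25Y, and the new equilibrium becomes:

$-150+0.25Y_{2}=100 \Rightarrow Y_{2}=1000$

Real GDP falls by 200, yet equilibrium saving equals 100 in both cases. The drop in income reduces induced saving by exactly as much as autonomous saving rose.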
This example illustrates what neoclassical economists refer to as the fallacy of composition. People often assume that what holds true for one person acting alone must also hold true for a group of people acting together. To use a classic example, an individual attending a sporting event might stand up to obtain a better view of the field. This strategy works, but if everyone has the same idea, then when all stand, no one has a better view than before. Similarly, one household might save more of its income successfully, but if all households do the same, then no one is able to save more.
The Multiplier Effect
Now that the Keynesian Cross model has been developed, we can consider one of Keynes’s most important contributions to our understanding of the manner in which changes in spending affect the overall economy. The Keynesian multiplier effect refers to the way in which an increase in a specific component of spending, such as investment spending, raises real GDP by a multiple of the spending increase. To illustrate this point, we will consider the most volatile component of aggregate spending and the impact of changes in it on real GDP. Investment spending tends to be highly volatile for a number of reasons. It is influenced by business expectations about future profits, interest rates, technological change, and taxes on profit income. Suppose that business expectations about future profitability improve significantly. With businesses feeling more optimistic about the future of the economy, a collective rush to invest in new capital takes place. Keynes referred to such impulses during periods of business optimism as animal spirits. The consequence is a rise in demand in the loanable funds market. Such an increase leads to a rise in the equilibrium level of loanable funds exchanged and to a rise in planned investment spending as shown in Figure 13.20.
The rise in planned investment raises the aggregate expenditures curve as shown in Figure 13.20. The result is a higher level of equilibrium real GDP and the economy experiences an economic boom. Interestingly, the graph on the right in Figure 13.20 suggests that the level of real GDP rises by more than the rise in planned investment. Because the slope of the reference line is equal to 1 and the difference between the old and new A curves is equal to the change in planned investment, the first move up along the reference line would indicate a rise in real GDP equal to the rise in planned investment spending. As the reader can see in the graph, however, real GDP rises by more than this amount. Hence, a multiplier effect is implicit in the Keynesian Cross model.
It is worth asking why this change occurs. Intuitively, when businesses engage in new investment spending, they raise real GDP by the amount of the investment spending. This spending is received as income by the households though. Once received, the households spend a portion of the additional income, which is determined by the marginal propensity to consume. That additional expenditure is, in turn, received as income by other households, and part of it is spent as well. This cycle continues indefinitely, but the spending that occurs in the successive rounds becomes smaller and smaller due to the saving that occurs in each round. Ultimately, aggregate real GDP rises by a finite amount but also by a multiple of the initial amount of investment spending.
It is possible to derive a formula that tells us the exact impact that a change in investment spending has on real GDP, other factors held constant. Let’s define the investment multiplier as the ratio of the change in real GDP to the change in planned investment spending. By taking into account the way in which the households engage in an infinite series of consumer spending rounds, we can prove that the multiplier is positively related to the marginal propensity to consume as shown below.
$\frac{\Delta GDP}{\Delta I}=\frac{1}{1-mpc}=\frac{1}{mps}$
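One way to get a feel for the formula is to simulate the successive spending rounds directly. The short Python sketch below (purely illustrative; the formal proof is referenced in the next paragraph) adds up the rounds of spending set off by a rise in investment and compares the total with the formula:

```python
def multiplier_rounds(initial_spending, mpc, rounds=1000):
    """Add up the successive rounds of spending set off by an initial injection."""
    total, this_round = 0.0, float(initial_spending)
    for _ in range(rounds):
        total += this_round
        this_round *= mpc   # only the mpc share of each round is re-spent
    return total

mpc, delta_i = 0.75, 100                 # illustrative values used in the text
print(multiplier_rounds(delta_i, mpc))   # approaches 400.0
print(delta_i / (1 - mpc))               # 400.0, since the multiplier is 1/(1 - mpc) = 4
```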
Figure 13A.1 provides the details of the proof for any interested readers. Our main purpose, however, is to understand the intuition behind the formula and to learn how to use it to calculate changes in real GDP. To understand how to use the formula, let’s assume that planned investment spending rises by 100. If the mpc is 0.75, then we can calculate the investment multiplier by plugging the mpc into the formula as follows:
$\frac{\Delta GDP}{\Delta I}=\frac{1}{1-mpc}=\frac{1}{1-0.75}=\frac{1}{0.25}=4$
Because investment spending rises by 100, we can write the equation as follows:
$\frac{\Delta GDP}{100}=4$
By solving for the change in GDP, it is clear that real GDP will rise by $400 billion in this case. Alternatively, the multiplier implies that for every $1 of additional investment spending, real GDP will rise by $4.

The Keynesian Cross Model of an Open, Mixed Economy

In this section, we would like to expand the Keynesian Cross model to include international trade and government activity. To account for these types of expenditure, we will make some simplifying assumptions. First, we will assume that net exports are exogenously given as follows:

$X_{n}=X_{n0}$

In other words, net exports are at the same level regardless of the level of real GDP as shown in Figure 13.21. In the Keynesian Cross model, we can now add net exports to the consumption function and the level of planned investment spending to obtain the aggregate expenditures function as follows:

$A=C+I+X_{n}$

$A=(C_{0}+I_{0}+X_{n0})+mpc \cdot Y$

As the reader can see, autonomous spending now includes net export spending. Otherwise, the aggregate expenditures function is the same. Because net exports may be positive, negative, or equal to zero, the vertical intercept may be above, below, or the same as the vertical intercept of the aggregate expenditures curve for the closed, private economy. Figure 13.23 shows the case of a trade surplus and the impact that it has on the position of the aggregate expenditures curve relative to that of a closed, private economy.

As Figure 13.24 shows, a trade surplus raises the aggregate expenditures curve and increases the equilibrium real GDP. A trade deficit, on the other hand, lowers the aggregate expenditures curve and reduces the equilibrium real GDP. Balanced trade leaves both the aggregate expenditures curve and the equilibrium real GDP unchanged. The case of balanced trade demonstrates that the spending by foreigners on the economy’s exports is exactly canceled by the spending of domestic buyers on imports from other countries in terms of the impact on real GDP.

This analysis also allows us to draw a conclusion about the desirability of a trade surplus and the disadvantage of a trade deficit. Trade deficits appear to be harmful because they lower the nation’s equilibrium real GDP and employment level. Trade surpluses, on the other hand, appear to be beneficial because they raise the nation’s aggregate output and employment.

It is important to consider the various factors that lead to trade surpluses and trade deficits.[8] Income levels in other countries certainly play a role. If trading partners undergo economic expansions and incomes are rising, then foreigners will buy more U.S. exports and the trade balance will improve (i.e., net exports will rise). If trading partners experience recessions and incomes are falling, then foreigners will buy fewer U.S. exports and the trade balance will worsen (i.e., net exports will fall). A second factor affecting the trade balance is tariff policy. A tariff is simply a tax on imported commodities. If tariffs are imposed, then prices of imports rise and the quantity of imports will decline. This change should improve the trade balance, possibly causing a trade surplus. At the same time, however, other nations might retaliate by imposing their own tariffs, which might reduce the nation’s exports. In that case, the overall impact on net exports appears to be uncertain. Such retaliatory tariffs were imposed during the 1930s after the U.S. Congress passed the Smoot-Hawley Tariff Act and sparked a trade war.
Finally, it is also possible that a change in the foreign exchange value of a nation’s currency might alter the trade balance. For example, if the domestic currency depreciates, then the nation’s exports will become cheaper for foreigners. The result might be a rise in net exports and a trade surplus. A depreciating currency might then raise output and employment. A potential problem that might arise, however, is retaliatory action taken by foreign central banks. If foreign central banks decide to intervene in the foreign exchange market and deliberately devalue their currencies hoping to acquire a similar competitive trade advantage, then the result might be a net appreciation of the domestic currency. The nation’s exports might then become more expensive for foreigners and net exports will fall. This kind of competitive devaluation of currencies occurred in the 1930s as well, as different nations struggled to stimulate their domestic economies during the worldwide Great Depression.

Let’s now add government spending to the picture by considering the case of an open, mixed economy. A mixed economy simply refers to an economy with both a private sector and a public sector. First, we will assume that government spending is exogenously given as follows:

$G=G_{0}$

In other words, government spending is at the same level regardless of the level of real GDP as shown in Figure 13.25. Government spending is assumed to be determined by legislators and a whole host of political factors that neoclassical Keynesian economists do not attempt to explain. Nevertheless, we can now add government spending to the consumption function, the level of planned investment spending, and the level of net exports to obtain the aggregate expenditures function as follows:

$A=C+I+X_{n}+G$

$A=(C_{0}+I_{0}+X_{n0}+G_{0})+mpc \cdot Y$

Autonomous expenditure has increased by the amount of the government spending. As Figure 13.26 shows, the addition of government spending increases the vertical intercept of the aggregate expenditures curve by the amount of the government spending. Figure 13.27 shows that the addition of government spending raises the equilibrium real GDP above the level that would exist in a private, open economy.

It should be rather obvious now why Keynes advocated increased government spending during the Great Depression. By increasing government spending, it is possible to increase aggregate output and employment. With a purely private economy stuck at an unemployment equilibrium, government spending can move the economy closer to full employment. Alternatively, reducing government spending during a recession would only worsen the situation by reducing aggregate output and employment in an already weak economy.

We also need to incorporate the other side of the mixed economy, which is the ability of the government to impose taxes. To keep the model relatively simple, we will assume that the government collects a lump sum tax (T) from the households each year. That is, the government does not tax incomes at a particular rate like 20% but rather declares that it will collect a lump sum amount of $200 billion from the households regardless of aggregate income. We thus add the following equation to identify this constant amount of taxes collected.
$T=T_{0}$
Because taxes are collected, it is no longer the case that GDP and disposable income (DI) are equal to one another. To obtain DI, it is now necessary to subtract the lump sum tax from aggregate income (Y). That is, the following equation now holds:
$DI=Y-T$
Since household consumption depends on disposable income, we need to rewrite the consumption function taking into account the lump sum tax. The after-tax consumption function is as follows:
$C_{a}=C_{0}+mpc \cdot DI$

$C_{a}=C_{0}+mpc \cdot (Y-T_{0})$

$C_{a}=(C_{0}-mpc \cdot T_{0})+mpc \cdot Y$
The reader should recall that the pre-tax consumption function was the following:
$C=C_{0}+mpc \cdot Y$

Therefore, the addition of the lump sum tax causes the consumption function to have a smaller vertical intercept by the amount of the mpc times the lump sum tax, as shown in Figure 13.28.
The reason that the lump sum tax lowers consumption at every level of real GDP by this amount is that the households lose the tax amount as part of their income. Because households consume part of their income and save part of their income, when they lose the tax amount, they reduce their consumption by the amount that would have been consumed had they been able to keep this income. That is, they reduce their consumption by the mpc times the amount of the tax.
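To illustrate with the figures used in the worked example later in this section (an mpc of 0.75 and a lump sum tax of $100 billion), the downward shift of the consumption function would be:

$mpc \cdot T_{0}=0.75 \cdot 100=75$

That is, consumption would be $75 billion lower at every level of real GDP, with the remaining $25 billion of the tax absorbed by lower saving.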
We can now add the after-tax consumption function to the other spending components to obtain the aggregate expenditures function as follows:
$A=C_{a}+I+X_{n}+G$
$A=(C_{0}-mpc \cdot T_{0}+I_{0}+X_{n0}+G_{0}) + mpc \cdot Y$

It should be clear that autonomous expenditure has changed yet again. This time it has been reduced by the amount of the mpc times the lump sum tax. The consequence of this change is that the aggregate expenditures curve shifts downward by this amount as shown in Figure 13.29.
Now we can also see the impact that a lump sum tax will have on the equilibrium real GDP. When the lump sum tax is imposed, it shifts the aggregate expenditures curve down, which causes the equilibrium real GDP to fall as shown in Figure 13.30.
It is easy to see why a neoclassical Keynesian economist would oppose a tax increase during a recession. Higher taxes discourage consumption which reduces aggregate spending. The drop in spending leads to lower output and employment and thus harms an already weak economy. On the other hand, a tax cut can stimulate consumer spending, which will raise aggregate spending, output, and employment.
The addition of the lump sum tax completes our Keynesian Cross model and allows us to analyze a wide range of possible changes to the aggregate economy. We can also solve algebraically for the equilibrium real GDP if we have enough information. To show how to find this solution, let’s assume the following about the economy:
$C_{0}=200$
$I_{0}=100$

$X_{n0}=-100$

$G_{0}=200$

$T_{0}=100$

$mpc=0.75$
We can also write the complete aggregate expenditures function as follows:
$A=(C_{0}-mpc \cdot T_{0}+I_{0}+X_{n0}+G_{0})+mpc \cdot Y$
Plugging in the known information into the aggregate expenditures function yields the following:
$A=(200-0.75 \cdot (100)+100-100+200)+0.75 \cdot Y$

$A=325+0.75 \cdot Y$
Now recall the equilibrium condition and solve for Y.
$A=Y$
$325+0.75 \cdot Y=Y$

$0.25Y=325$

$Y=1300$
Figure 13.31 provides a graph that corresponds to this solution.
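The same equilibrium can be verified numerically. The Python sketch below (with illustrative names and the parameter values listed above) constructs autonomous expenditure and solves A = Y:

```python
def open_mixed_equilibrium(c0, i0, xn0, g0, t0, mpc):
    """Equilibrium real GDP for the open, mixed economy with a lump sum tax."""
    autonomous = c0 - mpc * t0 + i0 + xn0 + g0   # 325 with the values below
    return autonomous / (1 - mpc)

# parameter values from the worked example in the text
print(open_mixed_equilibrium(c0=200, i0=100, xn0=-100, g0=200, t0=100, mpc=0.75))  # 1300.0
```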
The Lump Sum Tax Multiplier
Just as a change in investment spending leads to a multiplier effect as explained in the last section, it is also possible to identify a multiplier effect stemming from a change in lump sum taxes. The reasoning is similar. When taxes are increased, households lose disposable income. They reduce consumer spending, which causes incomes to fall further. The additional reduction in incomes leads to a further drop in consumer spending, and so on. As before, with each successive round, consumer spending falls by smaller and smaller amounts because not all of the lost income would have been consumed at each step.
It is possible to derive a formula that tells us the exact impact that a change in lump sum taxes has on real GDP, other factors held constant. Let’s define the lump sum tax multiplier as the ratio of the change in real GDP to the change in lump sum taxes. By taking into account the way in which the households engage in an infinite series of consumer spending rounds, we can prove that the lump sum tax multiplier is negatively related to the marginal propensity to consume as shown below.
$\frac{\Delta Y}{\Delta T}=\frac{-mpc}{1-mpc}$
Figure 13A.2 provides the details of the proof for any interested readers. Our main purpose, however, is to understand the intuition behind the formula and to learn how to use it to calculate changes in real GDP. To understand how to use the formula, let’s assume that lump sum taxes rise by 100. If the mpc is 0.75, then we can calculate the lump sum tax multiplier by plugging the mpc into the formula as follows:
$\frac{\Delta Y}{\Delta T}=\frac{-mpc}{1-mpc}=\frac{-0.75}{1-0.75}=\frac{-0.75}{0.25}=-3$
Because lump sum taxes rise by 100, we can write the equation as follows:
$\frac{\Delta Y}{100}=-3$

$\Delta Y=-300$
By solving for the change in real GDP, we can demonstrate that real GDP will fall by $300 billion in this case. Alternatively, the multiplier implies that for every $1 of additional taxes, real GDP will fall by $3. Conversely, a $1 tax cut would lead to a $3 rise in real GDP.

Two points are worth mentioning. First, the lump sum tax multiplier is always negative. The reason is that a rise in taxes causes an opposite change in real GDP. This negative relationship exists because higher taxes reduce consumer spending and lower the equilibrium GDP. Second, the lump sum tax multiplier is smaller in absolute value than the investment multiplier.[9] The reader will recall that the investment multiplier was equal to 4 with the same marginal propensity to consume. The reason for the smaller absolute impact of the lump sum tax multiplier is that when the households receive a lump sum tax cut of, say, $100 billion, they will only spend a fraction of it as determined by the mpc. Successive rounds of additional consumer spending then follow. Conversely, when businesses invest an additional $100 billion, the entire $100 billion is spent in the first round, which then leads to successive rounds of additional consumer spending. Because the initial impact of the additional investment spending is larger than the initial impact of the tax cut, the overall impact of an increase in investment spending is significantly larger than the overall impact of a lump sum tax cut.
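In fact, the relationship between the two multipliers can be pinned down exactly. Subtracting the absolute value of the lump sum tax multiplier from the investment multiplier gives:

$\frac{1}{1-mpc}-\frac{mpc}{1-mpc}=\frac{1-mpc}{1-mpc}=1$

For any mpc, then, the investment multiplier exceeds the absolute value of the lump sum tax multiplier by exactly 1, which reflects the first round of spending that occurs with investment but is absent with a tax cut.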
Aggregate Demand
Up to this point, we have only considered how the levels of aggregate output and employment are determined. In this section, we want to begin building an explanation of the general level of prices. The aggregate demand/aggregate supply (AD/AS) model was developed for this purpose. The AD/AS model can be understood as an extension of the Keynesian Cross model. We begin by introducing the aggregate demand (AD) curve and then explain how it relates to the Keynesian Cross model.
The AD curve, shown in Figure 13.32, asserts that an inverse relationship exists between the general price level (P) and the level of real GDP (Y) that is consistent with equilibrium in the market for goods and services.
The general level of prices may be thought of as a price index such as the consumer price index or the GDP deflator. Even though the AD curve looks much like a market demand curve, it is actually quite different. It turns out that the law of demand does not apply in this context.[10] When we discussed the market demand curve in Chapter 3, it was argued that the market demand curve slopes downward for two main reasons.
First, when the price of an individual commodity falls, consumers substitute away from relatively more expensive commodities whose prices have not changed. This effect, which causes a movement along the demand curve, was referred to as the substitution effect. The downward slope of the AD curve cannot be explained in a similar fashion. When the general price level falls, for example, all commodity prices in the economy are falling and so it does not make sense to talk about substitution away from relatively more expensive domestically produced commodities. It is true that deflation does not necessarily mean that all prices are falling at the same rate. Nevertheless, a drop in a price index does not allow us to detect variation in the reduction of prices across commodities and so this explanation will not suffice as an explanation of the downward sloping AD curve.
Second, when the price of an individual commodity falls, consumers experience a rise in their real incomes. That is, the purchasing power of their nominal incomes increases. Feeling richer, they increase their quantity demanded of the commodity whose price fell as well as the quantities demanded of all other commodities. This effect, which also contributes to the movement along the demand curve, was referred to as the income effect. The downward slope of the AD curve cannot be explained in a similar fashion. When a period of generalized deflation occurs, for example, input prices fall in addition to product prices. The result is that factor incomes decline. With nominal incomes declining along with commodity prices, real incomes are likely to remain the same on average. Therefore, we should not expect an income effect at the aggregate level.
Because the law of demand cannot explain the downward slope of the AD curve, we require a different explanation for its downward slope. To understand its shape, we will first explain the relationship of the AD curve to the Keynesian Cross model. Suppose that the price level falls. We will claim, for reasons not yet explained, that this drop in the price level causes aggregate expenditure to rise as shown in Figure 13.33.
As shown in Figure 13.33, the aggregate expenditures curve shifts upward and raises the level of equilibrium real GDP. The consequence is a negative relationship between the general price level and the level of real GDP. The AD curve thus slopes downward. An explanation must be provided, of course, for the negative relationship between the price level and aggregate expenditure. Neoclassical Keynesian economists provide three main explanations for this negative relationship.[11]
The first explanation for the negative relationship between aggregate expenditures and the general price level is referred to as the wealth effect or the Pigou effect after the classical economist, A.C. Pigou. According to this line of thinking, when the price level falls, even though households do not experience a rise in their real incomes, they do experience a rise in their real wealth. Because other factors are held constant, including nominal wealth (e.g., home prices, stock prices), households experience a rise in the purchasing power of their wealth. As a result, they increase their consumption expenditures, which stimulates aggregate expenditure and raises the equilibrium real GDP. The result is a downward sloping AD curve.
The second explanation for the negative relationship between aggregate expenditures and the general price level is referred to as the international substitution effect. According to this line of thinking, when the price level falls, even though no substitution away from relatively more expensive domestically produced commodities occurs, substitution away from relatively more expensive foreign commodities does occur. That is, the drop in the general price level only refers to domestically produced commodities with everything else remaining constant, including prices of foreign commodities. As a result, imports decline and net exports rise. Exports also rise because foreign buyers now substitute towards relatively cheaper commodities in this nation. The aggregate expenditures curve thus shifts upward and the equilibrium real GDP rises. The result is a downward sloping AD curve.
A final explanation for the negative relationship between aggregate expenditures and the general price level is referred to as the interest-rate effect. It is also sometimes referred to as the Keynes effect because Keynes was the first to identify it. According to this effect, when the price level falls, people decide to hold less money because they need less money to engage in transactions. As a result, they lend their excess money holdings, which pushes down the rate of interest. The fall in the rate of interest stimulates investment spending and raises aggregate expenditure. As a result, equilibrium real GDP rises. The result is a downward sloping AD curve.
We are now able to discuss the factors that tend to shift the AD curve. Suppose that for a given price level of P1, the economy is at an equilibrium real GDP of Y1 in the Keynesian Cross model as shown in Figure 13.34.
Now suppose that planned investment spending rises. The aggregate expenditures curve shifts upward, which raises the equilibrium real GDP to Y2. Because the price level has not changed, the equilibrium real GDP will be higher at the same price level in the graph of the AD curve. This change implies a movement off of the AD curve and to the right. Because such movements to the right would occur at any given price level when the level of investment rises, it should be clear that the entire AD curve shifts rightward when investment spending increases. The reader might also note that the AD curve shifts rightward by more than the amount of the increase in investment spending due to the multiplier effect, which is consistent with Figure 13.20. That is, the change in equilibrium output at the current price level is a multiple of the change in investment spending.
Although this example concentrates on a shift of the AD curve due to a change in investment spending, a change in any component of aggregate expenditure will have a similar impact on the position of the AD curve. In general, a rise in consumer spending, investment spending, government spending, or net export spending will shift the AD curve rightward. Similarly, a reduction in consumer spending, investment spending, government spending, or net export spending will shift the AD curve leftward.
To be more specific, consider factors that might influence consumer spending. A rise (fall) in nominal wealth will stimulate (depress) consumer spending and shift the AD curve rightward (leftward). The reader should notice that this effect is not the same as the wealth effect that produced a downward sloping AD curve. The reason is that in this scenario, it is a change in nominal wealth that causes a change in real wealth, rather than a change in the general price level that causes a change in real wealth. Another factor that might alter consumer spending is a change in taxes on household income. If taxes are reduced, then this change will stimulate consumer spending and raise aggregate expenditure. The higher equilibrium real GDP will show up as a shift of the AD curve to the right. A tax increase would have the opposite effects.
Changes in investment spending are likely to have different causes. One major factor influencing investment spending is the rate of interest. If the rate of interest falls, then businesses will borrow more because they are more likely to profit from new investment projects. The rise in investment spending will raise aggregate expenditure and equilibrium real GDP. The result will be a rightward shift of the AD curve. A rise in the interest rate would have the opposite impact and lead to a leftward shift of the AD curve. Other factors that might affect the level of investment include changes in expected profitability. The expected profits from new investment might change due to changes in the state of the economy, changes in production technology, or changes in business taxes. If business expectations improve, new technologies are developed, or business taxes are cut, then expected profits rise, investment rises, aggregate expenditure rises, and the AD curve shifts rightward. A reduction in expected profits due to the opposite conditions would shift AD to the left.
A change in government spending has a direct effect on the position of the AD curve as well. If government spending rises, then aggregate expenditure rises. The equilibrium real GDP rises, and the AD curve shifts rightward. If government spending falls, then the opposite effects occur, and the AD curve shifts leftward.
A change in net export spending will also influence the position of the AD curve. If trading partners experience economic expansions and incomes are rising, then net exports will rise, raising aggregate expenditures and equilibrium real GDP. As a result, the AD curve will shift rightward. If trading partners experience recessions, then the effects are the opposite and the AD curve shifts leftward. Changes in tariff policy and changes in the foreign exchange value of the domestic currency may also shift the AD curve, although the effects are uncertain due to the possibility of retaliatory tariffs or competitive currency devaluation. Without retaliation, the imposition of tariffs or the devaluation of the currency will discourage imports, raise net exports, raise aggregate spending, and raise equilibrium real GDP. The consequence will be a rightward shift of the AD curve. A reduction in tariff rates or an appreciation of the domestic currency would have the opposite effect if trading partners do not alter their policies, and the AD curve would shift leftward.
A ggregate Supply
Our discussion of aggregate demand has suggested that aggregate spending is the primary determinant of the amount of output produced in the economy. It seems to ignore one other factor that arguably plays a major role in the determination of output: production cost. To capture the role of the cost of production in the determination of aggregate output, we turn to the aggregate supply side of the economy. Figure 13.35 shows a graph of the aggregate supply (AS) curve.
The graph suggests that a positive relationship exists between the general price level and the level of real GDP that businesses are willing and able to produce. The aggregate supply (AS) curve looks much like a market supply curve, and the explanation of its shape is similar. That is, as the price level rises, per unit profit rises and so firms expand production, but the increase in production drives up unit costs, which brings the expansion to a halt unless the price level rises further. As production rises, the per unit cost of real output rises due to diminishing returns to labor, so the price level must rise to cover the higher per unit cost.
Referring to Figure 13.35, when the economy is operating below the full employment GDP (Yf), increases in real GDP do not put much upward pressure on input prices or unit costs due to the great deal of excess capacity in the economy. As a result, the general price level does not tend to rise much. As the economy approaches the full employment level of GDP, however, efficient resources become more and more difficult to acquire.[12] As a result, less efficient resources must be hired and unit production costs begin to rise. The general price level must, therefore, rise to compensate for the higher unit production costs. That is, businesses raise prices as their production costs per unit increase. A related reason for the rise in per unit production costs as real GDP rises has to do with diminishing returns to labor. With the aggregate stocks of capital and land being relatively fixed during this relatively short time period, the increase in employment raises production but at a decreasing rate. Therefore, it is necessary to hire increasing numbers of workers to raise the production of real GDP by one unit. Hence, unit production costs also rise for this reason and contribute to the upward slope of the AS curve.
The next question that arises deals with the factors that shift the AS curve. Basically, a change in any variable that affects per unit production cost, other than a change in the level of real GDP, will shift the AS curve. A rise in per unit production cost will shift the AS curve to the left because at each level of real GDP, the prices that firms require will need to be higher. A fall in per unit production cost will shift the AS curve to the right because at each level of real GDP, the prices that firms require will be lower. For example, a change in the nominal wage rate will shift the AS curve. It is assumed that when the price level changes, all other variables are held constant, including input prices like wages. If the nominal wage rate rises, then at any given level of real GDP, per unit costs will be higher. The result is a leftward shift of the AS curve as shown in Figure 13.36.
Alternatively, a reduction in the nominal wage would shift the AS curve to the right because per unit cost would be lower at every level of real GDP.
A variety of other factors will also shift the AS curve, but each factor works by influencing the per unit production cost of firms. For example, if the nominal prices of land and capital rise, then the AS curve will shift leftward because unit costs are higher. If other input prices fall, then the AS curve will shift rightward.[13] Factor supplies will also affect the position of the AS curve. If factor supplies (i.e., the supplies of land, labor, and capital) rise, then input prices will fall and the AS curve will shift to the right. If the factor supplies fall, then input prices will rise and the AS curve will shift to the left.[14]
Other factors include the prices of imported inputs.[15] For example, if the price of imported oil rises, then unit production costs will rise and shift AS to the left. If import prices fall, then the AS curve will shift to the right. Changes in the foreign exchange value of the domestic currency will also affect unit cost by making imported inputs more or less expensive. For example, if the domestic currency appreciates, then imported inputs become cheaper for domestic producers. Their unit costs fall, and the AS curve shifts to the right. If the domestic currency depreciates, then imported inputs become more expensive for domestic producers and unit costs rise. The AS curve would then shift to the left.
Changes in labor productivity are also likely to affect unit cost and the position of the AS curve.[16] Labor productivity is measured in terms of output per worker (Q/L), where Q is the number of units of output and L is the number of employed workers. Labor cost per unit may be measured in terms of total wages per unit produced (wL/Q), where w is the wage rate. It should be clear that a rise in productivity (Q/L) will cause a drop in labor cost per unit (wL/Q). Hence, a rise in labor productivity will shift the AS curve to the right. If productivity falls, then unit labor cost will rise and shift the AS curve to the left.
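A quick numerical illustration (the figures are hypothetical) shows why. Unit labor cost can be rewritten as the wage divided by productivity:

$\frac{wL}{Q}=\frac{w}{Q/L}$

If the wage is 20 and productivity rises from 10 to 12.5 units of output per worker, unit labor cost falls from 2.00 to 1.60.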
The degree of monopoly power in input markets may also influence unit cost.[17] Because monopoly markets produce higher prices than competitive markets, if monopoly power grows in a major input market, then unit costs rise, and the AS curve shifts to the left. If government antitrust action breaks up a monopoly in an input market, on the other hand, then input prices will fall and unit costs will fall as well. The AS curve would then shift to the right.
Finally, changes in business taxes and tax credits may also influence per unit production cost.[18] If business taxes are cut, then unit cost will fall, and the AS curve will shift to the right. If business taxes are increased, then unit labor cost will rise and the AS curve will shift to the left. Alternatively, if the government increases its subsidies or tax credits for business, then in effect unit cost will fall, and the AS curve will shift to the right. If the government cuts its subsidies or tax credits to business then in effect unit cost will rise, and the AS curve will shift to the left.
Macroeconomic Equilibrium and Historical Applications of the Model
Now that the aggregate demand and aggregate supply sides of the model have been developed, we can combine them to explain how macroeconomic equilibrium occurs. Macroeconomic equilibrium occurs when the general price level and the level of real GDP reach levels from which there is no inherent tendency to change. Figure 13.37 depicts an economy that has reached a macroeconomic equilibrium state where P2 and Y2 are the equilibrium general price level and the equilibrium real GDP, respectively.
We want to ask how the economy reaches the equilibrium outcome. It may be tempting to fall back on the explanation that we offered when we discussed the movement to equilibrium in an individual market as in Chapter 3. In this model, however, we need to refer to the factors that cause movements along the AD and AS curves. For example, suppose that the price level is at P3 in Figure 13.37. In this case, aggregate spending is relatively low compared with what businesses wish to produce at this price level. If businesses are producing Y3 and aggregate spending leads to a lower equilibrium GDP in the Keynesian Cross model, then firms will experience a rise in inventories (unplanned investment). They will then cut production. As they cut production, a movement down along the AS curve occurs. The release of relatively inefficient resources causes unit production cost to fall and the price level begins to decline. As the price level falls, three effects on the aggregate demand side occur that stimulate aggregate expenditure. When P falls, households experience a rise in real wealth and consume more. Also when P falls, foreigners begin to buy more of this nation’s exports and domestic buyers buy fewer imports. These factors increase net export spending. Finally, when P falls, the amount of money people wish to hold falls, which leads to more lending, lower interest rates, and higher investment spending. All three factors cause a movement down along the AD curve. Eventually, the economy will arrive at the macroeconomic equilibrium outcome.
Alternatively, suppose that the economy begins with a price level of P1. In this case, businesses do not want to produce much real output given the low price level that just covers their low unit costs. On the other hand, aggregate spending is rather high, leading to a high equilibrium GDP in the Keynesian Cross model. Aggregate output will begin to rise as inventories are depleted as a result of the high aggregate spending. As output begins to rise, the effects of diminishing returns to labor and the employment of less efficient resources causes unit costs to creep upwards. Businesses will raise prices to keep up with rising unit costs. As prices rise, three effects on the aggregate demand side begin to take effect. As P rises, households experience a reduction in their real wealth, which leads to lower consumer spending. As P rises, foreigners substitute away from the nation’s exports and domestic buyers substitute towards imports. Both effects reduce net export spending. Finally, as P rises, people wish to hold more money for transactions purposes and so they lend less. The result is a rise in the rate of interest, which discourages investment spending. All three effects cause a movement up along the AD curve. Eventually, the economy will arrive at the macroeconomic equilibrium outcome.
The AD/AS model can be easily applied to specific periods in U.S. macroeconomic history to obtain a sense of how and why the price level and level of real GDP changed. For example, prior to World War II, the U.S. economy frequently experienced periods of deflation during recessions. The recessions that have occurred since World War II have generally not been characterized by deflation. The AD/AS model can help us to understand why deflation has become less common in the U.S. economy. Figure 13.38 shows a horizontal section of the AS curve, which implies that when AD declines during a recession, the price level does not fall.[19]
Consider next the U.S. economy of the 1960s, a decade in which aggregate demand rose substantially. The rise in aggregate demand increased the general price level and the level of real GDP. Because the economy was near the full employment GDP (Y1f), the rise in aggregate demand pushed the unemployment rate below the natural rate of unemployment and had a strong inflationary impact. When inflation is the result of a rise in aggregate demand, economists generally refer to it as a case of demand-pull inflation.[21]
Next let’s consider what occurred in the U.S. during the 1930s. This decade was characterized by a severe reduction in real GDP as well as a falling price level or deflation. The main factor contributing to these changes was a drop in aggregate demand as shown in Figure 13.40.[22]
The 1970s presented a different situation: the AS curve shifted leftward, and the price level rose while real GDP fell. When inflation occurs at the same time as falling real GDP, the situation is referred to as stagflation. It is the worst of both worlds (i.e., inflation and recession). The reason that the AS curve shifted leftward in the 1970s had to do with a sudden rise in unit production cost. Unit costs rose because of the two oil price shocks that increased the price of imported oil in the United States. In 1973, the Arab-Israeli War broke out, which interrupted the flow of oil to the West. Then in 1979 the revolution in Iran once again led to a disruption of the flow of oil westward. In both cases, tighter global supplies caused the price of oil to skyrocket, raising production costs and leading to stagflation. Because the inflation in this case was the result of an aggregate supply shift, it is often referred to as cost-push inflation to distinguish it from the demand-pull inflation of the 1960s.[23]
In the year 2000, it became clear that the good times had ended. A stock market bubble had been forming in the IT sector for years. Asset price bubbles form when asset prices rise significantly above levels that are consistent with the real underlying values of the assets. In the case of the IT sector, “irrational exuberance” had led to the bidding up of IT stocks far beyond what would be justified by the profitability of the associated companies. When investors finally discovered to what extent the stocks were overvalued, they dumped them, and the prices collapsed. This disruption in the stock market meant that a great deal of paper wealth was destroyed in a short time. The result was a collapse of investment and consumer spending. Aggregate demand thus fell and gave way to the recession of 2001, like what is depicted in Figure 13.38.
The 2001 recession was mild, however, and the economy soon recovered. The nation’s central bank, the U.S. Federal Reserve, which controls the nation’s money supply, took the lead in responding to the crisis. It increased the money supply which pushed interest rates down to very low levels. As interest rates fell, investment spending rose, but this time investments flowed into the housing sector rather than the IT sector. Over the course of the next few years, a boom in home construction occurred. Low mortgage interest rates encouraged the purchase of new homes, which led to soaring home prices. As before, the bidding upward of these prices above their underlying real values implied the formation of an asset price bubble.
A factor that contributed greatly to the formation of this asset price bubble was the way large financial institutions encouraged the growth of the market for various financial assets, including so-called mortgage-backed securities (MBSs). When commercial banks grant mortgage loans to home buyers, they typically do so for a period of 15 or even 30 years. In the past, a commercial bank would carefully scrutinize the credit worthiness of the home borrower before granting the loan to ensure that the bank would receive mortgage payments each month until the loan was repaid with interest. The growth of the market for MBSs meant that a commercial bank could sell this loan to a large financial institution, like Goldman Sachs, which would then bundle together dozens or even hundreds of such mortgage loans that originated in many different places, creating a single financial asset (i.e., a mortgage-backed security). Goldman Sachs would then sell the MBS to a large institutional investor like a hedge fund. The asset might be sold many times in the organized market for these securities that arose. The owner of the MBS would receive interest payments from many different homeowners due to its ownership of the asset.
The problem that this situation created was that so much money was being made by packaging and selling these specialized assets that less attention was being paid to the credit worthiness of the borrowers. When the loan originator (i.e., the bank that initially grants the loan) is not the same institution that will suffer if the borrower defaults on the loan, it is much less likely that the loan originator will take the proper care in evaluating the credit worthiness of the borrower. Furthermore, credit rating agencies, like Moody’s and Standard and Poor’s, which were supposed to signal to investors how much risk they were assuming by purchasing these securities encountered a conflict of interest. They collected more fees by rating more of these securities. Positive ratings were likely to encourage the growth of the market and keep the market for these securities active. As a result, these agencies tended to be far too optimistic in their valuations and tended to underestimate the degree of risk associated with these securities.
The consequence of all these factors was that home buyers eventually began to default on loans in large numbers. These defaults triggered a collapse in the prices of mortgage-backed securities. Financial institutions that held these assets on their balance sheets watched as the losses mounted. Driven by fear of additional losses, they contracted their lending. The contraction of lending reduced investment spending. Many businesses were unable to obtain the loans necessary simply to pay employees, and business failures began to increase. The result was a huge collapse of aggregate demand as shown in Figure 13.43.
This recession was so extreme that it has been dubbed the Great Recession. It was the worst reduction in real GDP that has occurred since the Great Depression of the 1930s. It should be noted, however, that the Great Depression was far worse, with a much larger drop in real GDP and an unemployment rate in excess of 25%. By contrast, during the Great Recession, real GDP fell by just over 4% and the unemployment rate was around 10% at its peak. The reason the Great Recession was not even worse stemmed from the massive government and central bank responses to the crisis. The U.S. Federal Reserve acted as a lender of last resort and offered emergency loans to large financial institutions. The federal government also passed emergency measures, including a $700 billion Troubled Asset Relief Program (TARP) in late 2008 that bought up hundreds of billions of dollars of so-called toxic assets from banks and other financial institutions while also purchasing equity stakes in the same financial institutions. The federal government also implemented a fiscal stimulus package in early 2009 that included $787 billion of government spending increases and tax cuts. This stimulus package was consistent with the Keynesian prescription for boosting output and employment during recessions. Although these measures had some impact, economic growth remained sluggish for several years and unemployment remained stubbornly high.
The Neoclassical Synthesis Model and the Post-Keynesian Critique
The model that we have investigated up to this point is consistent with the neoclassical interpretation of Keynes’s theory. Keynes’s General Theory is a difficult book and is subject to numerous interpretations. After World War II, Paul Samuelson identified something called the Neoclassical Synthesis model. This model aims to capture what is most valuable in Keynes’s theory while also retaining much of the classical or early neoclassical model that Keynes rejected as incomplete. Because it retains so much of the early neoclassical perspective, many Post-Keynesian economists, who adhere to a very different interpretation of Keynes’s theory, reject the neoclassical synthesis model. We will consider more of the Post-Keynesian perspective in the next chapter. In this section, however, we will briefly consider how neoclassical Keynesian economists aim to create a merger or synthesis of neoclassical and Keynesian ideas.
The proposed merger essentially rests on a distinction between the short run and the long run. As the argument goes, Keynes’s theory best applies to short run changes in aggregate output and employment when nominal wages are sticky. In the long run, however, when nominal wages and other input prices are flexible, it is the classical theory that best applies. To understand this argument, we need to develop a long run aggregate supply (LRAS) curve to distinguish it from the upward sloping short run aggregate supply (SRAS) curve that we have been using throughout this chapter. The LRAS curve is shown in Figure 13.44.
Now suppose that the central bank increases the money supply, which pushes the interest rate down and stimulates investment spending. The rise in investment spending increases aggregate demand, shifting the AD curve to the right. According to Keynesian theory, output rises above the full employment level to Y2 and the unemployment rate falls below the natural rate of unemployment. At the same time, the price level rises to P2 due to the rise in per unit cost. The Keynesian short-run explanation would stop at this point, but according to the neoclassical synthesis model, in the long run factor prices will begin to rise. As nominal wages and other factor prices rise pushing up unit cost, the SRAS curve shifts to the left. The new long run equilibrium then occurs at the intersection of AD, the new SRAS curve, and the LRAS curve. Output returns to the long run level, and the price level is permanently higher at P3. Even though Keynes’s theory provides an explanation for the short run fluctuations, the classical theory provides the long run explanation.
Of course, Post-Keynesian economists are highly critical of the neoclassical synthesis model. The model assumes that if we wait long enough, then wages will adjust to bring about full-employment. It was precisely this kind of thinking that Keynes rejected in his General Theory. Keynes’s famous remark in response to such thinking was that, “In the long run, we are all dead.” Furthermore, this particular example suggests that money is not neutral in the short run, but it is neutral in the long run. Those who defend the neutrality of money argue that changes in the money supply cannot cause changes in real variables, like output and employment. Post-Keynesian economists have long argued that Keynes’s theory implies the non-neutrality of money in both the short run and the long run. More will be said about the Post-Keynesian perspective in later chapters.
Following the Economic News [26]
A news article published in EIU ViewsWire describes how changes in the foreign exchange value of the Icelandic Krona have affected Iceland’s economy since the global financial crisis of 2008. The article explains how the Krona approached new lows after the financial crisis, which supported “a boom in the tourism industry that drove the post-crisis recovery in the Icelandic economy.” In terms of the AD/AS model, the depreciating Krona helped stimulate aggregate demand. When a nation’s currency depreciates, its exports become cheaper and its imports become more expensive. The consequence is a rise in net exports and an increase in aggregate demand. Real output and employment should then rise, as occurred in the case of Iceland. The article explains that eventually the improved performance of the economy led to a higher demand for the Krona, which caused it to steadily appreciate in 2014-2015 and then significantly more in 2016. The article explains that the Icelandic low-cost airline, Wow Air, faced increasing competitive pressure as a result of the appreciating Krona, and that the weak financial performance of Iceland’s airlines was beginning to have a negative effect on the tourism industry. As investors became cautious about Iceland’s future growth prospects, the exchange value of the Krona began to slide, and the currency fell by roughly 10% against the Euro between August and October 2018, as the article explains. In response to the depreciation of the Krona, the Central Bank of Iceland has been increasing interest rates. Higher interest rates should encourage investors to purchase interest-bearing assets in Iceland, which will help the Krona recover. As a result, the author of the article expects upward pressure on the Krona throughout 2019. Although a currency depreciation tends to stimulate aggregate demand, the article explains that the central bank has been raising interest rates because it is concerned about the higher prices of oil imports and increases in labor costs due to an upcoming round of collective wage negotiations. Higher oil prices and high labor costs both reduce aggregate supply, which can be inflationary and can cause a reduction in real output, as the AD/AS model implies. Therefore, a currency appreciation can be beneficial to producers because it lowers the prices of imported inputs and may increase aggregate supply. The AD/AS model shows us the various ways that a change in the foreign exchange value of a nation’s currency might affect its economy.
Summary of Key Points
1. Say’s Law of Markets states that new production will always generate enough income for its purchase and so general gluts of overproduction are not possible in market capitalist economies.
2. According to J.M. Keynes, Say’s Law of Markets does not hold in market capitalist economies because not all savings will be invested.
3. The consumption function and the saving function show that consumer spending and saving are positively related to the disposable income of households.
4. The Keynesian Cross model explains how the economy arrives at an equilibrium level of real GDP as businesses change production in response to a buildup or depletion of inventories.
5. In the Keynesian Cross model, the equilibrium condition states that aggregate planned expenditure is equal to real GDP.
6. The paradox of thrift asserts that even if all households increase their saving, aggregate saving will ultimately not change.
7. The Keynesian multiplier effect refers to the tendency for real GDP to rise by a multiple of a rise in investment spending.
8. In the Keynesian Cross model, trade surpluses raise equilibrium GDP, trade deficits reduce equilibrium GDP, and balanced trade leaves equilibrium GDP unchanged.
9. In the Keynesian Cross model, government spending increases or tax cuts raise equilibrium real GDP, whereas government spending reductions or tax increases reduce equilibrium real GDP.
10. The lump sum tax multiplier is negative and has a smaller absolute impact on real GDP than the investment multiplier.
11. The AD curve slopes downward due to the wealth effect, the international substitution effect, and the interest-rate effect.
12. The AD curve shifts due to changes in consumer spending, investment spending, government spending, and net exports.
13. The AS curve slopes upward due to rising unit costs as less efficient resources are employed and diminishing returns to labor set in.
14. The AS curve shifts due to changes in all other factors that affect unit costs, such as changes in input prices, import prices, the exchange rate, business taxes and credits, monopoly power, labor productivity, and input supplies.
15. Macroeconomic equilibrium occurs at the intersection of the AD and AS curves and determines the equilibrium real GDP and price level.
16. The neoclassical synthesis model represents a merger of neoclassical economics and Keynesian economics.
List of Key Terms
Say’s Law of Markets
Aggregate production function
Consumption function
Autonomous consumption
Marginal propensity to consume (mpc)
Saving function
Autonomous saving
Marginal propensity to save (mps)
Reference line
Keynesian Cross model
Aggregate expenditures model
Induced spending
Planned investment
Actual investment
Paradox of thrift
Induced saving
Fallacy of composition
Multiplier effect
Animal spirits
Trade surplus
Trade deficit
Balanced trade
Tariff
Competitive devaluation
Mixed economy
After-tax consumption function
Pre-tax consumption function
Lump sum tax multiplier
Aggregate demand (AD) curve
Substitution effect
Income effect
Wealth effect
International substitution effect
Interest-rate effect
Aggregate supply (AS) curve
Macroeconomic equilibrium
Sticky prices
Menu costs
Demand-pull inflation
Stagflation
Cost-push inflation
Asset price bubbles
Mortgage-backed securities (MBSs)
Great Recession
Neoclassical Synthesis model
Long run aggregate supply (LRAS) curve
Short run aggregate supply (SRAS) curve
Neutrality of money
Problems for Review
1. Suppose the consumption function is C = 250 + 0.8DI. What is the saving function?
2. Suppose that investment spending falls by $300 billion. If the mpc is 0.6, then what is the change in real GDP, according to the Keynesian multiplier effect?
3. Suppose you are given the following information about the economy:
• Investment spending is 400
• Government spending is 600
• A trade deficit of 250 exists
• Autonomous consumption is 300
• The mpc is 0.70
• A lump sum tax of 275 is imposed
Given this information, write the aggregate expenditures function. Then calculate the equilibrium level of real GDP and place your answer on a graph like the one below.
4. Suppose that a lump sum tax cut of $125 is granted to households. If the mpc is 0.82, then what will be the overall change in real GDP that results?
5. Suppose that the economy begins in macroeconomic equilibrium as depicted in the AD/AS model. Suppose that a stock market collapse occurs that reduces household wealth at the same time that monopoly power increases in key input markets. What will happen to the equilibrium levels of prices and real GDP? Represent your answer graphically.
6. Suppose that the economy begins in macroeconomic equilibrium as depicted in the AD/AS model. Suppose that the domestic currency depreciates. What will happen to the equilibrium levels of prices and real GDP? Consider the impact on both AS and AD in your answer. Represent your answer graphically.
7. Suppose that the economy begins in macroeconomic equilibrium as depicted in the AD/AS model. Suppose that labor productivity rises at the same time that taxes on households are reduced. What will happen to the equilibrium levels of prices and real GDP? Represent your answer graphically.
1. See Snowdon et al. (1994), pp. 44-51, for a more advanced treatment of the classical model of output and employment.
2. See Hunt (2002), pp. 405-408, for a discussion of how Keynes’s theory relates to the neoclassical flow model.
3. Hunt (2002), pp. 413-415, graphically represents the two major causes summarized here.
4. See Wolff and Resnick (2012), pp. 106-107, for a discussion of how Keynes introduced this new entry point.
5. See Wolff and Resnick (2012), pp. 40-41, for a discussion of the logic of Keynesian theory.
6. See Chiang and Stone (2014), pp. 507-509, for the conditions under which saving ends up falling overall.
7. See U.S. Bureau of the Census (1975), Series U 187-200. Value of Exports and Imports: 1790 to 1970, pp. 884-885. The calculation of the trade balance includes total merchandise, gold, and silver.
8. Hubbard and O’Brien (2019), pp. 792-793, emphasize differential growth rates across countries, differential price levels across countries, and exchange rates as the major factors influencing the level of net exports. McConnell and Brue (2008) also emphasize tariffs.
9. See Samuelson and Nordhaus (2001), pp. 504-505, who make this point in the context of an example where a tax increase is required to balance the government budget after a government spending increase.
10. Chiang and Stone (2014), p. 524, and McConnell and Brue (2008), p. 188, argue that neither the income effect nor the substitution effect can explain the downward slope of the AD curve. See also Case, Fair, and Oster, p. 550.
11. These effects are identified in nearly all neoclassical textbooks when the downward sloping AD curve is explained.
12. OpenStax (2014) refers to firms “running into limits” as the economy approaches its potential GDP.
13. Many textbooks mention changes in commodity prices and changes in nominal wages. See Krugman et al. (2014), p. 422.
14. Hubbard and O’Brien (2019), p. 834, emphasize changes in the labor force and capital stock. Other books such as OpenStax (2014) describe these changes as supply shocks.
15. Prices of imported inputs are also frequently mentioned in textbooks. See OpenStax (2014), p. 562, and Samuelson and Nordhaus, p. 662.
16. See OpenStax (2014), pp. 560-561, and McConnell and Brue (2008), p. 195, for a discussion of how productivity changes can influence the AS curve.
17. See Chiang and Stone (2014), p. 532.
18. See Chiang and Stone (2014), pp. 531-532.
19. McConnell and Brue (2008), p. 198, analyze the post-WWII recessions in this way. McConnell and Brue (2008), p. 198, and OpenStax (2014), p. 588, provide similar graphs representing this scenario.
20. McConnell and Brue (2008), p. 199, include such arguments. Similar arguments may also be found in OpenStax (2014), p. 585. OpenStax (2014), p. 585, also mentions a coordination argument that Keynes made. That is, although workers might accept a wage cut if all workers simultaneously received one, coordinating an economy-wide wage cut is not possible in a decentralized market economy.
21. The case of demand-pull inflation is a common one in neoclassical textbooks. See Chiang and Stone (2014), pp. 536-537, Samuelson and Nordhaus (2001), pp. 425-426, and Bade and Parkin (2013), p. 760.
22. See Coppock and Mateer (2014), p. 436, for a similar representation of this case.
23. The case of cost-push inflation is also a common one in neoclassical textbooks. See Chiang and Stone (2014), pp. 537-539, Samuelson and Nordhaus (2001), pp. 426-427, and Bade and Parkin (2013), p. 761.
24. McConnell and Brue (2008), pp. 200-202, analyze this case. It is also included as an exercise in Krugman et al. (2014), p. 445. Hubbard and O’Brien (2019) analyze the case too but without emphasis on the low inflation rates.
25. McConnell and Brue (2008), p. 202, make this connection to the “New Economy.” Interestingly, history has repeated itself. In the 1920s, many respected observers claimed that a “new economics” had abolished the business cycle thanks to the establishment of the Federal Reserve in 1913. See Chancellor (1999), p. 192.
26. “Iceland economy: Sharp depreciation in the krona prompts policy action.” EIU ViewsWire. The Economist Intelligence Unit N.A., Incorporated. New York. 10 Nov. 2018.
Goals and Objectives:
In this chapter, we will do the following:
1. Incorporate turnover time into Marx’s theory of competitive profit rate formation
2. Investigate the Marxian theory of the long-term tendency of the rate of profit to fall
3. Analyze the Marxian theory of the business cycle and the industrial reserve army
4. Explore the Marxian theory of discoordination across macroeconomic sectors
5. Study the causes of the 2007-2009 economic crisis from a Marxian perspective
6. Evaluate U.S. economic history through the lens of social structure of accumulation theory
7. Inspect the Austrian theory of the business cycle
8. Contrast Post-Keynesian effective demand theory with the neoclassical synthesis model
In Chapter 13, we investigated the neoclassical synthesis model, which represents a synthesis of neoclassical theory and Keynesian theory. The neoclassical synthesis model makes it possible to retain the neoclassical conclusion that market capitalist economies tend to return to the full employment level of output in the long run even as it allows for the Keynesian conclusion that the economy may suffer from periods of depression or inflationary boom in the short run. The unorthodox theories of macroeconomic crisis that we explore in this chapter reject the neoclassical synthesis model. All assert that the tendency towards depression and periods of prolonged crisis exists in capitalist societies, but the reasons for their assertions range from the central bank’s manipulation of the money supply to institutional breakdown. To explore these competing theories, we will first look at the Marxian theory of capitalist crises, which has several dimensions. After this analysis is complete, we will analyze the causes of the 2007-2009 economic crisis from a Marxian perspective. The next theory that we will consider, known as social structure of accumulation (SSA) theory, is a framework that many radical political economists use. It offers an original way of interpreting the history of capitalist societies and the different factors that promote capital accumulation and produce economic crises. We will then shift gears and consider the Austrian theory of the business cycle, which places most of the blame for economic crises within capitalism on the meddling of the central bank. We will conclude with a discussion of the way in which the Post-Keynesian theory of effective demand contrasts with the theory of effective demand represented in the neoclassical synthesis model.
Incorporating Turnover Time into Marx’s Theory of Competitive Profit Rate Formation
In Chapter 8, we investigated Marx’s theory of the formation of a competitive rate of profit. In that chapter, we considered an economy with five industries. This example has been reproduced in Table 14.1.
Because each industry has a different organic composition of capital (OCC) (i.e., a different ratio of constant capital to total capital), the rates of profit differ. The industries with higher rates of profit have lower organic compositions of capital. The industries with lower rates of profit have higher organic compositions of capital. It was previously explained that the industries with higher profit rates employ relatively more variable capital because labor-power is the source of value and surplus value. Because capital has a strong tendency to flow out of industries with low rates of profit and into industries with high rates of profit, the rate of profit tends to equalize across all industries as prices fall in the industries with high profit rates and prices rise in industries with low profit rates. The uniform, general rate of profit (r) in this example is calculated in the following way, where S, V, and C refer to aggregate surplus value, aggregate variable capital, and aggregate constant capital, respectively:
$r=\frac{S}{C+V}=\frac{187.50}{250+250}=37.5\%$
These adjustments cause prices of production to diverge from values and profits to diverge from surplus value in specific industries. For the overall economy, however, aggregate production price and aggregate value are equal. Additionally, aggregate profit and aggregate surplus value are equal.
The situation becomes more complicated when we introduce variations in turnover times across industries. In some industries, the capital turns over very quickly. That is, capital is advanced, which is to say that it is used to purchase labor-power and the means of production. The elements of production are then used to produce commodities. The commodities are then quickly sold. The quicker this transformation from money capital back into money capital occurs, the shorter the turnover time. Also, a shorter turnover time implies a greater number of turnovers per year. When the number of turnovers per year is larger, then more surplus value will be produced and appropriated in a year, other factors the same. Also, the capital is only advanced one time and is then used repeatedly throughout the year as the capital value returns to the capitalist.
The fact that capital turns over multiple times in a year leads us to a new definition of the rate of profit and a new definition of the rate of surplus value. We shall refer to these new measures as the annual rate of surplus value and the annual rate of profit. They are defined as follows:
$Annual\;Rate\;of\;Surplus\;Value=\frac{Annual\;Mass\;of\;Surplus\;Value}{Total\;Variable\;Capital\;Advanced}$
$Annual\;Rate\;of\;Profit=\frac{Annual\;Mass\;of\;Surplus\;Value}{Total\;Capital\;Advanced}$
To calculate these measures, we need to consider the length of the turnover period in weeks, denoted as T. For example, if T = 12, then it takes 12 weeks for capital to be advanced, employed in production, and then realized through the process of exchange. We also assume that xc and xv represent the weekly constant capital advanced and the weekly variable capital advanced, respectively. It is essential to understand that capital must be advanced during each week of the initial turnover period. Otherwise, it would be impossible to maintain continuous production. Once the turnover period ends, the realization of commodity values guarantees that the capitalist enterprise has sufficient capital again to continue production. Therefore, the capital advanced will be equal to the product of the turnover time and the weekly capital advanced, and the annual rate of profit (rA) will be equal to the annual mass of surplus value (S) divided by this capital advanced.
$r_{A}=\frac{S}{T(x_{c}+x_{v})}$
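As a purely illustrative example (the numbers here are assumed and do not come from the text), suppose the turnover period is 10 weeks, the weekly constant capital advanced is $80, the weekly variable capital advanced is $20, and the annual mass of surplus value is $1040 (for instance, $20 of surplus value produced each week for 52 weeks). The capital advanced is then 10(80 + 20) = $1000, and the annual rate of profit is:

$r_{A}=\frac{1040}{10(80+20)}=\frac{1040}{1000}=104\%$

Even though the surplus value produced during any single 10-week turnover is only $200, or 20% of the capital advanced, the same $1000 of capital turns over 5.2 times during the year, which is why the annual rate of profit is so much higher.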
Similarly, the annual rate of surplus value (eA) is equal to the annual mass of surplus value divided by the variable capital advanced:
$e_{A}=\frac{S}{Tx_{v}}$

Using these definitions, let’s consider an example of an economy with seven industries as shown in Table 14.2.
To calculate the general annual rate of profit (rA*), we need to add up the annual mass of surplus value across all industries and then divide by the total capital advanced across all n industries as follows:
$r_{A}^*=\frac{\Sigma_{i=1}^n S_{i}}{\Sigma_{i=1}^n T_{i}(x_{ci}+x_{vi})}$
When calculating the general annual rate of profit, we add up the annual surplus value across each industry i and then divide by the sum of the capital advanced across each industry i. It is this formula that is used to calculate the annual rate of profit of 159% (after rounding) in Table 14.2. That profit rate may then be used to determine the annual profit in each industry and the annual production price in each industry.
To calculate the general annual rate of surplus value (eA*), we need to add up the annual mass of surplus value across all industries and then divide by the total variable capital advanced across all n industries as follows:
$e_{A}^*=\frac{\Sigma_{i=1}^n S_{i}}{\Sigma_{i=1}^n T_{i}x_{vi}}$
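The following minimal Python sketch shows how these two aggregate formulas can be computed from industry-level data. The industry figures used below are hypothetical placeholders rather than the values in Table 14.2, so the resulting rates will not match the 159% reported above; the sketch only illustrates the mechanics of the calculation.

```python
# A minimal sketch of the general annual rate of profit and the general
# annual rate of surplus value. The industry data are hypothetical and
# are NOT the figures from Table 14.2.

industries = [
    # (T_i: turnover time in weeks, xc_i: weekly constant capital,
    #  xv_i: weekly variable capital, S_i: annual mass of surplus value)
    (12, 70, 30, 800),
    (9,  60, 40, 1100),
    (16, 85, 15, 500),
]

# Sum the annual surplus value and the capital advanced across industries.
total_surplus = sum(S for (T, xc, xv, S) in industries)
total_capital_advanced = sum(T * (xc + xv) for (T, xc, xv, S) in industries)
total_variable_advanced = sum(T * xv for (T, xc, xv, S) in industries)

general_annual_rate_of_profit = total_surplus / total_capital_advanced
general_annual_rate_of_surplus_value = total_surplus / total_variable_advanced

print(f"General annual rate of profit: {general_annual_rate_of_profit:.1%}")
print(f"General annual rate of surplus value: {general_annual_rate_of_surplus_value:.1%}")
```

With these assumed numbers, the sketch prints a general annual rate of profit of roughly 64.9% and a general annual rate of surplus value of 250%, simply because the capital advanced in each industry is only the amount needed to sustain production over its own turnover period.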
It should be clear from both definitions that the annual rates of profit and surplus value are significantly higher than their counterparts measured over a single turnover period. The reason is that capital is only advanced during the turnover period, which is typically shorter than one year, while the surplus value is appropriated throughout the entire year.
Table 14.2 reveals that the same aggregate equalities hold in an economy where different turnover periods apply to different industries. That is, the aggregate annual mass of surplus value is equal to the aggregate annual mass of profit. Also, the aggregate annual value of commodities equals the aggregate annual production price. Prior to the transformation of values into production prices, the industries with relatively more variable capital tend to have higher annual rates of profit, such as industries 6 and 7. An additional reason exists for the high annual profit rates of industries 6 and 7, namely the short turnover times of only 9 weeks in those industries. Industry 5 has the shortest turnover time of 8 weeks, but its organic composition of capital is so high that the combination produces the lowest annual rate of profit. Industry 4’s low annual rate of profit can be attributed both to its high organic composition of capital and its long turnover period. Industries 1-3 have organic compositions of capital that are closer to industries 6 and 7 (which raises their annual profit rates above those of industries 4 and 5), but their long turnover periods produce lower annual profit rates than in industries 6 and 7.
This analysis effectively incorporates the turnover process into the method of transforming values into prices of production. It does not address the transformation problem, described in Chapter 8, however, and so the importance of addressing that problem should be kept in mind.
The Marxian Theory of the Long-Term Tendency of the General Rate of Profit to Fall
Marxian economists argue that the general rate of profit has a long-term tendency to fall in capitalist economies. It is a claim about the movement of the general rate of profit over a period of decades and even centuries. The explanation concentrates on capitalist competition and the way that it leads to innovation. The introduction of more advanced machinery and equipment in the production process causes an increase over time in the amount of constant capital employed in production relative to the variable capital employed in production. Because variable capital is the source of value and surplus value, the relative decline in its employment causes the general rate of profit to fall over time.
To see how an increase in the organic composition of capital tends to drive down the general rate of profit, consider the general rate of profit as we defined it before incorporating turnover time into the analysis:
$r=\frac{S}{C+V}$
This definition of the general rate of profit is calculated using the aggregate surplus value, the aggregate constant capital, and the aggregate variable capital. The organic composition of capital (OCC), as defined in Chapter 8, is the following:
$OCC=\frac{C}{C+V}$
An alternative measure of the organic composition of capital is expressed more simply as the ratio of constant capital to variable capital as follows:
$OCC'=\frac{C}{V}$
It is possible to rewrite the general rate of profit so that the relationship to the organic composition of capital becomes clear:
$r=\frac{\frac{S}{V}}{\frac{C}{V}+1}$
Other factors the same, as the organic composition of capital (OCC’) increases, the general rate of profit must fall. This argument provides the explanation for the long-term tendency of the rate of profit to fall. Capitalist competition leads to innovation and a rising organic composition of capital. The general rate of profit thus tends to fall over long periods of time. The fall in the rate of profit means that capitalist enterprises have an increasingly difficult time making interest payments and rent payments out of their profits, which generates capitalist instability and economic crises with workers thrown out of work and businesses failing.
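A simple numerical illustration (with assumed values) makes the mechanism clear. Hold the rate of surplus value constant at S/V = 1 (i.e., 100%) and let the organic composition of capital rise:

$\frac{C}{V}=3:\;r=\frac{1}{3+1}=25\%,\qquad \frac{C}{V}=4:\;r=\frac{1}{4+1}=20\%,\qquad \frac{C}{V}=9:\;r=\frac{1}{9+1}=10\%$

With the degree of exploitation unchanged, every increase in the organic composition of capital lowers the general rate of profit.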
The same argument applies to the general annual rate of profit as shown below:
$r_{A}^*=\frac{\Sigma_{i=1}^n S_{i}}{\Sigma_{i=1}^n T_{i}(x_{ci}+x_{vi})}=\frac{\Sigma_{i=1}^n S_{i}}{\Sigma_{i=1}^n T_{i}x_{ci}+\Sigma_{i=1}^n T_{i}x_{vi}}=\frac{S}{C+V}=\frac{\frac{S}{V}}{\frac{C}{V}+1}$
The only difference here is that S refers to the annual mass of surplus value, and C and V refer to the annual aggregate constant capital advanced and the annual aggregate variable capital advanced. Both C and V are calculated according to the amounts required to maintain continuous production throughout the turnover period. The annual rate of profit also shows a tendency to decline as the organic composition of capital rises.
Although Marxian economists argue that the annual rate of profit tends to fall over long periods of time, the law of the tendency of the rate of profit to fall (LTRPF) is not an unconditional tendency. That is, Marxian economists, following Marx, argue that several countertendencies operate to prevent a decline of the general rate of profit. Therefore, if we observe an increase in the average rate of profit, then such movements do not subvert the law of the tendency of the rate of profit to fall. They simply mean that the countertendencies are at work and are giving a boost to the general rate of profit. We will briefly summarize the six major countertendencies that Marx identified in volume 3 of Capital and then add an additional countertendency to the list based on our definition of the annual rate of profit.[1]
The first factor that serves to counteract the fall in the general rate of profit is a rise in the rate of surplus value. If workers are exploited more, then S/V will rise. Other factors the same, a rise in the rate of surplus value throughout the economy will raise the general rate of profit. Therefore, even if the organic composition of capital rises due to competition and innovation, a sufficiently large increase in the rate of surplus value will nevertheless increase the general rate of profit. A likely cause of such an increase in the rate of surplus value is an extension of the length of the working day. If workers are required to work longer hours for the same wages, then the increase in absolute surplus value raises the rate of surplus value and the rate of profit. Another possible source of a rise in the rate of surplus value is a rise in productivity in the sectors that produce the means of subsistence for workers. If productivity rises in those sectors, then the value of labor-power declines with the fall in the values of the commodities that workers require daily to reproduce their labor-power. This change represents a rise in relative surplus value production. The increase in the rate of surplus value then raises the rate of profit.
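To see how this countertendency can operate, consider another assumed numerical example. Start from C/V = 4 and S/V = 1, so that r = 1/(4+1) = 20%. Now suppose mechanization raises the organic composition of capital to C/V = 6 while a longer working day raises the rate of surplus value to S/V = 1.5:

$r=\frac{1.5}{6+1}\approx 21.4\%$

The rise in the rate of surplus value more than offsets the rise in the organic composition of capital, and the general rate of profit actually increases in this case.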
A second counteracting factor that pushes back against the long-term fall in the general rate of profit is a reduction in wages below their value. When wages are pushed down below the value of labor-power, the degree of exploitation rises. The organic composition of capital rises as well, but the impact is greater on the numerator in our rewritten definition of the profit rate, which causes the rate of profit to rise overall. It is easier to see that a fall in wages below the value of labor-power will raise the rate of profit by considering the original definition (i.e., r = S/(C+V)). When the wages paid fall, the consequence is an unambiguous rise in the rate of profit. Different factors may cause such reductions in wages, including a weakening of unions and a period of intense competition among workers in the market for labor-power. These factors make it possible to exploit workers more, and the profit rate rises.
A third factor that serves as a countertendency to the falling rate of profit tendency is the cheapening of the elements of constant capital. Marx explained that a rise in productivity tends to increase the material elements used in the production process, but it also tends to reduce the value of those same elements. If the devaluation occurs relatively more than the increase in the material elements, then the constant capital employed will decline and the rate of profit will increase. The technological innovation that capitalist competition drives tends to raise the organic composition of capital, but the cheapening of the elements of constant capital works in the opposite direction. The organic composition of capital ends up falling, which raises the general rate of profit.
The fourth factor also involves a reduction of wages but this time it has a very specific cause. Marx referred to this factor as the relative surplus population. As capitalist development progresses and technological change advances, the introduction of labor-saving machinery leads to unemployment in many sectors of the economy. This surplus population puts downward pressure on wages, which raises the rate of exploitation. Even though it also pushes up the organic composition of capital, the impact on the numerator in our rewritten definition of the profit rate is more significant, which drives up the rate of profit overall. The same technological advances that tend to increase the organic composition of capital also drive down wages, which boosts the rate of surplus value.
A fifth factor that Marx mentions as responsible for counteracting the long-term tendency of the rate of profit to fall is international trade. That is, increased trade with foreign nations leads to imports of the elements of constant capital at lower prices. The reduction in the constant capital advanced reduces the organic composition of capital, which boosts the general rate of profit. As capitalism deepens, world trade expands, and this effect should become stronger. A similar pattern is expected with respect to the commodities that workers purchase for consumption. World trade makes it possible to import cheaper means of subsistence. As the value of labor-power declines with the increased availability of cheaper elements of consumption, the variable capital that capitalists must advance declines. The consequence is a rise in the rate of surplus value and an increase in the general rate of profit.
The sixth and final factor that Marx mentions as an offsetting factor that counteracts the long-term fall in the general rate of profit is the rise in the amount of share capital invested in production. Marx has in mind interest-bearing capital that earns interest only and thus takes a share of the average profit appropriated in industry. Moneylending capital is thus excluded from the calculation of the general rate of profit. Because it earns a rate that is far below the average rate of profit and it is frequently invested in industries with a high organic composition of capital (e.g., railroads), the consequence would be a major reduction in the general rate of profit, if it was included in the calculation of the general rate of profit. Its exclusion tends to increase the general rate of profit and thus the rise in share capital with the development of capitalism qualifies as a counteracting factor that works against the long-term tendency of the rate of profit to fall.
A final factor that we might add to Marx’s list of counteracting tendencies is a change in the average turnover time across the different branches of production. The calculation of the annual rate of profit (rA*) shows that if the turnover time in any one industry increases, then the annual rate of profit will fall. The reduction in the annual rate of profit will be even larger if most or all the industries experience longer turnover times. Similarly, if the turnover time in any one industry declines, then the annual rate of profit will rise. The rise in the annual rate of profit will be even larger when most or all industries experience shorter turnover times. Now consider what has happened to the average turnover time throughout the history of capitalism. The turnover time includes buying time (i.e., the purchase of materials and instruments of labor), production time, and selling time (i.e., the sale of the final commodities). Capitalists have long been engaged in an intense competitive struggle to appropriate more profits than competitors. A reduction in the turnover time is a primary method of reducing the capital advanced and increasing the annual mass of surplus value that is appropriated, which are factors that increase the annual rate of profit. The enormous improvements in transportation and communication technology throughout the history of capitalism have allowed capitalists to achieve this reduction of turnover time. Commodities are purchased more quickly for use in production, production itself has become immensely quicker and more efficient, and commodities are transported to the final consumer more quickly and easily than ever before. The reduction in turnover time has thus boosted the annual rate of profit and has helped counteract the long-term tendency of the annual rate of profit to fall.
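The effect of a shorter turnover time on the annual rate of profit can be illustrated with the same assumed figures used in the illustrative example above: weekly advances of $80 of constant capital and $20 of variable capital, and an annual mass of surplus value of $1040. If improvements in transportation and communication cut the turnover period from 10 weeks to 5 weeks, the capital that must be advanced to maintain continuous production falls from $1000 to $500, and the annual rate of profit doubles:

$r_{A}=\frac{1040}{10(80+20)}=104\% \qquad \rightarrow \qquad r_{A}=\frac{1040}{5(80+20)}=208\%$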
Although these counteracting factors tend to increase the general rate of profit, Marxian economists assert that the long-term tendency of the profit rate to fall will reassert itself, producing economic crises, rising unemployment, and falling production. Marxian economists have more to say about the economic instability that capitalism produces, however, and so we now turn to factors that may produce economic downturns in the short-term.
A Marxian Theory of the Business Cycle and the Industrial Reserve Army
When profits are appropriated, capitalists must decide how to use them. One option is to consume them entirely, spending them on luxury commodities like mansions, expensive automobiles, vacations, jewelry, and artwork. Another option is to reinvest them to expand production. When profits are reinvested in production, Marxists state that capital is accumulated. In fact, the profits are transformed into new capital, and so the capital value grows. Capitalists can also choose to hoard their profits, but hoarded profits cannot be used for luxury consumption or profit-making, and they will lose value over time if inflation occurs.
Although these different options are available to capitalists, they are driven to accumulate capital. The intense competition that occurs among capitalists leads to capital accumulation as capitalists seek to outperform their competitors. This drive to accumulate capital has an impact on the general rate of profit, which creates the economic fluctuations that are referred to as the business cycle. To understand the reason, consider how money wages are likely to change over the course of the business cycle as depicted in Figure 14.1.[2]
The maximum level of wages is determined by how far profits (П) can be squeezed before it becomes impossible for capitalists to pay interest on loans, pay rent to landowners, and so on. Some capitalists are in a stronger position than others, and so as that point approaches, many weaker businesses fail, production begins to decline, and unemployment soars. With the expansion of the reserve army of the unemployed, wages begin to fall and the total funds advanced as wages decline. With the decline in wages, profits start to increase. Eventually, wages fall enough and profits rise enough that capital accumulation resumes. This resumption of capital accumulation does not occur until the trough of the business cycle is reached. That is, production reaches its minimum level before capitalists can justify accumulating capital again. At that point, wages have fallen so much and capital assets have depreciated so much that new investment and expanded production are expected to be profitable.
Figures 14.2 (a) and 14.2 (b) show how the wages paid (W) and the general rate of profit (r) change over time in response to the overall fluctuations in economic activity.
Figure 14.2 (a) shows aggregate wages rising rapidly during the economic expansion and squeezing profits to a minimum level. In Figure 14.2 (a), profits are represented as the difference between aggregate value added (W+П) and aggregate wages (W). At the start of the economic crisis, however, unemployment soars and wages begin to drop. This reduction in wages occurs quickly and helps to restore profitability until the next expansion begins. Figure 14.2 (b) shows how the general rate of profit follows the opposite pattern relative to aggregate wages as production changes over the course of the business cycle. Because we are now thinking in terms of profits and wages as opposed to surplus value and variable capital, let’s write the general rate of profit as follows:
$r=\frac{\Pi}{C_{P}+W}$
In this expression, CP represents the production price of the means of production or the value of the constant capital transformed into its production price. We can also write the maximum value of the general rate of profit (rmax) and the minimum value of the general rate of profit (rmin) as follows:
$r_{min}=\frac{\Pi_{min}}{C_{P}+W_{max}}$ $r_{max}=\frac{\Pi_{max}}{C_{P}+W_{min}}$
As production expands, wages rise quickly, and profits are squeezed. The result is a rapid fall in the rate of profit to its minimum value, which precipitates the crisis. As production declines, aggregate wages quickly fall and profits expand for businesses that do not fail. Eventually, the rise in the general rate of profit to its maximum value makes renewed capital accumulation possible once more.
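A hypothetical numerical example (values assumed purely for illustration) helps to show how large the swing in the general rate of profit can be over the cycle. Suppose the production price of the means of production is CP = 200 and, holding aggregate value added fixed for simplicity, W + П = 100 at both the peak and the trough. If wages reach Wmax = 90 at the peak and fall to Wmin = 60 at the trough, then:

$r_{min}=\frac{10}{200+90}\approx 3.4\%,\qquad r_{max}=\frac{40}{200+60}\approx 15.4\%$

The profit squeeze at the peak drives the rate of profit toward its minimum, while the fall in wages during the contraction restores it toward its maximum and sets the stage for renewed accumulation.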
This theory of the business cycle also explains the fluctuations in the unemployment rate over time that we observe. The reserve army of the unemployed rises during contractions and falls during expansions. Nevertheless, a reserve army of the unemployed always exists. The reader should consider the contrast between the language used in neoclassical theory and in Marxian theory. In neoclassical theory, unemployment is recognized as inevitable, but it is referred to as natural unemployment. The use of the word “natural” suggests that a significant amount of unemployment should not be a concern to anyone. It is natural and beyond our conscious control. Marxian economists also regard a certain amount of unemployment in a capitalist society as a permanent feature of that economic system. The reference to a “reserve army” of the unemployed emphasizes the scale of the problem of unemployment. It also suggests that unemployed workers are under the control of the capitalist class and are only called into service as needed. Both schools of economic thought recognize the permanent nature of unemployment in capitalist societies, but their ways of interpreting that empirical fact are completely different.
In his classic 1942 work The Theory of Capitalist Development, Marxian economist Paul Sweezy includes a helpful flow diagram that captures the various factors that cause the reserve army of the unemployed to grow and shrink over time. Figure 14.3 is based on the diagram in Sweezy’s book.[4]
The flows from the reserve army to industry and back again represent the expansions and contractions of the reserve army that have been discussed in this section. When wages rise and squeeze profits too much, workers are thrown out of work and the reserve army swells. When wages fall and profits expand, hiring picks up and the reserve army shrinks. Sweezy’s model identifies additional factors that can influence the size of the reserve army of the unemployed, namely the entrance of new workers into the market for labor-power and retirements that cause workers to exit the market for labor-power. Many factors are responsible for these expansions and contractions of the reserve army of the unemployed, and our purpose here is only to demonstrate the linkage between changes in wage levels and the amount of unemployment as aggregate production rises and falls.
The Marxian Theory of Discoordination across Macroeconomic Sectors
Marx had a great deal to say about what causes capitalist crises. In volume 1 of Capital, Marx argued that the possibility of a crisis within capitalism was inherent within the sphere of simple commodity circulation. The reader should recall the general form of a simple commodity circuit, which is denoted in the following way:
$C-M-C'$
This circuit indicates that a commodity is sold for money and that the money is then used to purchase another commodity. The first commodity is thus transformed into a qualitatively different commodity. Classical economists, like J.B. Say, argued that crises within capitalism are impossible because “supply creates its own demand.” That is, the act of selling is also an act of purchase. When commodities are sold to obtain money, the purpose is to purchase another commodity and so the act of supplying commodities also represents a demand for other commodities. Therefore, supply and demand will be equal at the aggregate level and economic crises should never occur. This argument has been labeled Say’s Law of Markets, as we discussed in Chapter 13. The problem, as Marx pointed out, is that just because one commodity is sold for money does not mean that the money must immediately be spent on another commodity. If the seller of a commodity decides to hold on to the money, then no new demand is created. Therefore, Marx argued that the simple circulation of commodities contains within it the possibility that a crisis will occur.[5]
To argue that an event is possible is not the same as arguing that it will occur. Therefore, Marx provided additional arguments to show that capitalist crises are inevitable within capitalist societies. We have already considered Marx’s argument that a rising organic composition of capital leads to a long-term decline in the general rate of profit and how this decline produces economic crises. We have also explored Marx’s claim that capital accumulation leads to rising wage rates and a squeezing of profits until an economic crisis occurs, which restores wage rates to a level that is compatible with renewed capital accumulation. In this section, we will consider another aspect of Marx’s theory of economic crisis that focuses on imbalances that arise across major sectors of the economy, which ultimately produce an economic crisis. Marx makes this argument using a two-sector model of social reproduction that is found in volume 2 of Capital.[6] The rest of this section summarizes a portion of Marx’s analysis.
Table 14.3 provides information about an economy with two major sectors.
In Table 14.3, one sector produces means of production and another sector produces means of consumption. The means of consumption include both the means of subsistence for workers and luxury commodities for capitalists. Simple reproduction characterizes this economy. That is, capitalists do not reinvest the surplus value to expand production and so no capital is accumulated. To see that simple reproduction exists in this economy, consider the total constant capital advanced (C). The $1700 of constant capital that is advanced is exactly equal to the value of the means of production produced in Sector I. That is, the demand for means of production exactly equals the supply of means of production. Now consider the total variable capital advanced (V) and the total surplus value (S) realized from the sale of the total social product. The $800 of wages paid to workers plus the $500 of surplus value that capitalists appropriate is just sufficient to purchase the entire $1300 of means of consumption produced in Sector II. That is, the demand for means of consumption is exactly equal to the supply of means of consumption. Since aggregate supply and aggregate demand are the same, and all surplus value is spent on luxury commodities, simple reproduction exists in this economy.
Another way to see that simple reproduction holds is to identify the condition for simple reproduction that Marx identified in this two-sector model. For simple reproduction to hold, the following condition must be met:
$C_{2}=V_{1}+S_{1}$
In words, the constant capital employed in Sector 2 must equal the sum of variable capital and surplus value in Sector 1. We can think through the condition in the following way. Capitalists in Sector 1 advance $1000 of constant capital and so purchase $1000 worth of means of production produced in their industry. That purchase leaves $700 (= V1+S1) worth of means of production to be sold. For simple reproduction to hold, the constant capital advanced in Sector 2 must equal $700. Otherwise, the demand for the total output of Sector 1 will be too large or too small, and an economic crisis will occur.
Alternatively, workers and capitalists in Sector 2 purchase $600 (= V2+S2) worth of the means of consumption produced in their industry. That purchase leaves $700 (= C2) worth of means of consumption to be sold. For simple reproduction to hold, the sum of the variable capital and the surplus value in Sector 1 must equal $700. Otherwise, the demand for the total output of Sector 2 will be too large or too small and an economic crisis will occur.

The brilliance of Marx’s argument can be appreciated if we think through the sources of the different types of spending. The capitalists are willing to advance the constant capital and the variable capital. The workers then spend their wages, and the capitalists purchase the means of production. Therefore, the capitalists know that when the commodities are sold, they will receive enough revenue to compensate them for these capital advances. The surplus value is a different matter. Capitalists know that they will receive the surplus value when the output is sold, but the only way that they can realize the surplus value is to advance the funds themselves! They must purchase luxury commodities using their previously realized surplus value. Why would capitalists want to use funds to make purchases only to have the same amount of surplus value return to them? The answer in Marx’s theory is that the capitalists consume a surplus product in the form of luxuries produced with the surplus labor of the working class. It is true that the capitalists advance the constant capital and the variable capital and end up with revenues equal to that amount of capital advanced plus the surplus value. In that sense, money is used to make more money, but notice that the extra money to realize the surplus value originates with the capitalists themselves. It does not matter that the capitalists end up with a sum of money that is the same as before. The capitalist class appropriates the surplus labor and consumes the surplus product.

A Marxian Analysis of the 2007-2009 Economic Crisis

Richard Wolff analyzes the 2007-2009 economic crisis through a Marxian lens.[7] According to Wolff, it is not changes in aggregate investment spending, tax levels, or government spending that should be the focus of efforts to explain the crisis. Instead, we should emphasize capitalism’s class structure as we struggle to understand the factors that produced the worst decline of economic activity in the United States since the Great Depression. Wolff argues that beginning in the mid-1970s, workers’ average real wages stopped rising even though they had been increasing each decade since 1820.[8] Wolff points to the displacement of American workers due to the computerization of production and the transfer of production overseas as U.S. firms searched for higher profits.[9] Even during the deflation of the Great Depression, the price level declined more rapidly than money wages, which caused real wages to rise. As Wolff explains, even after real wages began to stagnate in the 1970s, workers’ productivity continued to rise, which allowed the capitalists to appropriate even more of the surplus value that workers produce. Figure 14.4 offers a graphical depiction of the pattern of real value added per worker and real wages throughout U.S. history. Figure 14.4 shows a gradual increase in real wages beginning in 1820 with stagnation beginning in the 1970s. Real value added per worker, on the other hand, is depicted as rising continuously up to the present time.
Fluctuations are omitted from the graph to focus on the long-term trends that Wolff emphasizes. The difference between real value added per worker and the real wage represents the surpluses extracted from workers. Wolff explains that the surpluses were distributed in several different ways.[10] A large part of the surpluses was distributed in the form of bonuses to corporate executives. Another portion was distributed as dividends to shareholders. Yet another part was used to move production overseas. Even so, the bulk of these surpluses found their way into the banks, which transformed them into loanable money capital. The loan capital was distributed to borrowers who purchased homes and automobiles. Other borrowers paid for college tuition and consumer goods. Firms also borrowed the funds to expand production.

Large financial institutions bundled together many different loans, which created specialized financial assets, such as mortgage-backed securities (MBSs) and collateralized debt obligations (CDOs). They then sold the financial assets to large banks and institutional investors like hedge funds. As the market for specialized financial assets exploded, so did the degree of risk in the financial system.

As Wolff explains, workers began to borrow heavily to maintain a rising material standard of living.[11] Faced with stagnant wages, it was the only means of expanding upon one’s material possessions. Workers thus faced a double squeeze from the 1970s to 2006, according to Wolff.[12] Capitalists took the surpluses from workers but then also took from them again as workers paid interest on their mortgage loans, auto loans, credit card loans, and student loans. Because workers had fallen so deeply into debt, they struggled to keep up with their debt payments. Many borrowers began to default on their loans.

The situation was made worse because many lenders chose to lend to so-called subprime borrowers in the subprime mortgage market. Subprime mortgage loans are loans to people with poor credit histories and low incomes. The interest rates are high but so is the risk. Furthermore, as risky loans were made, the secondary market for these loans had expanded to the point where the originator of a loan could sell it relatively quickly. Lenders thus had less reason to be concerned about the credit worthiness of their borrowers. The more loans they pushed, the greater the commission revenue they received. The incentives were thus skewed and fueled the buildup of risk within the system.

A compounding factor was the behavior of credit rating agencies in the leadup to the 2008 financial crisis. Standard & Poor’s, Moody’s, and Fitch Ratings are the three agencies most responsible for assigning ratings to financial assets like MBSs, CDOs, and other specialized financial assets. Investors rely heavily on these ratings to evaluate the degree of risk. These ratings thus affect the prices of these financial assets. When the rating assigned to an asset is high, investors infer that the degree of risk is relatively low. When the rating assigned to an asset is low, investors infer that the degree of risk is relatively high. During the housing boom prior to the Great Recession, MBSs and other assets seemed like good investments. Rising prices for these assets made them appear to be sound investments. The problem, however, was that home prices were greatly inflated, and many borrowers were taking on more debt than they could handle.
The rating agencies should have recognized the high degree of risk associated with these securities and downgraded them appropriately. Because the rating agencies received fees from the large investment banks to rate the securities that they created, however, the rating agencies had a strong incentive to promote the growth of these financial markets. The assignment of high ratings to newly issued securities, even when they could not be justified, furthered that goal. A conflict of interest exists when an individual or organization has an incentive to act in multiple, competing ways. In this case, the agencies had a mission to serve the public interest by providing accurate ratings to investors. This mission competed with their drive to maximize profits by assigning inflated ratings to securities. Figure 14.5 provides a diagram of the relationships that existed between investment banks, credit rating agencies, institutional investors, commercial banks, and homebuyers. The figure shows how commercial banks loaned money to homebuyers for the purchase of homes. The banks then sold those mortgage loan assets to investment banks, which bundled the mortgages to create MBSs. The rating agencies then gave these securities inflated ratings to promote their further creation and sale. The newly issued securities were then sold in the financial marketplace to institutional investors such as pension funds, hedge funds, and mutual fund companies.

When home prices plummeted, many borrowers recognized that the values of their homes were below the values of their mortgage loans, and they simply walked away from their homes. Because they had put so little money down when purchasing their homes, the loss of equity from a default was not enough to stop them from abandoning their commitments. As defaults began to soar, the financial assets that represented bundles of these loans began to lose value. Large financial institutions watched as their asset values plummeted. Faced with such losses, banks and other financial institutions stopped lending to each other and to the public. Credit markets froze.[13] With firms unable to borrow money to pay wages and to purchase materials, layoffs increased, businesses failed, and unemployment soared. The Great Recession was the result of the instability in the financial system.

Wolff finds the source of the Great Recession in the class structure of capitalist society. He argues that a movement beyond a capitalist class structure would have prevented the economic crisis of 2007-2009. Specifically, he argues that workers need to become the collective appropriators of surpluses within their firms.[14] If firms were reorganized in this manner, Wolff believes that they would not have frozen real wages as firms did starting in the 1970s.[15] Nor would they have distributed the surpluses as bonuses and dividends to executives and shareholders, used them to transfer production overseas, or allowed them to accumulate in the banks to be loaned back to workers at high interest rates.[16] From a Marxian perspective, truly addressing the source of such instability would require revolutionary measures that challenge the private ownership of the means of production.

U.S. Economic History through the Lens of Social Structure of Accumulation (SSA) Theory

A theoretical framework that is closely related to, yet distinct from, Marxian economics is social structure of accumulation (SSA) theory.
This theory originated during the latter half of the twentieth century. Radical political economists use it to interpret major shifts in the history of capitalist societies. Most of the focus has been on the United States, but other national economies have been analyzed using this framework as well. A social structure of accumulation (SSA) refers to an “institutional environment affecting capital accumulation.”[17] During periods of stability, the set of institutions comprising an SSA promotes rapid capital accumulation and economic growth. An SSA may be in place for decades as capital accumulation and economic growth continue without major disruption. Business cycle fluctuations will occur during these long periods of time, but no major economic crisis occurs. Eventually, however, the underlying institutions that make capital accumulation possible begin to break down, and a major economic crisis occurs. The crisis creates the conditions for the restructuring of the economy. New institutions develop and eventually establish a foundation for renewed capital accumulation and economic growth. The new institutions form a new SSA, which may do a better or worse job of promoting capital accumulation and economic growth.

This section on SSA theory concentrates on the work of Terrence McDonough, who has written extensively about the nature of SSAs in U.S. economic history.[18] McDonough argues that three SSAs may be identified as we look back at the history of American capitalism: a post-Civil War SSA, a monopoly capitalist SSA, and a post-WWII SSA.[19] McDonough does not devote much attention to the post-Civil War SSA but argues that it represents a period of primitive accumulation, thus revealing the Marxian roots of SSA theory.[20] It was thus a period during which many of the resources of the nation were transformed into privately owned means of production and pulled into the capital accumulation process. McDonough also describes the monopoly capitalist SSA, which took shape during the late nineteenth and early twentieth centuries. He argues that each SSA is built according to a unique organizing principle.[21] The organizing principle in the case of the monopoly capitalist SSA is the oligopolistic market structure that developed due to the wave of corporate mergers that occurred during those years.[22] A specific subset of institutions is also needed to revive the capital accumulation process after an economic crisis, which David Kotz calls the core of the SSA.[23] According to Kotz, the core institutions of an SSA moderate class conflict and capitalist competition during the long period of expansion.[24]

During the monopoly capitalist SSA, a greater concentration of industry was one of the core institutions.[25] McDonough identifies several factors that contributed to this concentration, including a growing market for industrial securities, New Jersey holding company legislation, and Sherman Act interpretations that permitted monopoly by merger.[26] A second core institution was an electoral shift towards the Republican Party at the federal level. Republican control of Congress and the Presidency in the early twentieth century led to policies that supported financial and industrial capitalists (e.g., protectionism).[27] A third core institution that McDonough identifies with the monopoly capitalist SSA is the regulation of trusts rather than the breaking up of trusts.[28] The administration of Theodore Roosevelt would distinguish between trusts that were behaving responsibly and trusts that abused their power.
The administration created a Bureau of Corporations in 1903 that would publicize abuses of economic power to keep large corporations in line.[29] A fourth core institution of the monopoly capitalist SSA is a new ideological system referred to as corporatism.[30] According to the corporatist ideology, cooperation among capital, labor, and the public was a worthwhile goal.[31] Through the National Civic Federation (NCF), founded in 1900, business leaders, political leaders, and labor leaders promoted cooperation among these entities.[32] This set of ideas thus helped create a stable environment for capital accumulation and economic growth.

A fifth core institution that McDonough identifies with the monopoly capitalist SSA is a change in capital-labor relations. This development consisted of two parts. The first part involved an effort to break the power that the skilled workers had over the production process. This goal was accomplished via the reduction of labor to a semi-skilled common denominator using highly mechanized production.[33] As machines began to perform more work, and the workers became more like operators of machinery than skilled craftsmen, the result was a loss of worker control over the production process. The second part of the change in capital-labor relations, according to McDonough, was the anti-union strategy that employers adopted.[34] As workers began to organize in response to their loss of control of production, employers implemented an open-shop policy,[35] which means that the employers did not recognize unions at all and would not distinguish between union members and non-union workers in the hiring process. In other words, the employer would only negotiate with individual workers rather than a collective body.

The final core institution that McDonough associates with the monopoly capitalist SSA is an imperialist strategy on the part of the United States beginning in the late 1890s. The U.S. aggressively sought to expand into foreign markets to offset the effects of the 1890s slump.[36] McDonough mentions the U.S. entry into the Spanish-American War, the annexation of Hawaii, the treatment of Cuba as a protectorate of the United States, and the annexation of the Philippines, Puerto Rico, and Guam.[37] President McKinley’s “Open Door” policy towards China was also part of the U.S. effort to establish itself as a dominant player in the sphere of international trade.[38]

Eventually, the monopoly capitalist SSA experienced a period of prolonged economic crisis. The world wars and the worldwide Great Depression represented a crisis of the institutions that had served capital accumulation in the early twentieth century. From the ashes of the old SSA rose a new set of institutions that established a foundation for rapid capital accumulation and economic growth in the post-World War II period. The postwar SSA, as McDonough calls it, had five core institutions.[39] According to McDonough, “the social influence of the war itself” was the organizing factor for this SSA.[40] The first core institution of the postwar SSA is the Keynesian state.
The Second World War demonstrated that Keynesian policies could be effective, and Keynesian economists began to acquire government positions.[41] Full employment became a goal, and with the expansion of the public sector, changes in government spending or taxes could influence aggregate output.[42] Aggregate demand received a boost due to the 1947 Marshall Plan, which raised overseas demand, and the Korean War, which increased military spending.[43]

The third and fourth core institutions of the postwar SSA include the international dominance of the United States and the adoption of a Cold War ideology that guided U.S. policymaking after World War II. According to McDonough, the U.S. became the most powerful nation economically and militarily.[44] It used the 1944 Bretton Woods agreements and the 1947 Marshall Plan to establish “a worldwide capitalist economy open to American investment and export.”[45] The Truman doctrine of containment insisted on a connection between Soviet ideology and the tendency towards international expansion.[46] The Cold War ideology helped discourage the spread of socialist ideas and policies just as U.S. international military and economic dominance helped ensure the creation of a global capitalist order.

The fifth core institution of the postwar SSA involved a new relationship between capital and labor. Federal support for collective bargaining, as represented in the passage of the Wagner Act in 1935, led to the growth of membership in industrial unions. Industrial unions aim to organize all the workers in an industry like the automobile industry or the steel industry. The United Auto Workers (UAW) and the United Steelworkers of America (USWA) are examples of industrial unions. Craft unions, on the other hand, aim to organize all the workers with a specific skill, like carpenters or electrical workers. McDonough explains that during the postwar period, a rough equilibrium resulted between labor and management in which management granted automatic cost of living adjustments (COLAs) and productivity-linked wage contracts in exchange for management control of the production process.[47]

The final core institution of the postwar SSA was a shift towards the Democratic Party in national politics. The political realignment involved greater capitalist support for internationalization and a willingness to cooperate with organized labor.[48] It represented a departure from Republican support for protectionism and anti-union tactics, which dominated in the past.[49] Support for the Democrats in national elections came from labor unions, the lower-class vote, and capitalists in capital-intensive industries.[50]

Whereas the core institutions of the monopoly capitalist SSA were organized around the oligopolistic market structure of the early twentieth century, McDonough argues that the core institutions of the postwar SSA were organized around the characteristics of the war itself. He argues that even though a general principle of SSA construction appears to be at the center of each SSA, it seems to be a different principle with each SSA.[51] We are thus unable to predict the organizing principle of future SSAs or their timing.[52] Nevertheless, the power of the SSA framework lies in its ability to shed light on the historical factors that create a basis for rapid capital accumulation and economic growth. When those elements begin to unravel, the sources of widespread economic crisis also become clear when viewing capitalist history through the lens of SSA theory.
The Austrian Theory of the Business Cycle

In this section we consider a theory of the business cycle that Austrian economists developed. Ludwig von Mises and F.A. Hayek are the major contributors to this theory, although it has been refined and developed since they worked on the subject. Austrian economist Roger Garrison offers a helpful overview of the Austrian theory of the business cycle using a variety of graphs that facilitate a comparison with the Keynesian cross model.[53] This section borrows heavily from Garrison’s well-known essay to introduce students to the key elements of the Austrian perspective.[54] In their business cycle theory, Austrian economists refer to capitalists and laborers. The two groups are not in conflict, however, which marks a major difference between the Austrian perspective and the Marxian and Post-Keynesian perspectives. Drawing upon Hayek’s work, Austrian economists use a graph to represent the structure of production. The structure of production has two key elements in Austrian theory: the quantity of capital employed and the period of production.[55] These two elements are positively related in Austrian economic theory. That is, when more capital is employed, a longer period of production becomes possible.[56] The graph that is used to represent the structure of production is referred to as a Hayekian triangle. An example of a Hayekian triangle, which Garrison modifies somewhat to look like a trapezoid,[57] is shown in Figure 14.6.

When laborers sell their labor services to capitalists, the transaction represents a demand for present goods (DPG) because laborers receive goods for present consumption and a supply of future goods (SFG) because the output of their labor only becomes available in the future. This relationship may be represented symbolically:

$D_{PG}\Leftrightarrow S_{FG}$

Similarly, when capitalists purchase labor services, it represents a supply of present goods (SPG) because they must advance goods to laborers for present consumption and a demand for future goods (DFG) because they are postponing their own consumption in the present when they pay workers.[62] This relationship may also be represented symbolically:

$S_{PG}\Leftrightarrow D_{FG}$

Following Garrison,[63] the market for present goods is represented in Figure 14.7. The figure shows how an equilibrium interest rate and an equilibrium quantity exchanged of present goods are determined in the competitive market.[64] Laborers are represented on the demand side of the market, and capitalists are represented on the supply side of the market. We can think about the slopes of the curves as follows. When the rate of interest falls, laborers will prefer to save less and consume more today, which produces a downward sloping demand curve. When the interest rate rises, capitalists will anticipate greater profits from purchasing labor services, and so they will supply more present goods. If the market is not in equilibrium due to a relatively high rate of interest, then the surplus of present goods implies a rise in savings that causes the rate of interest to fall. If the interest rate is below the equilibrium interest rate, then the shortage of present goods implies a reduction in saving and a rise in the rate of interest. It is the time preferences of laborers and capitalists, however, that determine the positions of the supply and demand curves for present goods.[65] Time preference refers to an individual’s preference to consume in the present rather than in the future. If an individual has a high time preference, then consumption is preferred today much more than in the future. If an individual has a low time preference, then consumption is preferred in the future much more than in the present.
In the market for present goods, a higher time preference for laborers would lead to a rise in the demand for present goods and a rightward shift of the demand curve. A lower time preference for laborers, on the other hand, would lead to a fall in the demand for present goods and a leftward shift of the demand curve. Now consider how a change in time preferences alters the structure of production. A reduction in the time preferences of laborers will cause the demand for present goods to decline[66] and the demand curve to shift to the left, as shown in Figure 14.8. Nothing in the Austrian analysis thus far suggests a cause of the business cycle. In fact, the economy will function without any major disruption as producers respond to shifts in the time preferences of laborers and capitalists.

The major cause of depressions is found in central bank manipulation of the money supply. Austrian economists thus introduce the money supply into the analysis as an exogenous variable.[73] A major difference exists between Austrian monetary theory and neoclassical monetary theory. Whereas neoclassical economists argue that “new money is injected uniformly throughout the economy,” Austrian economists argue that injections of new money tend to fall into the hands of producers.[74] Figure 14.10 shows how Garrison represents neutral and non-neutral monetary expansions.[75] In the case of a neutral expansion, the return of the rate of interest to its original level causes the structure of production to return to its original state. A non-neutral monetary expansion, however, pushes the market rate of interest down below the natural rate of interest. It is the natural rate of interest in the Austrian model that is consistent with the time preferences of capitalists and laborers. The artificially low market rate of interest encourages entrepreneurs to expand investment in capital goods. The increase in investment then causes a surge of demand for the remaining consumer goods because the time preferences of individuals have not actually changed.[78] The price of consumer goods relative to capital goods thus rises. Entrepreneurs recognize their error and begin to liquidate their investment projects.[79] The consequence is disinvestment on a large scale, and an economic crisis ensues. The source of economic crises is thus found in a misallocation of capital and what Austrian economists call forced saving because the shift towards investment goods does not reflect a real change in the time preferences of individuals.

Economic crises that Austrian economists explain using this theory of the business cycle include the Great Depression of the 1930s and the Great Recession of 2007-2009. The Federal Reserve is blamed for expanding the money supply and artificially pushing down the rate of interest. The result was excessive investment in the 1920s, in the IT sector in the 1990s, and in the housing market in the early 2000s. Eventually, the errors of entrepreneurs became apparent and investment projects were abandoned, leading to major economic contractions. According to Snowdon et al., an iron law of retribution exists.[80] That is, the greater the monetary expansion and economic boom, the greater the contraction and disinvestment to follow will be. In fact, the correction to the economy might be so great that the disinvestment might lead to capital consumption where the economy ends up with a smaller capital stock than it possessed at the beginning of the process.[81] If the government provides consumer credits (i.e., cash subsidies for laborers) to pump up demand, it will boost the demand for consumer goods and alter relative prices in favor of consumer goods even more.
The result will be an even larger contraction of investment projects that makes the economic crisis worse.[82] The Austrian theory of the business cycle is noteworthy because it offers a theory of capitalist crises that assigns a central role to the difference between capitalists and laborers even as it places the blame on government intervention and central bank manipulation of the money supply. It also recognizes that entrepreneurs do not have perfect foresight and that they may make mistakes when they increase investment in response to a drop in the market rate of interest that stems from a monetary expansion. Finally, it provides an alternative to Keynesian theory when thinking about the factors that influence aggregate investment, the aggregate production period, the rate of interest, and aggregate output.

Post-Keynesian Effective Demand Theory versus the Neoclassical Synthesis Model

Like neoclassical economists, Post-Keynesian economists recognize that changes in key macroeconomic variables influence aggregate output and employment. Unlike neoclassical economists, however, Post-Keynesian economists place much emphasis on the class-based distribution of income. We may begin this discussion with the simple fact that aggregate output or aggregate income (Y) can be decomposed into aggregate wages (W) and aggregate profits (П) as follows:

$Y=W+\Pi$

When the distribution of income is represented as a division between wages and profits, it is referred to as the functional distribution of income. Post-Keynesian economists agree with neoclassical economists that aggregate expenditure (A) may be written as the sum of the major spending components from the national income and product accounts.

$A=C+I+G+X-M$

Aggregate expenditure thus represents the sum of consumer spending (C), investment spending (I), government spending (G), and net exports (X – M). Net exports are calculated as the difference between exports (X) and imports (M). Post-Keynesian economists also agree with the macroeconomic equilibrium condition that aggregate output (Y) equals aggregate planned expenditure (A), as shown below:

$Y=A$

If we substitute the expression for aggregate income and the expression for aggregate expenditure into the equilibrium condition, we obtain the following result:

$W+\Pi=C+I+G+X-M$

To further modify this equation, let’s divide aggregate consumer spending into the consumer spending by capitalists (Cc) and the consumer spending by workers (Cw) as follows:

$W+\Pi=C_{C}+C_{W}+I+G+X-M$

Workers’ consumption may be rewritten as what remains of workers’ wages after they have paid their taxes (T) and set aside a portion to be saved (Sw).

$C_{W}=W-T-S_{W}$

The equilibrium condition (Y = A) may now be written as follows:

$W+\Pi=C_{C}+W-T-S_{W}+I+G+X-M$

Eliminating aggregate wages from both sides and rearranging the terms, we obtain the following result:[83]

$\Pi=I+(G-T)+(X-M)+C_{C}-S_{W}$

This result relates the equilibrium aggregate profit in the economy to various components of aggregate spending, such as the government budget gap (G – T) and the trade balance (X – M). That is, this equation allows us to understand how aggregate profits change over the course of the business cycle as different components of aggregate spending change. For example, if investment spending increases, then aggregate profits will rise, other factors held constant.
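To see how the profit equation behaves, here is a minimal Python sketch using purely hypothetical spending figures (in billions of dollars; none of these numbers come from the text). It computes equilibrium aggregate profits and shows the effect of a rise in investment spending, other factors held constant:

```python
def aggregate_profits(I, G, T, X, M, Cc, Sw):
    """Post-Keynesian profit equation: profits = I + (G - T) + (X - M) + Cc - Sw."""
    return I + (G - T) + (X - M) + Cc - Sw

# Hypothetical values in billions of dollars (illustrative only).
baseline = dict(I=500, G=400, T=350, X=200, M=250, Cc=300, Sw=100)
print(aggregate_profits(**baseline))            # 500 + 50 - 50 + 300 - 100 = 700

# A rise in investment spending of 100, other factors held constant,
# raises equilibrium aggregate profits dollar for dollar.
higher_investment = dict(baseline, I=600)
print(aggregate_profits(**higher_investment))   # 800
```

Because aggregate wages do not appear on the right-hand side, any such increase in spending shows up entirely as higher profits, which is the point developed in the next paragraph.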
Whereas neoclassical Keynesian economists focus on the impact of a rise in investment spending on aggregate output, Post-Keynesian economists focus on the impact on aggregate profits. Because aggregate wages do not change when investment spending rises, aggregate profits increase relative to aggregate wages. Capitalists thus gain relative to workers, and income inequality worsens.

Now consider how changes in the government budget affect profitability and the distribution of income. If government spending increases and taxes fall, then both factors create a larger government budget deficit. Deficit spending increases the equilibrium aggregate profits in the economy just as it increases the equilibrium output. With unchanged wages, the result is an increase in aggregate profits relative to aggregate wages and a rise in income inequality as the economy grows. Hence, higher government spending or tax cuts can worsen inequality even as they stimulate the economy.

If the level of net exports increases due to a rise in exports or a reduction in imports, then the result is a higher level of aggregate profits. With aggregate profits rising relative to aggregate wages, the result is worsening income inequality. At the same time, aggregate output and employment increase with rising net exports. We can conclude then that a trade surplus boosts economic growth in the short run, but it also worsens the class-based distribution of income. A trade deficit has the opposite effects. Because net exports are negative in the case of a trade deficit, the result is a lower level of aggregate output and employment. With falling profitability, the result is a reduction in income inequality.

Another component of aggregate spending that can influence the equilibrium level of aggregate profits is capitalist consumption. If capitalists increase their aggregate consumption level, then the economy receives a boost. The rise in aggregate consumption increases not only equilibrium output but also aggregate profits. The consequence is worsening income inequality because aggregate profits rise relative to aggregate wages. Here we can see the Marxian roots of Post-Keynesian economics at play. In Marx’s two-sector macroeconomic model discussed earlier in this chapter, capitalist consumption makes possible the realization of surplus value in both sectors. Similarly, in this Post-Keynesian model, capitalist consumption enhances the profitability of capitalists. Also, in Marxian economics, workers are paid according to the value of labor-power. If aggregate wages are determined according to the value of aggregate labor-power, then wages will be relatively stable. Rising profits and aggregate income then tend to worsen income inequality over the course of the business cycle. The connection to Marxian economics is not a perfect one, however, because Marxists also argue that wages tend to rise and fall over the course of the business cycle as wages rise above and fall below the value of labor-power.

Finally, a rise in workers’ savings will influence the equilibrium aggregate profits in the economy. For example, if workers save more, then aggregate demand falls, which reduces the equilibrium aggregate output and the equilibrium aggregate profits. Income inequality lessens due to increased saving on the part of workers as aggregate profits fall relative to aggregate wages.
The major difference between this Post-Keynesian approach to macroeconomic fluctuations and the neoclassical synthesis model is that neoclassical economists only focus on changes in the equilibrium level of output over the course of the business cycle. Post-Keynesian economists, on the other hand, also emphasize changes in profitability and the distribution of income as the economy experiences economic booms and busts.

Following the Economic News[84]

In a recent news article in The Sydney Morning Herald, Eryk Bagshaw describes the serious unemployment problem facing Australia. In Marxian terms, the nation suffers from a large reserve army of the unemployed. As described in this chapter, when unemployment is high, wages decline. The lower labor costs help transform an economic downturn into a period of economic expansion. In Australia’s case, however, even government efforts that have created millions of jobs have not been enough to reduce the reserve army of the unemployed. According to Bagshaw, “wages growth remains stuck at close to historic lows and the unemployment rate is rising.” The reason that job growth has not reversed the trend of wages, Bagshaw explains, is that population growth and a rise in the labor force participation rate have kept the unemployment rate high even as new jobs are created. Bagshaw explains that older workers and women have entered the workforce in large numbers, which indicates a more productive population but also puts downward pressure on wages. He describes what amounts to an underconsumption problem stemming from low wage growth. That is, “[l]ower wages means less spending, less spending means slower growth,” and economic growth has been sluggish in Australia recently. Bagshaw also explains that underemployment is close to an all-time high at 8.5%. These workers are employed part-time but would prefer full-time work. As a result, their presence puts downward pressure on wages as well. Bagshaw states that “employers don’t need to increase wages because there is plenty of demand for work.” Plenty of demand for work translates into a large supply of labor-power, which keeps wages down. Combining the unemployment and underemployment rates, Bagshaw reports the figure at 13.8% with 1.9 million Australians either unemployed or underemployed out of a workforce of 13.6 million.

Bagshaw explains that wages increased nearly four percent prior to the 2008 global financial crisis when underutilization of the nation’s resources was much lower. In Marxian terms, the reserve army of the unemployed was much smaller and upward pressure on wages was occurring. Bagshaw also reports estimates of the Australian natural rate of unemployment (i.e., what is regarded as full employment in neoclassical terms) to be between 4.5 and 5 percent. What a neoclassical economist would regard as the natural rate of unemployment, a Marxian economist would regard as a reserve army of the unemployed that is small enough that wages begin to grow significantly. Once again, we see how economists from different schools of economic thought may understand economic phenomena in radically different ways. For an Austrian interpretation of these events, consider the recent decision of the Reserve Bank of Australia, the nation’s central bank, to cut interest rates to a record low of 0.75%, as reported by Bagshaw. Such intervention forces the market rate of interest down below the natural rate of interest, which promotes an artificial boom and overinvestment in capital goods.
Because the time preferences of capitalists and laborers have not changed, consumer goods prices will be pushed upwards, eventually leading to disinvestment and an economic crisis that is more severe than the present one.

Summary of Key Points

1. A general rate of profit and prices of production are formed when capital flows out of industries with relatively high organic compositions of capital and into industries with relatively low organic compositions of capital.
2. Other factors held constant, a longer turnover time reduces the annual rate of profit, and a shorter turnover time increases the annual rate of profit.
3. Other factors held constant, a rise in the organic composition of capital over long periods causes the general annual rate of profit to fall, which produces economic crises.
4. Several countertendencies operate to prevent a decline in the general rate of profit, including an increase in the rate of surplus value, a reduction in wages below their value, a cheapening of the elements of constant capital, and a shortening of the turnover processes of different capitals.
5. During periods of rapid capital accumulation, money wages rise above the value of labor-power as the reserve army of the unemployed declines and the rate of profit falls, which ultimately produces an economic crisis. During periods of economic contraction, money wages fall below the value of labor-power as the reserve army of the unemployed expands and the rate of profit begins to rise, which eventually makes an economic recovery possible.
6. In Marx’s two-sector model, the constant capital employed in Sector 2 must equal the sum of the variable capital and the surplus value in Sector 1. Otherwise, discoordination across sectors will occur at the macroeconomic level, and an economic crisis will occur.
7. According to Richard Wolff, stagnant wages and rising productivity have made possible a major expansion of surpluses and lending since the 1970s, which set the stage for the 2008 financial crisis and the Great Recession of 2007-2009.
8. Social structure of accumulation (SSA) theory is a framework that radical political economists use to understand how human institutions contribute to long periods of rapid capital accumulation and economic growth.
9. According to Terrence McDonough, the organizing principle of the monopoly capitalist SSA is the oligopolistic market structure of the early twentieth century, and the organizing principle of the postwar SSA is the social influence of the Second World War.
10. Austrian business cycle theory maintains that the structure of production depends on the aggregate period of production and the quantity of capital employed in production.
11. Austrian business cycle theorists argue that the structure of production may change when the time preferences of capitalists and laborers change. It may also change when the central bank creates an artificial boom via monetary expansion, but in the latter case, an economic crisis will follow when entrepreneurs realize that they have misjudged the time preferences of the consuming public.
12. The Post-Keynesian theory of effective demand clarifies how the equilibrium profits of the economy and the degree of income inequality change in response to changes in the components of aggregate spending.
List of Key Terms

Social structure of accumulation (SSA) theory
Organic composition of capital (OCC)
General rate of profit (r)
Prices of production
Turnover time
Annual rate of surplus value
Annual rate of profit
Annual mass of surplus value
General annual rate of profit (rA*)
General annual rate of surplus value (eA*)
Law of the tendency of the rate of profit to fall (LTRPF)
Industrial reserve army of the unemployed
Natural unemployment
Simple commodity circuit
Say’s Law of Markets
Means of consumption
Simple reproduction
Mortgage-backed securities (MBSs)
Collateralized debt obligations (CDOs)
Subprime mortgage loans
Credit rating agencies
Conflict of interest
Social structure of accumulation (SSA)
Post-Civil War SSA
Primitive accumulation
Monopoly capitalist SSA
Core of the SSA
Postwar SSA
Industrial unions
Craft unions
Structure of production
Hayekian triangle
Original means of production
Time preference
High time preference
Low time preference
Neutral monetary expansion
Non-neutral monetary expansion
Producer credits
Market rate of interest
Natural rate of interest
Forced saving
Iron law of retribution
Capital consumption
Consumer credits
Aggregate wages (W)
Aggregate profits (П)
Functional distribution of income
Equilibrium aggregate profit
Government budget gap
Trade balance

Problems for Review

1. Consider an economy with only two capitalist enterprises. Assume that enterprise A advances weekly constant capital of $200 and weekly variable capital of $100. Assume that enterprise B advances weekly constant capital of $100 and weekly variable capital of $200. Also assume that enterprise A has a turnover time of 4 weeks and enterprise B has a turnover time of 8 weeks. Calculate the annual rate of profit and the annual rate of surplus value assuming the weekly rate of surplus value is 100% for both enterprises and that each year has 52 weeks. How do the annual rates of profit compare? Is this result surprising? Explain the result with reference to the organic compositions of capital and to the turnover times of the two enterprises.
2. Suppose competition among capitalists in the international commodity markets has pushed down the prices of imported raw materials that are used in many domestic industries. What is the impact on the aggregate constant capital advanced? What is the impact on the organic composition of capital at the aggregate level? What is the impact on the annual general rate of profit? How are these changes important in terms of the long-term trajectory of the domestic capitalist economy?
3. Suppose that a critic of Marxian economics argues that surplus value cannot persist within capitalist economies because competition among capitalists for the source of surplus value (i.e., labor-power) will drive up wages until all the surplus value vanishes. The critic concludes that profits must have their source elsewhere. How can Marxian economic theory be used to refute this argument?
4. Consider an economy with two major sectors: Sector I produces the means of production, and Sector II produces the means of consumption. Suppose that the constant capital advanced in Sectors I and II are $20 billion and $30 billion, respectively. Assume that the variable capital advanced in Sectors I and II are $20 billion and $10 billion, respectively. Also assume that the rate of surplus value is 100% in each sector. Is the economy balanced or will a macroeconomic crisis occur due to discoordination between the two sectors? Explain with reference to the numerical values in this example.
5. Consider the conflict of interest that arose for the credit rating agencies leading up to the financial crisis of 2008. What do you think is the best way to limit such conflicts of interest that might arise in financial markets?
6. Create a table with two columns. In one column, list the core institutions of the monopoly SSA. In the second column, list the core institutions of the postwar SSA. Try to arrange the sets of institutions so that they correspond to each other as much as possible. Finally, compare the core institutions of the two SSAs and note how they differ and how they are similar.
7. Consider Austrian business cycle theory when answering this question. Suppose that laborers experience an increase in their time preferences. How will the market for present goods be affected? Draw the changes on a graph and explain in words what is happening. How will the structure of production be affected? Draw the changes on a graph and explain in words what is happening. Be sure to refer to the aggregate period of production, the rate of interest, and the level of investment when answering this question.
8. Consider Post-Keynesian theory when answering this question. Suppose that workers reduce their saving. What will happen to capitalist profits and the degree of income inequality? To which phase of the business cycle does this change correspond? Is this change consistent with what you would expect to happen using the neoclassical synthesis model?
1. For Marx’s analysis of the countertendencies to the law of the falling rate of profit, see Marx (1991), p. 339-348.
2. The modeling approach used in Figures 14.1 and 14.2 is inspired by the approach found in Lianos (1987). Lianos’s use of this approach in the context of financial markets is discussed in Chapter 15.
3. Marx (1990), p. 275.
4. I am deeply grateful to Monthly Review Press for granting me permission to include a re-creation of Sweezy's (1970) figure, p. 91.
5. See Marx (1990), p. 209.
6. See Marx (1992), p. 471-478.
7. Wolff (2010), p. 83-86.
8. Ibid. p. 83.
9. Ibid. p. 83.
10. Ibid. p. 83.
11. Ibid. p. 84.
12. Ibid. p. 84.
13. Ibid. p. 85.
14. Ibid. p. 85.
15. Ibid. p. 85.
16. Ibid. p. 85.
17. Wolfson (1994).
18. I am deeply grateful to Cambridge University Press for granting me permission to include extensive citations from McDonough's book chapter.
19. McDonough (1994), p. 103.
20. Ibid. p. 103.
21. Ibid. p. 103.
22. Ibid. p. 104.
23. Ibid. p. 104.
24. Ibid. p. 105.
25. Ibid. p. 105.
26. Ibid. p. 106.
27. Ibid. p. 107.
28. Ibid. p. 108.
29. Ibid. p. 108.
30. Ibid. p. 108.
31. Ibid. p. 109.
32. Ibid. p. 109.
33. Ibid. p. 110.
34. Ibid. p. 110.
35. Ibid. p. 110.
36. Ibid. p. 110.
37. Ibid. p. 111.
38. Ibid. p. 111.
39. Ibid. p. 114.
40. Ibid. p. 115.
41. Ibid. p. 116.
42. Ibid. p. 116-117.
43. Ibid. p. 117.
44. Ibid. p. 117.
45. Ibid. p. 118.
46. Ibid. p. 118.
47. Ibid. p. 119-120.
48. Ibid. p. 121.
49. Ibid. p. 121.
50. Ibid. p. 121.
51. Ibid. p. 125.
52. Ibid. p. 126.
53. Garrison (1978), p. 167-204.
54. I am deeply grateful to Dr. Garrison for granting me permission to include an extensive summary of his argument. Of course, any errors of interpretation are solely my responsibility.
55. Ibid. p. 179.
56. Ibid. p. 179.
57. Ibid. p. 174.
58. Ibid. p. 172-173.
59. Ibid. p. 171.
60. Ibid. p. 173.
61. Ibid. p. 175.
62. Ibid. p. 175.
63. Ibid. p. 176.
64. Ibid. p. 177.
65. Ibid. p. 176.
66. Ibid. p. 184.
67. Ibid. p. 184.
68. Ibid. p. 185.
69. Ibid. p. 185.
70. Ibid. p. 186.
71. Ibid. p. 187-188.
72. Ibid. See Figure 8, p. 187.
73. Ibid. p. 188.
74. Ibid. p. 188.
75. Ibid. p. 190.
76. Ibid. p. 191.
77. Ibid. p. 191-192.
78. Snowdon, et al. (1994), p. 358-360.
79. Garrison (1978), p. 196.
80. Snowdon, et al. (1994), p. 358.
81. Ibid. p. 360.
82. Ibid. p. 360.
83. This equation may be found in Snowdon, et al. (1994), p. 369.
84. Bagshaw, Eryk. “Mountain to Climb to Full Employment.” The Sydney Morning Herald. 03 Oct. 2019. | textbooks/socialsci/Economics/Principles_of_Political_Economy_-_A_Pluralistic_Approach_to_Economic_Theory_(Saros)/03%3A_Principles_of_Macroeconomic_Theory/14%3A_Unorthodox_Theories_of_Macroeconomic_Crisis.txt |
Goals and Objectives:
In this chapter, we will do the following:
1. Explain how the rate of interest is defined and measured
2. Explore the relationship between the bond market and the loanable funds market
3. Analyze a neoclassical general equilibrium model of interest rate determination
4. Incorporate the stock market into the neoclassical theory of interest rate determination
5. Investigate an Austrian theory of interest rate determination
6. Examine a Marxian theory of interest rate determination
Prior to this chapter, our exploration of macroeconomic theory has been focused on theories of the business cycle. That is, we have concentrated mostly on factors that influence the overall amount of economic activity, the total production of commodities, and the amount of unemployment. In this chapter, we turn to theories of financial markets. The financial markets have an important role to play in market capitalist economies, and if we are to gain a deeper understanding of macroeconomic policy in later chapters, we must first learn to think about how the financial markets work and how they interact with the rest of the economy. Once we have developed a more complete understanding of interest rates, bonds, and stocks, we will then be able to explore in detail the neoclassical, Austrian, and Marxian theories of interest rate determination.
The Definition and Measurement of the Rate of Interest
The rate of interest, or the interest rate, is simply an amount of money paid to a lender by a borrower for the use of money during a specific period, expressed as a percentage of the amount borrowed. For example, if a lender receives $5 in payment for the use of a $100 loan during a year, then the annual interest rate is 5% (= $5/$100). In this case, the $100 is referred to as the principal amount of the loan, and the $5 is the dollar amount of the interest. If the principal is returned at the end of one year, then that will be the end of the transaction. The lender will have received $105. This growth of the principal is captured in the simple diagram in Figure 15.1. On the other hand, if the loan is renewed, then a situation of compound interest arises. That is, the lender leaves the $105 with the borrower at the end of the year, and then the lender expects to receive a 5% interest payment at the end of the second year calculated using the entire $105 loaned at the beginning of the second year. The calculation of the future value (FV) of the original loan amount of $100 at time t = 2 is as follows:
$FV=100(1+0.05)(1+0.05)=(100)(1.05)^2=\$110.25$
The growth of the principal using compound interest in this scenario is captured with the diagram in Figure 15.2.
If the period of the loan is three years, then the future value will be even larger, again as a result of compound interest. The calculation of the future value in time t = 3 is as follows:
$FV=100(1+0.05)(1+0.05)(1+0.05)=(100)(1.05)^3=\$115.76$
The growth of the principal using compound interest in this scenario is captured with the diagram in Figure 15.3.
In general, if the annual interest rate is i, the present value amount (or the initial loan amount) is PV, and the loan is made for n years, then the calculation of the future value is as follows:
$FV=PV(1+i)^n$
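As a quick arithmetic check, here is a minimal Python sketch of the future value formula, reproducing the two compound-interest figures computed above:

```python
def future_value(pv, i, n):
    """Future value of a present sum after n years at annual rate i: FV = PV(1 + i)^n."""
    return pv * (1 + i) ** n

print(round(future_value(100, 0.05, 2), 2))  # 110.25, as in the two-year example
print(round(future_value(100, 0.05, 3), 2))  # 115.76, as in the three-year example
```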
In the examples we just considered, a sum of money was loaned out and we explored how much it would be worth at the end of the loan period. That is, we considered the sum’s future value. It is often the case, however, that we are confronted with different information. For example, we might know the future payment that is to be received in a known number of years. If we also know the interest rate, then we can calculate the present value of that sum by simply rearranging the future value formula as follows:
$PV=\frac{FV}{(1+i)^n}$
To use our earlier example, suppose that a lender knows she will receive $115.76 at the end of three years. If she knows that the annual interest rate is 5%, then she can arrive at the present value in the following way:

$PV=\frac{FV}{(1+i)^n}=\frac{\$115.76}{(1+0.05)^3}=\$100$

In other words, it is possible to equate future dollars with present dollars using the interest rate. The idea is rather intuitive. It means that future dollars are worth less than present dollars to a person. Wouldn’t you rather have $100 today than in three years? Of course, you would. How large would the future sum need to be before you would consider it equivalent to the $100 today? According to the information reflected in the current rate of interest, the answer is $115.76.
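As a quick check of the discounting arithmetic, here is a short Python sketch using the figures from the example above:

```python
def present_value(fv, i, n):
    """Discount a single future payment back to the present: PV = FV / (1 + i)^n."""
    return fv / (1 + i) ** n

print(round(present_value(115.76, 0.05, 3), 2))  # 100.0, matching the example above
```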
When this method of determining the present value of a future sum is used, it is said that the future sum has been discounted to the present using the interest rate. The present value formula and the method of discounting are especially useful when we wish to know the present value of a specific financial asset. For example, a creditor (or lender) might purchase an asset, such as a bond. A bond is really just a financial contract between a lender and a borrower, much like an IOU. When a creditor purchases a newly issued bond, she hands over a sum of money to a borrower. The borrower agrees to repay the amount borrowed when the bond matures. In the case of coupon bonds, the borrower also agrees to make periodic payments of interest to the lender until the bond matures. If we know what the future payments will be, then we can discount each payment back to the present and then simply sum them up to determine the present value of the bond.
For example, suppose that a bond pays $100 in interest annually for the next five years. This situation is depicted in the diagram in Figure 15.4. If we know that the interest rate is 5% and we ignore the repayment of principal for simplicity, then we can calculate the present value of the bond as follows:

$PV=\frac{\$100}{(1+0.05)}+\frac{\$100}{(1+0.05)^2}+\frac{\$100}{(1+0.05)^3}+\frac{\$100}{(1+0.05)^4}+\frac{\$100}{(1+0.05)^5}=\$432.95$

For this calculation, the reader should notice that each future payment of $100 is being discounted back to the present before being summed up. Furthermore, each future payment is being discounted according to how many years will pass before it is received. That is, the final payment in year 5 is discounted the most (5 times). The payment at the end of year 1 is discounted the least (once). Finally, the reader should notice that the straightforward sum of the future payments is $500, but the present value is only $432.95. The reason, of course, is that the future payments are not worth $500 today due to the existence of discounting. The important point to notice is that we can determine the present value of a bond or any financial asset if we know the future payments associated with the asset, the rate of interest, and the term to maturity (i.e., the number of years to maturity). Imagine that an investor is considering the purchase of several different assets, each with a different number of years to maturity and a different periodic payment. The only rational way to compare these different assets is to use the current interest rate to discount the future payments associated with each asset back to the present. Once the values of the different assets are determined for the present period, their values can be easily compared.

In these examples, we have been assuming that the interest rate is a known quantity. It is possible that an investor might be considering the purchase of a bond, but he only knows the amount of the periodic interest payments, the term to maturity, and the price of the bond. For example, suppose that the periodic payment is $100, the term is 5 years, and the price of the bond is $400. The investor would like to know the interest rate associated with the bond, which is also referred to as the yield to maturity (YTM). To calculate the yield, it is only necessary to determine the interest rate that will equate the current price of the bond with the present value of its future payments as follows:

$\$400=\frac{\$100}{(1+i)}+\frac{\$100}{(1+i)^2}+\frac{\$100}{(1+i)^3}+\frac{\$100}{(1+i)^4}+\frac{\$100}{(1+i)^5}$

This calculation is difficult without a financial calculator. The solution can be obtained through a trial-and-error method, and it is approximately a 7.93% rate of interest. Using this method, it is possible to measure the rate of interest that applies to a specific financial asset. The reader should also notice that because the price of the bond is below $432.95, the yield on the bond is above 5%. That is, when an investor pays a lower price for the bond, with the periodic interest payments fixed, the yield is necessarily higher. This result is consistent with the widely reported relationship between interest rates and bond prices that one hears in the financial news: Interest rates and bond prices are always inversely related.
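Both the bond valuation and the yield-to-maturity search can be reproduced with a few lines of Python. The sketch below implements the trial-and-error idea described above as a bisection search, using the figures from the text:

```python
def bond_pv(payment, i, n):
    """Present value of a level stream of payments, with repayment of principal ignored as in the text."""
    return sum(payment / (1 + i) ** t for t in range(1, n + 1))

print(round(bond_pv(100, 0.05, 5), 2))  # 432.95, the present value computed above

def yield_to_maturity(price, payment, n, lo=0.0, hi=1.0):
    """Trial-and-error (bisection) search for the rate that equates the PV with the bond's price."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if bond_pv(payment, mid, n) > price:
            lo = mid   # PV still too high: the discount rate must be higher
        else:
            hi = mid   # PV too low: the discount rate must be lower
    return mid

print(round(yield_to_maturity(400, 100, 5) * 100, 2))  # roughly 7.93 percent
```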
Economists of all persuasions tend to refer to the rate of interest as though it is a single entity. In reality, many different interest rates exist in market capitalist economies. Each corresponds to a different loan or asset. Interest rates exist for 12-month certificates of deposit, online savings accounts, 15-year mortgage loans, 30-year mortgage loans, 10-year Treasury bonds, 2-year auto loans, and so on. The reason that economists often refer to a single interest rate is that the many different interest rates that exist tend to move together. Economists do have theories as to how and why interest rates differ from one another, but frequently they are interested in explaining the overall movement of interest rates instead. In the latter case, they refer simply to “the interest rate.”[1]
We are now in possession of a clear definition and method of measurement of the rate of interest. We have also learned how to calculate the future value of a present sum and the present value of a future sum. These tools will be very useful as we consider linkages between different financial markets in the next several sections of this chapter.
The Relationship between the Bond Market and the Loanable Funds Market
We are now in a position to identify the linkage between two key financial markets that exist in capitalist economies: the bond market and the loanable funds market. Although we discuss these markets as though they are two separate markets, each can be thought of as the mirror reflection of the other and so they are really one and the same.
The loanable funds market is the market for financial loans. Like any market, it has both a supply side and a demand side. On the supply side of the market are the lenders who possess loanable funds that they wish to lend to borrowers for a period of time in exchange for interest. The supply curve of loanable funds is upward sloping as shown in Figure 15.5 (a). That is, as the rate of interest increases, the quantity of loanable funds that lenders are willing and able to provide increases. The rate of interest may be thought of as the price paid for the use of these funds, and so the supply curve is upward sloping, as it is in most product markets. The demand curve for loanable funds is downward sloping, which is also shown in Figure 15.5 (a). It slopes downward because as the interest rate falls, more investment projects become profitable for businesses, and so businesses are willing and able to borrow more loanable funds.
It is interesting to note that the suppliers in the one market are the demanders in the other market. Similarly, the demanders in the one market are the suppliers in the other market. For example, if I wish to borrow funds in the loanable funds market, then I am on the demand side of that market. At the same time, I can only obtain loanable funds by selling bonds in this scenario, and so I am on the supply side in the bond market. Similarly, if I wish to lend funds in the loanable funds market, then I am on the supply side of that market. At the same time, I can only lend loanable funds by buying bonds in this scenario, and so I am on the demand side in the bond market. To summarize:
1. Demanders of loanable funds = Suppliers of bonds ⇒ Borrowers
2. Suppliers of loanable funds = Demanders of bonds ⇒ Lenders
Because of this logical connection between the two markets, we regard the loanable funds market as the mirror reflection of the bond market. It follows that if one of the markets is in equilibrium, the other market must also be in equilibrium. A natural question to ask then is whether a definite relationship exists between the equilibrium bond price and the equilibrium interest rate.
The answer to this question is that the equilibrium bond price will equal the present value of the bond calculated using the equilibrium interest rate. For example, suppose that we know the term to maturity for a bond to be 5 years, the annual payment on the bond to be A (ignoring repayment of principal, as before), and the equilibrium interest rate to be i*. Using this information, it is possible to calculate the present value of the bond when the loanable funds market is in equilibrium. It is this present value calculation that also yields the equilibrium price of the bond, PB*, as shown below:
$P_{B}^*=PV=\frac{A}{(1+i^*)}+\frac{A}{(1+i^*)^2}+\frac{A}{(1+i^*)^3}+\frac{A}{(1+i^*)^4}+\frac{A}{(1+i^*)^5}$
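Here is a brief Python sketch of this equilibrium pricing rule, reusing the level-payment valuation from the earlier example; the $100 payment, the five-year term, and the 5% and 8% equilibrium rates are illustrative assumptions rather than figures from the text:

```python
def equilibrium_bond_price(payment, i_star, n):
    """Equilibrium bond price: the present value of the payments at the equilibrium interest rate."""
    return sum(payment / (1 + i_star) ** t for t in range(1, n + 1))

print(round(equilibrium_bond_price(100, 0.05, 5), 2))  # 432.95 when i* = 5%
print(round(equilibrium_bond_price(100, 0.08, 5), 2))  # 399.27 when i* rises to 8%
```

The lower price at the higher equilibrium rate previews the inverse relationship between interest rates and bond prices discussed below.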
It might not be obvious why the price of the bond has an inherent tendency to move towards this level, PB*. The argument must be made, however, if the claim is to be defended that this bond price is an equilibrium one. To understand why this bond price represents the equilibrium bond price, consider what will happen if the price of the bond is different from this level. For example, suppose the price of the bond is above the present value as calculated here. In that case, investors will not want to purchase the bond. Who would pay more for a bond than its present value? The demand for the bond will fall, which places downward pressure on the bond price. Similarly, bondholders will be eager to sell the bond. Who will want to hold a bond when it can be sold at a price that exceeds its present value? The increase in supply will also put downward pressure on the bond price. Hence, both factors push the price of the bond down towards the present value of the bond and towards PB*. To summarize:
If PB* > PV then demand falls and supply rises until PB*=PV.
Alternatively, consider what will happen if the price of the bond is below the present value as calculated here. In that case, investors will want to purchase the bond. Who would not want to purchase a bond when its price is less than its present value? It’s a bargain. The demand for the bond will rise, putting upward pressure on the price of the bond. Similarly, bondholders will not want to sell the bond. Who would sell a bond when its price is below what it is worth in today’s terms? The reduction in supply will put upward pressure on the bond price. Hence, both factors push the price of the bond up towards the present value of the bond and towards PB*. To summarize:
If PB < PV, then demand rises and supply falls until PB = PV = PB*.
Overall, we see that the equilibrium interest rate and the equilibrium bond price are related in a precise manner as reflected in the present value formula.
Finally, it is helpful to consider what will happen when equilibrium is disrupted in these markets. Suppose that the demand for loanable funds rises as a result of businesses becoming more optimistic about the health of the economy. A rightward shift of the demand curve for loanable funds will cause a rise in the equilibrium interest rate as shown in Figure 15.6 (a).
At the same time, because the demanders of loanable funds are also the suppliers of bonds, the supply curve of bonds will shift to the right in the bond market as shown in Figure 15.6 (b). The result will be a reduction in the equilibrium bond price. It is worth noting that this result of an inverse relationship between the interest rate and the bond price is consistent with the same conclusion drawn earlier from the present value formula.
The Market for Money
In addition to the bond and loanable funds markets, mainstream economists typically also discuss a third financial market that is referred to as the money market. This concept can easily become the source of great confusion because of the different meanings of this phrase.[2] In the financial services industry, the money market refers to the market for short term securities. That is, assets with terms to maturity of less than one year are referred to as money market instruments. Three-month Treasury bills, six-month certificates of deposit, and commercial paper are all examples of assets that involve repayment of principal and interest to a lender in a period of less than one year. Investors purchase these assets to earn interest income using their short-term savings.
It is crucial to understand that the money market we will be discussing in this section is not the money market to which professionals in the financial services industry refer. Indeed, the money market to which financial services professionals refer is much closer to the bond market that we have been discussing, although many bonds have terms to maturity of much longer than one year. Instead, when mainstream economists refer to the money market, they have in mind a theoretical construct that derives from John Maynard Keynes’s 1936 book The General Theory of Employment, Interest, and Money, which we discussed in Chapter 13. Because this “market” is rather unusual, we will need to devote some space to this notion. Once the money market is completely understood, we can complete the picture of the neoclassical theory of interest rate determination.
Consider for a moment, the finances of a single household. The household has accumulated a large collection of assets, which includes a home, two automobiles, a savings account, some stocks, some bonds, a certificate of deposit, a checking account, some currency (paper money and coins), and a boat. At any one point in time, the total value of the household’s assets is divided into these different components, and it is possible to identify the dollar value that corresponds to each component, as shown in Figure 15.7.
Of all these different assets, neoclassical economists only consider the checking account and the currency to constitute money. Money, according to the neoclassical perspective, refers to anything that can be readily used for transactions. Only the most liquid assets qualify. Liquidity refers to the ease of conversion of an asset into currency. Currency is obviously the most liquid asset. Checking accounts are also extremely liquid. The funds are payable on demand and checks and debit cards can be easily used to engage in transactions. Savings deposits and certificates of deposit are less liquid because check writing is not possible and withdrawal restrictions apply to each. Stocks and bonds must be sold, which requires payment of a brokerage fee so they are not as liquid as currency or checkable deposits. The least liquid assets for the household are the home, the automobiles, and the boat. These must be sold, which takes time and is costly. Because currency and checkable deposits are the most liquid assets, neoclassical economists typically only consider them to be money.
Given this definition of money, the household’s actual money holdings consist of the sum of its currency holdings and its checkable deposits. On the other hand, this household’s demand for money refers to its desired money holdings, given its wealth. Of course, it is possible that the household’s desired holdings and its actual holdings do not agree. If the household wishes to hold the same amount of money that it is holding in Figure 15.7, however, then this amount constitutes the household’s demand for money.
It is important not to be confused by the concept of money demand. One might think that the demand for money should always be infinite because, according to the neoclassical way of thinking, everyone always wants more of every good and asset, including money. This conclusion would be incorrect, however, because money demand refers only to desired money holdings given the assets of the household. That is, how much of the household’s assets does it wish to hold in the form of money? It may wish to hold a lot of its assets in the form of money or only a little, depending on the benefits it perceives to flow from the holding of money.
What are the benefits that flow from the holding of money? Why would a household hold any money? The classical economists considered this question and provided a helpful, albeit somewhat obvious answer. People hold money for the purpose of engaging in transactions. This transactions demand for money forms one part of the household’s money demand. That is, the household requires money if it is to pay for goods and services. A household cannot survive for very long in a market capitalist society if it refuses to use money. Bills must be paid and groceries must be purchased. Clearly, the transactions demand for money seems to be an important piece of the puzzle.
Why else might a household decide to hold some of its assets in the form of money? Keynes offered an additional reason why a household might choose to hold money. Even if the household is not interested in using the money for planned transactions, it might wish to hold some money strictly for precautionary reasons. That is, the fear of unplanned medical expenses might lead a household to maintain an emergency savings fund, just in case. Fear of an unanticipated job loss might be another reason to hold wealth in the form of money. Neoclassical economists refer to this type of money demand as the precautionary demand for money. This factor might also be contributing to the money demand represented in Figure 15.7.
If we wish to draw the demand curves representing the transactions demand (DT) and the precautionary demand (DP) for money, we can do so as in Figure 15.8.[3]
Figure 15.8 suggests that the sum of these two components of money demand (DT+DP) will be represented as a vertical line when the interest rate is placed on the vertical axis. That is, neither of these components of money demand depends on the interest rate. As the interest rate rises, the quantity of money demanded remains the same because households still wish to engage in the same volume of transactions and maintain the same precautionary balances in case of emergencies. When money income rises, however, the transactions demand and precautionary demand increase, and so these curves shift to the right. That is, households will want to purchase more goods and services, and they will prefer to hold more money in case of emergencies. A reduction in money incomes would lead to leftward shifts for similar reasons.
Finally, Keynes identified a third motive for holding money. He argued that a household might hold money purely for speculative reasons. That is, a household might wish to hold money so it might be used to purchase bonds if interest rates unexpectedly rise to higher-than-normal levels. Due to the inverse relationship between bond prices and interest rates, the drop in bond prices will make them attractive investments. Once interest rates fall and bond prices rise to their original, normal levels, the household will enjoy a capital gain. A capital gain is the difference between the selling price and the purchase price of an asset. This speculative demand for money offers a third way to understand the demand for money represented in Figure 15.7.
Although Keynes wrote about the speculative demand for money, neoclassical economists often think in terms of the asset demand for money, which is like the speculative demand for money.[4] If money is thought about as an asset, then it can be compared to the other assets the household owns. Initially, it might appear that money is a terrible asset. A home is a great long-term investment because home prices frequently increase over time at a rate greater than most prices. As a result, a homeowner might experience a capital gain when she finally sells the house, and in the meantime, the members of the household have been able to enjoy the benefits of living in the home. Stocks and bonds may also be sold to realize capital gains, and they pay dividends and interest, respectively, while they are owned. Even savings accounts and CDs pay interest to their owners. Money, however, generally pays no interest. Currency in your wallet and checkable deposits do not pay interest.
What does money have to recommend it then as an asset? The answer: its liquidity. Assets are not evaluated only on the basis of their expected return. Their degree of liquidity is also an important characteristic to investors. Because money is the most liquid asset available, it is typically held as an asset. Remember, if this component of money demand exists, then a part of the total money held by the household is not for the purpose of making transactions or for precautionary reasons. It is held simply because money is one asset among several that the household considers to be worth holding. The asset demand for money depends on a household’s money (or nominal) income, just like the transactions demand and the precautionary demand. As a household’s money income rises, its wealth increases, and it will choose to hold more of all assets, including money.
The asset demand for money also depends, however, on another key factor: the rate of interest. As the rate of interest rises, interest-bearing assets become relatively more desirable. That is, the opportunity cost of holding money as an asset increases. Households, therefore, wish to hold less money, and the quantity of money demanded declines. Alternatively, as the rate of interest declines, the quantity of money demanded increases because the opportunity cost of holding it falls. That is, interest-bearing assets become relatively less attractive, and the liquidity characteristic of money makes it seem like a relatively more attractive asset. If we place the rate of interest on the vertical axis and the quantity of money demanded on the horizontal axis, then the curve representing the asset demand for money (DA) is downward sloping, indicating an inverse relationship between the quantity of money demanded and the interest rate. This situation is depicted in Figure 15.9.
The total money demand curve will be downward sloping as a result of the downward slope of the asset money demand curve. Because it represents the sum of all three curves, it lies further to the right than any one of the curves taken individually. As before, a change in money income will shift the money demand curve in the direction of the change.
Up until this point, we have been discussing the demand for money originating with a single household. Of course, each household will have its own money demand curve, reflecting its desire to hold money for transactions, for precautionary reasons, and as an asset. If we aggregate all these individual households’ demands and all the firms’ demands for money (after all, businesses will desire to hold money as well for various reasons), then we obtain the aggregate money demand curve for the entire economy. It should also be downward sloping for the reasons described in this section. Using the aggregate money demand curve, we now have a completely developed notion of one side of the market that neoclassical economists call the money market.
The demand side of the money market refers to desired money holdings. Actual money holdings, however, are reflected in the supply side of the money market. The supply of money is collectively determined by the nation’s central bank, the commercial banks, and the depositors. Among these three, the central bank has the greatest influence over the money supply. In the United States, the Federal Reserve (referred to as “The Fed”) serves as the central bank. It has the power to determine the quantity of checkable deposits and currency in circulation. In other words, the Fed determines the quantity of money supplied. At this stage, we assume that the money supply is exogenously determined. That is, the money supply is determined by central bankers, and it is independent of the interest rate. Therefore, the money supply curve (SM) is perfectly vertical as shown in Figure 15.11.
If the Fed increases the money supply, then the money supply curve shifts to the right. If the Fed reduces the money supply, then the money supply curve shifts to the left.
It is also worth noting that the supply and demand curves intersect at a specific interest rate (i*) in the money market. At this interest rate, the quantity of money that households and firms actually hold equals the quantity that households and firms desire to hold. That is, actual money holdings equal desired money holdings. In this situation, households and firms have no reason to modify their behavior, and so this interest rate represents the equilibrium rate of interest. What is not clear at this stage is how equilibrium is achieved in this market. We provide the answer to this question in the next section.
A Neoclassical General Equilibrium Model of Interest Rate Determination
In this section, we will consider a neoclassical general equilibrium model of interest rate determination. A general equilibrium model is a model that demonstrates how multiple markets simultaneously arrive at an equilibrium outcome. In earlier chapters, our focus was on partial equilibrium models. A partial equilibrium model demonstrates how a single market reaches an equilibrium outcome. Because partial equilibrium models are very easy to explain, neoclassical economists are fond of using such models at the introductory level. When discussing financial markets, however, it is helpful to use a general equilibrium framework that links together the loanable funds market, the bond market, and the money market. It can be shown that a change in any one market leads to the clearing of the other markets.
The first economist to rigorously develop a mathematical model of general equilibrium was the French economist Leon Walras. In the 1870s, Walras was one of the three economists to independently emphasize marginal changes as central to rational economic decision making. William Stanley Jevons in Britain and Carl Menger in Austria were the other two economists to participate in what would later be dubbed the marginalist revolution. The new approach to economic analysis was considered significant enough that later economists would regard this change as an event separating old-fashioned classical economics from modern neoclassical economics. Walras’s work was unique, however, in that he also developed a theory of general economic equilibrium.
An important part of Walrasian general equilibrium theory is something called Walras’s Law. Walras’s Law states that if n markets exist and n – 1 markets are in equilibrium, then the nth market must also be in equilibrium. Technically, one market cannot be out of equilibrium while the other markets remain in equilibrium. To illustrate the concept of simultaneous equilibrium in multiple markets, however, we will consider the simultaneous adjustments that occur to restore general equilibrium when one market is thrown out of equilibrium in our simple model of three financial markets. To explore how these markets adjust, let’s consider what happens when all three markets begin in equilibrium, but then an exogenous shock causes equilibrium in the money market to be disrupted. For example, suppose that the Fed increases the money supply. In this case, the money supply curve shifts to the right, as shown in Figure 15.12 (b).
As the reader can see, the rightward shift of the money supply curve creates a surplus of money in the money market at the original interest rate, i1. That is, households and firms are now holding more money than they wish to hold at the current interest rate. As a result, they will use the surplus funds to buy bonds, thereby increasing the demand for bonds from DB1 to DB2 as shown in Figure 15.12 (c). The increased demand for bonds drives up the price of bonds towards its new equilibrium level. At the same time, the higher demand for bonds is equivalent to an increase in the supply of loanable funds, and so the supply of loans shifts from SLF1 to SLF2 as shown in Figure 15.12 (a). The interest rate falls towards its new equilibrium level of i2. As the interest rate falls in the loanable funds market, it also falls in the money market, and so an increase in the quantity demanded of money occurs, represented as a movement along the money demand curve. This movement continues until the money market is also in equilibrium. The result is simultaneous equilibrium in all three markets.
Now let’s return to the initial situation of equilibrium in all three markets, but this time let’s suppose that the Fed reduces the money supply, shifting the money supply curve to the left, as shown in Figure 15.13 (b).
This time the leftward shift of the money supply curve creates a shortage of money in the money market at the original interest rate, i1. That is, households and firms are now holding less money than they wish to hold at the current interest rate. As a result, they will attempt to acquire funds by selling bonds, thereby increasing the supply of bonds from SB1 to SB2 as shown in Figure 15.13 (c). The increased supply of bonds drives down the price of bonds towards its new equilibrium level. At the same time, the higher supply of bonds is equivalent to an increase in the demand for loanable funds, and so the demand for loans shifts from DLF1 to DLF2 as shown in Figure 15.13 (a). The interest rate rises towards its new equilibrium level of i2. As the interest rate rises in the loanable funds market, it also rises in the money market and so a decrease in the quantity demanded of money occurs, represented as a movement along the money demand curve. This movement continues until the money market is also in equilibrium. Again, the result is simultaneous equilibrium in all three markets.
Once again, let’s return to the initial situation of equilibrium in all three markets, but this time let’s suppose that money incomes increase so that the demand for money rises. In this case, it is a rightward shift of the money demand curve that occurs as shown in Figure 15.14 (b).
In this case, a rightward shift of the money demand curve creates a shortage of money in the money market at the original interest rate, i1. That is, households and firms are now holding less money than they wish to hold at the current interest rate. As a result, they will attempt to acquire funds by selling bonds, thereby increasing the supply of bonds from SB1 to SB2 as shown in Figure 15.14 (c). The increased supply of bonds drives down the price of bonds towards its new equilibrium level. At the same time, the higher supply of bonds is equivalent to an increase in the demand for loanable funds, and so the demand for loans shifts from DLF1 to DLF2 as shown in Figure 15.14 (a). The interest rate rises towards its new equilibrium level of i2. As the interest rate rises in the loanable funds market, it also rises in the money market, and so a decrease in the quantity demanded of money occurs, represented as a movement along the money demand curve. This movement continues until the money market is also in equilibrium. Again, the result is simultaneous equilibrium in all three markets.
Finally, let’s return to the initial situation of equilibrium in all three markets, but this time let’s suppose that money incomes decrease so that the demand for money falls. In this case, it is a leftward shift of the money demand curve that occurs as shown in Figure 15.15 (b).
In this case, the leftward shift of the money demand curve creates a surplus of money in the money market at the original interest rate, i1. That is, households and firms are now holding more money than they wish to hold at the current interest rate. As a result, they will use their surplus funds to purchase bonds, thereby increasing the demand for bonds from DB1 to DB2 as shown in Figure 15.15 (c). The increased demand for bonds drives up the price of bonds towards its new equilibrium level. At the same time, the higher demand for bonds is equivalent to an increase in the supply of loanable funds, and so the supply of loans shifts from SLF1 to SLF2 as shown in Figure 15.15 (a). The interest rate falls towards its new equilibrium level of i2. As the interest rate falls in the loanable funds market, it also falls in the money market, and so an increase in the quantity demanded of money occurs, represented as a movement along the money demand curve. This movement continues until the money market is also in equilibrium. As before, the result is simultaneous equilibrium in all three markets.
All the cases considered clearly show that any disruption in the money market will lead to adjustments in the bond and loanable funds markets to restore equilibrium in all three markets. The analyses are also consistent with the general observation that bond prices and yields are inversely related, and with the money market in equilibrium, all firms and households are holding precisely the amount of money that they desire to hold, given their total assets.
Incorporating the Stock Market into the Analysis
At this stage, it should be clear how interest rates are explained within the neoclassical framework. Households and firms compare their actual money holdings with their desired money holdings and then buy or sell bonds when any discrepancies exist. Eventually, the rate of interest adjusts, which brings desired money holdings into line with actual money holdings. No surpluses or shortages exist in any of the financial markets, and the situation will persist unless an external shock disrupts the general equilibrium.
One financial market that has not been incorporated into the analysis thus far is the stock market. Just like with bonds, corporations issue shares of stock in order to raise funds. Unlike bonds, however, stocks are shares of ownership in the corporations that issued them. For example, Microsoft sells stock to the public. When a corporation sells stock to the public for the first time, it typically hires an investment bank, like Goldman Sachs or Morgan Stanley, to underwrite the stock issue. That is, the investment bank guarantees Microsoft a price per share and then sells the shares to the public in the primary market, pocketing a promoter’s profit in the process.
The buyer of Microsoft stock becomes a part-owner of the corporation and thus has a partial claim to the net income and assets of the firm. If the firm fails, the stockholder loses her investment. If the firm makes profits, then the stockholder may receive profit distributions in the form of dividends. The stockholder might also decide to sell the stock in the secondary market, such as the New York Stock Exchange. If the price of the stock has increased since the time it was purchased, then the stockholder will enjoy a capital gain upon selling it.
One of the benefits of stock ownership is the right to participate in shareholders’ meetings and to vote in elections that will decide the corporation’s board of directors. The possibility of large dividends and capital gains also makes stock ownership attractive, but losses may be considerable as well, and so stocks are generally rather risky. Another downside to stock ownership is that bondholders have a prior claim to the assets of the firm. If the firm fails, then the stockholders will be the last individuals to receive a share of the failing firm’s remaining assets.
Because many different companies issue their own stocks, when we refer to the stock market, we are referring to the market for many different financial assets. Just like we simplified our analysis in the previous section by referring to the bond market as the market for a single type of bond (even though many different types of bonds exist), we will discuss the stock market as the market for a single stock. Because bond prices tend to rise and fall together and stock prices tend to rise and fall together, many economists are comfortable developing theories without letting these differences stand in the way.
For simplicity, let’s suppose that a share of stock may be purchased at a price PS. To further simplify, let’s assume that annual dividend payments are expected for the next five years but that the firm will cease to exist at the end of that time period. If the annual expected dividend payments are D for the next 5 years, then we can write an equation that allows us to determine the discount factor (d*) for the stock in the following way:[5]
$P_{S}=\frac{D}{(1+d^*)}+\frac{D}{(1+d^*)^2}+\frac{D}{(1+d^*)^3}+\frac{D}{(1+d^*)^4}+\frac{D}{(1+d^*)^5}$
A strong similarity exists between this formula and the one used to determine the yield on a five-year bond. The key difference here is that the dividend payments are not guaranteed but only expected. That is, a bondholder knows the interest payments that she will receive (unless the corporation defaults, that is, fails to pay the bondholder). For the stockholder, the payments are much less certain. They may be higher or lower than D. They may be suspended if the Board of Directors decides that reinvestment of the firm’s profits is a superior move. Any number of factors might disrupt the payment of these expected dividends. As a result, stocks are inherently riskier than bonds, and stockholders demand a premium to compensate them for this additional risk. The discount factor is, therefore, higher than the interest rate that we used to discount the interest payments associated with a bond. The discount factor applied to stocks (d*), in other words, will be the interest rate on bonds (i*) plus some additional amount to compensate for risk (i.e., a risk premium). That is, d* > i*. If the annual expected dividend payments (D) equal the fixed interest payments (A) on a bond issued by the same company (i.e., D = A), then the price of the stock will be lower than the price of the bond. That is, PS < PB. In other words, other factors the same, a stock should have a lower price than a bond due to its greater risk. The expected yield for the stock would then be higher than the yield on the bond. For this reason, stocks are generally viewed as potentially more lucrative than bonds.
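As a rough numerical check on this reasoning (all values below are assumptions chosen for illustration, not figures from the text), the sketch prices the same five-year payment stream at the bond rate i* and at d* = i* plus a risk premium:

```python
# Sketch: equal expected payments (D = A) priced at the bond rate i* and at
# the higher discount rate d* = i* + risk premium. Values are illustrative.

def present_value(payment: float, rate: float, years: int = 5) -> float:
    return sum(payment / (1 + rate) ** t for t in range(1, years + 1))

payment = 100.0        # D = A: same expected annual payment for stock and bond
i_star = 0.05          # hypothetical equilibrium bond rate
risk_premium = 0.03    # hypothetical premium demanded by stockholders
d_star = i_star + risk_premium

P_B = present_value(payment, i_star)  # bond price
P_S = present_value(payment, d_star)  # stock price

print(round(P_B, 2), round(P_S, 2))   # P_S < P_B, as argued above
```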
To represent this situation graphically, we may consider the supply and demand for bonds and the supply and demand for stocks. Because stocks are riskier, having uncertain dividend payments, the demand for stocks will be lower than the demand for bonds. The equilibrium price of stocks (PS*) will, therefore, be lower than the equilibrium price of bonds (PB*), as shown in Figure 15.16, assuming other factors are the same, such as the expected payments associated with each asset.
One might expect this situation to be temporary. If stocks are cheaper, then bondholders might be expected to sell their bonds and buy stocks instead. If bondholders behave in this way, then bond prices will fall and stock prices will rise until they are equal. This result will not occur, however, because the price differences are driven by the lower demand for stocks due to their greater perceived risk. The equilibrium is, therefore, stable, and the price discrepancy (PB* – PS*) will persist if the perceived degree of riskiness of the stock does not change.[6]
Real Interest Rates versus Nominal Interest Rates
One additional aspect of the neoclassical theory of interest rates that deserves attention is the neoclassical distinction between nominal interest rates and real interest rates, which was introduced in Chapter 12. The nominal interest rate is the interest rate that is actually observed in the marketplace. As stated previously, many different interest rates are observed in the marketplace, and each interest rate is a nominal interest rate. Because these nominal interest rates tend to move together over time, economic theories frequently refer only to the nominal interest rate.
The real interest rate, on the other hand, refers to the interest rate corrected for inflation. For example, suppose that you lend $100 to someone at a 5% nominal rate of interest. In one year, your money will have grown to $105. If the prices of goods have increased in the meantime, however, then the purchasing power of your money may not have increased at all. The purchasing power of money refers to its real value, or the amount of actual goods it can purchase. The real interest rate then refers to the percentage increase in the purchasing power of a sum of money during the period of a loan.
To calculate the real interest rate, we assume that a sum of money (M) is loaned to an individual who is charged the nominal interest rate (i). We also assume that the general price level, as measured by the GDP Deflator or the Consumer Price Index (CPI), is denoted as P. The real value of M at the time the loan is made is M/P. For example, if M is equal to $10 and a specific type of apple priced at $0.50 each is the only good produced in the economy, then the real value of M is 20 apples (= $10/$0.50 per apple). Of course, when using a price index, the real value of the money will be stated in terms of constant base year dollars.
Now let’s suppose that M/P is loaned to a borrower. The real interest rate (r) will be the percentage change in M/P over the course of the year. The reader might know that when considering the percentage change in the ratio of two variables, it is possible to estimate the solution by subtracting the percentage change in the denominator from the percentage change in the numerator:
$r=\%\Delta \frac{M}{P} \approx \%\Delta M-\%\Delta P$
This shortcut method of calculating the real interest rate has a helpful interpretation. The percentage change in M is simply the nominal interest rate (i) that the borrower is charged. The percentage change in P is the rate of inflation (π). Therefore, the real interest rate may be calculated as follows:
$r=i-\pi$
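Before turning to the worked examples below, here is a minimal sketch of this calculation (the rates are illustrative; the second function shows the exact purchasing-power calculation implied by the definition of M/P, alongside the approximation used in the text):

```python
# Sketch: real interest rate with illustrative values. The first function is
# the approximation used in the text; the second is the exact calculation of
# the growth in purchasing power, (1 + i)/(1 + pi) - 1.

def real_rate_approx(nominal: float, inflation: float) -> float:
    return nominal - inflation

def real_rate_exact(nominal: float, inflation: float) -> float:
    return (1 + nominal) / (1 + inflation) - 1

print(round(real_rate_approx(0.05, 0.05), 4))  # 0.0   -> a 0% real rate
print(round(real_rate_approx(0.05, 0.08), 4))  # -0.03 -> a negative real rate
print(round(real_rate_exact(0.05, 0.08), 4))   # -0.0278, close to the approximation
```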
That is, the real interest rate is simply the difference between the nominal interest rate and the inflation rate. Therefore, if $100 is loaned out for one year at a 5% nominal interest rate and the inflation rate for that year is also 5%, then the real interest rate is 0% (= 5% – 5%). That is, the rising price level completely wipes out the nominal increase in the sum of money, leaving the lender no better off and no worse off than before. Because of the tendency for inflation to wipe out the gains from lending, lenders will aim to charge a nominal interest rate that is high enough to cover the inflation rate and then allow for a positive real interest rate. Of course, the real interest rate will be determined competitively in the financial markets, as we have discussed. The reader should note that it is possible for the real interest rate to be negative. Even if the nominal interest rate is 5%, the real interest rate will be -3% if the inflation rate is 8%. In that case, the lender will receive back a sum of money that is nominally larger than the amount originally loaned out (by 5%), but it will have 3% less purchasing power due to the high inflation rate.

An Austrian Theory of Interest Rate Determination

The neoclassical school of economic thought offers only one among several approaches to interest rate determination. The Austrian school of economics has also made important contributions to the theories of capital and interest. Eugene von Bohm-Bawerk developed a theory of capital in the late nineteenth century that served as a theory of interest rate determination. His theory was based on the physical productivity of capital, whereas Ludwig von Mises in the early twentieth century focused more on the role of subjective preferences in the formation of the rate of interest. In this section, we will look at one way of representing Mises’s theory of interest within the context of a pure exchange economy. Because this theory can be developed without any reference to the theory of production, it serves as a straightforward introduction to the Austrian theory of interest.[7]

In developing his theory of interest, Mises relied heavily upon the concept of time preference. As described in Chapter 14, time preference refers to the fact that consumers have specific preferences regarding the time pattern of consumption. A consumer with a positive time preference prefers to consume in the relatively near future as opposed to the relatively distant future. A consumer with a negative time preference prefers to consume in the relatively distant future as opposed to the relatively near future. Finally, a consumer with a neutral time preference, or a time preference of zero, is indifferent between consuming in the relatively near future and the relatively distant future.[8]

For example, suppose that three consumers have specific preferences regarding how many apples to consume in each of five periods of time. Table 15.1 represents the time preferences of the three consumers. Clearly, consumer 1 has a positive time preference because she would like to consume more apples in the current period and in the relatively near future and fewer apples in the relatively distant future. Consumer 2 has a negative time preference. His desired consumption increases in the relatively more distant future periods.
Finally, consumer 3 is comfortable consuming the same number of apples in every period regardless of when the apples will be consumed.[9] Although we can represent all of these cases, most consumers possess positive time preferences. That is, consumers prefer to consume now rather than later, but because these preferences are entirely subjective, negative and neutral time preferences are theoretically possible.

To further develop the Austrian theory of interest, let’s consider an example in which three consumers receive endowments of apples in each of seven periods with the first period representing the current period. Table 15.2 shows the time pattern of endowments for each of the three consumers over the course of the seven time periods. Of course, each consumer might have a time preference that differs substantially from their time pattern of endowments, as shown in Table 15.3, which allows us to compare the time pattern of endowments with desired consumption for each consumer.

Rational people will try to bring their actual consumption into line with their desired consumption. If we assume zero storage costs, then a person with a negative time preference will be able to solve this problem herself. She simply needs to save her apples until later periods, assuming that the endowments are large enough in the periods closer to the present to allow for this plan to be executed. Since most people possess positive time preferences (like the three consumers in Table 15.3), however, consumers will typically want to transfer future apples to the time periods that are closer to the present. The problem is that apples cannot be transferred from the future to the present. A problem of time asymmetry exists. Time asymmetry means that present goods can be transferred to the future, but future goods cannot be transferred to the present.[10]

The problem of time asymmetry means that consumers must find another method of increasing present consumption at the expense of future consumption. The way that consumers achieve this goal is by selling paper claims to their future apples to other consumers in the present. No consumer will pay one apple for a claim to one future apple because the consumer will gain nothing. A consumer will only give one apple in exchange for a paper claim to a future apple if the consumer is promised something extra as well. This something extra is called interest. Therefore, interest exists as a direct result of the presence of positive time preferences and the problem of time asymmetry. The market for paper claims to future production (i.e., the bond market) is also a direct result of subjective preferences for future goods.

To determine desired net borrowing for one consumer in a specific period, we only need to calculate the difference between her desired level of consumption and her endowment.[11] That is:

Desired Net Borrowing = Desired Consumption per Period – Endowment per Period

Table 15.4 shows the amount of desired net borrowing per period for each consumer. This amount of net borrowing is required for each consumer to achieve her desired consumption per period. Table 15.4 shows that each consumer wishes to borrow in the current period and in the periods that are closer to the present time (for the most part). They would like to lend in the later periods, as reflected in the negative desired net borrowings in the later periods.
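Because Tables 15.2 through 15.4 are not reproduced here, the short sketch below uses made-up apple streams for a single consumer simply to show how desired net borrowing is computed period by period:

```python
# Sketch: desired net borrowing = desired consumption - endowment, period by
# period, for one consumer. The apple streams are hypothetical and do NOT
# reproduce Tables 15.2 through 15.4.

endowment           = [2, 3, 4, 5, 6, 6, 6]  # apples received in each of 7 periods
desired_consumption = [6, 5, 5, 4, 4, 4, 4]  # a positive time preference

desired_net_borrowing = [c - e for c, e in zip(desired_consumption, endowment)]
print(desired_net_borrowing)  # positive = wishes to borrow, negative = wishes to lend
```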
The problem that each consumer faces, however, is the constraint that the total consumption per period across all three consumers cannot exceed the total endowment per period for all three consumers. Trades of apples for paper claims can occur in a given period, but future apples cannot be received in the present. Any apples obtained above one’s endowment can only be the result of another consumer consuming an amount below her endowment. The situation depicted in Table 15.4 is impossible because, for example, the total desired consumption in period 1 is 27 apples, but the total endowment in period 1 is only 11 apples. Therefore, this desired allocation is impossible. Another way of stating the same result is that the total actual net borrowing per period can never exceed zero. Any net borrowing must be balanced by net lending. In Table 15.4, desired net borrowing in the current period is positive for all three consumers. It is not possible for all three consumers to borrow in a specific period, and so these plans cannot be satisfied, and this ideal situation cannot arise.

The consumers resolve these difficulties to the best of their abilities by creating a bond market and supplying and demanding paper claims to future apples. The supply and demand for these paper claims will determine an equilibrium rate of interest. Let’s assume that this problem is resolved in all periods and that the interest rate is 100% in every period with all interest paid in the very last period. This greatly simplifying assumption allows us to represent one possible result. After all this haggling occurs, the actual consumption per period for each consumer will be determined as shown in Table 15.5.

The actual consumption per period simply represents a redistribution of the total endowment in each period across the three consumers. It is determined as the consumers decide whether to borrow or lend in each period. The actual net borrowing per period for each consumer can also be calculated as follows:

Actual Net Borrowing = Actual Consumption per Period – Endowment per Period

In this example, the total actual net borrowing per period is equal to zero in each period. Negative values for actual net borrowing indicate net lending and positive values indicate net borrowing. Table 15.2 shows that the aggregate endowment is 96 apples. In Table 15.5, the grand total for actual consumption is also 96 apples.[12]

The final three columns of Table 15.5 also show how the receipt of net interest in the final period affects the final results. Because the interest and principal are paid in period 7, those numbers are indicated in bold. The bold numbers represent the only difference between actual consumption per period plus net interest and actual consumption per period omitting net interest, as shown in Table 15.5. Net interest is calculated as follows:

Net Interest = Principal and Interest Received – Principal and Interest Paid

For example, to calculate the net interest for consumer 1 in period 7, simply multiply 2 times 9 (=3+2+3+1) to obtain 18 apples. This amount is the principal and interest received. It is obtained by multiplying each apple loaned out by 2, shown as negative net borrowings for consumer 1. Remember, the interest rate is assumed to be 100%, and so the multiplication by 2 ensures that we account for both principal and interest. Next, determine the principal and interest paid by consumer 1 by multiplying each apple borrowed by 2, shown as positive net borrowings for consumer 1.
That is, multiply 2 times 6 (=1+3+2) to obtain 12 apples. Finally, using these results, add 3 apples of actual consumption (found in the column that omits net interest) to 18 apples (principal and interest received), and then subtract 12 apples (principal and interest paid). The result is 9 apples.

In the end, the total actual consumption plus net interest across all three consumers is equal to 96 apples. Hence, all apples in the initial endowment are reallocated across time to improve the situation of each consumer, although no consumer achieves their most desired allocation. It is also worth noting that consumers 1 and 2 experience negative net borrowings overall, which means they are net lenders. Consumer 3, on the other hand, is a net borrower with a positive total of actual net borrowings. What this means is that, once the net interest is paid, consumers 1 and 2 end up with a total number of apples in excess of their original endowments of 32 apples each. Consumer 3, on the other hand, ends up with 27 apples and thus consumes less than the 32-apple endowment received.[13] The reason, of course, is that consumer 3 is a net debtor and must pay a fair amount in interest for consuming the most in the first three periods.

The model presented here takes certain liberties in extending Mises’s theory to illustrate a number of key points. The main point should be clear, however: interest arises inevitably as a result of the different subjective time preferences of consumers. Because of the problem of time asymmetry, it is not possible for consumers to transfer future goods to the present. Hence, they engage in trade with one another in the present period and create a market for paper claims to future goods. A positive rate of interest is inevitable because otherwise no one would have an incentive to hand over present goods to another person. The rate of interest that results is determined competitively in the marketplace through a process of haggling. This representation of the Austrian theory of interest does not show how this process leads to the formation of a market rate of interest, but it does demonstrate that such a rate of interest arises because consumers seek to fulfill their consumption plans over time.

A Marxian Theory of Interest Rate Determination

Marxian economists also have ideas about how interest rates are determined. Karl Marx wrote about the formation of the rate of interest in Volume 3 of Capital, which was published after Marx’s death by his friend and collaborator Frederick Engels in 1894. In this section, we will examine one interpretation of Marx’s interest rate theory.[14]

In Marxian economics, the rate of interest is related in a logical way to the rate of profit (p) and another rate that Theodore Lianos calls the rate of profit of enterprise (re). To demonstrate the relationship between these three rates, it is necessary to begin with an identity. That is, aggregate profit (P) in the economy is identically equal to aggregate interest (I) plus aggregate profit of enterprise (Re) as follows:

$P=I+R_{e}$

In other words, capitalist enterprises possess a total amount of profit, part of which is used to pay interest for the use of borrowed money capital. The other part is profit kept by the enterprise for its own internal use or for distribution to shareholders.
The next step is to divide both sides by the aggregate capital, which consists of variable capital (V) and constant capital (C) as follows:

$\frac{P}{C+V}=\frac{I}{C+V}+\frac{R_{e}}{C+V}$

Next, we multiply each expression on the right-hand side of the equation by ratios that are equal to 1, thus maintaining the equality. That is:

$\frac{P}{C+V}=\frac{I}{C+V}\cdot \frac{A}{A}+\frac{R_{e}}{C+V}\cdot \frac{C+V-A}{C+V-A}$

A in this equation refers to the total borrowed money capital. Therefore, C+V–A refers to the total non-borrowed money capital. Rearranging the terms a bit yields the following result:

$\frac{P}{C+V}=\frac{I}{A}\cdot \frac{A}{C+V}+\frac{R_{e}}{C+V-A}\cdot \frac{C+V-A}{C+V}$

In this case, A/(C+V) refers to the fraction of the total capital that is borrowed. Similarly, (C+V-A)/(C+V) refers to the fraction of the total capital that is not borrowed. If we set k = A/(C+V), then the equation may be written as follows:

$\frac{P}{C+V}=\frac{I}{A}\cdot k+\frac{R_{e}}{C+V-A}\cdot (1-k)$

As explained in Chapter 4, P/(C+V) represents the rate of profit for the economy as a whole. In Chapter 4, we defined the rate of profit as the aggregate surplus value divided by the total capital advanced. However, if values have been transformed into production prices, then it is more appropriate to refer to the ratio of aggregate profit to aggregate capital advanced. The expression I/A is the rate of interest. It is simply the amount of interest paid (received) divided by the amount of capital borrowed (lent). The expression Re/(C+V-A) is the rate of profit of enterprise. It is the profit of enterprise divided by the amount of non-borrowed money capital. Finally, k refers to the fraction of the total capital that is borrowed. If we substitute the symbols p for the rate of profit, i for the rate of interest, and re for the rate of profit of enterprise, then we have the following result:

$p=ik+r_{e}(1-k)$

This final expression states that the rate of profit is equal to a weighted average of the rate of interest and the rate of profit of enterprise, where k and 1-k serve as the weights. If we assume that the rate of profit is given, we can then represent all the combinations of the interest rate and the rate of profit of enterprise that are possible in a two-dimensional space. To do so, we solve the above expression for re as follows:

$r_{e}=\frac{p}{1-k}-\frac{k}{1-k}i$

This equation turns out to be linear with a vertical intercept of p/(1-k) and a constant slope of –k/(1-k). Given the rate of profit, we can graph the equation as in Figure 15.17. It should be clear that the interest rate and the rate of profit of enterprise are inversely related. That is, as the interest rate rises, the rate of profit of enterprise must fall, and vice versa. This negative relationship is reflected in the downward slope of the line.

More can be stated about the relationship between the three rates that we are considering. For example, consider the point at which the rate of interest and the rate of profit of enterprise are equal (i.e., re = i). If we draw a line through all the combinations of re and i where these rates are equal, then we obtain a positively sloped line with the equation re = i. The reader should note that this line has a slope of 1 and will rise at a 45-degree angle relative to the horizontal axis. The point at which this 45-degree line intersects the line that shows the relationship between i and re represents the unique point where re and i are equal, given this profit rate.
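To make the algebra concrete, the sketch below evaluates the line re = p/(1 – k) – [k/(1 – k)]·i for an assumed profit rate and borrowing share (both values are illustrative) and confirms that the weighted-average identity holds at each point:

```python
# Sketch: the downward sloping line relating the rate of profit of enterprise
# (re) to the interest rate (i), given an assumed profit rate p and an assumed
# borrowed share k. All numbers are illustrative.

p = 0.20  # hypothetical aggregate rate of profit
k = 0.40  # hypothetical fraction of total capital that is borrowed

def profit_of_enterprise_rate(i: float) -> float:
    """re = p/(1-k) - [k/(1-k)] * i, which is decreasing in i."""
    return p / (1 - k) - (k / (1 - k)) * i

for i in (0.05, 0.10, 0.20):
    re = profit_of_enterprise_rate(i)
    # the identity p = i*k + re*(1-k) holds at every point on the line
    print(i, round(re, 4), round(i * k + re * (1 - k), 4))
```

In this illustration, the final row is the point where re = i; as the next step in the argument shows, at that point both rates also equal p.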
If we set i = re and plug it into the equation of the line relating the two variables, then we obtain the following result:

$p=ik+i(1-k)\Rightarrow p=i$

If p = i, then it necessarily follows that p = re as well, since i = re. All these results are captured in Figure 15.18. A few interesting results necessarily follow. If the economy is on the downward sloping line at a point above the intersection with the 45-degree line, then the following results must hold:

$r_{e}>p,\;r_{e}>i\;and\;p>i$

On the other hand, if the economy is on the downward sloping line at a point below the intersection with the 45-degree line, then the following results must hold:

$r_{e}<p,\;r_{e}<i\;and\;p<i$

This entire discussion has assumed a constant aggregate rate of profit (p). Over the course of the business cycle, however, the rate of profit fluctuates.[15] Because the vertical and horizontal intercepts of the downward sloping line depend on p, while the slope does not, it follows that parallel shifts to the right (during an expansion) and to the left (during a contraction) will occur over the course of the business cycle.[16] If the rate of profit fluctuates in a way that can be explained, and the rate of interest fluctuates in a way that can be explained over the course of the business cycle, then fluctuations in the rate of profit of enterprise should be explainable as well using the equation of our downward sloping line.[17] That is, with the movements in two out of three variables explained, the third variable must be explained as well given the logical relationship between the three rates. In Chapter 14, we discussed the fluctuations in the rate of profit according to Marxian theory. Hence, our goal is to explain movements in the rate of interest, which will then complete the explanation of all three rates.

According to Marx, the rate of interest in a capitalist economy is competitively determined in the market for loan capital. This claim sounds rather similar to the neoclassical claim that the loanable funds market plays a central role in the determination of the rate of interest. A key difference exists, however, between Marx’s theory and the neoclassical theory. Unlike in neoclassical theory, changes in production are the key to interest rate fluctuations over the course of the business cycle. As output rises and falls, the demand for loan capital changes in response to the needs of industrial capitalists.[18] The degree of tightness in the money capital market then determines whether the interest rate rises or falls as well as the speed of the adjustments.[19]

To clarify the argument, let’s assume that the demand for loanable money capital is positively related to the level of production (i.e., output). Let’s also assume that the supply of loanable money capital is given or fixed. Both assumptions are reflected in Figure 15.19 (a).[20] As output rises during an expansion, beginning at Q1, the quantity supplied exceeds the quantity demanded of loan capital. As a result, the interest rate declines as shown in Figure 15.19 (b). Because the gap between the two curves becomes smaller, the reduction in the interest rate is relatively slow. Once output rises beyond Q*, the demand for loan capital exceeds the supply, and the interest rate rises. Because the gap grows rapidly, the interest rate rises very quickly. After the level of output peaks at Q2, it begins to fall, indicating that a recession has begun.
Since the quantity demanded still exceeds the quantity supplied, the rate of interest continues to rise but much more slowly because the gap is becoming smaller. Once output falls below Q*, the interest rate falls very quickly due to the rapid increase in the gap between the supply and demand for loanable funds. Once the trough of the recession is reached at Q1, output begins to rise once more with a new expansion, and the cycle begins again.[21]

Now that the movement of the rate of interest has been fully explained, this knowledge can be combined with what we know about the movement of the profit rate over the course of the business cycle and the identity that relates the profit rate, the interest rate, and the rate of profit of enterprise. All this information provides an explanation for the movement of the rate of profit of enterprise over the course of the business cycle. With the interest rate following the pattern in Figure 15.19 (b), we can expect the rate of profit of enterprise to fluctuate opposite the interest rate fluctuations, given the negative relationship between the two rates.[22]

In closing, it is worth reflecting on the key differences between the Marxian and neoclassical theories of the rate of interest. First, the Marxian theory suggests that the level of production is the most important factor determining movements in the rate of interest, as opposed to the responses of borrowers and lenders to shortages and surpluses in the market for loanable funds.[23] Second, the Marxian theory is a disequilibrium theory of the rate of interest as opposed to its neoclassical counterpart. Even though a point exists (Q*) where quantity supplied and quantity demanded are equal in the market for loan capital, the economy has no inherent tendency to move towards this point because the level of production is being driven by other factors.[24] Finally, the rate of interest in Marxian theory is linked to the rate of profit, and the rate of profit reflects how successful capitalists have been in terms of exploiting labor-power. That is, interest income is a portion of the aggregate profit, and aggregate profit is explained on the basis of the surplus labor performed by the working class. By contrast, in neoclassical theory, the rate of interest is the price paid for loanable funds, and the loanable funds market is the market that harmonizes the interests of savers who have a surplus of funds and borrowers who are short of funds. In other words, this market transfers funds from those who lack an efficient use for funds to those who have an efficient use for funds. In short, it is not only the mechanics of interest rate determination that differ across the two theories; it is also the social meaning of the interest rate and the market in which it is determined.

Following the Economic News [25]

In The Globe and Mail (Canada), Ian McGugan recently described changes in the Canadian and U.S. stock and bond markets. He describes the bond market as “gloomy” and the stock market as “still largely upbeat.” McGugan notes that the markets are sending signals that provide conflicting messages about the likelihood of a U.S. recession. The high share prices, for example, suggest optimism about future economic prospects. The gloomy bond market suggests the opposite. McGugan explains that the bond market has recently become much more pessimistic.
The pessimism, McGugan argues, is reflected in a recent increase in bond prices and falling bond yields due to “investors’ newfound eagerness to buy bonds.” That is, a higher demand for bonds has pushed up the prices of U.S. Treasury bonds and pushed down the interest rates on U.S. Treasury bonds. This “flight to safety” is expected when investors are pessimistic about the future and anticipate a recession and a falling stock market. McGugan explains that this shift in investor sentiment is occurring on a global scale. Citing Bloomberg data, McGugan explains that “[n]early US$11 trillion worth of bonds around the world are now yielding below zero per cent,” which represents the largest volume of bonds with negative interest rates since 2016. The negative interest rates suggest that the demand for these safe-haven bonds is so high that their market prices are producing negative yields. For example, a Treasury bill with a negative interest rate would sell for a price that exceeds its face value. An investor would purchase it at a price that is higher than the amount received in the future, which translates into a negative interest payment. As McGugan explains, “an enormous amount of money is now being bet on the proposition that it’s better to own bonds, even at negative rates, than to take on stock market risk in a slowing global economy.” Due to strong recent corporate earnings, however, McGugan argues that the relative calmness of the stock market makes sense. Nevertheless, McGugan explains that eventually “stocks will feel the pain if the bond market is right about slowing economic growth.” In that scenario, the demand for stocks will fall as the demand for bonds continues to rise. Bond prices will rise relative to stock prices as our supply and demand models of the stock and bond markets indicate.
Summary of Key Points
1. To calculate the future value (FV) of a sum of money, it is necessary to multiply the present sum by $(1+i)^{n}$. To calculate the present value (PV) of a sum of money, it is necessary to divide the future sum by $(1+i)^{n}$. (A brief worked example appears after this list.)
2. To calculate the present value of a bond, it is necessary to add up the present value of each future payment associated with the bond.
3. Interest rates and bond prices are always inversely related.
4. The demand for loanable funds is equivalent to the supply of bonds, and the supply of loanable funds is equivalent to the demand for bonds.
5. In equilibrium, the price of a bond is equal to its present value.
6. The money market is not the market for short-term securities but the market in which discrepancies between actual money holdings and desired money holdings influence the rate of interest.
7. The three components of money demand are the transactions demand, the precautionary demand, and the asset demand for money.
8. In the neoclassical model of interest rate determination, competition causes the loanable funds market, the bond market, and the money market to all clear simultaneously, leading to a general equilibrium.
9. Other things equal, an equilibrium stock price will be lower than an equilibrium bond price due to the greater risk and uncertainty of the dividend payments associated with the stock.
10. The real interest rate is calculated as the nominal interest rate minus the rate of inflation.
11. In Austrian theory, a positive rate of interest emerges because consumers have different time preferences and because the problem of time asymmetry exists.
12. In Marxian theory, changes in the level of production lead to changes in the degree of tightness in the loan capital market, which then lead to fluctuations in the rate of interest.
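As a brief numerical illustration of points 1 through 3 (the figures are chosen purely for illustration), suppose the interest rate is 5% and the horizon is two years. A present sum of $100 grows to a future value of

$FV=100 \cdot (1+0.05)^{2}=110.25$

and, working in reverse, $110.25 received two years from now has a present value of

$PV=\frac{110.25}{(1+0.05)^{2}}=100$

Similarly, a bond that pays $100 at the end of each of the next two years has a present value of $\frac{100}{1.05}+\frac{100}{(1.05)^{2}} \approx 185.94$. If the interest rate rises to 10%, that present value falls to roughly $173.55, which illustrates the inverse relationship between interest rates and bond prices noted in point 3.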
List of Key Terms
Rate of interest
Principal
Compound interest
Future value
Present value
Discounted
Bond
Coupon bonds
Term to maturity
Yield to maturity
Loanable funds market
Equilibrium interest rate
Equilibrium quantity of loanable funds
Bond market
Equilibrium bond price
Equilibrium quantity of bonds
Money market
Money market instruments
Money
Liquidity
Actual money holdings
Demand for money
Transactions demand for money
Precautionary demand for money
Capital gain
Speculative demand for money
Asset demand for money
General equilibrium model
Partial equilibrium model
Marginalist revolution
Walras’s Law
Stocks
Underwriting a stock issue
Primary market
Promoters’ profit
Dividends
Secondary market
Default
Nominal interest rate
Real interest rate
Purchasing power
Time preference
Positive time preference
Negative time preference
Neutral time preference
Time asymmetry
Profit of enterprise
Rate of profit
Rate of profit of enterprise
Problems for Review
1. Suppose you lend $250 for two years at an annual interest rate of 6.5%. What is the future value of your loan at the end of year 2?
2. Suppose you purchase an asset that will pay you $1,000 in three years. If the annual interest rate is 3.5%, then what is the present value of your asset?
3. Suppose you buy a bond that will pay you $300 per year for the next four years. If the annual interest rate is 4%, then what is the price of your bond if the market is in equilibrium?
4. Suppose you purchase a stock that is expected to pay a dividend of $300 per year for the next four years. If the discount rate is 5.5%, then what is the stock price in this case? Is this result the one you would expect when comparing this answer to the one you obtained in question 3? Why or why not?
5. Assume that the loanable funds market, the bond market, and the money market are all in equilibrium. What will happen if the central bank reduces the money supply? Explain what occurs in the money market and how these changes lead to additional changes in the bond and loanable funds markets.
6. Suppose that you lend money at an annual interest rate of 12% and the annual inflation rate is 7%. What is the real interest rate that you earn?
7. What is the actual net borrowing of a consumer if the actual consumption in a period is 6 apples and the endowment in that same period is 8 apples? What does the sign of your answer imply?
8. Suppose the rate of profit is 50% and the proportion of capital borrowed by industrial capitalists is 75%. Write the equation of the line that relates the rate of profit of enterprise to the rate of interest. Next draw a graph of this line in a space with the rate of profit of enterprise on the vertical axis and the rate of interest on the horizontal axis. Label the horizontal and vertical intercepts as well.
1. See Mankiw (1997), p. 58, for a justification of references to “the interest rate.”
2. Mishkin (2006), p. 105, footnote 4, emphasizes this point.
3. McConnell and Brue (2008), p. 260, present the transactions demand for money as a vertical line. This presentation adds the precautionary demand as a second source of money demand that is independent of the interest rate.
4. For example, Samuelson and Nordhaus (2001), p. 520-521, and McConnell and Brue (2008), p. 259-261, focus on the transactions demand and the asset demand for money only. Mishkin (2006), p. 521-522, on the other hand, concentrates on the transactions, precautionary, and speculative motives for holding money.
5. Mishkin (2006), p. 143, includes a similar formula for the calculation of a stock price. Mishkin, however, includes a final term representing the final sales price of the stock at the end of the holding period. It is omitted here just as the face value of the coupon bond (i.e., the return of the principal) was omitted in our discussion of how to calculate the present value of a coupon bond.
6. A similar argument can be used to explain the risk structure of interest rates. See Mishkin (2006), p. 120-127.
7. The model presented in this section uses as a starting point the interpretation of Mises’s theory of interest as presented in Moss (1978), p. 157-166. Nevertheless, certain liberties have been taken for ease of exposition.
8. Moss (1978), p. 161, attributes the distinction between positive, negative, and neutral time preferences to economists Irving Fisher and Gary Becker.
9. See Moss (1978), p. 161, where he states that “an individual satisfied with an equal number of apples in each time period is said to display neutral or zero time preference.”
10. See Moss (1978), p. 163-164, for an explanation of the asymmetry in the time market.
11. Moss (1978), p. 160, defines net borrowing as the difference between desired consumption and the endowment.
12. Moss (1978), p. 163, states that “total apple consumption for the entire society must equal total apple endowment when both totals are summed over all individuals and all periods.”
13. Moss (1978), p. 163, explains that it is possible “for a single individual’s total n-period consumption to be greater or less than his total aggregate apple endowment. Whether it will be greater or less depends, of course, on whether over the n periods he was a net interest payer or receiver.”
14. The interpretation that is discussed in this section was developed by Lianos (1987). Aside from a few notational changes, the mathematical presentation in this section is found in Lianos’s original article. The original source for this material is: Lianos, Theodore. "Marx and the Rate of Interest." Review of Radical Political Economics. Volume 19 (3): 34-55, copyright © 1987 by Sage Publications. Adapted versions of Figures 1, 4, and 5 with equations 1-6: Reprinted by Permission of Sage Publications, Inc.
15. Lianos (1987), p. 36.
16. Lianos (1987), p. 37, shows these parallel shifts in his Figures 1 and 2.
17. Lianos (1987), p. 38, shows how the rate of profit of enterprise and the rate of interest fluctuate relative to one another in his Figure 3.
18. Lianos (1987), p. 45.
19. Lianos (1987), p. 45-46.
20. The graphs in Figure 15.19 have been re-created using Figures 4 and 5 in Lianos (1987), 46-47.
21. Ibid. p. 46-47.
22. Lianos (1987), p. 37-38, describes the complicated relationship between the two rates throughout the business cycle.
23. As Lianos (1987), p. 52, puts it, “the real sector dominates the monetary sector.”
24. Lianos (1987), p. 52, states that the “differences between demand and supply create equilibrating tendencies but they do not lead to equilibrium and stability of the interest rate, because the moving force in this model is the process of accumulation and income growth.”
25. McGugan, Ian. “Opinion; With bond market gloomy, why are stocks still near record levels?” The Globe and Mail (Canada), Ontario Edition. 30 May 2019. | textbooks/socialsci/Economics/Principles_of_Political_Economy_-_A_Pluralistic_Approach_to_Economic_Theory_(Saros)/03%3A_Principles_of_Macroeconomic_Theory/15%3A_Theories_of_Financial_Markets.txt |
Goals and Objectives:
In this chapter, we will do the following:
1. Identify what most economists consider to be the traditional functions of money
2. Define several different types of money
3. Distinguish between the neoclassical and Austrian measures of the money supply
4. Examine the difference between 100% reserve banking and fractional reserve banking
5. Explore the different items on a commercial bank’s balance sheet
6. Investigate the neoclassical theory of banks and its link to the theory of financial markets
7. Develop a Marxist theory of banks and link it to the theory of financial markets
In Chapter 15, we explored three theories of financial markets. Our focus was on the way in which financial markets determine the rate of interest and cause fluctuations in the rate of interest over time. Our purpose in this chapter is to incorporate the role of money into our neoclassical and heterodox analyses. Therefore, this chapter explores the topic of money, including its functions, major types, and most important measures. How to measure money is somewhat tricky, and neoclassical and Austrian economists do not agree on the best method of measuring its total quantity. This chapter also discusses the banking system, which is closely connected to the subject of money and its amount. A financial statement known as a balance sheet is introduced in this chapter so that it is easier to explore the way commercial banks’ activities alter their financial positions and affect the quantity of money in the economy. This background makes it possible to compare two competing theories of how commercial banks operate and how they influence the financial markets. The final part of the chapter, therefore, contrasts the neoclassical and Marxist theories of commercial bank behavior and how those theories connect to the neoclassical and Marxist theories of financial markets discussed in Chapter 15.
The Traditional Functions of Money [1]
When neoclassical economists define money, they do not associate it with any specific object but rather with the functions that it fulfills. In other words, if an object fulfills certain functions, then it is regarded as money. If it does not fulfill these functions, then it is not money. The first function of money is medium of exchange. To understand this function, it helps to think about a barter economy. A barter economy is an economy in which no money exists and commodities are exchanged for other commodities. For example, five textbooks might exchange for one desk. Barter economies allow owners of commodities to trade for the commodities that they want, but they face one major problem that is referred to as the double coincidence of wants problem. For example, suppose that I have five textbooks and would like to purchase one desk. To make this trade, I need to find someone who has a desk to sell. The problem that arises for me, however, is that this person must also want my textbooks if the trade is to occur. For our wants to exactly coincide in this manner would require a double coincidence: the desk owner wants what I have and I want what the desk owner has. If money is introduced into this economy, then the problem is solved. Why? Money by its nature is universally regarded as valuable. Even if the desk owner does not want my textbooks, the desk owner will be pleased to accept my money because it can be used to purchase something else that the desk owner wants, such as a painting, from someone else. That is, money serves as a medium of exchange. It facilitates exchange by making possible the exchange of the desk for the painting, even though this exchange does not occur directly. With money regarded as universally valuable, now I only need to find someone with the commodity that I wish to buy (assuming I have sufficient money to pay the price), and the double coincidence of wants problem is solved.
It should now be clear why neoclassical economists regard money as a means of increasing efficiency. The time required to find someone with whom to make a trade in a barter economy is much greater than the time required in a monetary economy. Because money cuts down on the time required to find someone with whom to make a trade, that freed up time can be used for more productive activities.[2] Also, because of the double coincidence of wants problem, people in barter economies do not want to rely so much on market exchange. It is risky to do so because of the difficulties associated with finding people with whom to trade. In a monetary economy, by contrast, people feel much more comfortable relying on market exchange. As a result, producers specialize more because they know that they can easily sell their products and services for money, which can be used to obtain other commodities they really desire.[3] Specialization and division of labor, of course, greatly increase labor productivity. With greater productivity and overall production, it is easy to see that money makes possible a higher level of efficiency.
A second function that neoclassical economists associate with money is its role as a unit of account. According to this function, money provides a standardized unit by which to measure prices. That is, money provides a gauge for measuring value. With a standardized unit, it becomes possible to make fine distinctions in the values of commodities. Comparisons of value then become easier to make, which aids decision making. The divisibility of the object is thus of crucial importance. Historically, precious metals like gold and silver have served as money. Metallic substances can be melted down, weighed, and transformed into standardized units, such as bars and coins. It is important to remember, however, that even if an object can serve as a unit of account, it must also be able to fulfill the other functions of money. If it fails to fulfill the other functions, then societies will reject it for the role of money. For example, milk can be easily measured, and careful distinctions in value could be determined if all prices were stated in terms of ounces of milk. Milk does not serve well as a form of money, but not because it fails as a unit of account. It fails because it does not fulfill the third function well at all.
The third function that neoclassical economists associate with money is its role as a store of value. As a store of value, money makes it possible to transfer purchasing power from the present to the future. To succeed in this role, the object must be highly durable. An object that spoils or corrodes over relatively short time periods will not maintain its value in exchange. Precious metals are highly durable. Milk, on the other hand, spoils after a short time and so if wealth is held in this form, it will quickly vanish and the owner will lose all claim to future commodities.
If all three functions are fulfilled by an object, then neoclassical economists regard the object as money. Societies also tend to use such objects as money because people have recognized, even if not explicitly, the important role that each of these functions serves. Marxian economists also recognize the importance of these functions although the language used is somewhat different (e.g., means of circulation, measure of value, object of hoarding).[4] One important difference between the Marxian and neoclassical analyses of these functions deals with the unit of account (or measure of value) function. Neoclassical economists regard the unit of account function as important because it makes it possible to compare prices of commodities using a common unit. Marxian economists take it further, however, with the argument that money serves as the universal expression of homogeneous human labor. That is, money does not just make it possible to compare the prices of commodities using a common unit, which it certainly does. It also serves as the object through which socially necessary abstract labor time is expressed within the capitalist mode of production. When money serves as a means of circulation (or medium of exchange) as represented in a commodity circuit (C-M-C’), it renders the socially necessary abstract labor time embodied in the two commodities (C and C’) commensurable. That is, the embodied SNALT is regarded as equal in the two commodities, and it is regarded as equal to the SNALT embodied in (or reflected in) the money commodity.
Different Types of Money: From Mollusk Shells to Bitcoins
Throughout history many different objects have fulfilled the functions of money to a greater or lesser extent. The form of money that was used for the longest period throughout human history is the cowrie, which is a mollusk shell found in the Indian Ocean.[5] For more than 2,000 years, it served as money at various times in China, India, Europe, and Africa. Many other commodities have circulated as money, including stone wheels, gold, silver, copper, and cigarettes. Each commodity is considered commodity money because it circulates as money and yet has some intrinsic value or alternative use. For neoclassical economists, the alternative use of the commodity justifies the label. For Marxian economists, commodity money is also the product of socially necessary abstract labor time, which renders its value in circulation comprehensible.
A more recent form of money is convertible paper money. When people refer to the gold standard, for example, they have in mind convertible paper money. During the nineteenth century, for example, commercial banks would issue paper notes that represented claims to their gold reserves. The holder of the paper notes could use the notes to pay bills and settle debts, and the paper was “as good as gold.” The U.S. government also printed paper notes that represented claims to gold. Eventually, the U.S. Federal Reserve would issue paper notes that were identified as Federal Reserve Notes, representing the debt of the issuer. That is, the holder possessed a claim to the assets of the central bank. As late as the mid-twentieth century, some U.S. notes were still redeemable for precious metals and so were backed by commodity money. After World War II, gold reserves continued to back U.S. dollars used in international transactions. The international gold standard was abandoned, however, in the 1970s.
If convertible paper money is no longer available, then what kind of money do we have? We now have inconvertible paper money. That is, we have paper money that is not convertible into anything else. It cannot be exchanged at any bank for gold, silver, copper, or any other commodity money. Why do people hold money if it is not redeemable into any commodity of value? People hold money because the government has declared it to have value. Because it is declared to be money by government decree, it is labeled fiat money. To some extent, the value of fiat money represents people’s faith in the government. The more important reason that people hold money, however, is that it is generally accepted in exchange for other commodities. People understand inconvertible paper money to represent a claim to the huge variety of commodities available in the marketplace as well as its acceptance for the payment of debts.
Another type of money that has developed with the rise of information technology is digital currency. The most widely known digital currency is Bitcoin. Interestingly, Bitcoins are not issued by any central bank. They were introduced in 2009 and are created through a process called mining. That is, computer programmers create new Bitcoins by solving mathematical problems. Because the process takes time, the total supply of Bitcoins increases only gradually. Bitcoin has no physical existence or paper form; it is purely digital. It has a market value because it can be exchanged for other currencies like the U.S. dollar. Its value has plummeted at times due to low demand and a rising supply and has skyrocketed at other times due to soaring demand and a relatively slow increase in the supply. The existence of privately created monies poses serious problems for governments and central banks. Because the source of Bitcoins in transactions is more difficult to trace in comparison with paper money, they have been used for money laundering and other illegal activities. Furthermore, although Bitcoin is still a relatively new form of money and not nearly as widely used as national currencies, the expansion of its use could one day interfere with the ability of central banks to manage their nations' money supplies. Only time will tell whether digital currencies become so widely used that they become serious competitors for national currencies.
The Neoclassical Approach to Money Supply Measurement
Before discussing how changes in the supply of money affect the economy, it is essential to discuss how it is measured. That is, it is usually helpful to address the issue of measurement before one develops a theory, remembering however that how one chooses to define and measure an economic variable may influence the theory that one develops. We saw in Chapter 1 how this problem arose when we considered how the Bureau of Labor Statistics measures the unemployment rate and the way in which that measure may influence one’s perception of a worker without a paid job. Nevertheless, one must begin somewhere, and to begin with the measurement of an economic variable seems like the easier choice.
Neoclassical economists use several primary measures of the money supply (also referred to as the money stock), and the distinction between the measures deals with the liquidity of the assets included in each measure. Liquidity refers to the ease with which an asset can be converted into currency (i.e., coins and paper money).
The first neoclassical measure of the money supply includes the most liquid assets and is referred to as the M1 money supply. That is, M1 includes publicly held currency and the checkable deposits of commercial banks and thrift institutions. Thrift institutions include depository institutions like savings and loan associations and credit unions. Savings and loan associations (S&Ls) accept deposits and tend to specialize in home mortgage lending. Credit unions also accept deposits and make loans, but their members own and control them. It should be clear why M1 includes the most liquid assets. Publicly held currency is already currency and requires no conversion into currency. Note that currency held in bank vaults is not counted in the M1 money supply. Checkable deposits are also highly liquid assets. Because account holders can write checks against their balances, these deposits are easily accessed for purchases. They are also available for immediate withdrawal in the form of currency. The M1 money supply is the measure of the money supply that the Federal Reserve follows most closely, and it is the measure that is used most widely in economic models given the extreme liquidity of the assets that it includes.
The second neoclassical measure of the money supply includes less liquid assets than M1 and is referred to as the M2 money supply. The M2 money supply includes the entire M1 money supply but also includes additional assets that are less easily converted into currency. That is, M2 includes savings deposits, small time deposits (e.g., certificates of deposit valued at less than $100,000), money market deposit accounts (MMDAs), and money market mutual funds (MMMFs). Savings deposits are held at banks and thrift institutions, but they are less liquid assets than checkable deposits because the check writing feature is absent. In addition, federal law limits the number of withdrawals per month to six. The funds can still be accessed but not quite as easily as the funds in checking accounts. Certificates of deposit (CDs) are also less liquid. These financial instruments allow the account owner to deposit a sum of money for a given period to earn a higher interest rate than what can typically be earned on a savings deposit. To earn this higher interest rate, however, the CD owner must commit the funds for a specific period. If the owner withdraws the funds before the maturity date of the CD, then a penalty must be paid. The threat of a penalty and the commitment of the funds for a specific period make this asset less liquid than a checkable deposit or even a savings deposit. Money market mutual funds and money market deposit accounts appeared in the 1970s and 1980s. Mutual fund companies offer MMMFs to investors who pool their assets together in these funds. The funds are then used to purchase money market instruments, as described in Chapter 15. MMMFs invest in highly liquid short-term assets and so are liquid as well. Investors value them for their low risk, high liquidity, and return that exceeds what can typically be earned on a savings deposit. MMDAs are like MMMFs in that the deposited funds are invested in money market instruments. Banks offer MMDAs, however, and the check writing option is limited, making them less liquid than checkable deposits.

The third neoclassical measure of the money supply is M3. The M3 money supply includes M2 but also large time deposits (i.e., CDs with values at least as great as $100,000). These CDs are so large that only institutional investors, like hedge funds and investment firms, can afford them. Because these assets have longer terms to maturity and carry considerable penalties for early withdrawal, they are less liquid than the components of M2. Overall, the three neoclassical measures of the money supply tend to move together over time, although not always. The divergences between the three measures of the money supply can create a problem for the Federal Reserve as it aims to regulate the money supply.
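The nesting of the three aggregates can be summarized in a short computational sketch. The component values below are hypothetical placeholders rather than Federal Reserve data; the point is only that each broader measure adds the less liquid assets to the narrower one.

```python
# Hypothetical component values in billions of dollars (illustrative only)
currency_held_by_public = 1500
checkable_deposits = 2100

savings_deposits = 9200
small_time_deposits = 400
mmdas_and_mmmfs = 1000

large_time_deposits = 1800

# Each aggregate builds on the previous, narrower one
m1 = currency_held_by_public + checkable_deposits
m2 = m1 + savings_deposits + small_time_deposits + mmdas_and_mmmfs
m3 = m2 + large_time_deposits

print(f"M1 = {m1}, M2 = {m2}, M3 = {m3}")
```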
The diagram in Figure 16.1 helps us to understand the relationship between the three neoclassical measures of the money supply.
As we move from the M1 money supply to the outer circles, we begin to include less liquid assets and so arrive at the M2 and M3 measures of the money supply. Table 16.1 provides Federal Reserve data on the M1 and M2 money stock measures as of December 2017 so that we can obtain a sense of the magnitudes of the different components of the money supply. We exclude the M3 money supply because the Federal Reserve ceased publication of it on March 23, 2006.[6]
The most obvious example of a divergence in the growth rates of the M1 and M2 money supplies occurred in the mid-1990s. The M1 money stock contracted during this period as reflected in the negative growth rate of M1. At the same time, the growth rate of M2 was positive and rising, indicating that the M2 monetary aggregate was expanding and that the rate of expansion was increasing. The reason for this divergence may relate to the information technology revolution, which made it possible to more easily transfer funds between checkable deposits and non-M1 components of M2 such as savings deposits and money market funds. A similar reduction in the growth rate of M1 while the growth rate of M2 rose occurred in the mid-2000s before the Great Recession. One possibility is that people hold less M1 money during economic booms as they think more about profitable investment strategies. During economic contractions (the shaded areas), on the other hand, M1 money balances appear to increase as people seek security in highly liquid assets.
The Austrian Approach to Money Supply Measurement
Austrian economists agree with some aspects of the neoclassical approach to money supply measurement, but they differ sharply on several key points. Austrian economist Murray Rothbard has written extensively on this subject, and his perspective forms the basis of the Austrian approach described in this section.
First, Austrian economists strongly reject the notion that the liquidity of an asset should determine whether it is considered a part of the money supply. According to Rothbard, “the money supply should be defined as all entities which are redeemable on demand in standard cash at a fixed rate.”[7] Obviously this definition includes currency held by the public. It also includes demand deposits. Even though banks generally do not have sufficient cash reserves to redeem all outstanding demand deposits, Austrian economists emphasize the central role of subjective estimates of value by market participants. Therefore, if individuals believe that demand deposits are redeemable, they represent an active part of the money supply.[8] Austrian economists regard savings deposits held at commercial banks and savings and loan associations to be part of the money supply as well. Although some restrictions on transactions apply, depositors subjectively treat their savings deposits as redeemable for cash at a fixed rate and hence, they should be treated as part of the supply of money.[9]
Time deposits (CDs), however, are treated rather differently in the Austrian definition of the money supply. Rothbard draws upon the theory of money and credit that Ludwig von Mises developed to make this argument. Whereas a demand deposit represents a claim to cash and can be used in the purchase of present goods, a time deposit represents a credit instrument and can only be used for the purchase of future goods.[10] One cannot argue that time deposits are highly liquid assets and so should be counted as part of the money supply. Such assets are sold at market rates and are not directly redeemable for cash.[11] On the other hand, CDs and federal savings bonds are redeemable at fixed, penalty rates. Therefore, Austrian economists believe that we should include these values at their penalty levels as opposed to their face values (e.g., $9,000 as opposed to $10,000 when early withdrawal occurs).[12]
Three other issues must be discussed before providing the complete Austrian definition of the money supply. First, consistent with the Austrian criteria, the cash surrender values of life insurance policies must be included in the money supply because such policies are redeemable in cash. It is necessary, however, to add the total policy reserves minus the policy loans outstanding because the policy loans are not available for immediate withdrawal.[13] Second, if noncommercial banking institutions, such as life insurance companies or savings and loan associations, have deposits that act as reserves supporting their own issued deposits, then to avoid double counting, those reserves must be subtracted from the total demand deposits when calculating the money supply.[14] Finally, it is essential to include U.S. Treasury deposits held at the Federal Reserve in the money supply because such deposits may be used for the purchase of present goods.[15]
We can now state the complete Austrian definition of the money supply (Ma), which contrasts with the neoclassical definitions of the money supply (M1, M2, and M3).
Ma[16] = the total supply of cash
-cash held in the banks
+total demand deposits (including Treasury deposits)
+total savings deposits in commercial and savings banks
+total shares in savings and loan associations (which function like savings deposits)
+time deposits and small CDs at current redemption rates
+total policy reserves of life insurance companies
-policy loans outstanding
-demand deposits owned by savings banks, savings and loan associations, and life insurance companies
+savings bonds, at current rates of redemption
The first three items in the formula, relating to currency and demand deposits, resemble the components of M1. Savings deposits and shares closely resemble the non-M1 elements of M2. It is in the treatment of time deposits, cash surrender values, demand deposits of thrift institutions, and federal savings bonds where we see the greatest difference between the Austrian definition of the money supply and the neoclassical definitions of the money supply. The major reason for the differences in the definitions is that neoclassical economists concentrate on the liquidity of the assets in their definitions whereas Austrian economists focus on the potential to redeem the assets for cash at fixed rates even if the redemption rates are penalty rates.
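To make the arithmetic of the definition concrete, the sketch below computes Ma from a set of invented component values. The 10% early-withdrawal penalty is an assumption introduced only for the example; it is not a figure from Rothbard.

```python
# Hypothetical components in billions of dollars (illustrative only)
total_cash = 1800
cash_held_in_banks = 300
demand_deposits = 2000                  # including Treasury deposits
savings_deposits = 9000                 # commercial and savings banks
sl_shares = 1200                        # savings and loan association shares
time_deposits_face_value = 400
redemption_penalty = 0.10               # assumed early-withdrawal penalty
life_insurance_policy_reserves = 1500
policy_loans_outstanding = 200
deposits_owned_by_thrifts_and_insurers = 100
savings_bonds_at_redemption = 250

ma = (total_cash
      - cash_held_in_banks
      + demand_deposits
      + savings_deposits
      + sl_shares
      + time_deposits_face_value * (1 - redemption_penalty)  # current redemption value
      + life_insurance_policy_reserves
      - policy_loans_outstanding
      - deposits_owned_by_thrifts_and_insurers
      + savings_bonds_at_redemption)

print(f"Ma = {ma}")
```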
The Origin of Fractional Reserve Banking
Now that we understand the most important measures of the money supply, we will begin thinking about how the money supply is determined within capitalist economies. We thus move from the practical problem of measurement to the theoretical problem of determination. To address the problem of money supply determination, we will consider two contrasting theories of the way in which a private commercial banking system helps determine the M1 money supply. Because the central bank also plays a major role in determining the M1 money supply, the neoclassical and Marxist theories of money supply determination that we develop in this chapter are only partial theories. The role that the central bank plays in determining the money supply is the subject of Chapter 17.
To understand the contribution of private commercial banks to the determination of the M1 money supply in neoclassical theory and in Marxist theory, we need to first distinguish between 100% reserve banking systems and fractional reserve banking systems. To make sense of this distinction, it is helpful to imagine an economy with a money commodity like gold.[17] Suppose that owners of gold bars decide that it is a great burden to use gold bars each time they wish to buy commodities. Some enterprising young person sees this problem as an opportunity to earn an income. Suppose that she offers to safeguard the gold bars for their owners (for a fee) and offers to issue paper certificates in an amount that is equivalent to the gold bars. These paper certificates are payable in gold on demand. The entire money supply now consists of the total supply of paper certificates, plus any gold bars that remain in circulation. Although a money commodity may still circulate, convertible paper money is now the dominant type of money in this economy.
This economy possesses what economists call a 100% reserve banking system. That is, 100% of the circulating paper certificates are backed up with gold reserves. If every holder of a paper certificate decided at the same time to redeem their certificates for gold, then the banker who issued the certificates would be able to satisfy every certificate owner’s demand for gold bars. The likelihood of everyone redeeming their paper certificates at once is very low, but it could happen if the holders of the certificates begin to doubt whether the banker will pay them in gold. If their faith is shaken enough, then a bank run might occur where every certificate owner demands payment in gold at the same time. It is worth noting that bank runs of this kind create no problems in a 100% reserve banking system because the demand for gold bars can be completely satisfied without delay.
If sufficient faith in the 100% reserve banking system exists, then most holders of paper certificates are very unlikely to redeem their paper notes for gold. After all, the convertible paper money was created so that owners of gold bars would not need to concern themselves with the protection of the gold bars or the burden of hauling them around to engage in market exchanges. It is the low rate of redemption of the paper certificates that causes the private banker to consider another potential source of income. She decides to create and issue new paper certificates to the point where the face value of the certificates exceeds the quantity of gold reserves that she holds. By doing so, she has expanded the convertible paper money supply and has thus created new money. Rather than using these new certificates for purchases, which she could do, the private banker decides to grant loans, in the form of paper certificates, to borrowers who agree to repay her at a future date with interest. The granting of loans has thus expanded the paper money supply. The total money supply now consists of the original certificates that were issued in exchange for deposits of gold bars, the newly issued certificates representing loans to borrowers, and any gold bars that remain in circulation. The total money supply (M) can be calculated as follows:
M = original certificates + additional certificates (representing loans) + publicly held gold bars
Because the face value of the convertible paper money supply exceeds the value of the gold reserves held in the bank, the banking system is regarded as a fractional reserve banking system. That is, the reserves held in the bank only partially support the convertible paper money that is circulating in the economy. In other words, the value of the reserves only equals a fraction of the value of the circulating paper money. Now a bank run has serious consequences for the banker. If every holder of a paper certificate decides to redeem the certificates for gold bars, then the banker will not be able to satisfy those demands. If the rule of law is not firmly established, then the banker might suffer violence at the hands of an angry mob. Figure 16.3a summarizes the results of this analysis:
From an efficiency perspective, neoclassical economists argue that it is understandable that institutions would create and issue convertible paper money. Holders of gold and silver do not want to pay for other commodities directly using these commodities because they are heavy and difficult to transport. Banking institutions arose to specialize in the safeguarding of these assets. Owners of gold and silver are willing to pay for this service and bankers aim to profit from its provision. Nevertheless, it is the granting of loans and the creation of new money that has the potential to make banking so profitable and so risky because of the constant danger of bank panics and bank runs.
Because modern economies have abandoned commodity money, you might think that this analysis is obsolete with no practical application to modern banking systems. On the contrary, this analysis is entirely applicable to modern banking systems. The only important difference is that the material that serves as bank reserves and the asset that serves as a claim to those reserves have changed.
The dominant type of money in the U.S. economy is government-issued fiat money, as described earlier in this chapter. Although paper money is easier to use in transactions than gold bars, it is still a burden to use for all transactions, particularly large transactions. Therefore, commercial banks accept currency deposits, which perfectly parallels the deposits of gold bars in our economy based on the gold standard. It would not make much sense to issue paper certificates in exchange for the currency deposits, however, because paper would then simply circulate in place of paper. Instead, banks issue checkable deposits when currency deposits are made. The checkable deposits are analogous to the paper certificates in the gold standard economy. Purchases are made using paper checks that can be written in any amount to access the underlying deposits. It is important to remember that it is the underlying checkable deposits that represent money in modern banking systems, not the paper checks that only allow one to access them. The total money supply (M1) for this economy can be calculated as follows:
M1 = original checkable deposits + additional deposits (representing loans) + publicly held currency
Just as in the case of the gold standard economy, private commercial banks recognize that they can issue checkable deposits in an amount that exceeds the reserves of U.S. dollars that they hold. It is a desire to grant interest-bearing loans that motivates the issuance of additional checkable deposits. Because checkable deposits are part of the M1 money supply, the issuance of additional checkable deposits represents the creation of new money. A 100% reserve banking system is possible in an economy based on fiat money if the banks only create checkable deposits in response to the receipt of currency deposits. As soon as the face value of the deposits exceeds the reserves of U.S. dollars held in bank vaults, the system becomes a fractional reserve banking system and is vulnerable to the threat of bank runs. Figure 16.3b summarizes the results of this analysis.
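A minimal sketch, with made-up balances, of the distinction between a 100% reserve system and a fractional reserve system:

```python
def reserve_position(reserves, deposit_liabilities):
    """Compare reserves with the outstanding deposit claims on them."""
    ratio = reserves / deposit_liabilities
    system = "100% reserve" if ratio >= 1 else "fractional reserve"
    return ratio, system

# Deposits fully backed by reserves: a 100% reserve system
print(reserve_position(reserves=1_000, deposit_liabilities=1_000))   # (1.0, '100% reserve')

# Loans have created deposits beyond the reserves on hand: a fractional reserve system
print(reserve_position(reserves=1_000, deposit_liabilities=2_500))   # (0.4, 'fractional reserve')
```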
The Balance Sheet of a Commercial Bank
To investigate more carefully how commercial banks influence the total quantity of money available in the economy, it is helpful to use a financial statement known as a balance sheet. A balance sheet is a financial statement that provides a snapshot of an entity’s financial position at a point in time. Balance sheets can be constructed for individuals and for firms. The balance sheet has two columns. On the left side of the balance sheet, all the assets of the individual or firm are listed with their associated values. The assets are simply items of property that the entity owns. The right side of the balance sheet lists all the liabilities and the net worth of the entity. Liabilities and net worth represent claims against the assets. Liabilities are external claims to the assets of the entity. That is, they represent the claims of non-owners to the entity’s assets and so they are debts for the individual or firm. Net worth (also referred to as equity) represents an internal claim to the assets of the household or firm. That is, after accounting for all the debts, the net worth of the entity represents its claim to the remaining assets. When people sometimes ask how much someone is worth, they have in mind their net worth. That is, they are asking how much property a person owns after subtracting her debts. We can obtain a better understanding of the balance sheet concept if we consider a simple example of how this financial statement might look for one individual as shown in Figure 16.4.
The individual whose balance sheet is represented in Figure 16.4 possesses several different types of assets, including cash, a checkable deposit, a savings deposit, and an automobile. She also has incurred debts to obtain these assets, including an auto loan and a credit card loan. The auto and credit card loans represent liabilities because the lending agencies have claims to the individual’s assets. Because the individual’s assets exceed her liabilities, her net worth is positive, representing her claim to her own assets. Because every asset will be claimed by someone (i.e., owners or non-owners), the total assets must equal the sum of the liabilities and net worth. The balance sheet equation may thus be written as:
Assets = Liabilities + Net Worth (equity)
It should be noted that liabilities may exceed assets. When this situation arises, net worth must be negative for the balance sheet equation to hold. A negative net worth means that the individual or firm is insolvent and may need to declare bankruptcy to rectify the situation.
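The balance sheet identity lends itself to a compact check in code. The asset and liability values below are invented for illustration (they are not the figures in Figure 16.4); the point is that net worth is the residual claim and becomes negative when liabilities exceed assets.

```python
def net_worth(assets, liabilities):
    """Net worth (equity) is the residual internal claim: assets minus liabilities."""
    return sum(assets.values()) - sum(liabilities.values())

# Invented figures for a household, in dollars
assets = {"cash": 500, "checkable_deposit": 1_500, "savings_deposit": 3_000, "automobile": 12_000}
liabilities = {"auto_loan": 8_000, "credit_card_loan": 2_000}

equity = net_worth(assets, liabilities)
print(equity)                                    # 7000
print("insolvent" if equity < 0 else "solvent")  # solvent
```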
In this chapter, we are primarily interested in how the activities of private commercial banks influence the money supply and the financial markets. Therefore, we need to look at the balance sheet of a commercial bank. The items that typically appear on a bank’s balance sheet are different from the items that generally appear on a firm’s balance sheet or an individual’s balance sheet. Figure 16.5 shows a consolidated balance sheet for all commercial banks in the United States as of January 10, 2018.[18]
Only the most essential balance sheet items have been included in Figure 16.5. The asset side of the balance sheet includes securities, which totaled nearly $3.5 trillion in January 2018. Commercial banks purchase income-earning assets such as U.S. Treasury securities and mortgage-backed securities (MBSs). U.S. Treasury securities include short-term money market instruments such as Treasury bills that have terms to maturity of less than one year and long-term capital market instruments such as Treasury notes and Treasury bonds that have terms to maturity of greater than one year. Mortgage-backed securities are assets that represent bundles of many mortgage loans with varying degrees of risk that generate interest income for the owner over time. MBSs were created rapidly during the leadup to the 2008 financial crisis, and banks continue to hold large quantities of these assets.

Loans are another important asset on the consolidated balance sheet of U.S. commercial banks. In fact, loan assets are the largest item on the balance sheets of banks at over $9.1 trillion in January 2018. This fact should not surprise the reader. After all, fractional reserve banking developed precisely because banks discovered that they could earn interest income by granting loans to borrowers. This category includes loans of all kinds, including commercial loans, industrial loans, real estate loans, and consumer loans. It is the most profitable activity in which banks engage, but it is also the riskiest activity because borrowers sometimes default on (or fail to repay) their loans with interest. When such defaults occur, banks experience loan losses, which accounts for the $110 billion asset reduction in Figure 16.5. During the Great Recession, a wave of defaults on residential mortgages occurred, which caused the values of MBSs to collapse and wiped out a massive amount of wealth that only existed on paper. If such loan losses are large enough, then assets may fall below liabilities, causing insolvency.

The final item of importance on the asset side of commercial banks' balance sheets is cash reserves. Cash reserves include currency that is held in bank vaults, which is not part of M1. They also include funds that banks hold in accounts with the Federal Reserve. Cash reserves are important because they make it possible for banks to satisfy depositors' demands for cash withdrawals or withdrawals that occur by means of writing checks. Within a fractional reserve banking system, these cash reserves are generally a fraction of the total deposit liabilities. In this case, cash reserves are approximately 20.11% of total deposit liabilities (= 2,405.4/11,960.2) so that for each dollar of deposits, the bank maintains about $0.20 in cash reserves.
On the liabilities and net worth side of the balance sheet, the main categories include deposits and borrowings. Deposits are the largest category of liabilities at nearly $12 trillion in January 2018. Deposits are created when depositors transfer funds to banks, either by means of cash deposits or check writing. Because depositors do not own the bank, their claims to the assets of banks are external claims or liabilities. This category of liabilities includes checkable deposits but also savings deposits and time deposits. The other major category of liabilities for banks is borrowings at just over $2 trillion. Borrowings include funds borrowed from other commercial banks, the Federal Reserve, and corporations. When commercial banks borrow from other commercial banks, they do so in the federal funds market. The federal funds market is the market for overnight loans made between banks when they lend their reserves to one another. The interest rate that emerges in this market due to the competitive interaction between borrowing banks and lending banks is the federal funds rate. This interest rate fluctuates with changes in the supply and demand for reserves. Another type of borrowing is borrowing from the Federal Reserve. Loans that the Federal Reserve grants to commercial banks are called discount loans. The Fed charges an interest rate on such loans that is referred to as the discount rate. Finally, banks also borrow from corporations. Banks sell large certificates of deposit to corporations in exchange for the use of funds for a fixed period. In exchange for this privilege, banks pay high rates of interest to the lenders.
The final item of importance on the banks' balance sheet is what we have previously called net worth or equity. The owners of the banks possess a claim to the assets of banks. In this case, the owners are the stockholders, and so net worth is sometimes referred to as stockholders' equity. In the special case of banking institutions, net worth is also referred to as bank capital. Because the phrase is so widely used, when we refer to the net worth of banks, we will refer to bank capital. In January 2018, total bank capital for U.S. commercial banks approached $1.9 trillion. Adding together the items on each side of the balance sheet yields approximately $16.74 trillion of assets and $16.74 trillion of liabilities plus bank capital. As expected, the two sides of the balance sheet add up to the same total. Although we have considered the consolidated balance sheet of all U.S. commercial banks, each commercial bank has its own balance sheet. The balance sheet equation holds for each bank's balance sheet, and each item on the consolidated balance sheet is simply the total for that item across all individual banks' balance sheets.

A Neoclassical Theory of Commercial Banks and its Theory of Financial Markets

This section explores the way in which neoclassical economists interpret commercial bank behavior and how banks influence the total stock of money. The neoclassical analysis that we develop is also linked to the neoclassical theory of financial markets presented in Chapter 15. That is, it will be shown how neoclassical theory provides an explanation for movements in bond prices and interest rates. The reader should keep in mind that commercial banks are profit-seeking institutions just like production firms. They do not aim to alter the supply of money but do so when seeking maximum profits. To maximize profits, banks must undertake a delicate balancing act that involves carefully adjusting the various items on their balance sheets. Indeed, the profits of a bank depend on the selected mix of the different conflicting balance sheet items. The management of banks involves different and often competing objectives.[19]

For example, banks must purchase assets that will generate maximum income for the bank. Loan assets are the greatest generator of bank income, but securities are also an important source of income for banks. At the same time, banks must also acquire funds at minimum cost to the bank. Because liabilities include deposits of all kinds, banks must pay interest to acquire and maintain them. Even in the case of checkable deposits that pay no interest at all, commercial banks compete for new depositors and frequently offer cash bonuses to depositors who open new checking accounts. Banks also need to acquire and maintain sufficiently liquid assets so that the banks can satisfy depositors' demands for withdrawals. Cash reserves are the most liquid bank asset. Short-term securities, such as Treasury bills, are also relatively liquid assets and can be converted into currency at relatively low cost. Finally, banks are required to maintain sufficient levels of bank capital so that if loan losses occur, the banks will not become insolvent. Since liabilities are fixed, when loan losses occur, net worth or bank capital must shrink to maintain overall balance on the balance sheet. If bank capital falls to the point where it becomes negative, then the bank is technically insolvent.
The way that private commercial banks strategize to achieve maximum profits is a subject of great interest, but we set it aside in this chapter to concentrate on the influence that banks and private individuals have on the major measures of the money supply. In other words, we are interested in the reasons for shifts in the money supply curve that occur in the money market as described in Chapter 15. Figure 16.6 shows how the M1 money supply may increase to M1* or decrease to M1** in the money market. Consider first what happens when a $500 cash deposit occurs, as shown in Figure 16.7 (a). The bank acquires $500 in cash reserves and $500 in deposit liabilities. The transaction has no impact on the M1 money supply. Although checkable deposits increase, publicly held currency falls by an equal amount. The overall result is an unchanged M1 money supply. M2 is similarly unaffected because M2 includes M1. With M1 remaining the same, M2 remains the same as well.
$\overline{M1}=C\downarrow + D\uparrow$
$\overline{M2}=C\downarrow + D\uparrow + nonM1\;M2$
Now consider a cash withdrawal of $500. This transaction also has no impact on the M1 or M2 money supplies. When the withdrawal occurs, deposit liabilities fall by the same amount as publicly held currency rises. M1 and M2 are thus unchanged in this case. Figure 16.7 (b) shows the changes to the balance sheet that occur when a $500 cash withdrawal occurs.
$\overline{M1}=C\uparrow + D\downarrow$

$\overline{M2}=C\uparrow + D\downarrow + nonM1\;M2$
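A minimal simulation, using assumed starting balances, confirms why these swaps between publicly held currency and checkable deposits leave M1 unchanged:

```python
# Starting balances (assumed for illustration, in dollars)
currency_held_by_public = 5_000
checkable_deposits = 10_000

def m1(currency, deposits):
    """M1 = publicly held currency + checkable deposits."""
    return currency + deposits

print(m1(currency_held_by_public, checkable_deposits))  # 15000

# A $500 cash deposit: currency falls, checkable deposits rise by the same amount
currency_held_by_public -= 500
checkable_deposits += 500
print(m1(currency_held_by_public, checkable_deposits))  # still 15000

# A $500 cash withdrawal simply reverses the movement; M1 is again unchanged
currency_held_by_public += 500
checkable_deposits -= 500
print(m1(currency_held_by_public, checkable_deposits))  # still 15000
```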
Similar changes to the balance sheet occur when these transactions are carried out by check rather than with currency. For example, when an account holder deposits a $500 check in Bank A, the check is sent to the Federal Reserve Bank in that district. The Fed then credits the account of Bank A, thus increasing its reserves. Deposit liabilities also increase for Bank A. The only difference is that a cash deposit has an immediate impact on vault cash, whereas the deposit of a check has a quick, although not immediate, impact on the bank's reserves held at the Fed. The impact is not immediate because the check must be processed and must clear before the bank's reserves increase. If the check was written against an account at Bank B, then Bank B experiences a loss of reserves when the Fed adjusts the reserves. Bank B also experiences a reduction in its deposit liabilities. Because deposits at Bank A increase and deposits at Bank B decrease, the total M1 and M2 money supplies remain the same.

$\overline{M1}=C + D\uparrow\downarrow$

$\overline{M2}=C + D\uparrow\downarrow + nonM1\;M2$

Figure 16.8 shows the impact on the Federal Reserve's balance sheet.

Next consider a $500 cash deposit into a savings account. The cash deposit causes savings deposit liabilities to increase and publicly held currency to fall. Because publicly held currency falls with no change in checkable deposits, the result is a decrease in the M1 money supply. The M2 money supply remains unaffected, however, because savings deposits are included in M2, as is publicly held currency. In other words, the rise in savings deposits exactly offsets the decline in publicly held currency in M2.

$M1\downarrow=C\downarrow + D$

$\overline{M2}=C\downarrow + D + nonM1\;M2\uparrow$

It is the reduction in M1 that is of interest, however, because it suggests that the ordinary operation of accepting cash deposits can affect the M1 money supply when the deposits are made in savings accounts. Therefore, the public can influence the M1 money supply with its decision to deposit cash in savings accounts. Similarly, cash withdrawals from savings accounts increase the M1 money supply and leave M2 unchanged. In that case, publicly held currency increases as savings deposits fall.

$M1\uparrow=C\uparrow + D$

$\overline{M2}=C\uparrow + D + nonM1\;M2\downarrow$

Figure 16.9 (b) shows the impact of a cash withdrawal of $500 from a savings account on the bank's balance sheet.
A deposit of a $500 check drawn against a checkable deposit at one bank into a savings account at another bank would also decrease the M1 money supply. In that case, checkable deposits at one bank would fall and savings deposits at another bank would rise. M1 would fall and M2 would remain the same.

$M1\downarrow=C + D\downarrow$

$\overline{M2}=C + D\downarrow + nonM1\;M2\uparrow$

The Fed would adjust the reserves of the two banks, as shown in Figure 16.8, but the impact on the M1 and M2 money supplies is the same as what we observe in the case of a cash deposit in a savings account.

One other transaction worth considering is a transfer between accounts. If an individual owns a checkable deposit and a savings deposit and decides to transfer $500 worth of funds from her checking account to her savings account, then the M1 money supply falls by $500 and the M2 money supply remains the same.

$M1\downarrow=C + D\downarrow$

$\overline{M2}=C + D\downarrow + nonM1\;M2\uparrow$

Again, M1 falls because checkable deposits are part of M1, while M2 is unchanged because the decline in checkable deposits is offset by the rise in savings deposits, a non-M1 component of M2. The resulting change to the bank's balance sheet due to this transfer is shown in Figure 16.10 (a). Similarly, a transfer of $500 from a savings account to a checking account increases the M1 money supply by $500 and leaves M2 unchanged.

$M1\uparrow=C + D\uparrow$

$\overline{M2}=C + D\uparrow + nonM1\;M2\downarrow$

The impact of a transfer from a savings deposit to a checkable deposit on a bank's balance sheet is shown in Figure 16.10 (b).

This section has analyzed the impact on M1 of movements of funds between publicly held currency or checkable deposits, on the one hand, and savings deposits, on the other. The same results apply to similar movements of funds between publicly held currency or checkable deposits and the other non-M1 components of M2, including MMDAs, MMMFs, and time deposits. Members of the public can certainly have an impact on the M1 money supply when they move funds into M1 from savings accounts and the other non-M1 components of M2. Withdrawals from M1 into savings accounts and the other non-M1 components of M2 also influence M1. The more important source of changes in the M1 money supply, however, is the set of activities that banks initiate. For example, when loans are granted or are repaid, they have a direct impact on the M1 and M2 money supplies.

First consider what happens when a commercial bank grants a $3,000 loan to a borrower. The borrower signs a loan contract or a promissory note, agreeing to repay the loan within a specified period with regular interest payments. The bank accepts the promissory note, which is a non-money asset, and credits the checking account of the borrower, which is an M1 money asset. The impact of the transaction on the balance sheet of the bank is represented in Figure 16.11.
The bank has acquired a loan asset and has created a checkable deposit. It should be noted that the bank has no additional cash reserves and so publicly held currency did not change. The deposit was created using a non-money asset. The bank has created new money! Both the M1 and M2 money supplies increase due to the loan.
$M1\uparrow=C + D\uparrow$
$M2\uparrow=C + D\uparrow + nonM1\;M2$
The creation of new money with the issue of additional checkable deposits parallels the issue of new paper certificates in the gold standard economy we considered earlier in this chapter. The bank sees an opportunity to make a profit and takes advantage of the fact that most deposits will not be withdrawn during the period of the loan. The borrower's deposit is likely to be quickly withdrawn, however, because the borrower has incurred considerable expense to obtain the funds and plans to use them. Whether the deposit is withdrawn in the form of currency or transferred to another bank through the Fed's check processing activities, the M1 money supply is not expected to be directly affected by the withdrawal. It is the granting of the initial loan, however, that has created new money.
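To make the accounting concrete, the following short Python sketch records the balance-sheet changes when the $3,000 loan is granted; the dictionary structure and variable names are chosen only for illustration and are not part of the text's formal apparatus.

```python
# Illustrative sketch (assumed structure): balance-sheet changes when a bank
# grants a $3,000 loan by crediting the borrower's checking account.
bank = {"assets": {"reserves": 0.0, "loans": 0.0},
        "liabilities": {"checkable_deposits": 0.0}}

loan_amount = 3000.0
bank["assets"]["loans"] += loan_amount                    # promissory note acquired (non-money asset)
bank["liabilities"]["checkable_deposits"] += loan_amount  # new checkable deposit created

delta_M1 = loan_amount   # checkable deposits rise with no change in publicly held currency
delta_M2 = loan_amount   # M2 contains M1, so it rises by the same amount
print(delta_M1, delta_M2)  # 3000.0 3000.0
```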
The bank incurs some risk when it creates new money because the more money it creates, the more likely it is that it will not have sufficient reserves to satisfy depositors who write checks against their balances or who make cash withdrawals. Therefore, the Federal Reserve imposes a reserve requirement: a legal requirement that banks maintain a minimum fraction of their deposit liabilities as reserves, stated as a percentage of the bank's deposit liabilities. Banks may also choose to hold excess reserves above the reserve requirement. To understand how reserve requirements work, consider the balance sheet of the bank shown in Figure 16.12.
Figure 16.13a shows that the bank's deposit liabilities decline by $3,000, but its reserves also decrease by $3,000. The borrower has either withdrawn the funds in the form of currency or has written a check against the amount. During the period of the loan, the loan asset increases in value due to the anticipation of a final payment of interest when the loan matures. If the interest rate (i) is 5%, then the loan asset appreciates by $150 (= 5% of $3,000). Because the bank's deposit liabilities have not changed, the bank's shareholders enjoy an increase in bank capital. This change is represented in Figure 16.13b.
When the loan matures, the borrower must repay the principal amount plus interest. She, therefore, deposits the full amount of $3,150 in her account at the bank. The bank's deposit liabilities rise by this amount. The bank's reserves also increase by this amount, keeping the balance sheet in balance. This change is shown in Figure 16.13c. Finally, the borrower repays the loan with a check drawn against her account at the bank. The bank debits the borrower's account, which reduces the bank's deposit liabilities. It also returns the canceled promissory note to the borrower as shown in Figure 16.13d. Notice that the bank has $3,150 in reserves, which is $150 more than the amount with which it began. It also has increased its bank capital by $150, which reveals how interest-bearing loans serve as a means of appropriating profits for the bank and enhancing bank capital. Most importantly, we see that when loans are repaid, the M1 and M2 money supplies contract. Borrowers repay loans with interest. They receive the canceled promissory notes, and their checking accounts are debited. A non-money asset returns to circulation, and a money asset leaves circulation.
It is assumed that the borrower redeposited the initial loan amount and made an additional cash deposit of $150 to make possible the repayment of the loan with interest. When these cash deposits are made, the M1 and M2 money supplies do not change because publicly held currency falls as much as checkable deposits increase. It is only when the loan is repaid that the M1 money supply falls by the full amount of the repayment. Interest payments on bank loans thus contribute to the contraction of the money supply. In Figure 16.13d, the M1 and M2 money supplies fall because checkable deposits decline with no change in publicly held currency.

$M1\downarrow=C + D\downarrow$

$M2\downarrow=C + D\downarrow + nonM1\;M2$

It is the canceled promissory note that returns to the borrower, not currency, as would occur in the case of a cash withdrawal. Hence, both monetary aggregates contract. It is worth noting that banks can change both M1 and M2 with their lending activities whereas deposits and withdrawals from savings accounts (and other non-M1 components of M2) only affect M1.

Another operation of commercial banks that has similar impacts on the M1 and M2 money supplies is the purchase and sale of government securities. In this case, government securities play the role of non-money assets like promissory notes. Otherwise, the process is essentially the same. Banks purchase government securities from bond dealers and credit their checking accounts to pay them, which expands the M1 and M2 money supplies. When banks sell government securities to bond dealers, the dealers pay using their checkable deposits, and the M1 and M2 money supplies contract. Non-money assets return to circulation, and money assets are taken out of circulation.

We can now see exactly how the financial markets are affected due to the different operations we have been discussing. The previous discussion has shown that the M1 money supply can be increased due to the following operations:

1. A cash withdrawal from a savings account (or other non-M1 M2 asset account)
2. A transfer from a savings account (or other non-M1 M2 asset account) to a checking account
3. A loan granted to a borrower
4. A purchase of government securities from a bond dealer who is paid with cash or with a check that is then deposited in a checking account

If any of these transactions takes place, the resulting increase in the M1 money supply will have a direct impact on the financial markets. Figure 16.14 shows how the monetary expansion connects with our theory of financial markets developed in Chapter 15. As Figure 16.14 shows, when the M1 money supply rises, the money supply curve shifts rightward in the money market. At the initial interest rate, a surplus of money exists. Because holders of money hold excess money balances, they decide to lend them in the loanable funds market to earn interest. The rightward shift of the supply curve for loanable funds creates a surplus of loanable funds. Competition between borrowers and lenders in the loanable funds market drives down the rate of interest, which clears the money market. At the same time, the rise in the supply of loanable funds is equivalent to an increase in the demand for bonds since these two markets represent a mirror reflection of one another. The higher demand for bonds pushes bond prices upward. Overall, the monetary expansion leads to an expansion of the loanable funds market and the bond market. Interest rates drop, and bond prices rise.
Similarly, the M1 money supply will decline due to the opposite operations:

1. A cash deposit into a savings account (or other non-M1 M2 asset account)
2. A deposit of a check drawn against a checking account into a savings account (or other non-M1 M2 asset account)
3. A transfer from a checking account into a savings account (or other non-M1 M2 asset account)
4. A loan repayment
5. A sale of securities to a bond dealer who pays with cash or a check drawn against a checking account

If any of these transactions takes place, the resulting decrease in the M1 money supply will have a direct impact on the financial markets. Figure 16.15 shows how the monetary contraction connects with our theory of financial markets developed in Chapter 15. As Figure 16.15 shows, when the M1 money supply falls, the money supply curve shifts leftward in the money market. At the initial interest rate, a shortage of money exists. Because holders of money experience a shortage of money balances, they decide to lend fewer funds in the loanable funds market. The leftward shift of the supply curve for loanable funds creates a shortage of loanable funds. Competition between borrowers and lenders in the loanable funds market drives up the rate of interest, which clears the money market. At the same time, the reduction in the supply of loanable funds is equivalent to a reduction in the demand for bonds since these two markets represent a mirror reflection of one another. The drop in the demand for bonds pushes bond prices downward. Overall, the monetary contraction leads to a contraction of the loanable funds market and the bond market. Interest rates rise, and bond prices fall.[20]

One other aspect of the neoclassical theory of banking that is important is the multiple expansion of bank deposits through lending. That is, when First Regional Bank obtains new reserves and makes a loan, the new checkable deposit that is created adds to the M1 money supply. The process does not stop there, however, because the recipient of the check that is written against those funds (i.e., whomever the borrower of the funds decides to pay to obtain commodities) is likely to deposit the check in a Second Regional Bank checking account. Second Regional Bank will acquire new reserves. If Second Regional Bank lends the excess reserves, then another new checkable deposit is created. A check is then written against those funds and the recipient of the check might deposit the check in Third Regional Bank. Third Regional Bank thus receives new reserves and might lend the excess reserves, thereby creating yet another new checkable deposit. This process continues, and each time a new checkable deposit (i.e., new money) is created. This process is referred to as the multiple expansion of bank deposits.

To understand the exact quantitative relationship between new reserves in the banking system and the amount of new checkable deposits or new money that is created, we need to look at the impact of these successive rounds of lending on the banks' balance sheets. Consider the changes to First Regional Bank's balance sheet when the bank accepts a $2,000 cash deposit and obtains $2,000 in new reserves. Assume that it is also subject to a 10% reserve requirement and so grants a $1,800 loan using the new excess reserves and that the borrower withdraws the entire amount from his checking account once the loan is granted. All these changes are depicted in Figure 16.16.
Suppose the $1,800 that the borrower withdraws is deposited in a checking account at Second Regional Bank. Second Regional Bank thereby acquires $1,800 in new reserves. With a 10% reserve requirement, it must hold $180 as required reserves, and it lends the remaining $1,620 of excess reserves, creating a new checkable deposit. The M1 money supply rises by $1,620 due to Second Regional Bank granting the loan and creating the new deposit.

Let's finally consider how Third Regional Bank's balance sheet is affected if the $1,620 of borrowed funds is deposited there. Again, Third Regional Bank will acquire new reserves equal to $1,620. It will lend 90% of the newly acquired reserves (i.e., the excess reserves). The $162 of required reserves must be kept in the form of cash reserves. Third Regional Bank will lend $1,458 to a borrower and create a new checkable deposit, which adds to the M1 money supply. It will then be withdrawn, and the process continues. The changes to Third Regional Bank's balance sheet are shown in Figure 16.18. We could explore the roles of Fourth Regional Bank, Fifth Regional Bank, and Sixth Regional Bank, and on and on, but the pattern should be clear by now.

$\Delta M1=\$1,800 + \$1,620 + \$1,458 +...$

The M1 money supply increases with each successive round of lending and deposit creation. With each round of lending, however, the new loan amounts become smaller and smaller. The reason for the reduction in deposit creation is that banks must hold a fraction of their newly acquired reserves each time they receive the deposited funds. The legal reserve requirement limits the banks to the creation of new money equal only to their excess reserves. As the story continues, the newly created deposits become smaller and smaller until they cease to influence the M1 money supply.

To arrive at an exact calculation of the total new checkable deposits that are created due to the acquisition of new reserves in the first round, we consider a situation in which First Regional Bank acquires only $1.00 of new reserves. If R is the reserve requirement (10% in our example), then R (= 0.10) represents the required reserves in this case (i.e., 10 cents or $0.10). The excess reserves that First Regional Bank acquires are 1-R (= 90 cents or $0.90). The loan amount or newly created checkable deposit is assumed to equal the full amount of the excess reserves and so is also equal to 1-R. The first row of Table 16.2 shows the results for First Regional Bank.
Second Regional Bank then acquires the 1-R amount of borrowed funds, which for it constitute new reserves. Its new required reserves are R times this amount, or R(1-R). The excess reserves are 1-R times this amount, or (1-R)(1-R) = (1-R)². The excess reserves are then loaned to a borrower and a new deposit is created in the amount of (1-R)². The second row in Table 16.2 captures these results.
Third Regional Bank then acquires the borrowed amount of (1-R)² in the form of reserves. Its required reserves are R times this amount, or R(1-R)², and its excess reserves are 1-R times the new reserves, or (1-R)(1-R)² = (1-R)³, which is then loaned to a borrower. The third row in Table 16.2 captures these results. The results for Fourth Regional Bank have also been added to the table. Readers should think through those results to test their understanding.
To determine the total amount of new deposits created, we need to add up all the entries in the last column, which represents new money created. The change in deposits (ΔD) can be represented as follows:
$\Delta D=(1-R) + (1-R)^2 + (1-R)^3 + (1-R)^4 +...$
We can then factor out (1-R), which makes it possible to solve for ΔD as follows:
$\Delta D = (1-R)\{1 + (1-R) + (1-R)^2 + (1-R)^3 + (1-R)^4+...\}$

$\Delta D=(1-R)(1 + \Delta D)$

$\Delta D=(1-R) + (1-R)\Delta D$

$\Delta D\{1-(1-R)\}=(1-R)$

$\Delta D=\frac{1-R}{1-(1-R)}=\frac{1-R}{R}=(1-R) \cdot \frac{1}{R}=\Delta E \cdot \frac{1}{R}$

The change in deposits is equal to the change in excess reserves (ΔE) times the simple deposit multiplier (1/R), which is also called the money multiplier.
This important result allows us to obtain an exact quantitative solution to the problem of the amount of deposits that can be created using the newly acquired reserves at the First Regional Bank. In that example, the reserve requirement is 10% and so the simple deposit multiplier is simply the reciprocal of the reserve requirement, or 1/0.10 = 10. The newly acquired excess reserves at the First Regional Bank are equal to $1,800. Although this bank can only create $1,800 of new money using these excess reserves, the entire banking system can create ten times this amount, or $18,000. The M1 money supply thus rises by $18,000 once the money multiplier process has run its course. In general, when a loan is granted, the M1 money supply curve shifts rightward in the money market, but it may shift by an amount that is much larger than the initial loan amount due to the multiple expansion of bank deposits.
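The geometric series above can also be checked numerically. The short Python sketch below is an illustration only, not part of the chapter's formal apparatus; it sums the successive rounds of lending that begin with the $1,800 of excess reserves and compares the result with the simple deposit multiplier.

```python
# Sum the successive rounds of new deposits created from $1,800 of excess
# reserves under a 10% reserve requirement, and compare with the 1/R multiplier.
R = 0.10          # reserve requirement
excess = 1800.0   # initial excess reserves at First Regional Bank

total_new_deposits = 0.0
round_amount = excess
while round_amount > 0.01:             # stop once new loans become negligible
    total_new_deposits += round_amount     # each loan creates a new checkable deposit
    round_amount *= (1 - R)                # the next bank can lend only its excess reserves

print(round(total_new_deposits, 2))    # approximately 18,000
print(excess * (1 / R))                # simple deposit multiplier: 1,800 x 10 = 18,000.0
```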
Two factors may weaken the money multiplier process. First, banks frequently choose to hold excess reserves to maintain liquid positions. Because these reserves exceed the minimum legal requirement, they reduce the impact of the money multiplier. If banks hold more reserves, then fewer funds are loaned out, and fewer new deposits are created. Second, if borrowers choose to hold some of their borrowed funds in the form of currency (or if the recipients of those borrowed funds do so when the borrowers pay them for commodities), then the money multiplier process weakens. Currency that is held and not redeposited in the banking system does not add to the reserves of banks and cannot be loaned or used to create new deposits. Because the tendencies of banks to hold excess reserves and of the public to hold currency are very real, the money multiplier that we developed in this section is almost certainly larger than the one we would observe in practice. Advanced money and banking textbooks develop a more complex money multiplier that considers the existence of excess reserves and the currency leakages that occur when the borrowing public decides to hold cash.
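The following sketch extends the previous one to suggest how these leakages shrink the expansion. The leakage rates used here, banks voluntarily holding an extra 5% of deposits as excess reserves and borrowers keeping 20% of each loan as currency, are hypothetical values chosen only for illustration, and the function name is ours.

```python
# Hypothetical illustration: deposit expansion with excess-reserve and currency leakages.
def total_new_deposits(initial_deposit, R, e=0.0, c=0.0, tol=0.01):
    """Sum the checkable deposits created by successive rounds of lending.

    R: legal reserve requirement
    e: assumed extra excess-reserve ratio voluntarily held by banks
    c: assumed fraction of each loan held as currency rather than redeposited
    """
    total, deposit = 0.0, initial_deposit
    while True:
        loan = deposit * (1 - R - e)   # the bank lends only what it does not hold as reserves
        if loan < tol:
            break
        total += loan                  # each loan creates a new checkable deposit
        deposit = loan * (1 - c)       # only the redeposited portion reaches the next bank
    return total

print(round(total_new_deposits(2000, R=0.10)))                  # ~18,000: the simple multiplier case
print(round(total_new_deposits(2000, R=0.10, e=0.05, c=0.20)))  # ~5,312: leakages weaken the expansion
```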
Clearly the activities of banks and depositors have a major influence on the M1 money supply. At the same time, neither banks nor depositors act with the intention of influencing the M1 money supply. According to neoclassical economics, banks are profit-seeking organizations and make decisions about loans and purchases and sales of securities without regard to the impact on the total stock of money in the economy. Similarly, depositors are utility-maximizing consumers and savers and make their decisions about the amount of money to hold without any concern for how their decisions might alter the money supply.

Because the state of the economy can sometimes cause banks and depositors to act in predictable ways, the M1 money supply might be subject to extreme fluctuations. For example, during an economic downturn, banks restrict their lending, which reduces the M1 money supply and may intensify a recession. During an economic upswing, banks expand their lending, which might sharply increase the M1 money supply and may cause inflation. Similarly, if depositors decide to hold more currency during a recession, then banks will have fewer reserves to lend, which can worsen a recession. If they decide to deposit more currency in savings deposits (and other non-M1 M2 assets) during an economic expansion, then banks will have more reserves and are likely to grant more loans, which will expand the M1 money supply overall through the multiple expansion of checkable deposits. The potential impact on the M1 money supply stemming from the activities of banks and the public raises the question as to whether the determination of M1 should be left to the private banking community. The history of bank panics and bank runs in the United States provides part of the justification for the establishment of a central bank to regulate the money supply. The role of the central bank in money supply regulation is the subject of Chapter 17.
A Marxist Theory of Commercial Banks and its Theory of Financial Markets [21]
The major difference between the neoclassical and Marxist theories of commercial banking and financial markets that we explore in this chapter is that Marxists assign a central role to class conflict in their analyses of commercial bank behavior and financial markets. The Marxist theory developed in this section is not one that Marx created, but the concepts that Marx shaped directly inspired it. We begin with a reminder about the general formula for capital that is discussed in Chapter 4. Symbolically, capital takes the form of money and commodities but becomes capital when it participates in the following movement:
$M-C-M'$
According to this symbolic representation, M' is equal to M+ΔM, where ΔM represents surplus value. The capitalist thus transforms a sum of money capital (M) into more money (M'). This circuit of money capital may be expanded, as described in Chapter 4, to show that the money capital is advanced for the purchase of labor-power (Lp) and means of production (mop). Figure 16.19 shows the expanded circuit of money capital.
After circulation is interrupted with the production phase of the process (P) in Figure 16.19, the finished commodity (C’) is then transformed into a larger sum of money capital (M’). The surplus value is thus created in production and realized in exchange after production is complete. The reader should recall that it is the exploitation of labor-power that makes surplus value production and realization possible.
Industrial capitalists are capitalists who hire workers to produce commodities containing surplus value. Other kinds of capitalist exist as well. These capitalists receive a share of the surplus value that industrial capitalists appropriate in exchange for supporting the capitalist production process. For example, commercial capitalists (or merchant capitalists) hire workers who do not produce new commodities but who help to market and sell commodities. Because they assist with the realization of surplus value, industrial capitalists are willing to provide them with a share of the surplus value created in production, which becomes commercial profit. Moneylending capitalists provide industrial capitalists with the money capital they require to purchase means of production and labor-power. Because they help make possible the production of commodities containing surplus value, industrial capitalists are willing to provide them with a share of the surplus value created in production, which becomes interest.
Marxists argue that these two forms of capital, merchant capital and interest-bearing capital, are the oldest forms of capital. Ironically, it is necessary to understand how industrial capital produces surplus value if one is to understand the place of merchant capital and interest-bearing capital in modern capitalist societies. To understand the reason for this ironic result, consider the formula for the circuit of interest-bearing capital:
$M-M-C-M'-M'$
This circuit involves the transfer of interest-bearing capital to an industrial capitalist in monetary form. This part of the circuit (M-M) constitutes the granting of a loan. The circuit of industrial capital then follows with commodities (C) purchased and sold for a larger sum of money capital (M’). The initial loan amount (M) is then returned to the moneylending capitalist along with interest (ΔM) since M’ = M + ΔM as before. It is important to note that only the interest payment is shown here, as opposed to the entire surplus value produced, because it is the circuit of interest-bearing capital that is under investigation. In fact, the surplus value is a larger sum and the interest only represents a portion of the surplus value. That interest only represents a fraction of the surplus value (or profit) is a claim that was made in Chapter 15. Competition in the loan capital market determines the specific share that is paid as interest.
When we introduce commercial banks into the analysis, an additional type of capital comes into play. Bank capitalists advance bank capital with the aim of grabbing a share of the surplus value produced in the industrial sector. The circulation of bank capital is represented in Figure 16.20.
Figure 16.20 shows that depositors deposit their cash holdings (MO) in a commercial bank, which become deposit capital (D). The bank capitalist also advances bank capital (B), which then splits into two parts. Part of the bank capital is needed for the purchase of labor-power (Lp) and means of production (mop) to operate the bank. This part of the bank capital is referred to as bank operating capital (BO). The labor-power that is hired to operate the bank is unproductive labor-power in the sense that these workers do not produce surplus value. Instead, they help to make the production of surplus value possible. The means of production that are purchased are not used to produce commodities containing surplus value. The means of production that a bank purchases include office supplies, computers, and electricity. For simplicity, it is assumed that all the capital is circulating capital, which allows us to omit fixed capital such as a banking facility.
It is essential to recognize that the bank operating capital ends its circulation at this stage. Because the means of production and labor-power are not used to produce valuable commodities, this capital does not return to the banking capitalist in this part of the circuit. Another part of the bank capital is advanced for lending, however, and may be referred to as bank loan capital (BL). It is combined with the deposit capital (D) to form the interest-bearing capital (M) described previously. The interest-bearing capital then passes through its circuit and returns with interest. The money capital that returns to the banking capitalist (M') is equal to the principal amount of the loan (i.e., the bank loan capital plus the deposit capital advanced) made at the beginning of the circuit (BL+D) plus the gross bank profit (ΔM).
$M'=B_{L} + D +\Delta M$
In this case, the gross bank profit must be sufficient to pay the operating expenses of the bank (BO) and return a net bank profit (ΔB) that is equal to the average profit in the economy.
$\Delta M=B_{O} +\Delta B$
Substituting the gross bank profit into the calculation of M’ yields the following result:
$M'=B_{L} + D + B_{O} + \Delta B$
In other words, because the bank operating capital does not complete its circuit in the lower portion of Figure 16.20, it must return with the gross bank profit. If insufficient profit is realized to cover expenses and generate the average profit, then capitalists will cease to invest in the banking sector.
To understand how this analysis of commercial bank activity relates to the Marxist theory of financial markets presented in Chapter 15, we will initially simplify the discussion with the assumption that all bank capital is bank loan capital (i.e., no bank operating capital exists). Given this assumption, the principal amount of the loan (M) is equal to B+D since the entire bank capital is granted as a loan. The gross bank profit (ΔM) in this case is equal to the net bank profit (ΔB) because the bank need not earn enough profit to cover operating expenses, which are non-existent. We can use equations to express what has been stated thus far.
$M=B + D$ $\Delta M=\Delta B$
In Chapter 15, we explored a Marxist theory of interest rates. If we use the concepts developed in Chapter 16 to write an expression for the rate of interest, then it will be possible to link the two analyses. The rate of interest (i) is simply the amount of interest received on a loan (ΔM) expressed as a percentage of the initial loan amount (M), as shown below:
$i=\frac{\Delta M}{M}=\frac{\Delta B}{B+D}$
At this stage, we only have a definition of the interest rate, and we lack a Marxist theory of its determination. To move in the direction of a theory, we divide the numerator and the denominator in the expression by the bank capital (B), which produces the following result:
$i=\frac{\Delta M}{M}=\frac{\Delta B}{B+D}=\frac{\frac{\Delta B}{B}}{1+\frac{D}{B}}$
To transform this definition of the rate of interest into a theory of the rate of interest, we need to recognize the numerator (i.e., the ratio of the net bank profit to the total bank capital) as the rate of profit on bank capital. In Marxian economics, competition among capitalists in an environment of unrestricted capital movements leads to an equalization of the profit rate throughout the economy. This rate of profit is referred as the general rate of profit (p). Of course, differences among profit rates across industries and capitalists always exist, and so the general rate of profit should be thought of as an average and the level towards which profits rates are constantly moving, even as unexpected factors cause deviations from this level.
Let’s assume that the general rate of profit (p) prevails in the banking sector. If bankers earn the general rate of profit, then we can substitute the general rate of profit for the profit rate expression in the numerator of our interest rate expression. Doing so provides us with an expression that we may refer to as the general rate of interest (ig).
$i_{g}=\frac{\Delta M}{M}=\frac{\Delta B}{B+D}=\frac{\frac{\Delta B}{B}}{1+\frac{D}{B}}=\frac{p}{1+\frac{D}{B}}$
The general rate of interest is the rate of interest that banks must charge borrowers to ensure that they earn the general rate of profit. It is a hypothetical rate of interest rather than the observable, market rate of interest (im). The market rate of interest is what banks earn on their loans. If the market rate of interest exceeds the general rate of interest, then banks earn more than the general rate of profit. If the market rate of interest is below the general rate of interest, then banks earn less than the general rate of profit.
The factors that determine the general rate of interest can be easily gleaned from its expression. If the general rate of profit (found in the numerator in the expression) rises, then the general rate of interest must rise to ensure that sufficient interest is received to guarantee that the bank earns the higher profit rate. If the general rate of profit falls, then the general rate of interest must fall because it takes less interest to ensure that the bank earns the lower profit rate. In Chapter 14 we learned that Marxian economists argue that the long-term tendency of the general rate of profit to fall in capitalist economies causes economic crises. Although various factors counteract this long-term tendency, it is a tendency that reasserts itself periodically. If the general rate of profit has a long-term tendency to fall and the general rate of interest is directly related to the general rate of profit, then the general rate of interest also has a long-term tendency to fall, subject of course to various counteracting tendencies.
One other factor that influences the general rate of interest is the deposit/bank capital ratio (D/B), which is found in the denominator of the expression. If the deposit/bank capital ratio rises, then the general rate of interest falls. This result is intuitive. A higher deposit/bank capital ratio means that banks are relying relatively more heavily on deposit capital when granting loans. Deposits are very low-cost liabilities for the bank, particularly checkable deposit liabilities that pay no interest, and so the banks require less interest to ensure that they receive the general rate of profit on bank capital. On the other hand, if the deposit/bank capital ratio falls, then banks are relying relatively more heavily on bank capital when granting loans and so must receive a higher rate of interest to ensure that the general rate of profit on bank capital is received.
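A quick numerical check of the formula may help. The Python sketch below uses hypothetical values for the general rate of profit and the deposit/bank capital ratio; it serves only to show the direction of the effect.

```python
# Sketch of the text's formula i_g = p / (1 + D/B) with hypothetical inputs.
def general_rate_of_interest(p, D, B):
    return p / (1 + D / B)

p = 0.25  # assumed general rate of profit
print(general_rate_of_interest(p, D=100, B=100))  # D/B = 1 -> 0.125
print(general_rate_of_interest(p, D=300, B=100))  # D/B = 3 -> 0.0625; a higher ratio lowers i_g
```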
It is one thing to assert that the general rate of interest is the rate of interest that will allow banks to earn the general rate of profit. It is another thing to assert that the market rate of interest will tend to move towards this level. To understand why the market rate of interest will gravitate towards the general rate of interest, consider what would happen if the market rate of interest was greater than the general rate of interest (im > ig). Figure 16.21 illustrates how capitalists in the industrial sector and in the financial sector will respond to this discrepancy.
As previously explained, in this situation, banks will earn a profit rate that exceeds the general rate of profit. With unrestricted capital flows, industrial capitalists will begin to transfer capital out of industry and into finance. The inflow of capital into finance will cause banks to lend more in the loan capital market. The increased supply of loan capital will push down the market rate of interest in the direction of the general rate of interest. The process will continue until the market rate of interest equals the general rate of interest at which point the inflow of capital into finance will cease because the bank rate of profit is equal to the industrial rate of profit.
If the market rate of interest is below the general rate of interest, then the process works in reverse. That is, the bank rate of profit will be below the industrial rate of profit. Capital will flow out of finance and into industry. As the outflow of capital from finance continues, the supply of loan capital in the loan capital market will shrink, which puts upward pressure on the market rate of interest. The outflow of capital from finance will continue until the market rate of interest equals the general rate of interest at which point the bank rate of profit will once again equal the industrial rate of profit.[22]
It is also possible to incorporate the reserve requirement (R) into the analysis. If banks are required to keep a fraction of their deposit liabilities (D) in the form of reserves, then they will only be able to lend their excess reserves, which are equal to 1-R times the deposit liabilities. The expression for the general rate of interest, considering the reserve requirement, thus changes to the following:
$i_{g}=\frac{\Delta M}{M}=\frac{\Delta B}{B+(1-R)D}=\frac{\frac{\Delta B}{B}}{1+\frac{(1-R)D}{B}}=\frac{p}{1+\frac{(1-R)D}{B}}$
Our earlier expression for the general rate of interest is a special case of this expression where the reserve requirement is zero (R=0). Previously, banks could advance the entire deposit capital (D). Now they can only advance the excess reserves of (1-R)D. Mathematically, it should be clear that if R rises, then the general rate of interest will rise. That is, if banks are required to hold more reserves and thus lend less, then they must earn a higher interest rate on their loans to ensure that they receive the general rate of profit. If R falls, then banks can lend more and the rate of interest they require to obtain the general rate of profit is lower.
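Again, a small numerical sketch with hypothetical inputs can confirm the direction of the effect and preview the bounds discussed next.

```python
# Sketch of i_g = p / (1 + (1-R)D/B) for several reserve requirements (inputs are hypothetical).
def general_rate_of_interest_rr(p, D, B, R):
    return p / (1 + (1 - R) * D / B)

p, D, B = 0.25, 300, 100
for R in (0.0, 0.10, 0.50, 1.0):
    print(R, round(general_rate_of_interest_rr(p, D, B, R), 4))
# R = 0.0 -> 0.0625 (the earlier special case with no reserve requirement)
# R = 0.1 -> 0.0676
# R = 0.5 -> 0.1
# R = 1.0 -> 0.25 (i_g equals the general rate of profit)
```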
It is also worth noting that the maximum value of the general rate of interest is the general rate of profit. The maximum value is reached when banks are required to hold 100% of their deposits as reserves (R=1). In that case, they only lend bank capital and so must receive an interest rate equal to the general profit rate if they are to earn that profit rate. The minimum value of the general rate of interest is the expression we discussed previously where the reserve requirement was equal to zero (R=0). It is the lower bound because the banks can lend all their deposit capital to borrowers. The upper and lower bounds of the general rate of interest may be expressed as follows:
$\frac{p}{1+\frac{D}{B}} \leq i_{g} \leq p$
A final modification that we can make to the general rate of interest expression involves the incorporation of bank operating capital (BO), which has been assumed to equal zero thus far. If bank operating capital is positive, then the bank must receive it as part of gross bank profit (ΔM). Also, if bank operating capital is positive, then the bank cannot lend its entire bank capital but only a portion that we call the bank loan capital (BL). Making these adjustments to the general rate of interest expression and dividing the numerator and denominator by the bank capital yields the following result:
$i_{g}=\frac{\Delta M}{M}=\frac{B_{O}+\Delta B}{B_{L}+(1-R)D}=\frac{\frac{B_{O}}{B}+p}{\frac{B_{L}}{B}+\frac{(1-R)D}{B}}$
This new expression for the general rate of interest shows that a rise in bank operating capital will raise the general rate of interest. This result is expected because higher operating costs mean that banks require a higher interest rate to cover these expenses and obtain the general rate of profit on bank capital. Also, because the total bank capital equals the sum of bank operating capital and bank loan capital, a rise in bank operating capital involves a reduction in bank loan capital, which also raises the general rate of interest. That is, when banks lend less, they require a higher rate of interest to obtain the same general rate of profit. Conversely, a reduction in bank operating capital will lower the general rate of interest because banks have lower expenses and can manage with a lower rate of interest. They also lend more and so can obtain the general rate of profit when they charge a lower interest rate.
When the market rate of interest and the general rate of interest are equal, we have seen that no capital movements will occur between industry and finance. The reason is that the profit rate in the banking sector is equal to the profit rate in the industrial sector, and so capitalists have no incentive to reallocate capital across sectors. The static situation described here can be illustrated with the numerical example in Table 16.3 that uses the final expression for the general rate of interest developed in this chapter.
The two-sector example shown in Table 16.3 treats all items except for the bold-faced items as given. The bold-faced items are calculated using the definitions in this chapter. For example, it is assumed that a 25% rate of profit prevails in the two sectors. Summing together the given bank operating capital and bank loan capital, we obtain the total bank capital. To obtain the borrowed capital in the industrial sector, we calculate the sum of the bank loan capital and the excess reserves, BL+(1-R)D, using the given required reserve ratio (R). The sum of the borrowed capital and the given non-borrowed capital in the industrial sector equals the total productive capital. The total industrial profit is calculated as the product of the industrial profit rate and the total productive capital. The general rate of interest is calculated using the final formula for the general rate of interest that we developed. The total interest paid in the industrial sector is the product of the general rate of interest and the borrowed capital. The total annual social product is the sum of the total productive capital and the total industrial profit. The gross bank profit is the same as the interest that industrial capitalists pay to the banking capitalists. The net bank profit is the gross bank profit minus the bank operating capital. The aggregate profits are the sum of the net bank profits and the industrial profits. The total social capital is the sum of the total productive capital and the total bank capital. The gross general rate of profit is the ratio of the aggregate profits to the total social capital. The reader should carry out these calculations to test his or her understanding of the relationships. Table 16.4 provides formulas for each of the calculations.
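Readers who prefer to check the chain of calculations computationally can use a sketch like the one below. The input values are hypothetical (they are not the values in Table 16.3) and the variable names are chosen only for illustration; the point is that the bold-faced items follow mechanically from the given items and that banks end up earning the assumed general rate of profit.

```python
# Hypothetical two-sector example following the calculation chain described for Table 16.3.
p   = 0.25    # general rate of profit assumed in both sectors
R   = 0.10    # required reserve ratio
D   = 200.0   # deposit capital
B_O = 20.0    # bank operating capital
B_L = 80.0    # bank loan capital
K_N = 500.0   # non-borrowed capital in the industrial sector (assumed)

B = B_O + B_L                                        # total bank capital
borrowed = B_L + (1 - R) * D                         # borrowed capital in the industrial sector
productive = borrowed + K_N                          # total productive capital
industrial_profit = p * productive                   # total industrial profit
i_g = (B_O / B + p) / (B_L / B + (1 - R) * D / B)    # general rate of interest
interest = i_g * borrowed                            # total interest paid by industrial capitalists
gross_bank_profit = interest
net_bank_profit = gross_bank_profit - B_O
aggregate_profit = industrial_profit + net_bank_profit
social_capital = productive + B

print(round(i_g, 4))                                 # ~0.1731
print(round(net_bank_profit / B, 4))                 # 0.25: banks earn the general rate of profit
print(round(aggregate_profit / social_capital, 4))   # 0.25: the gross general rate of profit
```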
To summarize the analysis in this section, the exploitation of labor-power in the industrial sector generates the surplus value that is appropriated as profit in the industrial sector and then as interest in the banking sector. The banking sector is parasitic in that it does not produce surplus value but helps make the production of surplus value possible when it grants loans to the industrial capitalists who use the funds to purchase labor-power and the means of production. To ensure the continuation of such loans, the industrial capitalists allow the banking capitalists to share in the surplus value that they appropriate from the working class.
It is helpful to reflect on the relationship between this Marxian analysis of financial markets and the analysis of the rate of interest in Chapter 15. In Chapter 15, a theory is presented that shows how the market rate of interest fluctuates over time as the level of production changes. That is, the business cycle is assumed in that analysis, and the rate of interest fluctuates due to discrepancies between the supply and demand for loanable money capital. In Chapter 16, however, the business cycle is not responsible for changes in the market rate of interest. It is capital’s search for the highest rate of profit that is responsible for adjustments of the market rate of interest to the general rate of interest. Even if we assume an abstract situation in which the level of production is stable and the business cycle is not operative, the search for a maximum profit rate still applies, and capital will flow between industry and finance accordingly. The general rate of interest then is the average rate of interest towards which the market rate of interest moves. Deviations from this level are due to discrepancies in the loanable money capital market over the course of the business cycle.
Following the Economic News [23]
A recent news article in USA TODAY explains that the biweekly pay schedule that affects millions of American workers is an antiquated practice and burdensome workplace norm. Tim Chen explains that digital banking has made it possible to pay workers more quickly for the work they perform. Because many workers are living paycheck to paycheck, Chen explains that approximately "12 million Americans use payday loans to cover emergencies and living expenses at effective interest rates that exceed 300 percent." For neoclassical economists, these loans increase the welfare of all involved. Lenders transfer funds from those who do not have a current use for them to those who do. Such transfers of funds improve efficiency and social well-being. It is an example of Adam Smith's invisible hand applied in the financial marketplace. Both borrower and lender gain as they each serve the other's interest while only thinking of their own interest.

From a Marxian perspective, payday loans force workers into a debt trap. Capitalists withhold wages for half the month, which pressures workers to borrow at very high interest rates. As Chen explains, when their paychecks are finally deposited in their bank accounts, many workers are not able to repay their payday loans, which leads to additional borrowing, more fees, and a continuous cycle of debt accumulation. Chen argues that loan fees cost Americans $9 billion annually with an additional $14 billion annually for overdraft fees. Chen explains that millions of Americans also lack bank accounts, which leads them to use "costly check-cashing services to get same-day access to their funds." Chen cites the Brookings Institution, which claims that interest payments and fees can amount to $40,000 during a worker's lifetime. Because these fees and interest payments are paid from workers' wages, they represent an example of how financial institutions can recover some of the income that is distributed to workers for the necessary labor they perform. Just as industrial capitalists appropriate surplus labor in the form of profit, the financial capitalists can appropriate a portion of the necessary labor of workers in the form of interest. Neoclassical economists, on the other hand, view the fees and interest payments as compensation to lenders for their willingness to grant risky loans to people who might not be able to repay them. Without such compensation, neoclassical economists argue, no lenders will have an incentive to grant these loans and both borrowers and lenders will be harmed. Using different theoretical perspectives, we reach different conclusions about the desirability of specific types of market transactions.

Summary of Key Points

1. The three functions of money in neoclassical theory are medium of exchange, unit of account, and store of value.
2. The double coincidence of wants problem exists in barter economies because commodity owners must be able to find other commodity owners who possess the commodities they want and who are willing to exchange those commodities for what they own.
3. Commodity money is an object with intrinsic value, convertible paper money is paper that is convertible into commodity money, and inconvertible paper money is paper that is not convertible into anything else.
4. The M1 money supply includes publicly held currency and checkable deposits; the M2 money supply includes M1 and savings deposits, small time deposits, MMDAs, and MMMFs; the M3 money supply includes M2 and large time deposits.
5. The Ma definition of the money supply is like the M2 definition of the money supply except that: 1) it includes time deposits and federal savings bonds at current redemption rates; 2) it includes the cash surrender values of life insurance policies; and 3) it subtracts the demand deposits of thrift institutions to avoid double counting.
6. In a 100% reserve banking system, the entire money supply is backed up with bank reserves whereas in a fractional reserve banking system, only a fraction of the money supply is backed up with bank reserves.
7. The balance sheet of a bank lists its assets, liabilities, and net worth, and the total assets must always equal the sum of the liabilities and net worth.
8. Securities, loans, and cash reserves are the most important assets for banks. Deposits and borrowings are the most important liabilities for banks.
9. Successful banks must manage their assets, liabilities, degree of liquidity, and bank capital simultaneously.
10. Banks are legally required to hold a specific fraction of their deposits as reserves, which is referred to as the reserve requirement.
11. Commercial banks increase the M1 money supply when they grant loans to borrowers, purchase bonds from bond dealers, and allow transfers from savings accounts into checking accounts or currency. The opposite actions reduce the M1 money supply.
12. When banks acquire new reserves, the rounds of successive lending that are initiated increase the M1 money supply by a multiple of the initial increase in bank reserves.
13. Whereas industrial capitalists appropriate the surplus value that their workers produce, commercial capitalists, moneylending capitalists, and bank capitalists receive a share of the surplus value that industrial capitalists appropriate.
14. The general rate of interest is the rate of interest that prevails when the rate of profit in the financial sector is the same as the rate of profit in the industrial sector.

List of Key Terms

Medium of exchange
Barter economy
Double coincidence of wants problem
Unit of account
Store of value
Commodity money
Convertible paper money
Inconvertible paper money
Fiat money
Digital currencies
Mining
Money supply
Money stock
Liquidity
Currency
M1 money supply
Thrift institutions
Savings and loan associations
Credit unions
M2 money supply
Certificates of deposit (CDs)
Money market mutual funds (MMMFs)
Money market deposit accounts (MMDAs)
M3 money supply
Cash surrender values
Austrian definition of the money supply (Ma)
100% reserve banking system
Bank run
Fractional reserve banking system
Balance sheet
Assets
Liabilities
Net worth (equity)
Balance sheet equation
Insolvent
Securities
Loans
Default
Loan losses
Cash reserves
Deposits
Borrowings
Federal funds market
Federal funds rate
Discount loans
Discount rate
Bank capital
Reserve requirement
Excess reserves
Multiple expansion of bank deposits
Simple deposit multiplier
Money multiplier
Industrial capitalists
Commercial capitalists
Commercial profit
Moneylending capitalists
Merchant capital
Interest-bearing capital
Bank capitalists
Bank operating capital
Unproductive labor-power
Bank loan capital
Rate of interest (i)
General rate of profit (p)
General rate of interest (ig)

Problems for Review

1. Can you think of an asset that is a good store of value but not a good unit of account or medium of exchange? Can you think of an asset that is a good unit of account but not a good medium of exchange or store of value?
Can you think of an asset that is a good medium of exchange but not a good unit of account or store of value?

2. Suppose that you are presented with the following information:

• Savings deposits = $9 trillion
• Cash surrender values of life insurance policies = $2.5 trillion
• Checkable deposits = $3 trillion
• Large time deposits (face value) = $1.5 trillion
• Small time deposits (face value) = $4 trillion
• Large time deposits (current redemption value) = $1 trillion
• Publicly held currency = $1.5 trillion
• Savings bonds (current redemption value) = $3 trillion
• MMDAs = $3.5 trillion
• MMMFs = $2.5 trillion
• Small time deposits (current redemption value) = $3.5 trillion
• Demand deposits that thrift institutions hold in commercial banks = $1 trillion

Calculate the M1 money supply. Calculate the M2 money supply. Calculate the M3 money supply. Calculate the Ma money supply.

3. Suppose that you are presented with the following information:

• A bank has $25 billion in cash reserves.
• A bank has issued checkable deposits of $41 billion.
• A bank has borrowings of $62 billion.
• A bank has issued savings deposits of $42 billion.
• A bank has $48 billion in securities.
• A bank has $66 billion in loan assets.

Create a balance sheet for the bank. Determine the level of bank capital. Does the balance sheet equation hold with the answer you have provided? What is the bank's situation like?

4. Suppose that a depositor decides to transfer $3000 from a checking account to a savings account. Create a balance sheet for the bank, and show how the balance sheet will be affected due to this change. Only list the changes on your balance sheet. Then explain what will happen to the M1 and M2 money supplies.
5. Suppose a bank purchases $60,000 in bonds from a bond dealer who has a checking account at the bank. Create a balance sheet for the bank, and show how the balance sheet will be affected due to this change. Only list the changes on your balance sheet. Then explain what will happen to the M1 and M2 money supplies.

6. Suppose that a borrower repays a loan in the amount of $80,000. Create a balance sheet for the bank, and show how the balance sheet will be affected due to this change. Only list the changes on your balance sheet. Then explain what will happen to the M1 and M2 money supplies.

7. Suppose that a borrower makes a cash withdrawal of $4000 from a savings account. Create a balance sheet for the bank, and show how the balance sheet will be affected due to this change. Only list the changes on your balance sheet. Then explain what will happen to the M1 and M2 money supplies.

8. Suppose banks acquire $50 billion in new reserves, and the reserve requirement ratio is 6%. What will be the impact on the total deposits in the system, assuming all excess reserves are loaned to borrowers and the public redeposits all the borrowed funds in the banking system?
9. Suppose that you know the following information about the banking system. Use the information to complete the rest of the table.
1. The three functions of money discussed in this section are universally recognized in neoclassical textbooks.
2. Mishkin (2006), p. 46, emphasizes the reduction in search time as an efficiency-enhancing characteristic of money.
3. Mishkin (2006), p. 46, also refers to the promotion of specialization as a second efficiency-enhancing characteristic of money.
4. See Marx (1990), p. 227-244, for a detailed account of these functions.
5. This example is found in OpenStax (2014), p. 617.
6. The Federal Reserve concluded that “M3 does not appear to convey any additional information about economic activity that is not already embodied in M2” (Federal Reserve Statistical Release. H.6 Money Stock Measures: Discontinuance of M3. November 10, 2005. Revised March 9, 2006. Web. Accessed on January 20, 2018. https://www.federalreserve.gov/Releases/h6/discm3.htm).
7. Rothbard (1978), p. 151.
8. Ibid., p. 145.
9. Ibid., pp. 146-148.
10. Ibid., p. 148.
11. Ibid., p. 149.
12. Ibid., p. 150.
13. Ibid., p. 150.
14. Ibid., p. 151.
15. Ibid., p. 151.
16. Ibid., pp. 151-152.
17. This imaginative approach is inspired by the goldsmith story that is found in neoclassical textbooks. See Even, Louis. "The Goldsmith Who Became a Banker - A True Story." Cahiers du Crédit Social. October 1936.
18. It is common for neoclassical textbooks to present a consolidated balance sheet for commercial banks and then discuss the different balance sheet items. For example, see Samuelson and Nordhaus (2001), p. 522, and Mishkin (2006), p. 201-205. Mishkin (2006), p. 201-205, and Hubbard and O’Brien (2019), p. 872-874, provide detailed discussions of each balance sheet item, which are like the approach of this section.
19. Frederic Mishkin (2006), p. 208, identifies the corresponding types of management for these objectives as asset management, liability management, liquidity management, and capital adequacy management.
20. A higher demand for loanable funds and a higher supply of bonds is also expected, but we emphasize the other shifts because their larger impact brings about contractions of the financial markets consistent with the monetary contraction.
21. The theory presented in this section draws from the following source: Saros, Daniel E. "The Circulation of Bank Capital and the General Rate of Interest." Review of Radical Political Economics, 45 (2), Spring 2013: 149-161. The final, definitive version is available at online.sagepub.com/. journals.sagepub.com/doi/full/10.1177/0486613412458647
22. The analysis presented here is a simplification. A complicating factor is that when capital flows into or out of finance, the level of bank capital changes, which alters the general rate of interest as well. I have explained how to analyze the simultaneous movements in the general rate of interest and the market rate of interest in Saros (2013), p. 158-159.
23. Chen, Tim. "Pay workers in real time, end debt cycle; Help people access their earnings and avoid high interest payday loans." USA TODAY. First Edition. 25 Oct. 2018.
Goals and Objectives:
In this chapter, we will do the following:
1. Identify the organizational structure and functions of the Federal Reserve
2. Analyze the tools that the central bank uses to influence the financial markets
3. Examine the neoclassical approach to monetary policy using an exogenous money supply
4. Inspect the connection between the Quantity Theory of Money and the AD/AS Model
5. Explore a Post-Keynesian approach to monetary policy using an endogenous money supply
6. Investigate a Marxian theory of fiat money and relate it to U.S. economic history
7. Link the monetary policy tools to the Marxian theory of financial markets
In Chapter 16, we explored different methods of measuring the quantity of money and how commercial banks influence those different measures. We also looked at how commercial banking activities influence the financial markets from neoclassical and Marxian perspectives. In this chapter, a major purpose is to explore the role of the central bank in the determination of the money supply. All modern capitalist economies have a central bank and so an analysis of modern banking systems must incorporate its role into any meaningful analysis. Once we understand the role the central bank plays in the determination of the money supply, we will then be able to understand how the central bank influences the overall economy. Of course, our understanding depends on our theoretical lens, and so we will explore this question from neoclassical, Post-Keynesian, and Marxian perspectives.
T he Organizational Structure and Functions of the Federal Reserve
Modern capitalist nations have central banks that are responsible for the management of the money supply and the conduct of monetary policy. Monetary policy refers to the use of money supply changes to influence aggregate production, the aggregate price level, and the level of unemployment. In Japan, the central bank is the Bank of Japan. In England, the central bank is the Bank of England. In the eurozone (i.e., in the European nations that have adopted the euro), the central bank is the European Central Bank. In the United States, the central bank is the Federal Reserve (also known as the Fed).
The Federal Reserve was created by an Act of Congress in 1913. After the 1907 financial crisis, private bankers recognized that some central control of the money supply was needed to prevent bank panics from becoming major financial crises. The notion of a central bank that could intervene during a bank panic as a lender of last resort to stabilize the financial markets was perceived to be necessary. Rather than centralize financial power in a single central bank, however, a compromise was struck that created 12 Federal Reserve Banks in 12 distinct Federal Reserve districts throughout the nation. The 12 Federal Reserve Banks are in the following cities:
1. Atlanta
2. Boston
3. Chicago
4. Cleveland
5. Dallas
6. Kansas City
7. Minneapolis
8. New York
9. Philadelphia
10. Richmond
11. San Francisco
12. St. Louis
Each Federal Reserve Bank has a Board of Directors and a Board-appointed Federal Reserve Bank President. In addition to the 12 Federal Reserve Banks, the Federal Reserve System has its headquarters in Washington, DC. A Federal Reserve Board of Governors, which consists of appointed officials, governs the entire system. The Board of Governors consists of seven members that the President of the United States appoints for 14-year staggered terms subject to U.S. Senate confirmation. The staggering of the terms guarantees that one Governor's term expires every two years, which ensures a significant degree of continuity on the Board. If, for example, the terms of all seven Governors expired in the same year, then an entirely new Board would be appointed that year, and it would be difficult for the new Governors to benefit from the knowledge and experience of past Governors. The President of the United States also appoints one Governor to serve as Chair of the Federal Reserve Board and another Governor to serve as Vice Chair. The terms of Chair and Vice Chair are four-year, renewable terms. Generally, when the term of a Chair expires, they are either reappointed or they leave the Board altogether. The position of Federal Reserve Board Chair is the most powerful position in the Federal Reserve system. The Fed Chair serves as the spokesperson for the Fed. Past Chairs have included Alan Greenspan, Ben Bernanke, and Janet Yellen. The Fed Chair in 2018 is Jerome Powell.
A second governing body within the Federal Reserve system is the Federal Open Market Committee (FOMC). The FOMC consists of 12 members, including the seven members of the Board of Governors, the President of the New York Federal Reserve Bank, and four other Federal Reserve Bank Presidents who serve on a one-year, rotating basis. The FOMC meets about every 6 weeks at the Federal Reserve headquarters in Washington, DC and is responsible for conducting all Fed interventions in the bond market. Federal Reserve bond market interventions are conducted through the New York Fed, which is the reason that the New York Fed President is the only permanent member of the FOMC. The New York Fed is important for several other reasons, as Frederic Mishkin explains, including its role as a gold repository, its conduct of foreign exchange market interventions, and its proximity to the largest financial institutions in the world.[1]
The legal status of the Fed is also somewhat unusual. It is regarded as a quasi-public institution in that it is privately owned but publicly controlled. The owners of the Fed are the private commercial banks within the Federal Reserve system. The banks own stock in the Fed and may receive profit distributions each year if the Fed earns a profit, subject to maximum limits. The legal mandate of the Fed is not to make a profit, however, but to serve the public interest.[2] Control of the system rests with public officials, and so it is neither a purely private nor a purely public institution.
Because the officials that control the Federal Reserve system are appointees with long terms in office, the Fed operates with a considerable degree of independence. That is, elected officials have relatively little control of Fed policy. Of course, Congress can change the Federal Reserve Act or even abolish the Fed if it repeals that legislation, but such changes require major legislative action. As the law stands, elected officials can do little to alter monetary policy aside from waiting until new Governors are appointed to the Federal Reserve Board. The notion of central bank independence is a controversial one. Its strength is simultaneously its weakness. Independence allows the Fed to pursue politically unpopular policies that serve the public interest, but the lack of democratic control of these decision makers makes many people uncomfortable as it seems to conflict with the principles of American democracy. It does not seem possible to resolve this question to the satisfaction of all parties. In any case, the general trend throughout the world in recent decades has been towards greater central bank independence to avoid harmful economic policies, especially those that encourage inflation.
The Fed performs several key functions. Almost all Fed functions can be inferred from the elements on its balance sheet. The one function of the Fed that is not clearly reflected in the Fed’s balance sheet is its role as a supervisor of banks. The Fed shares this responsibility with other state and federal government agencies, which involves monitoring levels of bank capital, reserves, and risk. Figure 17.1 shows the balance sheet of the Federal Reserve. The figure combines all assets, liabilities, and net worth for all 12 Federal Reserve Banks.
On the asset side of the Fed’s balance sheet, we see loans to commercial banks. The loans to commercial banks represent the original function of the Fed as lender of last resort. During a financial panic, banks cease to lend due to fear that borrowers will default on those loans. By granting loans to banks, the Fed can help restore confidence in the financial system. Banks will feel reassured that the banks to whom they lend will receive assistance from the Fed, if necessary, so that they can repay their loans with interest. Because the loans that the Fed grants to banks are interest-bearing loans, they are assets for the Federal Reserve. The reader might be surprised that the loan assets of the Fed only amount to $54 million, which is a small sum in comparison with other items on the asset side of the balance sheet. Because the U.S. economy is not experiencing a financial crisis, the lender of last resort role of the Fed is not especially necessary. Another important reason for the low figure is that the Fed encourages banks to borrow from other banks in the federal funds market. It does so by charging an interest rate on loans to banks that is higher than the federal funds rate. Therefore, banks voluntarily choose to borrow from one another rather than from the Fed.

The Federal Reserve is also an issuer of currency. It is important to note that the Fed does not print money, which is a common misconception. The Fed does not print U.S. dollars. It is the Bureau of Engraving and Printing within the U.S. Department of the Treasury that is responsible for that duty. Once the dollars are printed, however, a Federal Reserve Bank must issue them. If you look at a U.S. dollar, you can see which Federal Reserve Bank issued your bill. You will also see the words “Federal Reserve Note” at the top of your U.S. dollar. In fact, Federal Reserve Notes are U.S. dollars. The function of the Fed as an issuer of currency also shows up on the liabilities side of the Fed’s balance sheet. The Federal Reserve Notes (outstanding) entry on the balance sheet shows how many U.S. dollars have been issued.

It might seem strange that Federal Reserve Notes are treated as a liability of the Fed. If the Fed notes represented convertible paper money, then it would be easy to understand why they are treated as a liability. They would represent a claim to so much gold or silver on the asset side of the Fed’s balance sheet. Because U.S. dollars are fiat money, they do not represent a claim to gold, but from a conceptual point of view, you can think of them as representing a claim to the assets of the Fed even though they are not directly redeemable. Incidentally, the Fed does own gold and other precious metals, which are included in the Other category on the asset side of the Fed’s balance sheet.

The reader might be puzzled that coins are found on the asset side of the Fed’s balance sheet even as U.S. dollars are treated as a liability of the Fed. The reason is that while Federal Reserve Banks issue U.S. dollars, the Fed does not issue coins. The U.S. Treasury mints and issues coins. Therefore, coins are a liability of the Treasury, and they are treated as an asset when the Fed acquires them.

The Fed also serves as a banker for commercial banks. This role is reflected in the reserves of commercial banks, which are a liability on the Fed’s balance sheet. The Fed owes these reserves to the banks that transfer currency and other assets to it.
For example, if the Fed buys securities from a commercial bank, then it may pay the bank by crediting the bank’s reserve account. This operation increases the Fed’s assets and liabilities. It reduces the bank’s assets because it loses securities, but it also increases its assets by an equal amount because it gains reserves. Alternatively, the bank might transfer currency to the Fed, which leaves the Fed’s liabilities unchanged. It reduces the Fed’s liabilities because its outstanding Federal Reserve Notes decline, but its liabilities increase by an equal amount as it credits the bank’s reserve account. The bank loses currency but gains reserves and so its assets remain the same overall. For many years, banks were not paid interest for the reserves held at the Fed. In 2008, this policy changed, and banks have since earned interest on their Fed reserve holdings.

The Fed also serves as a processor of checks as explained in Chapter 16. When an account holder writes a check against a checkable deposit, the bank that receives the check sends it to the Fed for processing. The Federal Reserve Bank in that district increases the reserves of the recipient bank and debits the reserves of the bank with the account against which the check is drawn. The changes to bank reserves do not occur immediately, however, and so the reserves of the recipient bank increase before the reserves of the other bank are debited. During this period, the Fed acquires a new asset referred to as items in process of collection. The items in process of collection on the Fed’s balance sheet thus reflect this function of the Fed.

The Fed also serves as a banker for the federal government. Individuals and firms cannot open bank accounts at Federal Reserve Banks. Only commercial banks in the Federal Reserve system and the U.S. government have this privilege. The U.S. Treasury collects hundreds of billions of dollars in tax revenue each year. These funds must be deposited somewhere, and the U.S. Treasury account at the Fed is where these funds are deposited. U.S. Treasury deposits are a liability for the Fed because the Fed owes the Treasury once these deposits are received. If they are transfers from commercial banks, then the Fed’s reserve liabilities decline and its Treasury deposit liabilities increase by an equal amount.

One other asset that appears on the Fed’s balance sheet is foreign currency-denominated assets. Such assets reflect the Fed’s history of intervention in foreign exchange markets. The Fed has the power to intervene on a massive scale in foreign currency markets. By doing so, it can alter the foreign exchange values of the U.S. dollar and other currencies for which it trades. A deliberate manipulation of the U.S. exchange rate can make U.S. exports (or U.S. imports) cheaper or more expensive. Such changes can have an impact on aggregate spending in the U.S. and thus aggregate output, employment, and the price level as discussed in Chapter 13. In an era of floating exchange rates, however, the Fed is not actively involved in managing the foreign exchange value of the U.S. dollar even though it retains the power to do so.

The Primary Central Bank Tools of Monetary Policy

The last function of the Fed that we can observe in the Fed’s balance sheet is its function as regulator of the money supply. This regulatory function includes the three primary tools of monetary policy.[3] The first monetary policy tool, and the tool that has been the most important in recent decades, is open market operations.
Open market operations refer to the Fed’s bond market interventions (i.e., the purchases and sales of government securities). When the Fed purchases government bonds from commercial banks, it pays the banks with reserves (i.e., it credits their reserve accounts). The additional reserves can be used to grant loans, which expands the money supply. Alternatively, if the Fed sells government securities to commercial banks, then banks will pay for the bonds using reserves (i.e., the Fed debits their reserve accounts). Faced with dwindling reserves, banks contract their lending, and the money supply falls. Hence, the securities and reserves items on the Fed’s balance sheet represent the open market operations of the Fed and thus its role as regulator of the money supply.

A second monetary policy tool of the Fed is discount lending. Discount loans are loans that the Fed grants to commercial banks. Sometimes these loans are granted during financial crises and are consistent with the lender of last resort function. Sometimes the loans are granted during periods of economic and financial stability but to banks that are nevertheless in financial trouble. When the loans are granted, the Fed adds to the banks’ reserve accounts. The banks that receive these reserves can then grant additional loans to borrowers, which expands the money supply. Although the Fed cannot force banks to borrow, the Fed can reduce the discount rate, which is the interest rate charged on discount loans. Because the loans are cheaper when the discount rate falls, banks are more inclined to borrow from the Fed. Alternatively, if the discount rate increases, then banks are less inclined to borrow from the Fed, and discount lending and bank lending contract, which reduces the money supply. Hence, the loans to commercial banks and the reserves of commercial banks on the Fed’s balance sheet reflect the Fed’s role as regulator of the money supply.

A third monetary policy tool of the Fed is the reserve requirement ratio (R). When the Fed lowers the reserve requirement ratio, commercial banks are not required to hold as many reserves. Commercial banks thus have more excess reserves, which they lend to borrowers, thereby expanding the money supply. Alternatively, if the Fed raises the reserve requirement ratio, then banks have fewer excess reserves, and they need to hold more reserves in their accounts with the Fed. To do so, they contract their lending, which reduces the money supply. Hence, the reserves of commercial banks on the Fed’s balance sheet reflect the Fed’s role as regulator of the money supply.

To see the impact of each of these monetary policy tools on the money supply, just consider the balance sheets of a commercial bank and the Fed. When the Fed engages in open market operations, its bond market purchases and sales influence the reserves in the banking system. For example, suppose that the Fed buys $8,000 in bonds from commercial banks. The impacts on the Fed’s balance sheet and the commercial bank’s balance sheet are shown in Figure 17.2.
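To get a feel for the scale of such an operation, the short calculation below sketches the maximum possible expansion of checkable deposits that could follow from the $8,000 bond purchase. It is an illustration only: it assumes the simple deposit multiplier (1/R), with no excess-reserve holding or currency drain, and the 10% reserve requirement ratio is purely hypothetical.

```python
# A hedged illustration (not from the text): the maximum expansion of checkable
# deposits following an $8,000 open market purchase, assuming the simple deposit
# multiplier 1/R with no excess-reserve holding or currency drain.

def max_deposit_expansion(new_reserves, reserve_requirement):
    """Upper bound on new checkable deposits created from an injection of reserves."""
    return new_reserves * (1 / reserve_requirement)

new_reserves = 8_000      # reserves credited to banks by the Fed's bond purchase
R = 0.10                  # hypothetical reserve requirement ratio

print(max_deposit_expansion(new_reserves, R))   # 80000.0 -> up to $80,000 in new deposits
```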
In the case of an increase in the reserve requirement, the bank finds itself with insufficient reserves. To increase the ratio of bank reserves to deposit liabilities, the bank has several options, but one option is to allow borrowers to repay loans while denying them new loans. As borrowers repay loans, the bank’s checkable deposit liabilities will decline. If the process goes far enough, then the bank’s reserves will provide sufficient backing for the checkable deposits that it has issued. The contraction of checkable deposits will lead to a contraction of lending throughout the banking system and to a multiple contraction of bank deposits. The result is a reduction in the M1 money supply.
We have now seen how the Fed’s three monetary policy tools influence the M1 money supply. The Fed’s monetary policy can be described as either expansionary or contractionary. If the Fed uses its policy tools to increase the money supply, then its monetary policy is described as an expansionary monetary policy or as an easy monetary policy. If, on the other hand, the Fed uses its policy tools to reduce the money supply, then its monetary policy is described as a contractionary monetary policy or as a tight monetary policy.
The Neoclassical Approach to Monetary Policy Using an Exogenous Money Supply
Let’s consider how a neoclassical economist analyzes the impact of changes in the money supply on the financial markets and the economy. Consider the case of expansionary monetary policy first. If the Fed acts to expand the M1 money supply, then it must choose one of the three following policy actions:
1. Purchase bonds from commercial banks
2. Reduce the discount rate
3. Lower the reserve requirement ratio
Each of these actions will increase the excess reserves available to commercial banks. Using the excess reserves, banks will increase their lending, which will create more checkable deposits and expand the M1 money supply.
The impact of the Fed’s increase in the money supply is essentially the same as the impact of an increase in the money supply that commercial banks entirely initiate, as described in Chapter 16. Indeed, the Fed is only able to increase the money supply because it encourages banks to lend more. The Fed’s role is different, however, in that it has the legal authority to alter the money supply in a deliberate way and on a massive scale. Each commercial bank, on the other hand, is focused on making a profit and only alters the money supply unintentionally and on a small scale.
The impact on financial markets of the Fed’s monetary expansion is shown in Figure 17.8.
As Figure 17.8 shows, the money supply curve shifts rightward as banks lend more and create more checkable deposits. This shift creates a surplus of money in the money market. That is, firms and households are holding more money than they wish to hold. Consequently, they lend the surplus funds in the loanable funds market, which is equivalent to the purchase of additional bonds in the bond market. In the loanable funds market, the increase in supply creates a surplus of loanable funds. Competition in the loanable funds market drives down the rate of interest to clear that market. Similarly, the higher demand for bonds creates a shortage of bonds in the bond market. Competition in the bond market drives up the price of bonds until the bond market clears. Simultaneous equilibrium thus occurs in all three markets. The Fed’s expansionary monetary policy has pushed interest rates down and bond prices up.[5]
The Fed is primarily interested in promoting increased output and employment as well as a stable price level. If it has made the decision to increase the money supply, then it must believe that the economy is operating at less than full employment. Otherwise, a monetary expansion would be purely inflationary. Figure 17.9 shows the impact of an easy money policy in the case of an economy operating below full employment.
As Figure 17.9 shows, when the supply of money increases, the increased money supply leads to a surplus of money in the money market, which is loaned to borrowers in the loanable funds market. Interest rates fall as described previously. The reduction in interest rates then has an impact on the real economy. The type of aggregate spending that is most sensitive to changes in interest rates is investment spending. When interest rates fall, businesses can obtain loans more cheaply, which they use for the purchase of new capital equipment and new structures (e.g., office buildings, factories, production plants). Home buyers also can obtain mortgage loans more cheaply, which they use to buy new homes. The increase in business investment and residential fixed investment increases aggregate expenditures in the economy causing the A curve to shift upwards in the Keynesian Cross model. The consequence is a rise in the equilibrium level of real GDP.
The level of aggregate demand (AD) also increases. The reader should notice that the rise in AD puts upward pressure on the price level. As the economy approaches full employment, the resulting diminishing returns to labor and the use of less efficient resources push unit costs upward, and so firms are forced to raise prices. Therefore, real GDP rises only to Y2 rather than all the way to Y3. The reason is that the rise in the price level prevents aggregate expenditure from rising quite as much due to the wealth effect, the international substitution effect, and the interest-rate effect described in Chapter 13. In any case, the Fed’s monetary policy has expanded the level of real GDP, but it has also caused some demand-pull inflation.
The impact on investment spending is not the only impact that the Fed’s monetary expansion has on aggregate spending. Figure 17.10 shows a second kind of impact that the Fed’s monetary expansion has on the economy.
In Figure 17.10, we can see that the increase in the money supply puts downward pressure on the rate of interest in the money market as previously described. Foreign investors are likely to be affected when they see that the return on interest-bearing assets in the United States is lower. Specifically, they will be less likely to purchase assets, such as U.S. savings deposits, CDs, and bonds. Because they purchase fewer U.S. interest-bearing assets, they will also purchase fewer U.S. dollars in the foreign exchange market, which are needed to purchase U.S. goods and assets. The reduction in demand for U.S. dollars in the foreign exchange market will cause the foreign exchange value of the U.S. dollar to fall. That is, the U.S. dollar will depreciate.
The depreciation of the U.S. dollar makes U.S. goods relatively cheaper in international commodity markets, which stimulates U.S. exports. The increase in U.S. exports raises aggregate expenditure and boosts real GDP as shown in the Keynesian Cross model. As in the case of a boost to investment spending, the increase in aggregate demand pushes up the price level and prevents real GDP from rising as much as is shown in the Keynesian Cross model. The closer the economy is to full employment, the more the price level will rise and the less the economy will expand in real terms. In general, the Fed’s monetary expansion boosts both investment spending and net export spending, allowing the economy to expand but at the cost of producing some demand-pull inflation.
Now let’s consider the case of contractionary monetary policy. If the Fed acts to reduce the M1 money supply, then it must choose one of the three following policy actions:
1. Sell bonds to commercial banks
2. Raise the discount rate
3. Raise the reserve requirement ratio
Each of these actions will reduce the excess reserves available to commercial banks. With fewer excess reserves, banks will reduce their lending, which will cause a reduction in the amount of checkable deposits and contract the M1 money supply.
The impact of the Fed’s reduction in the money supply is essentially the same as the impact of a decrease in the money supply that commercial banks entirely initiate, as described in Chapter 16. Indeed, the Fed is only able to decrease the money supply because it discourages banks from lending. Again, the Fed’s role is different, however, in that it has the legal authority to reduce the money supply in a deliberate way and on a massive scale. Each commercial bank, on the other hand, is focused on making a profit and only decreases the money supply unintentionally and on a small scale.
The impact on financial markets of the Fed’s monetary contraction is shown in Figure 17.11.
As Figure 17.11 shows, the money supply curve shifts leftward as banks lend less and create fewer checkable deposits. This shift creates a shortage of money in the money market. That is, firms and households are holding less money than they wish to hold. Consequently, they lend fewer funds in the loanable funds market, which is equivalent to the purchase of fewer bonds in the bond market. In the loanable funds market, the reduction in supply creates a shortage of loanable funds. Competition in the loanable funds market drives up the rate of interest to clear that market. Similarly, the lower demand for bonds creates a surplus of bonds in the bond market. Competition in the bond market drives down the price of bonds until the bond market clears. Simultaneous equilibrium thus occurs in all three markets. The Fed’s contractionary monetary policy has pushed interest rates up and bond prices down.
The Fed is primarily interested in promoting increased output and employment as well as a stable price level. If it has made the decision to decrease the money supply, then it must believe that the economy is operating at close to full employment and that inflation is a serious concern. Otherwise, a monetary contraction would worsen an already sluggish economy. Figure 17.12 shows the impact of a tight money policy in the case of an economy operating near full employment.
As Figure 17.12 shows, when the supply of money contracts, the reduced money supply leads to a shortage of money in the money market, which leads to less lending to borrowers in the loanable funds market. Interest rates rise as described previously. The rise in interest rates then has an impact on the real economy. Again, the type of aggregate spending that is most sensitive to changes in interest rates is investment spending. When interest rates rise, businesses must pay more to obtain loans for the purchase of new capital equipment and new structures (e.g., office buildings, factories, production plants). Home buyers must also pay more for mortgage loans, which they use to buy new homes. The reduction in business investment and residential fixed investment decreases aggregate expenditures in the economy causing the A curve to shift downwards in the Keynesian Cross model. The consequence is a drop in the equilibrium level of real GDP.
The level of aggregate demand (AD) also decreases. The reader should notice that the drop in AD puts downward pressure on the price level. If prices are sticky in a downward direction, then the price level will remain stable as the level of output and employment fall. Therefore, real GDP falls to Y2, which is the same reduction in real GDP that occurs in the Keynesian Cross model. The reason that the reduction in real GDP is the same in both the Keynesian Cross and AD/AS models is that the price level is sticky. If the price level does decline, then the wealth effect, the international substitution effect, and the interest-rate effect would cause a movement along the AD curve and partially offset the drop in aggregate spending from the higher interest rates. In any case, the Fed’s monetary policy has reduced the level of real GDP.
It might seem surprising that the Fed would ever pursue a contractionary monetary policy. Indeed, it appears that the Fed is trying to engineer a recession! The Fed might pursue a minor contraction, however, if the economy is approaching full employment and a rise in inflation appears to be a likely result. Fed officials might reason that it is better to weaken the economy a little bit than to allow the boom to create so much inflation that a major monetary contraction is required. The former Federal Reserve Chair, Janet Yellen, argued in 2017 that such a boom-bust policy should be avoided.
The impact on investment spending is not the only impact that the Fed’s monetary contraction has on aggregate spending. Figure 17.13 shows a second kind of impact that the Fed’s monetary contraction has on the economy.
In Figure 17.13, we can see that the decrease in the money supply puts upward pressure on the rate of interest in the money market as previously described. Foreign investors are likely to be affected when they see that the return on interest-bearing assets in the United States is higher. Specifically, they will be more likely to purchase assets, such as U.S. savings deposits, CDs, and bonds. Because they purchase more U.S. interest-bearing assets, they will also purchase more U.S. dollars in the foreign exchange market, which are needed to purchase U.S. goods and assets. The increased demand for U.S. dollars in the foreign exchange market will cause the foreign exchange value of the U.S. dollar to rise. That is, the U.S. dollar will appreciate.
The appreciation of the U.S. dollar makes U.S. goods relatively more expensive in international commodity markets, which reduces U.S. exports. The decrease in U.S. exports lowers aggregate expenditure and reduces real GDP as shown in the Keynesian Cross model. As in the case of a contraction of investment spending, the decrease in aggregate demand puts downward pressure on the price level. If prices are sticky in a downward direction, then the price level will remain stable as the level of output and employment fall. Therefore, real GDP falls to Y2, which is the same reduction in real GDP that occurs in the Keynesian Cross model. As stated previously, the reason that the reduction in real GDP is the same in both the Keynesian Cross and AD/AS models is that the price level is sticky. If the price level does decline, then the wealth effect, the international substitution effect, and the interest-rate effect would cause a movement along the AD curve and partially offset the drop in aggregate spending from the higher interest rates. In any case, the Fed’s monetary policy has reduced the level of real GDP. In general, the Fed’s monetary contraction reduces both investment spending and net export spending, allowing the economy to avoid a surge of inflation but at the cost of some reduction in aggregate output and employment.
The Quantity Theory of Money and the AD/AS Model
Neoclassical economists also defend a theory referred to as the Quantity Theory of Money. It is based on an identity known as the Quantity Equation. The Quantity Equation identifies a specific relationship between the money supply, the velocity of money, the price level, and the level of real output. The velocity of money is the number of times that a unit of the domestic currency (e.g., a U.S. dollar) is spent on average during a given period. For example, if the money velocity is equal to 6, then a dollar bill is spent six times on average during the year. Using this definition of the velocity of money, we can now write the Quantity Equation:
$MV=PY$
The Quantity Equation shows that the product of the money supply (M) and the velocity of money (V) is equal to the product of the price level (P) and the level of real output (Y). The product of the price level and the level of real output (PY) is the nominal GDP of the economy. Given the velocity of money, M is the money supply that will support that level of nominal spending. For example, suppose that the money supply is $900 billion and the velocity of money is 6. Then if the price level is 2, the level of real output must be $2,700 billion.
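The arithmetic of this example can be verified directly from the identity. The short sketch below simply solves MV = PY for the level of real output using the numbers given above.

```python
# Verifying the Quantity Equation example: MV = PY.
M = 900   # money supply, in billions of dollars
V = 6     # velocity of money
P = 2     # price level

Y = (M * V) / P           # real output implied by the identity
print(Y)                  # 2700.0 -> $2,700 billion
print(M * V == P * Y)     # True: both sides equal nominal GDP of $5,400 billion
```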
The Quantity Equation is just an identity. It is true given the definition of the variables and does not constitute a theory until we consider how the variables in the equation are causally related. Neoclassical economists have traditionally argued that the velocity of money is relatively stable over time. Hence, the quantity of money is the main determinant of the level of nominal GDP. This view suggests that the Fed has a considerable amount of influence over the economy.
As soon as it is argued that an increase in the money supply causes a rise in nominal GDP given a relatively stable money velocity, the Quantity Equation is transformed into a monetary theory called the Quantity Theory of Money. For neoclassical economists, the length of the period under consideration influences the nature of the impact of a money supply change on the economy. For example, in the short run when prices are sticky, a money supply increase (given V) causes an unambiguous rise in the level of real output.
$M\uparrow\;\; \overline{V}=\overline{P}\;\;Y\uparrow$
The impact of an increase in the money supply in the short run may be depicted as in Figure 17.14.
Figure 17.14 shows how the rise in the money supply raises aggregate demand and the level of real output while leaving the price level the same. Similarly, a monetary contraction would lower aggregate demand and the level of real output while leaving the price level the same.
In the long run, the situation is reversed. Neoclassical economists argue that the economy will operate at full employment. That is, the quantities of labor, capital, and land will determine the level of real GDP. Therefore, if the money supply rises in the long run (given V), then the price level will rise and the level of real GDP will remain the same.
$M\uparrow\;\; \overline{V}=P\uparrow\;\;\overline{Y}$
The impact of an increase in the money supply in the long run may be depicted as in Figure 17.15.
Figure 17.15 shows how the rise in the money supply raises aggregate demand and the price level while leaving the level of real output the same. Similarly, a monetary contraction would lower the price level while leaving the level of real output unchanged.
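The short-run and long-run cases can be summarized with a small numerical sketch. The starting values and the 10% increase in the money supply below are purely illustrative; the point is only that, with V held constant, a sticky price level forces real output to absorb the change in the short run, while the full-employment level of output forces the price level to absorb it in the long run.

```python
# Illustrative only: a 10% increase in M with V held constant.
M, V, P, Y = 1_000, 5, 1.0, 5_000      # chosen so that MV = PY holds initially
M_new = M * 1.10

# Short run: the price level is sticky, so real output absorbs the change.
Y_short_run = (M_new * V) / P           # 5500.0

# Long run: output is fixed at full employment, so the price level absorbs it.
P_long_run = (M_new * V) / Y            # 1.1

print(Y_short_run, P_long_run)
```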
The Post-Keynesian Approach to Monetary Policy Using an Endogenous Money Supply
Post-Keynesian economists are critical of the neoclassical approach to monetary policy. Much of the disagreement relates to the assumption that the velocity of money is relatively stable. If the velocity of money is unstable and unpredictable, then changes in the money supply may be offset or intensified when the velocity of money changes. For example, if the Fed increases the money supply with the intention of increasing nominal GDP by a given amount, then an unpredictable increase in money velocity may cause an inflationary surge of nominal GDP. On the other hand, if money velocity unpredictably declines, then nominal GDP will not rise as much or may even decline.
Given these questions about the stability and predictability of the velocity of money, it is natural to wonder what the historical evidence suggests. A simple rearrangement of the Quantity Equation allows us to easily calculate the velocity of money.
$V=\frac{PY}{M}$
In other words, if we divide nominal GDP by the money supply, we obtain the velocity of money. Figure 17.16 shows the historical pattern of the M1 velocity of money from 1959 to 2017.
As Figure 17.16 shows, the M1 velocity of money, although increasing steadily, was relatively predictable prior to 1980.[6] This pattern suggests that changes in the money supply could be used to manipulate nominal GDP with a fair amount of accuracy. With sticky prices in the short run, money supply manipulation could be used to influence the level of real GDP, as explained previously. In the long run, however, manipulation of the money supply would alter the price level. In any case, the neoclassical assumption appears to be reasonable prior to 1980.
After 1980, the pattern of money velocity abruptly changes. Money velocity begins to fluctuate in ways that do not appear to follow any predictable pattern. In this environment, altering the money supply with the goal of changing nominal GDP is extremely difficult. Central bankers have no way of knowing how much to change the money supply to alter nominal GDP.
The reasons for the shift are debated. One possibility is that modern information technology has made it easier to transfer funds from non-M1 M2 assets (e.g., savings deposits, MMDAs) to checkable deposits and back again. For example, during the 1990s, we observe a huge jump in the velocity of money. If people began to minimize their M1 money holdings to maximize their interest income from non-M1 M2 assets, then money velocity as defined in this chapter would jump for a given level of nominal GDP. Because it became easier to transfer funds to M1 accounts when they were needed for transactions, many households and firms may have made the decision to hold less M1 money.
Even though the velocity of money began to change in unpredictable ways after 1980, it is worth noting that during recessions since that time (the shaded areas in Figure 17.16), the velocity of money has always fallen. One possibility is that people wish to hold more money during recessions because they are afraid of job loss. The loss of a job means the loss of income and the inability to pay bills. Holding liquid assets during recessions makes sense then. With an increase in currency holdings and checkable deposits (M), the velocity of money (V) will fall. The reduction in velocity will be even greater due to the contraction of nominal GDP (PY) during the recession. A reference to the formula for money velocity provided above confirms these results.
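A small numerical sketch makes the point concrete. The figures below are hypothetical, but they show how a rise in money holdings (M) combined with a fall in nominal GDP (PY) produces an unambiguous drop in velocity.

```python
# Hypothetical numbers only: velocity V = PY / M before and during a recession.
def velocity(nominal_gdp, money_supply):
    return nominal_gdp / money_supply

print(velocity(nominal_gdp=16_000, money_supply=3_000))   # about 5.33 before the recession
print(velocity(nominal_gdp=15_500, money_supply=3_400))   # about 4.56 during the recession
```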
Due to complications that stem from the use of monetary policy, Post-Keynesians tend to share Keynes’s preference for fiscal policy as a means of combatting recessions. The Post-Keynesian objection to neoclassical monetary theory, however, goes beyond the claim that the velocity of money is unstable and unpredictable. Post-Keynesians also argue that the causal relationship between the money supply and the aggregate economy is exactly the reverse of what neoclassical economists assert.
Before exploring the post-Keynesian alternative monetary theory in detail, let’s consider how the neoclassical assertion of an exogenous money supply creates a problem within the AD/AS framework. Suppose that government spending increases in the neoclassical AD/AS model. The result is a rightward shift of the aggregate demand curve as shown in Figure 17.17.
As Figure 17.18 shows, the federal government’s assets increase when its Treasury deposits at the Fed increase, but its liabilities also increase because it has issued new debt. This increase in Treasury deposits directly adds to the money supply, and the Fed has monetized the debt. In this way, the money supply increase is endogenously related to government spending. That is, rather than the money supply representing an exogenous variable that changes for reasons not explained in our model, the money supply represents an endogenous variable that changes for reasons explained within our model.
This example shows that the neoclassical commitment to an exogenous money supply makes it difficult to uphold the claim that higher government spending will shift the aggregate demand curve to the right. It is still possible for the government to increase its spending with a constant money supply and money velocity, but only a couple scenarios can explain how such a change occurs. If the government, for example, increases spending, it will increase the production of certain commodities and their prices. With the money supply and money velocity constant, however, a reallocation of spending must occur. That is, spending on other commodities for consumption and investment will decline, causing their production levels and prices to fall. The AD curve does not change since these factors offset one another in the calculation of nominal GDP, and overall PY does not change as shown in Figure 17.19.
Another possibility is that the Fed does not accommodate the government spending increase with a money supply increase, but the reallocation of spending occurs differently. For instance, the increased government spending that pushes up the prices and production of some commodities comes at the expense of spending in input markets. The reduction in spending in input markets could cause the prices of inputs to fall so much that unit costs for firms decline. A reduction in unit production costs shifts the short run aggregate supply (SRAS) curve to the right as explained in Chapter 13. This rightward shift of SRAS puts downward pressure on the price level even though it leads to a higher level of real output. If the fall in the price level is sufficiently large, then it might offset the rise in the prices of government-purchased commodities and the rise in real output. Overall, the economy will expand even though the money supply and velocity of money are constant. This possibility is represented in Figure 17.20.
Figure 17.21 shows how a rise in input costs (e.g., wages, oil prices) leads to a leftward shift of the SRAS curve. The result is a rise in the general price level and a reduction in the level of real output. In this case, the fall in real output implies that the rising price level can occur even if the money supply and velocity of money remain constant.
$\overline{M}\overline{V}=P\uparrow\;\;Y\downarrow$
This analysis shows that cost-push inflation does not require accommodation from the banking system and the central bank.
We have seen how the assumption of an exogenous money supply creates problems for the traditional neoclassical analysis of the macroeconomy using the AD/AS model. Let’s take a closer look at why Post-Keynesian economists argue that the supply of money should be treated as an endogenous variable in macroeconomic theory. Post-Keynesian economists typically argue that owners of different factors of production are involved in a conflict over the distribution of income. Workers, for example, demand higher wages and form labor unions to apply pressure on employers. Firms, on the other hand, use their market power to maintain markups of product prices over unit labor costs. A constant struggle ensues that can generate cost-push inflation and a wage-price inflationary spiral. This relationship can be captured best using the following equation:[7]
$P=m \cdot \frac{wL}{Q}$
In the equation above, the price of a product (P) is equal to a markup (m) over the per unit labor cost of production. In this case, the markup is multiplied by the ratio of total labor cost (wL) to total output. Total labor cost is the wage rate (w) times the number of units of labor hired (L). When this total wage bill is spread across the number of units produced (Q), we obtain the labor cost per unit (wL/Q).
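A quick numerical sketch of the markup pricing equation may help. The markup, wage, employment, and output figures below are hypothetical.

```python
# Markup pricing: P = m * (wL / Q), with purely hypothetical values.
def markup_price(m, w, L, Q):
    unit_labor_cost = (w * L) / Q      # wage bill spread over the units of output
    return m * unit_labor_cost

print(markup_price(m=1.5, w=20.0, L=100, Q=500))   # unit labor cost = 4.0, so P = 6.0
```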
When workers demand and win wage increases, firms raise their product prices since they desire a constant markup over unit labor costs. When product prices increase, workers recognize that their real wages (w/P) have been eroded. Workers thus demand further nominal wage increases to restore their purchasing power to its original level, which continues the process. One should not assume that workers are the guilty party initiating the wage-price inflationary spiral. Firms might initiate the cycle with an increase in the markup over unit labor costs, which raises prices and per unit profits. Workers may then respond with demands for higher money wages.
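The dynamics of the spiral can be sketched with a simple simulation. In the hypothetical scenario below, workers win a 5% money wage increase each period and firms restore a constant markup of 1.5 over unit labor costs, so prices rise in step with wages and the real wage never improves.

```python
# A hedged simulation of the wage-price spiral; all parameter values are illustrative.
m, L, Q = 1.5, 100, 500        # constant markup, employment, and output
w = 20.0                       # initial money wage

for period in range(1, 5):
    w *= 1.05                          # workers demand and win a 5% money wage increase
    P = m * (w * L) / Q                # firms pass the higher unit labor cost into prices
    print(period, round(w, 2), round(P, 2), round(w / P, 3))   # real wage w/P stays near 3.333
```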
The upward pressure on prices means that firms and consumers will have a more difficult time making purchases and will need to obtain more loans to make purchases. They will also need to hold more checkable deposits and currency to engage in desired transactions. Post-Keynesian economists argue that the commercial banks and the central bank will accommodate the desire for additional loans and money balances. That is, the assumption that the money supply is determined endogenously in response to these events carries very different consequences for the financial markets than what we observed when an exogenous money supply was assumed. Figure 17.22 shows the impact on the financial markets of an increased demand for money and loans in the context of an endogenously determined money supply.
Figure 17.22 shows that the rate of interest does not change when the central bank completely accommodates the higher demand for money. As the demand for money rises, the slightest rise in the rate of interest leads to increased commercial bank lending and increased open market purchases by the Fed, which provides banks with more reserves to encourage bank lending. The money supply curve is perfectly elastic as a result. At the same time, the higher demand for loans in the loanable funds market is completely accommodated as the Fed provides banks with more reserves, and the banks lend more. The rise in the supply of loanable funds keeps the rate of interest from rising in the loanable funds market. As the demand for loans rises, the counterpart of this change in the bond market is a higher supply of bonds, which puts downward pressure on bond prices. Because banks accommodate securities dealers’ higher demand for checkable deposits by purchasing their securities, the demand for bonds rises too. This accommodating stance keeps bond prices from falling. Hence, simultaneous equilibrium in all three markets occurs, and interest rates and bond prices remain unchanged.
The constant rate of interest suggests that the cost of borrowing does not change. Hence, aggregate investment spending and consumer spending, which are sensitive to interest rate changes, will not change. We should not expect an aggregate demand shift to occur. Even though firms and households are borrowing more, they are doing so in response to a higher price level. It does not represent an increase in the demand for real goods and services. Another way to think about why the AD curve does not shift is to consider the impact on the velocity of money. The higher demand for money has a lower money velocity as its counterpart. Even though the money supply rises, the velocity of money falls, and so AD does not change overall. Nevertheless, the rise in per unit costs does cause a leftward shift of the SRAS curve, and so stagflation results.
$M\uparrow\;\;V\downarrow=P\uparrow\;\;Y\downarrow$
Figure 17.23 shows a Post-Keynesian result that involves a wage-price inflationary spiral and recession with accommodation from the Fed and the commercial banks.
In Figure 17.23 the Fed and the banks are accommodating the higher demand for money and loans resulting from the wage-price inflationary spiral. A lending boom takes place within an inflationary economy where output and employment are falling. The lending allows firms and households to continue spending and driving up the price level even as the economy contracts.
The contrast between this Post-Keynesian analysis and the neoclassical analysis should be clear. If the Fed controls the money supply and does not respond to demands of borrowers and banks, then an exogenous money supply increase will push interest rates down and stimulate investment spending and consumer spending. The result will be a rise in aggregate demand and an economic boom capable of producing higher levels of real output and a higher price level as shown in Figure 17.9. The Post-Keynesian analysis, on the other hand, reveals why the monetary expansion of the 1970s was incapable of pulling the economy out of recession even as it produced high rates of inflation.
A Marxian Theory of Fiat Money and its Relationship to U.S. Economic History
In Chapter 4, the Marxian theory of money was developed. In that theory, money serves as the universal expression of socially necessary abstract labor time (SNALT). Each commodity in circulation is equated with the specific quantity of the money commodity that contains the same amount of SNALT. As paper symbols began to circulate as a representative of the money commodity, each commodity acquired a paper money price. Because the paper symbols were convertible into the money commodity (e.g., gold), the paper value of a commodity represented the exact quantity of the money commodity that contained an equivalent amount of SNALT as the commodity. Convertible paper money, therefore, does not present any special challenges for Marxian monetary theory.
As suggested in Chapter 4, the greater challenge to Marxian monetary theory is the existence of inconvertible paper money or fiat money. Because fiat money is not directly convertible into a commodity, it appears that the paper itself possesses value. The problem is that the SNALT required to produce the paper symbols is negligible. Therefore, it does not make sense to compare the labor time required to produce a house with the labor time required to produce $200,000 worth of fiat paper. To make such a comparison would reveal that the labor time required to produce the house far exceeds the labor time required to produce the paper money. The discrepancy seems to suggest that a Marxian theory of fiat money is doomed. How can we determine the specific quantity of fiat paper for which a commodity will exchange if commodities exchange at their values?

Within a system of simple commodity circulation, commodities are constantly being exchanged for money and money is constantly being exchanged for commodities. Commodity circuits with the form C-M-C’ are used to represent these patterns of sale and purchase. The question we are asking is the following: How much money (M) is needed to complete the circuit when the exchange of equivalent values (C for C’) is assumed? In the case of commodity money or convertible paper money, the answer is simple. We just use the amount of commodity money or convertible paper money that represents the same amount of SNALT as the commodities being sold and purchased. That is, a unit of commodity money requires so much SNALT for its production. We simply adjust the quantity of the money commodity so that its value is equivalent to the values of the commodities being exchanged. With fiat paper money, we cannot rely on that solution because, as already argued, the SNALT required for its production is negligible and so it cannot provide the solution if one exists.

The way to resolve the problem is to recognize that fiat paper money continues to represent a claim to other commodities in circulation. Although it cannot be redeemed at a bank for a specific quantity of gold, it can be easily exchanged for commodities. The answer then requires that we reflect on the entire world of circulating commodities. Consider a period during which the circulating paper money supply (M) is given.[8] On average each unit of paper money is spent a specific number of times, which we have called the velocity of money (V). The fiat money price of each commodity represents some fraction of this total expenditure represented as MV. What is the fraction of MV that a specific commodity will command as a price when commodities exchange at their values? It is determined by the SNALT required for its production. Given this reasoning, the fiat money value of a commodity j will equal the following:

$P_{j}=\frac{L_{j}}{\Sigma_{i=1}^n L_{i}}MV$

In this equation, Lj represents the SNALT required to produce commodity j and ΣLi represents the aggregate SNALT embodied in all n circulating commodities.[9] Pj is the fiat money price or paper value of commodity j. Only when commodity prices are determined according to this formula will all commodities exchange according to the quantities of SNALT required for their production.

The formula for the fiat money price of commodity j reveals that the paper value of a commodity depends on several factors:
1. It depends positively on the SNALT required for its production (Lj). If the SNALT required to produce the commodity rises, then the paper value of the commodity will increase, other factors held constant (and vice versa).
2. It depends negatively on the aggregate SNALT embodied in all circulating commodities. Other factors held constant, if the economy expands and more SNALT is required to produce all circulating commodities, then the paper value of the commodity will fall. That is, the commodity represents a smaller fraction of the aggregate labor time embodied in commodities and so it commands a smaller part of the effective money supply (MV). The opposite occurs if the economy contracts.
3. It depends positively on the supply of paper money (M). If the money supply rises, other factors held constant, then the fiat paper price of commodity j will increase (and vice versa). In this case, the fraction of the aggregate SNALT devoted to this commodity is applied to a larger effective money supply (MV).
4. It depends positively on the velocity of money (V). If the velocity of money increases, then the fiat money price of commodity j will increase (and vice versa). In this case as well, the fraction of the aggregate SNALT devoted to this commodity is applied to a larger effective money supply (MV).

It turns out that this expression for the fiat money price of commodity j is consistent with the Quantity Equation that neoclassical economists describe. To understand the relationship between the fiat money price equation and the Quantity Equation, try adding up all the fiat money prices in the entire economy. Because n commodities are in circulation, we can write the following equations:

$P_{1}+P_{2}+P_{3}+...+P_{n}=\frac{L_{1}}{\Sigma_{i=1}^n L_{i}}MV+\frac{L_{2}}{\Sigma_{i=1}^n L_{i}}MV+\frac{L_{3}}{\Sigma_{i=1}^n L_{i}}MV+...+\frac{L_{n}}{\Sigma_{i=1}^n L_{i}}MV$

$P_{1}+P_{2}+P_{3}+...+P_{n}=(\frac{L_{1}}{\Sigma_{i=1}^n L_{i}}+\frac{L_{2}}{\Sigma_{i=1}^n L_{i}}+\frac{L_{3}}{\Sigma_{i=1}^n L_{i}}+...+\frac{L_{n}}{\Sigma_{i=1}^n L_{i}})MV$

$P_{1}+P_{2}+P_{3}+...+P_{n}=(\frac{L_{1}+L_{2}+L_{3}+...+L_{n}}{\Sigma_{i=1}^n L_{i}})MV$

$P_{1}+P_{2}+P_{3}+...+P_{n}=(\frac{\Sigma_{i=1}^n L_{i}}{\Sigma_{i=1}^n L_{i}})MV$

$P_{1}+P_{2}+P_{3}+...+P_{n}=MV$

This derivation shows that the sum of all prices of circulating commodities is equal to the effective fiat paper money supply. Because some of the prices listed individually in this equation are for the same commodity, we could write the equation using the products of prices and quantities. If we only include prices that are uniquely associated with one commodity and multiply each by the quantity of that commodity in circulation, then we obtain the equation below:

$P_{1}' Y_{1}+P_{2}' Y_{2}+P_{3}'Y_{3}+...+P_{k}' Y_{k}=MV$

In this equation, Pi′ refers to the price of commodity i where all the other prices of commodity i have been eliminated to allow for multiplication by the quantity of that commodity in circulation (Yi). In other words, the equation reduces to the Quantity Equation:

$PY=MV$

As in neoclassical theory, the Quantity Equation is a simple identity. In Marxian economics, this identity becomes a theory when the prices of the commodities are explained using their individual labor values, the aggregate labor time required for their production, the supply of paper money, and the velocity of money.
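A brief numerical sketch may help to fix ideas. Using three hypothetical commodities and illustrative values for M and V, the code below computes each fiat money price from its share of aggregate SNALT and confirms that the prices sum to the effective money supply MV.

```python
# Fiat money prices P_j = (L_j / sum of L) * M * V, with hypothetical labor values.
labor_values = [10, 30, 60]     # SNALT embodied in each circulating commodity (hours)
M, V = 500, 4                   # illustrative fiat money supply and velocity

total_labor = sum(labor_values)
prices = [(L_j / total_labor) * M * V for L_j in labor_values]

print(prices)                   # [200.0, 600.0, 1200.0]
print(sum(prices) == M * V)     # True: the prices sum to MV = 2,000
```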
The theory provides us with a way of interpreting U.S. macroeconomic history. For example, it is widely recognized that capitalist development has led to an increase in productivity in many sectors of the economy. Increases in productivity allow production to increase more rapidly than production cost. That is, total output per dollar of production cost rises. The flipside of a rise in productivity, then, is a reduction in production cost per unit of output. In a competitive economy, a reduction in per unit cost should also cause prices to fall as firms compete and profits are pushed down to a level that is consistent with the general rate of profit in the economy. Figures 17.24 and 17.25 show how average labor productivity in the U.S. and the general price level have changed since the 1950s.

As Figure 17.24 shows, average productivity in the U.S. has risen considerably since 1950. The general price level has also risen, as shown in Figure 17.25 for the period 1959-2017. This finding seems to contradict the argument that rising productivity lowers per unit cost and prices in a competitive market economy. How do we resolve this paradox?

To resolve the paradox, we need only consider the formula for the price of commodity j. When productivity increases, the SNALT required to produce commodity j falls. Other factors the same, the fiat money price of commodity j will fall. Other factors are not the same, however, because the money supply has grown enormously since 1959 as shown in Figure 17.26. As the money supply rises, the price of commodity j increases, which shows the inflationary impact of a rise in the money supply. Therefore, the upward pressure on the price due to the increased money supply has offset the downward pressure on the price due to increased productivity. If productivity had not changed at all during this period, then the price of commodity j would have increased even more.

The Marxian theory of fiat money also allows us to respond to a common criticism of Marxian economics. It has been argued that the working class has gained tremendously within capitalist societies as the standard of living has risen. That is, the real wage or the commodity bundle that workers generally consume has increased over time. Because this increase in the real wage has occurred, critics argue, workers are better off, and the suggestion that capitalism would lead to an “increasing misery of the proletariat” is unfounded.[10] Richard Wolff and Stephen Resnick, however, demonstrate that a rising real wage may occur even as the rate of exploitation rises.[11]

To understand their point, suppose that NL represents the necessary labor time expended in the production of commodity j. The reader should recall that necessary labor time refers to the time required to produce a value equivalent to the wage that the worker is paid. The worker must be paid enough for her labor-power to purchase the means of subsistence for the worker and her dependents. If q represents the real wage (expressed in terms of physical units of commodities that workers require for the reproduction of their labor-power), and e represents the SNALT required per unit of commodity in the commodity bundle, then we can write the following equation:

$NL=eq$

Because the worker requires many different commodities, and each has its own labor requirement, eq represents the product of two vectors:

$eq=e_{1}q_{1}+e_{2}q_{2}+e_{3}q_{3}+...+e_{m}q_{m}$

In this case, the worker consumes m commodities as part of the required commodity bundle. We can now see how American capitalist development is consistent with a rising rate of exploitation even as the real wage has risen.
As productivity has risen, the SNALT required for each commodity in the worker’s commodity bundle has fallen. At the same time, workers have been able to benefit from this increased productivity as the socially acceptable real wage has increased since 1979 as shown in Figure 17.27. Therefore, each q has increased even as each e has fallen. If the productivity increases (and the corresponding reductions in e) are relatively larger than the increases in the real wage, then the necessary labor time must fall. Given the length of the working day, the surplus labor thus increases, and the rate of exploitation (S/V or SL/NL) rises, as Wolff and Resnick suggest.

The reader might object to this analysis, however, because a reduction in necessary labor time should mean that the money wage (or nominal wage) has fallen. After all, if it costs less to purchase the commodity bundle due to falling per unit labor values even though the quantities purchased have risen, then less money should be required. U.S. economic history suggests the exact opposite about the overall movement of money wages, which have increased considerably since 1979 as shown in Figure 17.28. With rising real wages and rising money wages, it seems that Marxian economists cannot defend the claim that workers have been made worse off within capitalist societies. They are not worse off in an absolute sense because of the rise in real wages. They do not appear worse off in a relative sense because of the rise in money wages.

The latter suggestion contains a flaw, however, because it ignores the inflationary impact of a rise in the money supply. Looking again at the formula for the price of commodity j, it should be clear that it is possible to multiply any labor time magnitude by the ratio of the effective money supply to the aggregate SNALT embodied in all circulating commodities to obtain a fiat money price. This ratio is a magnitude that Marxian economists refer to as the monetary expression of labor time (MELT):

$MELT=\frac{MV}{\Sigma_{i=1}^n L_{i}}$

Hence, the price of commodity j can be written in the following way:[12]

$P_{j}=MELT \cdot L_{j}$

We can use the MELT to calculate the variable capital (v) as follows:

$v=MELT \cdot NL$

Now consider all the relevant factors that have changed since 1950 in the United States. Productivity has increased, which has caused the real wage to rise and the labor values of commodities to fall. The net change has been a reduction in the necessary labor time because the productivity increases had a relatively larger impact on the labor values than on the real wage. At the same time, the money supply has increased enormously, which has caused the MELT to rise substantially. The variable capital has thus increased, but the increase represents an inflationary increase rather than a redistribution of the new value produced during the workday between capitalists and workers. In other words, the surplus labor time (SL) has increased, reflecting a rise in the rate of exploitation, but the surplus value (S) has also become inflated due to the increase in the MELT.

$S=MELT \cdot SL$

It should be clear that the surplus value has risen more than the variable capital. Inflation has caused a rise in both measures, but the redistribution resulting from the fall in necessary labor time and the rise in surplus labor time has caused the surplus value to rise relatively more than the variable capital.
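The Wolff-Resnick argument can be illustrated with a hypothetical numerical example. In the sketch below, productivity growth cuts the labor values (e) in half while the real wage bundle (q) grows by 20%, and money supply growth triples the MELT. The result is a rising real wage and a rising money wage alongside a rising rate of exploitation, exactly as described above. All numbers are invented for illustration.

```python
# Hypothetical illustration of a rising real wage, rising money wage,
# and rising rate of exploitation (SL/NL).
working_day = 10.0                      # hours of living labor per worker per day

def necessary_labor(e, q):
    """NL = e . q: labor time needed to reproduce the worker's commodity bundle."""
    return sum(ei * qi for ei, qi in zip(e, q))

# Before: lower productivity, smaller real wage bundle, lower MELT
NL_0 = necessary_labor(e=[2.0, 1.0], q=[2.0, 3.0])      # 7.0 hours
SL_0 = working_day - NL_0                               # 3.0 hours
melt_0 = 1.0                                            # dollars per hour of SNALT

# After: labor values halve, the real wage bundle grows 20%, and the MELT triples
NL_1 = necessary_labor(e=[1.0, 0.5], q=[2.4, 3.6])      # 4.2 hours
SL_1 = working_day - NL_1                               # 5.8 hours
melt_1 = 3.0

print(SL_0 / NL_0, SL_1 / NL_1)        # rate of exploitation rises: ~0.43 -> ~1.38
print(melt_0 * NL_0, melt_1 * NL_1)    # money wage (v = MELT * NL) rises: 7.0 -> ~12.6
```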
Its reciprocal is what we might call the value of money (VOM). It shows us how much SNALT is represented with one unit of currency. $VOM=\frac{\Sigma_{i=1}^n L_{i}}{MV}$ Hence, when the MELT rises, the value of money falls. During periods of price inflation, a reduction in the value of money occurs, and so this result is consistent with our intuition. Monetary Policy Tools within the Context of the Marxian Theory of Financial Markets In this section, we will consider how a Marxian economist might analyze the Fed’s use of its three primary monetary policy tools. To do so, we will return to the analysis of financial markets that was introduced in Chapter 16. Recall the definition of the general rate of interest (ig) that was provided in that chapter. $i_{g}=\frac{\Delta M}{M}=\frac{B_{O}+\Delta B}{B_{L}+(1-R)D}=\frac{\frac{B_{O}}{B}+p}{\frac{B_{L}}{B}+\frac{(1-R)D}{B}}$ It should be immediately apparent that one of the monetary policy tools has already been included in the definition. The reserve requirement ratio (R) was used to determine the amount of excess reserves that a bank can lend. The bank’s total excess reserves are BL+(1-R)D. Clearly, if the central bank reduces R, then the bank will have more excess reserves that it can lend. This increase in excess reserves will reduce the general rate of interest because a lower interest rate will be sufficient to generate the general rate of profit on bank capital. The reader should recall that the market rate of interest will tend to fall towards the level of the general rate of interest. As soon as the general rate of interest falls below the market rate of interest, capital will flow out of industry and into finance. The increased lending will push the market rate of interest down until it equals the general rate of interest. The conclusion that the market rate of interest will fall when the reserve requirement ratio is reduced is perfectly consistent with the result that neoclassical economists reach. In neoclassical economics, the reduction in the reserve requirement leads to a shift of the money supply curve to the right, which pushes down the market rate of interest in the money market. In Marxian economics, the reduction in the reserve requirement ratio raises the profitability of banking, which leads to capital inflows and pushes down the market rate of interest as lending increases. The results are similar, but these economists reach their conclusions using different theories. The opposite case involves the central bank increasing the reserve requirement ratio. In this case, the bank will have fewer excess reserves for lending. The general rate of interest will then be higher because a higher interest rate will be needed to ensure the general rate of profit on bank capital. As the general rate of interest rises above the market rate of interest, capital will flow out of finance and into industry. The consequent reduction in bank lending will lead to a higher market rate of interest. The market rate of interest will continue to rise until it equals the general rate of interest. This result is also consistent with the neoclassical conclusion that an increase in the reserve requirement will raise the market rate of interest. Let’s next consider how open market operations may be incorporated into the Marxian analysis of financial markets. That is, if the central bank buys securities from banks or sells securities to banks, how will that alter the analysis that we have explored thus far? 
If the central bank is going to be involved in these transactions with commercial banks, then it makes sense to consider how banks acquire securities in the first place. Consider Table 16.3 that illustrated how the general rate of interest ensures an equal rate of profit in both the industrial sector and the financial sector. In that example, the banks made loans to the industrial capitalists using their excess reserves of $195,000. Industrial capitalists used the borrowed capital of $195,000 to purchase means of production and labor-power. The commodities produced were sold for a profit, and a share of it was given to the bankers in the form of interest. Now consider how the situation would be different if the bankers sell government bonds to the central bank at the start of this same period. That is, suppose that the central bank decides to purchase government securities in the amount of $97,500 from the commercial banks in an open market purchase. The impact on the central bank’s balance sheet is shown in Figure 17.29.
Figure 17.29 shows that the central bank acquires an asset in the form of government securities and pays the banks for the bonds with an increase in their reserves. Once the central bank takes the securities off the banks’ balance sheets, the banks once again have those excess reserves to lend. In other words, the central bank’s purchase of $97,500 of securities gives the banks new excess reserves, which they can then lend to the industrial capitalists. Table 17.1 shows how the borrowed capital in the industrial sector expands to 1.5 times its previous level due to the open market purchase and the additional bank lending that stems from it. The increase in borrowing in the industrial sector causes the total productive capital in that sector to expand. Consequently, the total profit and the total social product increase. This change reflects the expansionary impact of open market purchases on the economy. At the same time, a lower rate of interest is necessary to ensure the general rate of profit on bank capital since the bank is lending more than previously. In other words, the general rate of interest has fallen:

$i_{g}=\frac{\Delta M}{M}=\frac{B_{O}+\Delta B}{F_{P}+B_{L}+(1-R)D}=\frac{\frac{B_{O}}{B}+p}{\frac{F_{P}}{B}+\frac{B_{L}}{B}+\frac{(1-R)D}{B}}$

In this new formula for the general rate of interest, FP represents Federal Reserve open market purchases of securities. These purchases allow the banking system’s excess reserves to increase, which increases lending. Therefore, the rate of interest that is necessary to ensure the general rate of profit on bank capital is lower. Table 17.1 shows that the general rate of interest has fallen to 5.13% (compared with 7.69% prior to the open market purchases). This reduced rate of interest allows the banking sector to appropriate the same amount of interest as before, which ensures a 25% rate of profit. Because the government must also pay interest to the central bank, the interest owed may be calculated as the product of the general rate of interest and the face value of the securities to obtain $5,000 (= 5.13% times $97,500). Industrial capitalists now owe $15,000 in interest payments to the commercial banks due to their expanded borrowing, and the government owes $5,000 in interest payments to the central bank. This example shows how open market purchases push down the general rate of interest and the market rate of interest while promoting an economic expansion. The process may be reversed if the central bank sells securities it owns to the commercial banks. In that case, FP will decline and the general rate of interest and the market rate of interest will rise. The banks will lose reserves and will contract their lending.

Finally, let’s consider how central bank discount lending affects the economy and the general rate of interest. In this example, we will assume that the open market purchase described previously has occurred. If the central bank next extends discount loans to commercial banks, then the banks acquire new excess reserves. Assume that the central bank lends $97,500 in discount loans to banks. The banks then grant this entire amount as additional loans to industrial capitalists bringing the total borrowed capital to $390,000. The change to the central bank’s balance sheet, which includes the open market purchase described previously, is shown in Figure 17.30.
Figure 17.30 shows that the central bank has acquired discount loans as a new asset in addition to the securities previously acquired through the open market purchase of bonds from the banks. The discount loans are granted in the form of reserves, which increases the reserve accounts of banks by $195,000. It is assumed that this entire amount is loaned to the industrial capitalists. The additional $97,500 of discount loans has led to an expansion of the industrial sector by this amount due to increased commercial bank lending stemming from the discount loans. Table 17.2 shows how the borrowed capital expands in the industrial sector when bank lending rises following the granting of the discount loans. With the expansion of the borrowed capital in the industrial sector, the productive capital and the profit also increase. The total social product also rises. The discount lending has led to an expansion of the economy, which suggests that it is an expansionary monetary policy.

It is also worth considering the impact that the discount lending has on the rate of interest. The general rate changes for two reasons due to the granting of the discount loans as shown below:

$i_{g}=\frac{\Delta M}{M}=\frac{i_{d}F_{L}+B_{O}+\Delta B}{F_{L}+F_{P}+B_{L}+(1-R)D}=\frac{\frac{i_{d}F_{L}}{B}+\frac{B_{O}}{B}+p}{\frac{F_{L}}{B}+\frac{F_{P}}{B}+\frac{B_{L}}{B}+\frac{(1-R)D}{B}}$

First, the extension of discount loans to the banks increases the excess reserves of banks and thus bank lending to the industrial sector. Federal Reserve loans (FL) must, therefore, be added to the aggregate loan amount (or excess reserves) shown in the denominator of the equation. The banks lend reserves they acquire from discount loans and open market purchases by the central bank. They also lend their bank loan capital and the excess reserves they acquire from cash deposits. Second, the discount loans are not provided at zero cost to the banks. The banks must pay the discount rate (id) for these loans. Therefore, the interest received on loans must be sufficient to cover the interest expense of discount loans. For that reason, idFL is included in the numerator of the equation. This product represents the interest expense of the discount loans. If the commercial banks receive the interest implied in the numerator from the industrial sector, then they will be able to pay the interest expense of the discount loans and their operating expenses and still have enough remaining to equal the average profit.

These changes show how a change in the discount rate affects the general rate of interest and the market rate of interest. When the central bank increases the discount rate, the general rate of interest must rise. The cost of servicing discount loans increases, and the banks must pass this higher cost along to industry if they are to continue to receive the general rate of profit on bank capital. On the other hand, if the central bank reduces the discount rate, then the general rate of interest must fall because the fall in interest expenses allows the banks to earn the general rate of profit on bank capital even as they charge a lower rate of interest on their industrial loans. Less obvious is the impact that discount lending has on the general rate of interest. Discount loans (FL) are present in the denominator and the numerator. That is, the discount lending leads to more bank lending, which puts downward pressure on the general rate of interest.
At the same time, it increases the interest expenses of the bank, which puts upward pressure on the general rate of interest. The question we must answer then is where the increase in discount loans will have the largest relative impact. To answer this question, it helps to use a bit of calculus. Using the quotient rule from calculus, we can differentiate the general rate of interest with respect to Fed discount loans as follows:

$\frac{di_{g}}{dF_{L}}=\frac{i_{d}(F_{L}+F_{P}+B_{L}+(1-R)D)-(i_{d}F_{L}+B_{O}+\Delta B)}{(F_{L}+F_{P}+B_{L}+(1-R)D)^2}$

If we set this derivative equal to zero, then we can determine what the discount rate must equal so that discount loans have zero impact on the general rate of interest.

$\frac{di_{g}}{dF_{L}}=0$

If the fraction is to equal zero, then the numerator must equal zero. In other words:

$\frac{di_{g}}{dF_{L}}=0\Leftrightarrow i_{d}(F_{L}+F_{P}+B_{L}+(1-R)D)-(i_{d}F_{L}+B_{O}+\Delta B)=0$

Now we find the discount rate that solves the equation (id*):

$i_{d}(F_{L}+F_{P}+B_{L}+(1-R)D)=i_{d}F_{L}+B_{O}+\Delta B$

$i_{d}F_{L}+i_{d}F_{P}+i_{d}B_{L}+i_{d}(1-R)D=i_{d}F_{L}+B_{O}+\Delta B$

$i_{d}(F_{P}+B_{L}+(1-R)D)=B_{O}+\Delta B$

$i_{d}^*=\frac{B_{O}+\Delta B}{F_{P}+B_{L}+(1-R)D}$

This result is interesting. It shows that discount loans will have no impact on the general rate of interest if the discount rate equals the general rate of interest when no discount lending occurs. We will refer to id* as the zero-impact discount rate because it implies that discount loans will have no impact on the general rate of interest. Two additional results may also be stated:

$\frac{di_{g}}{dF_{L}}>0\Leftrightarrow i_{d}>i_{d}^*$

$\frac{di_{g}}{dF_{L}}<0\Leftrightarrow i_{d}<i_{d}^*$

That is, if the discount rate is relatively high (greater than id*), then increased discount lending will raise the general rate of interest due to the high interest expense of the discount loans. On the other hand, if the discount rate is relatively low (lower than id*), then increased discount lending will reduce the general rate of interest due to the low interest expense of the discount loans. Because the discount rate has been historically set at a low level, the impact of discount lending has tended to reduce interest rates, but it is theoretically possible that such lending could increase the general and market rates of interest. The discount rate shown in Table 17.2 is below the level of the general rate of interest without any discount lending and only open market purchases (id* = 5.13%) as shown in Table 17.1. Therefore, the discount lending causes the general rate of interest to fall to 4.35%. Table 17.3 shows the case of a relatively high discount rate of 6%. The discount rate of 6% shown in Table 17.3 is above id*. Therefore, when the discount lending occurs, the interest expense is so great that the general rate of interest rises to 5.35%. Table 17.4 shows the case of a discount rate that is exactly equal to id*. When this situation arises, the general rate of interest does not change when the discount lending occurs. The economy expands even as the rate of interest remains the same. The analysis in this section has explained how interest rates and the scale of economic activity change when the central bank employs its monetary policy tools.
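To tie these results together, here is a minimal sketch that recomputes the general rate of interest under each policy tool using the dollar magnitudes quoted in this section. The $15,000 of required bank interest, the $195,000 of lendable funds, and the $97,500 amounts for open market purchases and discount loans come from the examples above; the 2% discount rate in the final loop is an assumption made here, chosen because it reproduces the 4.35% figure reported for Table 17.2.

```python
# A sketch of the general rate of interest under the three policy tools, using the
# magnitudes quoted in the text. The 2% discount rate is an assumption made here.

required_interest = 15_000   # B_O + delta-B: interest banks need to earn the general rate of profit
base_lending = 195_000       # B_L + (1 - R)D: bank loan capital plus excess reserves from deposits
F_P = 97_500                 # Fed open market purchases of securities
F_L = 97_500                 # Fed discount loans

def general_rate(total_lending, interest_expense=0.0):
    """General rate of interest: interest the banks must receive divided by total lending."""
    return (required_interest + interest_expense) / total_lending

print(f"No open market purchases:   i_g = {general_rate(base_lending):.2%}")        # about 7.69%
print(f"With open market purchases: i_g = {general_rate(base_lending + F_P):.2%}")  # about 5.13%

# The zero-impact discount rate equals the general rate of interest before any discount lending.
i_d_star = general_rate(base_lending + F_P)

for i_d in (0.02, i_d_star, 0.06):   # low, zero-impact, and high discount rates
    i_g = general_rate(base_lending + F_P + F_L, interest_expense=i_d * F_L)
    print(f"i_d = {i_d:.2%} -> i_g = {i_g:.2%}")   # roughly 4.35%, 5.13%, and 5.35%
```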
The conclusions resemble those reached in neoclassical economic theory, but they are obtained using a Marxian framework in which industrial capitalists, banking capitalists, and the central bank share in the distribution of profits that the working class produces.

Following the Economic News [13]

According to a recent article in The Wall Street Journal, the Fed committed an error in December 2018 when it reduced the size of its balance sheet and increased interest rates. A reduction in the size of the balance sheet means that the Fed is selling securities, which banks purchase using reserves. Hence, assets (i.e., securities) and liabilities (i.e., reserves) decline, causing a decrease in the size of the Fed’s balance sheet. With reserves declining, the money supply shrinks, which creates a shortage of money in the money market, an increase in borrowing in the loanable funds market, and a rise in the supply of bonds. The consequence is a rise in interest rates and a fall in bond prices as our general equilibrium model of the financial markets suggests. The article reports that by July 2019, the Fed was indicating that it would be cutting the federal funds rate by 25 basis points or more, which translates into 0.25% or more. The author of the article explains that the Fed seemed to be acting inconsistently and that much disagreement seems to have arisen among members of the Federal Open Market Committee (FOMC) about the best path forward for monetary policy. The author argues that possible motivations for the rate cut include the slow growth of business investment in the second quarter of 2019 and slowing economic growth among trading partners. A fall in investment and net exports reduces aggregate demand and lowers equilibrium real GDP so the Fed may be hoping to counter these reductions with its rate cut. The author argues that a more likely reason for the rate cut is that the European Central Bank (ECB) indicated its plan to increase its bond purchases, which is an expansionary monetary policy that will push down European interest rates. The likely impact of that change is that investors will purchase U.S. assets, which pay higher interest rates, leading to an appreciation of the U.S. dollar. A dollar appreciation makes U.S. exports more expensive and U.S. imports cheaper. A drop in U.S. net exports is likely to follow, which reduces aggregate demand. Therefore, the Fed’s rate cut would help to discourage an increase in the demand for U.S. assets and U.S. dollars and prevent a dollar appreciation with the accompanying negative impact on net exports.

Summary of Key Points

1. Monetary policy refers to the use of money supply changes to influence aggregate production, the aggregate price level, and the level of unemployment.
2. The Federal Reserve System consists of a system of 12 Federal Reserve Banks, and the Federal Reserve Board of Governors and the Federal Open Market Committee (FOMC) determine monetary policy.
3. The Fed acts as a supervisor of banks, a lender of last resort, an issuer of currency, a banker for commercial banks, a processor of checks, a banker for the federal government, and a regulator of the money supply.
4. The Fed influences the money supply using open market purchases and sales of securities, discount lending, and adjustments to the reserve requirement ratio (R).
5. Expansionary monetary policy involves monetary expansions and interest rate reductions to boost investment spending and net export spending.
6. Contractionary monetary policy involves monetary contractions and interest rate increases to discourage investment spending and net export spending.
7. Expansionary monetary policy involves Fed purchases of bonds from banks, discount rate reductions, and reserve requirement ratio reductions.
8. Contractionary monetary policy involves Fed sales of bonds to banks, discount rate increases, and reserve requirement ratio increases.
9. The Quantity Equation is an identity that shows how the money supply, the velocity of money, the aggregate price level, and the level of real output are all related.
10. Whereas the Quantity Theory of Money emphasizes the relative stability and predictability of the velocity of money, Post-Keynesian economists emphasize the instability and unpredictability of money velocity.
11. Post-Keynesian economists argue that the class conflict over the distribution of income leads to a wage-price inflationary spiral, which causes banks and the central bank to increase the money supply, thus providing an endogenous explanation for money supply growth.
12. The Marxian theory of fiat money treats the price of an individual commodity as a fraction of the effective money supply (MV) where fiat paper money represents a claim to other commodities in circulation.
13. Money wages and real wages have risen throughout U.S. economic history due to inflation and productivity growth. Nevertheless, American workers have experienced rising rates of exploitation due to the reduction in the necessary labor time required to produce the consumption bundle that workers consume and the increase in the surplus labor time performed during the workday.
14. When the central bank purchases securities from commercial banks, the increased lending to industrial capitalists pushes down the general and market rates of interest and leads to an economic expansion.
15. When the central bank increases discount lending to commercial banks, the increased lending to industrial capitalists leads to an economic expansion. It also pushes down the general rate of interest so long as the discount rate is below the zero-impact discount rate.
16. When the central bank reduces the discount rate, the general and market rates of interest fall. When the central bank raises the discount rate, the general and market rates of interest rise.
List of Key Terms

Monetary policy
Federal Reserve
Federal Reserve Banks (FRBs)
Federal Reserve Board of Governors
Chair of the Federal Reserve Board
Vice Chair of the Federal Reserve Board
Federal Open Market Committee (FOMC)
Quasi-public institution
Central bank independence
Supervisor of banks
Lender of last resort
Issuer of currency
Federal Reserve Notes
Banker for commercial banks
Processor of checks
Banker for the federal government
Foreign currency-denominated assets
Regulator of the money supply
Open market operations
Discount loans (FL)
Discount rate (id)
Reserve requirement ratio (R)
Expansionary (easy) monetary policy
Contractionary (tight) monetary policy
Boom-bust policy
Quantity Equation
Velocity of money
Quantity Theory of Money
Debt monetization
Exogenous variable
Endogenous variable
Exogenous money supply
Wage-price inflationary spiral
Endogenous money supply
Marxian monetary theory
Marxian theory of fiat money
Effective money supply (MV)
Real wage (q)
Necessary labor (NL)
SNALT required per unit of commodity (e)
Money wage (nominal wage)
Variable capital (v)
Surplus labor (SL)
Value of money (VOM)
Federal Reserve open market purchases of securities (FP)
Zero-impact discount rate (id*)

Problems for Review

1. Suppose the Fed purchases $20,000 in bonds from the Valpo Bank. How will the Fed’s balance sheet change? How will the Valpo Bank’s balance sheet change? Show the changes on the Fed’s balance sheet and the Valpo Bank’s balance sheet. What is the impact on the money supply likely to be (e.g., positive, negative)?
2. Suppose the Fed sells $30,000 worth of bonds to the Valpo Bank. How will the Fed’s balance sheet change? How will the Valpo Bank’s balance sheet change? Show the changes on the Fed’s balance sheet and the Valpo Bank’s balance sheet. What is the impact on the money supply likely to be (e.g., positive, negative)?
3. Suppose the Fed grants a $15,000 discount loan to the Gary Bank. How will the Fed’s balance sheet change? How will the Gary Bank’s balance sheet change? Show the changes on the Fed’s balance sheet and the Gary Bank’s balance sheet. What is the impact on the money supply likely to be (e.g., positive, negative)?
4. Suppose the Gary Bank repays the Fed for a $12,000 discount loan (ignore the interest payment). How will the Fed’s balance sheet change? How will the Gary Bank’s balance sheet change? Show the changes on the Fed’s balance sheet and the Gary Bank’s balance sheet. What is the impact on the money supply likely to be (e.g., positive, negative)?
5. Suppose the Fed reduces the reserve requirement ratio from 15% to 10%. If the South Bend Bank has checkable deposits of $200,000 and $30,000 in reserves, then how is it affected due to the reduction in the required reserve ratio? Show the changes to the South Bend Bank’s balance sheet. What is the impact on the money supply likely to be (e.g., positive, negative)?
6. Suppose the Fed raises the reserve requirement ratio from 20% to 25%. If the South Bend Bank has checkable deposits of $200,000 and $40,000 in reserves, then how is it affected due to the increase in the required reserve ratio? Show the changes to the South Bend Bank’s balance sheet. What is the impact on the money supply likely to be (e.g., positive, negative)?
7. Suppose the Fed increases the discount rate. According to neoclassical theory, what is expected to happen to bank reserves, the money supply, interest rates, investment spending, the value of the U.S. dollar, and net export spending?
8. Suppose the money supply is $2.5 trillion, the velocity of money is 6, and the aggregate price level is 3. What is the level of real output? Use the Quantity Equation.
9. Suppose a wage-price deflationary spiral occurs. That is, prices fall and firms insist on cutting workers’ wages. As demand drops, prices fall further. Using Post-Keynesian theory, predict what will happen to money demand, the money supply, interest rates, the supply and demand for loanable funds, the supply and demand for bonds, and bond prices.
10. Suppose the SNALT embodied in a mobile phone is 0.5 hours and the aggregate labor time embodied in all circulating commodities is 20 billion hours. If the money supply is $2 trillion and the velocity of money is 5, then what is the fiat money price of the mobile phone?
11. Consider Table 17.4. Suppose that the discount rate falls to 0.25%. How is the general rate of interest affected? Carry out the calculation. Compare the 0.25% discount rate to the zero-impact discount rate. Compare the newly calculated general rate of interest to the general rate of interest when no discount lending occurs (i.e., 5.13%). Are your comparisons consistent with your expectations? Explain.
1. See Mishkin (2006), p. 339.
2. Chisholm and McCarty (1981), p. 207, state that the “purpose is not to earn money but to stabilize American banking and to decide and carry out monetary policy.”
3. The monetary policy tools described in this section are discussed in virtually all neoclassical macroeconomics textbooks.
4. Mishkin (2006), p. 393-410, provides a helpful model of the supply and demand for bank reserves, which shows how the relationship between the discount rate and the federal funds rate influences the supply curve in the market for bank reserves.
5. In this discussion, it is assumed that only one interest rate exists in the economy. Many interest rates exist in the economy, but they tend to move together and so it is useful to refer to only one interest rate. See Mankiw (1997), p. 58, for a discussion of this point. The Fed has the most direct impact on the federal funds rate when it uses its policy tools, which then affects the general structure of interest rates throughout the economy.
6. Neoclassical textbooks recognize the increased instability of the M1 money velocity beginning in the early 1980s. For example, see OpenStax College (2014), p. 655.
7. A similar equation, based on one that Sidney Weintraub presented, is found in Snowdon, et al. (1994), p. 372.
8. Although I refer to the circulating paper money supply throughout this discussion, the symbols may take electronic form and so checkable deposits may also be included here.
9. Saros (2002). I prove this result for the case of V = 1 in Saros (2006) and Saros (2007), p. 407-415. Fred Moseley has proven the result for the general case where V is equal to any value, which is a major contribution to Marxian monetary theory. Moseley conclusively proves that Marx's theory of value does not depend on the widespread use of a money commodity such as gold. See Moseley (2004) and Moseley (2011). I was unaware of Moseley's 2004 draft until after the RRPE review process ended for my 2006 submission, which was published in abbreviated form in 2007. I am thankful to Prof. Moseley for recognizing my development of these concepts as "completely independent" of his 2004 draft (see Moseley (2011), p. 102), although the development of my diagrammatic approach was certainly influenced by my reading of the introduction to his edited volume. See Moseley (2005).
10. See Hunt (2002), p. 244. Hunt explains that, despite what critics argue, nothing can be found in Marx’s mature writings to suggest that he associated a falling real wage for workers with their increasing misery.
11. See Wolff and Resnick (2012), p. 192-195.
12. Moseley (2004/2011) has written the equation this way.
13. “The Confusing Federal Reserve.” The Wall Street Journal. Eastern edition. New York, NY. 30 July 2019. A.16.
Goals and Objectives:
In this chapter, we will do the following:
1. Explore the components of the federal budget and the federal debt
2. Analyze the macroeconomic impact of expansionary and contractionary fiscal policy
3. Examine the macroeconomic impact of government budget deficits and surpluses
4. Distinguish between marginal tax rates and average tax rates in different taxation systems
5. Develop the concept of a tax rate multiplier
6. Investigate the implications of fiscal policy for macroeconomic stability
7. Demonstrate the value of a concept referred to as the full employment budget
8. Contrast Keynesian full employment policies with neoclassical austerity policies
9. Evaluate a Marxian theory of government borrowing and debt
In Chapter 17, we investigated the role of the central bank and how monetary policy may be used to influence aggregate output, employment, and the general price level from several different theoretical perspectives. In this chapter, we consider the role of the federal government and how fiscal policy may be used to influence key macroeconomic variables from different vantage points, including neoclassical, Keynesian, and Marxian perspectives. To set the stage, we will define such concepts as government deficits and government debt and then look at the structure of the federal budget for Fiscal Year 2015. Next, we will investigate the macroeconomic impacts of expansionary and contractionary fiscal policy and government deficits and surpluses from neoclassical and Keynesian perspectives. We will then look at different systems of taxation and how to develop a tax rate multiplier to be used in the Keynesian Cross model. Additional topics include the implications of fiscal policy for macroeconomic stability, the usefulness of the concept of the full employment budget, and the contrast between Keynesian full employment policies and neoclassical austerity policies. We will conclude with a Marxian analysis of government borrowing and government debt.
The Federal Budget and the Federal Debt
Fiscal policy pertains to the use of government spending and taxation to influence aggregate output, employment and the price level. The federal government spends a great deal of money, but it also receives a great deal of money through tax collections and other sources. When government outlays and receipts for the year do not match, we say that the budget is out of balance. When government outlays and government receipts for the year do match, then we refer to a balanced budget. When government outlays exceed government receipts for the year, then we say that a budget deficit exists. Finally, when government outlays fall short of government receipts for the year, then we say that a budget surplus exists. Using R to represent government receipts and O to represent government outlays, we can list the possibilities as follows:
$O=R\Leftrightarrow A\;Balanced\;Budget$
$O>R\Leftrightarrow A\;Budget\;Deficit$

$O<R\Leftrightarrow A\;Budget\;Surplus$

Although we speak in terms of an annual budget, it is not the calendar year that we have in mind but rather the government’s fiscal year, which begins on October 1 of each year and ends the following September 30. For example, Fiscal Year 2018 (FY2018) includes the date October 5, 2017 but not September 25, 2017.
To develop a sense of which items the federal budget includes, let’s consider the figures for FY2015. Table 18.1 shows the Unified Federal Budget for FY2015, which lists all Federal outlays and receipts for that year.[1]
Only some items have been listed in Table 18.1 with the remaining items included in “Other” categories. Also, the receipts and outlays have been divided according to whether they are “on-budget” or “off-budget.” The federal government’s unified budget includes all receipts and outlays of the federal government. Some receipts and outlays, however, are treated as off-budget because of a desire to protect them from lawmakers, in the case of Social Security, for example, or because it is viewed as a part of the budget that should be able to achieve balance independently, as in the case of the U.S. Postal Service.
To calculate the unified budget deficit or unified budget surplus, we simply add up the total receipts and subtract the total outlays to obtain a unified budget deficit of $438.496 billion. It is also possible to calculate the on-budget deficit or on-budget surplus and the off-budget deficit or off-budget surplus. In the former case, we take the on-budget receipts and subtract the on-budget outlays to obtain an on-budget deficit of $465.791 billion. In the latter case, we take the off-budget receipts and subtract the off-budget outlays to obtain an off-budget surplus of $27.295 billion. The on-budget deficit and the off-budget surplus may be added together to obtain the unified budget deficit. To understand why, consider the following equation:

$Unified\;Budget\;Balance=R-O=(R_{N}+R_{F})-(O_{N}+O_{F})$

In the above equation, the sum of on-budget receipts (RN) and off-budget receipts (RF) is calculated and then we subtract the sum of on-budget outlays (ON) and off-budget outlays (OF). Rearranging the terms, we obtain the following result:

$Unified\;Budget\;Balance=(R_{N}-O_{N})+(R_{F}-O_{F})=on\;budget\;balance+off\;budget\;balance$

That is, we can simply sum together the on-budget balance and the off-budget balance to obtain the unified budget balance. In this example, if we add the off-budget surplus to the on-budget deficit, then we obtain the unified budget deficit of $438.496 billion. This example demonstrates the benefit of reporting the off-budget balance separately from the on-budget balance. Because the Social Security program has had many years of surpluses, its inclusion in the unified budget has made the federal deficit appear smaller than it is. If we focus on the part of the budget over which lawmakers have more control from year to year (i.e., the on-budget deficit), then we can see that the federal deficit was somewhat larger in FY2015 and has been far larger than the unified budget deficit in some years due to large Social Security surpluses.
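A quick arithmetic check of this identity, using the FY2015 figures just cited (in billions of dollars), might look like the following sketch.

```python
# Check: the on-budget balance plus the off-budget balance equals the unified balance (FY2015, $ billions).
on_budget_balance = -465.791    # on-budget receipts minus on-budget outlays (a deficit)
off_budget_balance = 27.295     # off-budget receipts minus off-budget outlays (a surplus)

unified_balance = on_budget_balance + off_budget_balance
print(f"Unified budget balance: {unified_balance:.3f} billion")  # -438.496, the unified deficit
```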
The complete federal budget contains many items, but federal outlays can be divided into three major categories. Frequently, the appropriated programs or discretionary programs are grouped together because they require Congress to pass annual appropriations bills. These outlays include spending on agriculture, defense, education, energy, homeland security, health and human services, housing and urban development, environmental protection, and so on. Mandatory spending, on the other hand, includes entitlement programs that do not depend on Congress to pass annual appropriations bills and where the outlays depend on who qualifies for benefits under federal law. Examples include Social Security, Medicare, and Medicaid. The final major component is net interest, which accounts for interest paid to owners of U.S. government bonds less the interest that the government receives. The total outlays for FY2015 amounted to $3,688.383 billion, or $3.688383 trillion, as shown in Table 18.1.
Federal receipts for FY2015 include several different sources, including individual income taxes, corporate income taxes, Social Security payroll taxes, Medicare payroll taxes, unemployment insurance taxes, excise taxes, estate and gift taxes, customs duties, and profit distributions from the Federal Reserve. The total receipts for FY2015 amounted to $3,249.887 billion, or $3.249887 trillion, as shown in Table 18.1.
Table 18.2 shows federal receipts and federal outlays as percentages of their totals for FY2015.[2]
The percentages reveal which items represent the largest shares of the total receipts and total outlays. In terms of receipts, individual income taxes represent the largest share, followed by Social Security receipts and corporate income taxes. In terms of outlays, Human Resources, Social Security, and National Defense represent the largest shares, followed by net interest payments on the debt.
The federal debt represents the entire accumulated debt of the federal government minus whatever has been repaid over the years. In any given year, when federal receipts fall short of federal outlays, it is necessary for the federal government to borrow to make up the difference. These borrowings add to the federal debt. Therefore, the federal debt may be thought of as the accumulation of past federal deficits (less any repayments out of federal surpluses). Deficits and surpluses, therefore, represent flow variables because they are measured on an annual basis. The national debt, on the other hand, represents a stock variable because we can identify the total amount owed at a given point in time.
At the end of FY2015, total gross federal debt amounted to $18.120 trillion. Most of the borrowed funds (about $18.094 trillion) was acquired through the issuance and sale of Treasury securities. The remainder (about $26 billion) was acquired via the issuance and sale of federal agency bonds. This information is presented in Table 18.3. The buyers of these government bonds are the holders of U.S. government debt obligations. These bonds represent promises of the U.S. government to repay the face values and to maintain regular interest payments until the bonds mature. Who owns these bonds? Interestingly, a large portion of these bonds (about $5.003 trillion) is held in U.S. Government accounts such as the Social Security Trust Fund. When the Social Security program has a surplus, for example, this amount is invested in a special category of U.S. Treasury bonds. The Federal Reserve Banks hold another portion of the debt (about $2.4619 trillion). The remaining $10.6548 trillion is held by the rest of the public. It is surprising to many people that such a large percentage of the federal debt (about 41.2%) is owed to the federal government or to the nation’s quasi-public central bank.
Domestic and foreign investors own the part of the debt that the federal government and the Fed do not own. At the end of December 2015, the total foreign holdings of Treasury securities amounted to $6,146.2 billion, or $6.1462 trillion. China and Japan are the largest holders of U.S. Treasury securities with more than $1.1 trillion each in FY2015. All foreign nations with more than $100 billion of Treasury securities in FY2015 are listed in Table 18.4.
The foreign holdings of Treasury securities amount to approximately 1/3 of the federal debt. These basic facts about the federal budget and the federal debt will be useful as we consider the macroeconomic impacts of each from a variety of perspectives.
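These shares can be recovered directly from the holdings figures above; the short sketch below (all figures in trillions of dollars) is one way to verify them.

```python
# Composition of the FY2015 gross federal debt, using the figures cited above (trillions of dollars).
gross_debt = 18.120
government_accounts = 5.003   # held in U.S. government accounts such as the Social Security Trust Fund
federal_reserve = 2.4619      # held by the Federal Reserve Banks
foreign_holders = 6.1462      # foreign holdings at the end of December 2015

internal_share = (government_accounts + federal_reserve) / gross_debt
foreign_share = foreign_holders / gross_debt
print(f"Held by U.S. government accounts and the Fed: {internal_share:.1%}")  # about 41.2%
print(f"Held by foreign investors: {foreign_share:.1%}")                      # roughly one third
```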
Expansionary Fiscal Policy versus Contractionary Fiscal Policy
The federal government may use fiscal policy to pursue an economic expansion. In that case, it implements an expansionary fiscal policy with the aim of promoting higher aggregate output and employment. On the other hand, the federal government might use fiscal policy to restrict the growth of output and employment. The government is then said to be implementing a contractionary fiscal policy, which it might implement due to its fear of inflation.
Expansionary fiscal policy might be implemented in several different ways. To pursue an economic expansion, the federal government might cut taxes, increase government spending, or combine the two policies.[3] Each measure will stimulate aggregate demand, and if implemented in the short run when prices are sticky, the rise in aggregate demand will increase real output and employment as shown in Figure 18.1.
The precise impact on real output, however, depends on the size of the lump sum tax multiplier and the government expenditures multiplier.
The reader should recall that the lump sum tax multiplier is the following:
$\frac{\Delta Y}{\Delta T}=\frac{-mpc}{1-mpc}$
Similarly, the government expenditures multiplier is the following:
$\frac{\Delta Y}{\Delta G}=\frac{1}{1-mpc}$
Suppose that the government cuts taxes by $50 billion, and the marginal propensity to consume (mpc) is 3/4. Then the lump sum tax multiplier is equal to -3 and we can calculate the change in real output as follows:

$\Delta Y=\frac{-mpc}{1-mpc} \cdot \Delta T=\frac{-\frac{3}{4}}{1-\frac{3}{4}} \cdot (-50B)=(-3) \cdot (-50B)=+\$150B$

That is, a tax cut of $50 billion increases real output by $150 billion. The reason for the multiplier is that households have more after-tax income to spend, which triggers additional rounds of household spending. Ultimately, real output increases three times as much as the tax cut. Now suppose that the government increases spending by $50 billion, and the mpc is ¾. The government expenditures multiplier in this case is equal to 4, and we can calculate the change in real output as follows:
$\Delta Y=\frac{1}{1-mpc} \cdot \Delta G=\frac{1}{1-\frac{3}{4}} \cdot (50B)=(4) \cdot (50B)=+\$200B$
A government spending increase of $50 billion increases real output by $200 billion. The reason for the multiplier is that households receive the government spending as income and then spend it, which triggers even more consumption. In the end, real output rises by four times the initial increase in spending.
Finally, consider a combination of government spending increases and tax cuts, not unlike the policy pursued in early 2009 when Congress passed the American Recovery and Reinvestment Act. This piece of legislation included a mix of tax cuts and spending increases aimed at boosting economic activity during the Great Recession. Suppose that the government increases spending by $20 billion and cuts taxes by $40 billion. Continue to assume that the mpc is ¾. In this case, real output receives a boost from both sources as follows:
$\Delta Y=\frac{1}{1-mpc} \cdot \Delta G+\frac{-mpc}{1-mpc} \cdot \Delta T=(4) \cdot (20B)+(-3)(-40B)=+\$200B$
This example shows that a combination of a tax cut and a smaller spending increase can achieve the same increase in real output as the larger spending increase. In all three cases, aggregate demand experiences a rightward shift, and real output and employment expand as a result.
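The three expansionary cases can be collected into one small calculation. The sketch below assumes sticky prices so that the full multipliers operate; flipping the signs of the changes reproduces the contractionary cases discussed later in this section.

```python
# The three expansionary fiscal policy cases worked out above (billions of dollars), with mpc = 3/4.

def output_change(delta_G=0.0, delta_T=0.0, mpc=0.75):
    """Combined effect on real output of a government spending change and a lump sum tax change."""
    spending_multiplier = 1 / (1 - mpc)     # equals 4 when mpc = 3/4
    tax_multiplier = -mpc / (1 - mpc)       # equals -3 when mpc = 3/4
    return spending_multiplier * delta_G + tax_multiplier * delta_T

print(output_change(delta_T=-50))               # $50B tax cut            -> +150.0
print(output_change(delta_G=50))                # $50B spending increase  -> +200.0
print(output_change(delta_G=20, delta_T=-40))   # mixed package           -> +200.0
```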
It is also worth noting that the full multiplier effects are felt only if prices are sticky in the short run as shown in Figure 18.1. The situation is somewhat different if prices are at least partly flexible as shown in Figure 18.2.
In Figure 18.2, the rise in real output is somewhat offset due to the rising price level. As the economy approaches the full employment level of real output, some of the aggregate demand increase leads to higher prices in addition to higher real output. Since the full impact of the rise in aggregate demand is not felt on real output, the multipliers do not fully function. In the extreme case where AD rises and intersects the vertical portion of the AS curve, prices are completely flexible and the multipliers are not operative at all with real output stuck at the full employment level. The case of perfect price flexibility is the neoclassical assumption whereas the assumption of sticky prices is the Keynesian assumption.
Contractionary fiscal policy might also be implemented in several different ways. To pursue an economic contraction, the federal government might raise taxes, cut government spending, or combine the two policies. Each measure will reduce aggregate demand, and if implemented in the short run when prices are sticky, the drop in aggregate demand will decrease real output and employment as shown in Figure 18.3.
As before, the precise impact on real output depends on the size of the lump sum tax multiplier and the government expenditures multiplier.
Suppose that the government raises taxes by $50 billion, and the mpc is 3/4. Then the lump sum tax multiplier is equal to -3 and we can calculate the change in real output as follows:

$\Delta Y=\frac{-mpc}{1-mpc} \cdot \Delta T=\frac{-\frac{3}{4}}{1-\frac{3}{4}} \cdot (50B)=(-3) \cdot (50B)=-\$150B$

That is, a tax increase of $50 billion reduces real output by $150 billion. The reason for the multiplier effect here is that households have less after-tax income to spend, which triggers additional reductions of household spending. Ultimately, real output falls three times as much as the tax increase. Now suppose that the government reduces spending by $50 billion, and the mpc is ¾. The government expenditures multiplier in this case is equal to 4, and we can calculate the change in real output as follows:
$\Delta Y=\frac{1}{1-mpc} \cdot \Delta G=\frac{1}{1-\frac{3}{4}} \cdot (-50B)=(4) \cdot (-50B)=-\$200B$
A government spending reduction of $50 billion decreases real output by $200 billion. The reason for the multiplier effect in this case is that households no longer receive the government spending as income and so cannot spend it, which triggers even less consumption. In the end, real output falls by four times the initial reduction in spending.
Finally, consider a combination of government spending reductions and tax increases. Suppose that the government reduces spending by $20 billion and raises taxes by $40 billion. In this case, real output is reduced by both changes as follows:
$\Delta Y=\frac{1}{1-mpc} \cdot \Delta G+\frac{-mpc}{1-mpc} \cdot \Delta T=(4) \cdot (-20B)+(-3)(40B)=-\$200B$
This example shows that a combination of a tax increase and a smaller spending reduction can achieve the same decrease in real output as the larger spending reduction. In all three cases, aggregate demand experiences a leftward shift and real output and employment contract as a result. The benefit to the economy is that less danger exists that a sudden rise in aggregate demand will lead to inflation.
It is also worth noting that the full multiplier effects are felt only if prices are sticky in the short run and downwardly inflexible as shown in Figure 18.3. The situation is somewhat different if prices are at least partly flexible as shown in Figure 18.4.
In Figure 18.4, the fall in real output is somewhat offset due to the falling price level. As the economy contracts, some of the aggregate demand reduction leads to lower prices in addition to lower real output. Since the full impact of the fall in aggregate demand is not felt on real output, the multipliers do not fully function, and deflation occurs. In the extreme case where AD falls and intersects the vertical portion of the AS curve, prices are completely flexible and the multipliers are not operative at all with real output stuck at the full employment level even as AD and the price level fall.
The Macroeconomic Impacts of Government Budget Deficits and Surpluses
When the federal government runs a budget deficit, it typically borrows to make up the difference. Printing the money is another option, but modern economies have rejected this solution given its tendency to produce hyperinflation. When a government wishes to borrow to cover its budget deficit, it must issue bonds and sell them to the public. This entrance into the loanable funds market and the bond market has the potential to alter interest rates and bond prices, which then carries consequences for the rest of the economy.
Suppose the government is borrowing in the loanable funds market, which permits it to run a budget deficit. At the same time, assume that the central bank is tightly controlling the supply of loanable funds such that it always adjusts the supply of loanable funds to offset any change in the equilibrium quantity exchanged in this market. The increased government borrowing will raise the demand for loanable funds, which creates a shortage and drives up the interest rate. The equilibrium quantity exchanged rises too. Because the central bank is regulating the quantity of loanable funds, it reduces the supply of loanable funds using its monetary policy tools. These changes to the supply and demand for loanable funds are shown in Figure 18.5.
The consequence of the central bank’s supply reduction is a further appreciation of the U.S. dollar even as it stabilizes the quantity exchanged of dollars. Figure 18.10 assumes that the crowding out of private investment and the negative impact on net exports exactly offset the expansionary impact of the deficit spending. The impact on U.S. net exports could be so extreme, however, that combined with the partial crowding out of private investment, the consequent drop in real output and employment could more than cancel the expansionary impact of the government deficit spending.
The examples in this section demonstrate that expansionary fiscal policy may be partly offset, completely offset, or not at all offset by action that the central bank takes. Coordination among policymakers is essential so that fiscal policy and monetary policy do not work against one another in terms of their impact on output, employment, and the price level.
A very different possibility is that the federal government manages to run a budget surplus. In that situation, it must decide how to use the surplus, and the repayment of federal debt is one possibility. If the government chooses this option, then it repays the bondholders. The bondholders then possess loanable funds that they will probably wish to lend again. The likely result then is an increase in the supply of loanable funds as shown in Figure 18.11.
In this example, the increased supply creates a surplus of loanable funds and pushes the rate of interest down. The drop in the rate of interest stimulates investment spending and consumer spending. Aggregate demand thus shifts to the right. If prices are sticky, the impact is an unambiguous gain for the economy with rising real output and employment and a stable price level. If the economy is close to full employment, however, the risk of the rise in the supply of loanable funds is demand-pull inflation.[5]
Marginal Tax Rates and Average Tax Rates
Up to this point, it has been assumed that the government simply appropriates tax revenue in one lump sum amount from the households each year. The lump sum tax (T) is not very realistic because households are taxed a specific percentage of their incomes. In fact, different tax rates apply to different income levels. For example, in 2017 a single taxpayer paid 10% of the first $9,325 of income in taxes to the federal government. For earnings between $9,325 and $37,950, a single taxpayer paid a 15% tax rate while still paying a 10% tax rate on the first $9,325 earned. The tax rates that apply to different income tax brackets are called marginal tax rates. Marginal tax rates tell us how much of an additional dollar of income is paid in taxes. Income tax brackets tell us the range of income to which a specific marginal tax rate applies. Table 18.5 shows the marginal tax rates and income tax brackets for a single taxpayer filing in 2017.
Table 18.5 also shows us how much tax is paid if one earns the median income in each tax bracket. The median income is the income level that is exactly halfway between the highest and lowest income levels in the tax bracket. For example, the median income levels in the 10% and 15% income tax brackets are equal to $4,662.50 and $23,637.50, respectively, and are calculated as follows:
$Median\;Income\;for\;10\%\;Income\;Tax\;Bracket=\frac{9,325+0}{2}=\$4,662.50$

$Median\;Income\;for\;15\%\;Income\;Tax\;Bracket=\frac{37,950+9,325}{2}=\$23,637.50$
To determine the taxes owed on the median income in any tax bracket, we cannot simply multiply that median income level by the marginal tax rate that applies to that tax bracket. To calculate the taxes owed in this manner would assume that a single tax rate applies to the entire income. The correct calculation requires that we multiply each increment of income up to that income level by the appropriate marginal income tax rate. For example, to calculate the taxes owed for someone who earns $141,775 (the median income in the 28% income tax bracket), we do the following:

$Taxes\;owed\;on\;\$141,775=(141,775-91,900)(28\%)+(91,900-37,950)(25\%)+(37,950-9,325)(15\%)+(9,325-0)(10\%)=\$32,678.75$

The taxes owed for other income levels are determined in a similar fashion. It is simple to determine the percentage of income that is owed in taxes, which is called the average tax rate. To calculate the average tax rate, simply divide the total taxes owed by the income level as follows:

$Average\;tax\;rate=\frac{Taxes\;Owed}{Income}$

For example, the median income level in the 33% income tax bracket is $304,175, and the taxes owed are $83,777. The average tax rate is calculated as follows:

$Average\;tax\;rate=\frac{83,777}{304,175}=27.54\%$

The reader should notice that the marginal income tax rates increase with income level. As a result, the taxes owed rise more quickly than income as income increases. The result is a rising average tax rate, which is also visible in the table. When the average tax rate increases with income, the system of taxation is referred to as a progressive tax system. When a person earns more income in such a tax system, the total taxes owed increase but also the percentage of income paid in taxes increases. The justification for a tax system of this kind is that it makes possible the government redistribution of income and thus has the potential to reduce after-tax income inequality. Those who oppose progressive taxation systems frequently advocate a single marginal tax rate that applies to all levels of income. In this case, the taxes owed and the income level rise at the same rate, which leaves the average tax rate unchanged. When the average tax rate remains constant at all levels of income, the system of taxation is referred to as a proportional tax system. A proportional tax is more commonly called a flat tax. A third type of taxation system involves marginal tax rates that decrease as the income level rises. In this type of taxation system, the taxes owed increase more slowly than income rises. The result is a decline in the average tax rate as income rises. When the average tax rate falls as income rises, the taxation system is referred to as a regressive tax system.

Introducing a Flat Tax into the Consumption Function and the Keynesian Cross Model

Since the flat tax is the simplest system of taxation, with a single tax rate that applies to all income levels, let’s consider how the introduction of a flat tax rate modifies the consumption function that we introduced in Chapter 13. A flat tax rate (t) is calculated as the ratio of taxes (T) to total income, which in this case is the level of real GDP.

$t=\frac{T}{Y}\Rightarrow T=tY$

That is, the total taxes owed are simply a fraction (t) of the total income earned.
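Before turning to the consumption function, the sketch below works through the bracket-by-bracket calculation described above, using the 2017 single-filer rates and bracket boundaries cited in the text, and contrasts the result with a flat tax; the 20% flat rate is purely hypothetical.

```python
# Bracket-by-bracket tax calculation for the lower 2017 single-filer brackets cited in the text,
# followed by a hypothetical flat tax for comparison.

brackets = [(0, 0.10), (9_325, 0.15), (37_950, 0.25), (91_900, 0.28)]  # (lower bound, marginal rate)

def taxes_owed(income):
    """Apply each marginal rate only to the slice of income that falls inside its bracket."""
    owed = 0.0
    for i, (lower, rate) in enumerate(brackets):
        upper = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income > lower:
            owed += (min(income, upper) - lower) * rate
    return owed

income = 141_775                                  # median income of the 28% bracket
t = taxes_owed(income)
print(f"Taxes owed: ${t:,.2f}")                   # $32,678.75
print(f"Average tax rate: {t / income:.2%}")      # the average rate rises with income: progressive

# Under a flat tax, T = t*Y, so the average tax rate is the same at every income level.
flat_rate = 0.20                                  # hypothetical flat rate
print(f"Flat tax on the same income: ${flat_rate * income:,.2f}")
```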
In Chapter 13, we wrote the after-tax consumption function as follows:

$C_{a}=C_{0}+mpc \cdot (Y-T)$

We can include the fact that the taxes owed are a function of Y as follows:

$C_{a}=C_{0}+mpc \cdot (Y-tY)$

$C_{a}=C_{0}+mpc \cdot (1-t)Y$

This result shows that the flat tax has made the graph of the consumption function flatter. That is, the slope of the line is smaller due to the tax. Without a tax in place, the slope of the line is equal to the marginal propensity to consume (mpc). Once the tax is imposed, the slope of the line changes to (1-t) times the mpc as shown in Figure 18.12. This result is very different from the situation involving a lump sum tax. In that situation, explored in Chapter 13, the imposition of the lump sum tax causes a downward parallel shift of the consumption line because the vertical intercept falls. In the case of a flat tax rate, the aggregate expenditure line also becomes flatter. We can derive the aggregate expenditures function (A) by adding together the after-tax consumption function (Ca) and the exogenously given values for investment spending (I), government spending (G), and net exports (Xn) as follows:

$C_{a}=C_{0}+mpc \cdot (1-t)Y$

$I=I_{0}$

$G=G_{0}$

$X_{n}=X_{n0}$

$A=C_{a}+I+G+X_{n}=C_{0}+mpc \cdot (1-t)Y+I_{0}+G_{0}+X_{n0}$

$A=(C_{0}+I_{0}+G_{0}+X_{n0})+mpc \cdot (1-t)Y$

The aggregate expenditures function shows that the vertical intercept of the A curve is not affected when the flat tax is imposed. It simply becomes flatter just as the consumption function becomes flatter due to the smaller slope as shown in Figure 18.13. Figure 18.13 shows that the flatter A curve implies a lower equilibrium level of real output. The reader should recall that the equilibrium output level in the Keynesian Cross model occurs where planned aggregate spending is equal to real output (A=Y). This condition is met where the aggregate expenditures curve intersects the 45-degree line in the graph. A higher tax rate would cause aggregate spending to fall even further and would further reduce real output and employment. On the other hand, a lower flat tax rate would represent an expansionary fiscal policy. It would make the A curve steeper and would raise the equilibrium output and level of employment. The multiplier effect is also modified due to the imposition of a flat tax rate. Using the equilibrium condition that A = Y, the government expenditures multiplier may be derived in the following manner:

$A=Y$

$(C_{0}+I+G+X_{n})+mpc \cdot(1-t)Y=Y$

$C_{0}+I+G+X_{n}=Y-mpc \cdot(1-t)Y$

$C_{0}+I+G+X_{n}=(1-mpc \cdot(1-t))Y$

$Y=\frac{1}{1-mpc \cdot (1-t)}(C_{0}+I+G+X_{n})$

In the equation for the equilibrium output, government spending, investment spending, and net export spending have been included as variables so that we may consider what happens when a change occurs in these components of spending. Specifically, we can derive the government expenditures multiplier as follows:

$\Delta Y=\frac{1}{1-mpc \cdot(1-t)} \cdot \Delta G$

$\frac{\Delta Y}{\Delta G}=\frac{1}{1-mpc \cdot(1-t)}$

Previously the government expenditures multiplier was equal to 1/(1-mpc). In the presence of a flat tax, the mpc is multiplied by (1-t). An increase in the tax rate will increase the denominator and lower the government expenditures multiplier. The result is intuitive. When additional government spending occurs, the additional income that households receive is only partly consumed because some is saved.
The Implications of Different Taxation Systems for Macroeconomic Stability
The three systems of taxation that we have considered in this chapter carry very different implications for the stability of the economy. Table 18.6 summarizes the results that have been presented thus far regarding the different systems of taxation as well as claims about the degree of macroeconomic stability that each implies.[6] Let's first consider a progressive tax system. In Figure 18.14, tax revenues rise very quickly as real income rises, which causes the ratio of taxes paid to income to grow quickly. The steeply rising T line implies a rising marginal tax rate (MTR) and a rising average tax rate (ATR). Each of these tax rates may be defined as follows: $MTR=\frac{\Delta T}{\Delta Y}$ $ATR=\frac{T}{Y}$ The MTR in Figure 18.14 is reflected in the slope of the T line. It shows the additional taxes paid out of additional income. The ATR in Figure 18.14 is reflected in the slope of the ray from the origin drawn through the T line for a given income level. It should be clear that both the MTR and the ATR increase with income. The slope of the T line becomes steeper, which indicates a rising MTR. The rays drawn from the origin also become steeper, which implies a rising ATR. If we add a government expenditures line to the graph of the steeply rising T line, then we obtain a graph like the one shown in Figure 18.15. Figure 18.15 shows that the two curves intersect at Y1, which implies that the budget is balanced with G equal to T. At income levels below Y1, however, a government budget deficit exists. At income levels above Y1, a government budget surplus exists. Now suppose that the economy begins at income level Y1 and a recession occurs. Because budget deficits have an expansionary impact on the economy due to high government spending relative to tax revenues, the economy has an automatic tendency to expand. Similarly, if the economy begins at Y1 and an expansion occurs, then the budget surplus that results has a contractionary impact on the economy due to low government spending relative to tax revenues. We certainly should not view the Y1 level of real output as an equilibrium level of real output. The equilibrium level of real GDP depends on many factors aside from the state of the government budget. Nevertheless, since the level of real output tends to rise back after a recession and to fall back after an expansion, the progressive tax system has a stabilizing effect on the economy. It should be noted that the budgetary response is automatic due to the automatic reduction in tax revenues when real income falls and the automatic increase in tax revenues when real income rises. The steep T line produces especially strong expansionary and contractionary effects. When real output falls, T falls significantly because the marginal tax rate declines. When real output rises, T rises significantly because the marginal tax rate increases. The resulting deficits and surpluses will therefore be larger. Let's now turn to a proportional tax system or a flat tax system. Figure 18.16 shows how tax revenues change as real income increases. If we add a government expenditures line to the graph of the linear T line, then we obtain a graph like the one shown in Figure 18.17. Figure 18.17 shows that the two curves intersect at Y1, which implies that the budget is balanced with G equal to T.
At income levels below Y1, however, a government budget deficit exists. At income levels above Y1, a government budget surplus exists. Now suppose that the economy begins at income level Y1 and a recession occurs. Because budget deficits have an expansionary impact on the economy due to high government spending relative to tax revenues, the economy has an automatic tendency to expand. Similarly, if the economy begins at Y1 and an expansion occurs, then the budget surplus that results has a contractionary impact on the economy due to low government spending relative to tax revenues. This result seems very much like the result we obtained for a progressive tax system. The difference is that tax revenues do not decline as rapidly during a recession, and they do not rise as quickly during an expansion. For this reason, we may consider proportional tax systems to have a stabilizing impact on the economy but less so than in the case of a progressive tax system. Finally, let's consider a regressive tax system. Figure 18.18 shows how tax revenues change as real income increases. If we add a government expenditures line to the graph of the inverted-U T line, then we obtain a graph like the one shown in Figure 18.19. Figure 18.19 shows that the two curves intersect at two points, Y1 and Y2, which implies that the budget is balanced at both income levels with G equal to T. At income levels below Y1, a budget deficit exists. At income levels between Y1 and Y2, a government budget surplus exists, and at income levels above Y2, a budget deficit exists once again. Around Y1, these results are identical to what we have observed in the case of a progressive tax system and a proportional tax system. If the economy moves away from Y1, then it has an automatic tendency to move back towards it. Suppose, however, that the economy begins at income level Y2 and a recession occurs. In this case, a budget surplus results with tax revenues exceeding government spending, which is contractionary. The contractionary effect of the budget surplus is to move the economy further away from that output level and to worsen the recession. Similarly, if the economy begins at Y2 and an expansion occurs, then a budget deficit will result with government spending exceeding tax revenues. Because budget deficits have an expansionary impact on the economy due to high government spending relative to tax revenues, the economy has an automatic tendency to expand, which might cause the economy to overheat. It should be clear that it is at least possible for a regressive tax system to destabilize the economy when the economy is operating at a level of output like Y2. Critics sometimes point out that regressive tax systems tax lower income people at a higher rate than higher income people. We can add to that criticism that regressive tax systems also have greater potential to cause macroeconomic instability.
The Full Employment Budget
Another concept that neoclassical economists and Keynesian economists use to evaluate fiscal policy is the full employment budget. The full employment budget, as its name indicates, refers to the state of the budget at full employment. Neoclassical economists like to evaluate the nature of the government's fiscal policy at a point in time. If we only consider the actual budget, then the state of the economy might skew our perception. For example, consider the case of a balanced government budget at full employment as shown in Figure 18.20.[8] If the economy enters a recession and real output falls below the full employment output (Yf), then an actual deficit will arise such as occurs at Y1.
Looking at the actual budget deficit suggests that the government is actively pursuing an expansionary fiscal policy. It is true that budget deficits have an expansionary effect on the economy, but this deficit exists only because of the automatic reduction in tax revenues that occurs when aggregate income declines. If we want to evaluate the government’s fiscal policy separate from the impact of the automatic stabilizers, then it makes sense to look at the full employment budget, which suggests a neutral fiscal policy when the full employment budget is in balance. Suppose that the economy experiences an expansion and real output rises to Y2, which is above the full employment level of output. An actual surplus now exists, which might suggest a contractionary fiscal policy. It is true that budget surpluses have a contractionary effect on the economy, but this surplus exists only because of the automatic rise in tax revenues that occurs when aggregate income is rising. As in the case of an actual budget deficit, if we want to evaluate the government’s fiscal policy separate from the impact of cyclical fluctuations, then we need to examine the full employment budget, which suggests a neutral fiscal policy due to the balanced full employment budget. Now consider the case of a full employment budget deficit as shown in Figure 18.21. As Figure 18.21 shows, a budget deficit exists at the full employment level of output, Yf. The line labeled T – G represents the actual budget whereas the full employment balanced budget reference line shows the state of the budget when it is balanced at full employment. The budget deficit at full employment suggests that the government is deliberately pursuing an expansionary policy. If the economy experiences a recession and real output falls to Y1, however, then the actual budget deficit will grow as tax revenues fall. The full employment budget deficit is still represented as the difference between the actual budget line and the reference line, but now we can see an addition to the actual deficit. When a recession occurs, the actual deficit exceeds the full employment budget deficit. On the other hand, if the economy experiences an expansion and real output rises to Y2, then an actual budget surplus will arise. Due to the full employment deficit, however, the actual surplus is smaller than it would be if the full employment budget was balanced. Finally, consider the case of a full employment budget surplus as shown in Figure 18.22. As Figure 18.22 shows, a budget surplus exists at the full employment level of output, Yf. As before, the T-G line represents the actual budget whereas the full employment balanced budget reference line shows the state of the budget when it is balanced at full employment. The budget surplus at full employment suggests that the government is deliberately pursuing a contractionary policy. If the economy experiences an inflationary boom and real output rises to Y2, however, then the actual budget surplus will grow as tax revenues rise. The full employment budget surplus is still represented as the difference between the actual budget line and the reference line, but now we can see an addition to the actual surplus. When an expansion occurs, the actual surplus exceeds the full employment budget surplus. On the other hand, if the economy experiences a recession and real output falls to Y1, then an actual budget deficit will arise. 
Due to the full employment surplus, however, the actual deficit is smaller than it would be if the full employment budget were balanced. The concept of the full employment budget is useful because it allows us to see how an actual budget surplus or deficit may be larger or smaller than it would otherwise be due to the full employment budget gap.
Balanced Budget Amendments
Most U.S. states have balanced budget amendments, which are constitutional requirements that the state governments balance their budgets each year. What are the macroeconomic consequences of adhering to such policies? Consider the case of a recession in which output falls below the full employment output as shown in Figure 18.23. When output falls, tax revenues automatically decline and a budget deficit arises. To restore balance, the government must either cut its spending or raise the average tax rate, but both policies are contractionary and thus deepen the recession. Now consider the opposite case of an inflationary boom. Figure 18.24 shows two methods of balancing the budget when output rises to Y1 and a budget surplus arises. In Figure 18.24 (a), a government spending increase will restore balance to the budget since tax revenues have increased. The problem with this approach is that a government spending increase is an expansionary fiscal policy, which is likely to worsen the inflation. A balanced budget is once again achieved but at great cost to the nation's well-being. An alternative policy is to decrease the average tax rate as shown in Figure 18.24 (b). In Figure 18.24 (b), tax revenues fall to restore balance to the budget, but a tax decrease is also an expansionary policy taken during an inflationary boom. It seems that a balanced budget policy is likely to be a difficult and painful policy to pursue during an economic boom as well.[9]
A Comparison of Keynesian Full-Employment Policies and Austerity Policies
Much public discussion has been devoted to the proper role of government in the economy. Many argue for fiscal policy that promotes full employment while others advocate so-called austerity measures that reduce government budget deficits and government debt. Professor David Gleicher at Adelphi University has developed an excellent framework for demonstrating the difference between Keynesian full-employment policies and neoliberal austerity policies within a Keynesian model. The presentation here follows Prof. Gleicher's approach in its general outline.[10] Let's first consider the case of an unemployment equilibrium outcome in the Keynesian Cross model. Assume that we know the following information about the economy: $C_{0}=50$ $I_{0}=100$ $G_{0}=400$ $X_{n0}=150$ $mpc=0.75$ $t=0.20$ Using this information, we write the aggregate expenditures function as follows: $A=C_{0}+mpc \cdot (1-t)Y+I_{0}+G_{0}+X_{n0}$ $A=50+0.75(1-0.20)Y+100+400+150$ $A=700+0.60Y$ Setting A equal to Y allows us to calculate the unemployment equilibrium level of real output (Yu*) as follows: $700+0.6Y=Y$ $700=0.4Y$ $Y_{u}^*=1750$ This solution is represented in Figure 18.25 as the initial unemployment equilibrium. We can also determine the state of the government budget at this unemployment equilibrium. Government spending is equal to 400, which is given information. Tax revenues may be calculated as follows: $T=tY=(0.20)(1750)=350$ Therefore, a budget deficit exists as shown below: $T-G=350-400=-50$ When the economy is in a state of unemployment equilibrium, a debate erupts about what to do. Unemployment is high, and production has fallen. A large budget deficit exists as well. Political leaders and party officials begin to argue about the best path forward. Each side emphasizes different aspects of the nation's economic problems.
One side points to job losses and economic insecurity while the other side points to government waste. One side advocates a full employment policy while the other side advocates austerity measures. Consider the consequences of a Keynesian full employment policy. If the full employment level of real output (Yf*) is estimated to be equal to 2000, then advocates of a full employment policy will favor increasing government spending so that the full employment level of real output becomes the new equilibrium for the economy. It is possible to use tax cuts or a combination of tax cuts and government spending to achieve this result, but for simplicity we focus only on the use of increased government spending. To determine the required level of government spending, we simply leave G as a variable and insert the full employment output (Yf* = 2000) in the aggregate expenditures function as follows: $A=50+0.75(1-0.20)Y_{F}^*+100+G+150$ $A=50+0.75(1-0.20)(2000)+100+G+150$ $A=1500+G$ Now we set aggregate spending equal to the full employment output: $A=Y_{F}^*$ $1500+G=Y_{F}^*$ $1500+G=2000$ $G=500$ Because government spending must rise by 100 to make the full employment level of output the new equilibrium, the vertical intercept has increased to 800 as shown in Figure 18.25. We can also calculate the tax revenues as follows: $T=tY=(0.20)(2000)=400$ The government budget deficit in this case is equal to the following: $T-G=400-500=-100$ The full employment policy has achieved full employment in this case, but it has also doubled the government budget deficit, which is why the advocates of austerity measures are so strongly opposed to this solution to the crisis. Now let's consider the macroeconomic consequences of austerity measures. If austerity measures are implemented, then the government is committed to balancing the budget. To balance the budget, the following condition must hold: $G=T=tY$ Government spending is now a function of real income. The reason is that when real income increases during an expansion, tax revenues rise and government spending must increase to maintain a balanced budget. Similarly, when real income falls during a recession, tax revenues fall and government spending must be cut to balance the budget. If we substitute the new expression for government spending into the aggregate expenditures function, then we obtain the following: $A=C_{0}+mpc \cdot (1-t)Y+I_{0}+G_{0}+X_{n0}$ $A=C_{0}+mpc \cdot (1-t)Y+I_{0}+tY+X_{n0}$ $A=C_{0}+(mpc \cdot (1-t)+t)Y+I_{0}+X_{n0}$ Because t has been added to the slope of the A line, it is steeper as shown in Figure 18.25. The intuition behind this addition to the slope is that when $1 of additional real income is received by households, tax revenues increase by $0.20 and the government must increase its spending by $0.20 to maintain a balanced budget. Both government spending and consumer spending thus rise when real income rises, which creates a steeper aggregate expenditures line.
Let’s now substitute the known information into the A function as follows:
$A=50+(0.75(1-0.20)+0.20)Y+100+150$
$A=300+0.80Y$ Setting A = Y allows us to solve for the equilibrium outcome (Ya*) as shown below: $300+0.80Y=Y$ $Y_{a}^*=1500$ The equilibrium output has fallen due to the austerity measures, which has worsened the recession, as shown in Figure 18.25. The government has managed to balance the budget, which can be confirmed as follows:
$G=T=tY=(0.20)(1500)=300$ The budget is balanced and to achieve this result, the government reduced spending from 400 (the level at the initial unemployment equilibrium) to 300. Tax revenues also decline because of the reduction in real income. The balanced budget is achieved at great cost in terms of additional lost production and employment. This Keynesian analysis supports the view that no easy solution exists if one wants to return to full employment during a recession and balance the federal budget at the same time.
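The three outcomes in this comparison (the unemployment equilibrium, the full-employment policy, and the austerity policy) can be reproduced with a few lines of arithmetic. The sketch below uses only the numbers given in the example.

```python
c0, i0, xn0, mpc, t = 50, 100, 150, 0.75, 0.20

def equilibrium_output(g):
    """Keynesian Cross with a flat tax: Y = autonomous spending / (1 - mpc*(1-t))."""
    return (c0 + i0 + g + xn0) / (1 - mpc * (1 - t))

# 1. Initial unemployment equilibrium with G = 400
y_u = equilibrium_output(400)
print(round(y_u, 2), round(t * y_u - 400, 2))        # 1750.0 and a budget gap of -50.0

# 2. Keynesian policy: choose G so that the equilibrium equals the full employment output of 2000
y_f = 2000
g_needed = y_f - (c0 + i0 + xn0 + mpc * (1 - t) * y_f)
print(round(g_needed, 2), round(t * y_f - g_needed, 2))   # 500.0 and a budget gap of -100.0

# 3. Austerity: impose G = tY, so the slope of the A line becomes mpc*(1-t) + t
y_a = (c0 + i0 + xn0) / (1 - (mpc * (1 - t) + t))
print(round(y_a, 2), round(t * y_a, 2))              # 1500.0 with a balanced budget of G = T = 300.0
```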
A Marxian Analysis of Government Borrowing and the Accumulation of Debt
In this final section, we will consider a Marxian analysis of the impact of government borrowing on the economy. Let’s return to our earlier example involving the equalization of the profit rate across the industrial and financial sectors as shown in Table 18.7.
In the example shown in Table 18.7, the market rate of interest has adjusted to equal the general rate of interest. The adjustment of the interest rate is what makes possible the equalization of the rate of profit in this example. Using this case as a starting point, we will consider two scenarios involving government borrowing and the consequences that it carries for the economy.
The first scenario involves government borrowing from the financial sector to finance the operation of state-owned enterprises (SOEs). The banks continue to lend BL+(1-R)D, but now the loan is divided between the industrial sector and the state sector. Let’s assume that a fraction, φ (between zero and one), is loaned to the industrial sector and that a fraction 1 – φ, is loaned to the state sector. Table 18.8 represents this case.
In Table 18.8 KSB represents state-borrowed capital, πs represents profits produced in the state sector, TPs represents the total product of the state sector, KINB represents non-borrowed capital in the industrial sector, KIB represents industry-borrowed capital, KI represents the total capital invested in industry, πI represents the total profits of the industrial sector, and TPI represents the total product of the industrial sector. When looking at the aggregate economy, KA and πA represent the aggregate capital and the aggregate profits across all three sectors.
Table 18.8 assumes that the fraction of the total loan capital borrowed in the industrial sector is ½ (i.e., φ = 1/2). Similarly, the fraction of the total loan capital borrowed in the state sector is also ½ (i.e., 1 – φ = ½). Because the non-borrowed capital in the industrial sector has not changed, the aggregate capital in the economy is the same. The total social product is also the same. The only important difference is that the composition of the total social product has changed with a portion of it now being produced in the state sector. Also, because the SOEs in the state sector hire workers to produce commodities, this sector creates surplus value and appropriates profits. This situation thus represents a Marxian example of complete crowding out of private sector activity. Even so, the situation has not changed for the working class. Workers still create profits. The only difference is that the exploiters have changed to include both industrialists and state managers. Finally, a portion of the profits of the SOEs is paid as interest to the banks that granted the loans to the government in the first place. The government can also repay the entire debt (the principal amount of the loan) because it has received revenue from the sale of commodities equal to the value of the total product in that sector.
The second scenario that we will consider involves government borrowing from the financial sector. This time, however, the government does not use the borrowed funds as capital. Instead, it plans to spend the entire amount on commodities produced in the industrial sector. This borrowing withdraws a great deal of capital from the economy. If nothing else changes, the result will be a major contraction of economic activity and inflation as the prices of the remaining commodities are driven up. In fact, this scenario can lead to many different results depending on the assumptions that we make about the private sector’s response to the government borrowing, and we will only consider one special case.
Let’s assume that the government enters into private sector contracts immediately with plans to use its borrowed funds to purchase commodities from the industrial sector. Let’s further assume that the government contracts encourage the introduction of more non-borrowed capital into the industrial sector as those with hoards decide that the government guarantees justify new investment in this sector. The amount of capital that will be introduced into the industrial sector in response to the government contracts (KGC) may be calculated using the following equation:
$(1-\phi)\{B_{L}+(1-R)D\}=(1+p)K_{GC}$
The left-hand side in this equation represents the funds that the government has borrowed from the financial sector, which it spends on commodities produced in the industrial sector. The right-hand side represents the value of output produced in the industrial sector under government contract. If the fraction of the loan capital that the government borrows is ½ (i.e., 1 – φ = 1/2), then this case may be represented as in Table 18.9.
We may calculate the non-borrowed capital in the industrial sector (KGC) in Table 18.9 as follows:
$(1-\frac{1}{2})\{35,000+(1-0.20)200,000\}=(1+0.25)K_{GC}$ $K_{GC}=78,000$
The total profit in the government-contracted industrial sector (πGC) may be calculated as follows:
$\pi_{GC}=pK_{GC}=(0.25)(78,000)=19,500$
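A short check of this arithmetic, using the figures that appear in the worked equation above (B_L = 35,000, D = 200,000, R = 0.20, p = 0.25, and 1 − φ = 1/2):

```python
# Values taken from the worked example above.
B_L, D, R, p, phi = 35_000, 200_000, 0.20, 0.25, 0.5

gov_borrowing = (1 - phi) * (B_L + (1 - R) * D)   # funds the government spends on contracts
k_gc = gov_borrowing / (1 + p)                    # capital advanced under government contract
profit_gc = p * k_gc                              # profits appropriated in that sector

print(round(gov_borrowing, 2))   # 97500.0
print(round(k_gc, 2))            # 78000.0
print(round(profit_gc, 2))       # 19500.0
```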
Table 18.9 also includes the total product in the government-contracted industrial sector (TPGC), the profits in the private sector-contracted industrial sector (πPC), and the total product in the private sector-contracted industrial sector (TPPC). It should be clear from a comparison of Table 18.8 and Table 18.9 that the total social product has fallen. The reason is that the government spends the borrowed funds in their entirety rather than advancing them as capital. Therefore, the new capital advanced in industry due to the government contracts is only a fraction of the government's borrowed amount. The industrialists who enter into government contracts expect to receive the average profit in return for their investments, and so they only advance as much capital as will enable them to recover the investment plus the average profit. The government borrowing thus leads to a minor contraction in the private sector. The government can borrow from many sources, however, including from sources that do not reduce the capital invested in the private sector, and so the government demand for commodities will generally increase the total social product. We have assumed that the government borrowing partially interferes with private sector productive activity to simplify the example and the calculations in this section.
Finally, we need to consider the matter of the interest that the government owes to the financial sector for its loan (IG). It uses the entire borrowed amount of $97,500 to purchase the total product of the government-contracted industrial sector. The government owes interest equal to the following: $I_{G}=i_{g}(K_{GC}+\pi_{GC})=(0.0769)(97,500)=7,500$ The government cannot pay the interest out of profits as it did in the first scenario when it advanced the funds as capital and appropriated the profits of the SOEs. To pay the interest, it must impose a tax on profits, on wages, or on a combination of wages and profits. Let’s suppose that the government taxes all profits at the same rate so that it can exactly meet its interest payment. In that case, the tax rate on profits (tπ) is calculated using the aggregate profits across all three sectors (πA). $t_{\pi}\pi_{A}=I_{G}$ $t_{\pi}=\frac{I_{G}}{\pi_{A}}$ $t_{\pi}=\frac{7,500}{130,125}=5.76\%$ This tax rate on profits will ensure that the government receives just enough tax revenue to equal the interest owed on its debt. Suppose the government taxes only wages to acquire enough tax revenue to pay the interest on its debt. To consider this case, we will assume that 2/3 of the aggregate capital stock (KA) consists of wages. Then the aggregate wages across all three sectors (WA) may be calculated as follows: $W_{A}=(\frac{2}{3})(K_{A})=347,000$ The tax rate on wages (tw) may then be calculated as follows: $t_{w}W_{A}=I_{G}$ $t_{w}=\frac{I_{G}}{W_{A}}$ $t_{w}=\frac{7,500}{347,000}=2.16\%$ This tax rate on wages will ensure that the government receives just enough tax revenue to equal the interest owed on its debt. It is worth noting that the tax on wages reduces the after-tax wages that workers receive. Workers thus have less money to purchase the commodities they need to reproduce their labor-power each day. Because the value of labor-power is not a biological minimum, it is possible that after-tax wages may fall below the value of labor-power. Alternatively, the lower after-tax wages may represent a change in the value of labor-power such that workers are no longer recognized as requiring the previously affordable larger bundle of commodities for their daily consumption. Either way, workers experience a reduction in their living standards due to the tax. A final possibility is that the government imposes a tax on both wages and profits. In this case, we simply add the two sources of tax revenue and set the total tax revenue equal to the interest owed. $t_{w}W_{A}+t_{\pi}\pi_{A}=I_{G}$ If all wages and profits in the economy are taxed at the same rate (t), then we calculate the following: $tW_{A}+t\pi_{A}=I_{G}$ $t(W_{A}+\pi_{A})=I_{G}$ $t=\frac{I_{G}}{W_{A}+\pi_{A}}$ $t=\frac{7,500}{347,000+130,125}=1.57\%$ Taxing all wages and profits the same allows all income to be taxed at a lower rate than if only wages or profits are taxed. Of course, because workers are responsible for creating all the new value (i.e., wages plus profits), if their wages are taxed at all, then they not only have the profits they produced taken from them, but some of their wage income as well. 
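The three tax rates can be confirmed with the aggregate figures used above (interest owed of 7,500, aggregate profits of 130,125, and aggregate wages of 347,000):

```python
I_G = 7_500          # interest owed on the government's debt
profits_A = 130_125  # aggregate profits across all three sectors
wages_A = 347_000    # aggregate wages (two-thirds of the aggregate capital stock)

print(round(100 * I_G / profits_A, 2))              # 5.76 -> tax only profits
print(round(100 * I_G / wages_A, 2))                # 2.16 -> tax only wages
print(round(100 * I_G / (wages_A + profits_A), 2))  # 1.57 -> tax all income at one rate
```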
It is possible to see all the combinations of tax rates on profits and wages if we consider the general equation that allows for a combination of profit taxes and wage taxes and then solve for the tax rate on wages as follows: $t_{w}W_{A}+t_{\pi}\pi_{A}=I_{G}$ $t_{w}=\frac{I_{G}}{W_{A}}-\frac{\pi_{A}}{W_{A}}t_{\pi}$ Figure 18.26 shows all the combinations of the two tax rates that will generate just enough tax revenue to ensure that the government can meet its interest payment on the debt. As Figure 18.26 shows, if only wages are taxed, then the tax rate on wages may be calculated simply by dividing the interest owed by the aggregate wage income. If only profits are taxed, then the tax rate on profits may be calculated simply by dividing the interest owed by the aggregate profit income. If both wages and profits are taxed at the same rate (t), then the tax rate is calculated by dividing the interest owed by the sum of all wages and profits. Finally, the slope indicates that a one-percentage-point increase in the tax rate on profits reduces the tax rate on wages by πA/WA percentage points. We can conclude this section with a look at the state of the government budget and the government debt in this second scenario. Government outlays (G) include the state spending on commodities (TPGC) plus the interest paid on the loan (IG). $G=TP_{GC}+I_{G}$ Government receipts (R) only include tax revenues. $R=t_{w}W_{A}+t_{\pi}\pi_{A}$ The government budget gap is equal to the following (since taxes equal interest owed): $R-G=t_{w}W_{A}+t_{\pi}\pi_{A}-(TP_{GC}+I_{G})=-TP_{GC}=-(1-\phi)\{B_{L}+(1-R)D\}$ In other words, a budget deficit exists that is equal to the borrowings from the financial sector. This amount also represents the increase in the government debt for this period.
Following the Economic News [11]
In a recent opinion piece in the Los Angeles Times, Tom Campbell explains that the Congressional Budget Office (CBO) reported in August 2019 that federal budget deficits are estimated to be over $1 trillion per year for the next ten years. He explains that these deficits contribute to our national debt and that between now and the end of the 2020s, it will have grown from $22 trillion to $34 trillion. Campbell explains that China owns $1.2 trillion of U.S. government bonds and that a massive sale of these bonds would lead to falling bond prices and rising interest rates. The negative impact on investment spending and net export spending from the higher interest rates would lead to a significant reduction in aggregate demand and thus output and employment. Campbell explains that this scenario is unlikely even though it is theoretically possible. Campbell also argues that the national debt poses another problem. If the United States were faced with a crisis that required massive spending, then the amount of borrowing required would drive up interest rates and interest expenses even more. Hence, the deficit spending is creating vulnerabilities that Americans cannot afford to create. A third argument that Campbell offers as to why the growing national debt should concern Americans is that it represents increased consumption today with a bill that future generations of Americans must pay.
Campbell argues that the accumulation of debt might be acceptable if the deficit spending was devoted to the establishment of “great public universities, better roads and airports, [and] a military to keep international lines of commerce open.” The problem, Campbell explains, is that the deficits do not represent productive investments in the future but rather “tax cuts, entitlement payments and military expenditures in Iraq and Afghanistan.” Campbell bemoans the lack of concern that the President and members of Congress express regarding the national debt. Campbell seems to be suggesting that deficit spending can lead to short-term expansions of aggregate demand through tax cuts, entitlement payments, and wartime spending. It would be better, Campbell seems to argue, to pursue lasting increases in aggregate supply with huge investments in infrastructure and the labor force, as well as lasting changes in aggregate demand. In the latter case, the interest that future generations would pay on the federal debt would represent a payment in exchange for their enhanced productive abilities. In the former case, it would represent a payment for benefits that a past generation received. Summary of Key Points 1. A budget deficit exists when government outlays exceed government receipts for the year; a budget surplus exists when government receipts exceed government outlays for the year; and a balanced budget exists when government outlays equal government receipts for the year. 2. The unified budget deficit (or surplus) is equal to the sum of the on-budget deficit (or surplus) and the off-budget deficit (or surplus). 3. Federal outlays consist of appropriated programs, mandatory spending, and net interest. 4. Federal receipts consist of individual income taxes, corporate income taxes, payroll taxes, unemployment insurance taxes, excise taxes, estate and gift taxes, customs duties, and profit distributions from the Federal Reserve. 5. The federal debt represents the accumulated debt of the federal government minus the amount that has been repaid over the years. 6. Expansionary fiscal policy uses tax cuts and/or government spending increases to combat recession. Contractionary fiscal policy uses tax increases and/or government spending cuts to combat inflation. 7. When the government borrows to finance deficit spending, the consequence can be complete crowding out, partial crowding out, or no crowding out of private investment and private consumption. Net exports may also be negatively affected depending on the exchange rate policy of the central bank. 8. When the government runs a budget surplus and repays its debt, the consequence can be demand-pull inflation. 9. Marginal tax rates refer to the rate at which an additional dollar of income is taxed. Average tax rates refer to the ratio of total taxes paid to total income. 10. In progressive tax systems, the average tax rate rises as income rises; in a proportional tax system, the average tax rate remains constant as income changes, and in a regressive tax system, the average tax rate falls as income rises. 11. When a flat tax rate is introduced into the Keynesian Cross model, the government expenditures multiplier changes to 1/[1-(1-t)mpc], and the equilibrium output falls. 12. A progressive tax system is the most stable tax system; a proportional tax system is the second most stable tax system; and a regressive tax system is the least stable tax system. 13. 
The full employment budget allows us to consider the state of the government budget at full employment and ignore the impact on the budget of automatic changes in tax revenues that occur over the course of the business cycle. 14. Strict adherence to a balanced budget policy tends to worsen recessions and inflationary booms. 15. Full employment Keynesian policies during a recession can generate full employment but cause large budget deficits. Austerity measures during a recession can balance the budget but cause a further contraction of aggregate output and employment. 16. In a Marxian framework, when the government borrows to finance production in SOEs, the only macroeconomic consequence is a change in the composition of the total output. When the government borrows to purchase privately produced output, however, it accumulates debt and must impose taxes on wages or profits (or both) to pay interest on its debt.
List of Key Terms
Fiscal policy, Balanced budget, Budget deficit, Budget surplus, Unified budget deficit, Unified budget surplus, On-budget deficit, On-budget surplus, Off-budget deficit, Off-budget surplus, Appropriated programs (discretionary programs), Mandatory spending, Entitlement programs, Net interest, Federal debt, Expansionary fiscal policy, Contractionary fiscal policy, Complete crowding out, Partial crowding out, Complete accommodation, Flexible exchange rates, Fixed exchange rates, Marginal tax rates, Income tax brackets, Median income, Average tax rate, Progressive tax system, Proportional tax system, Flat tax, Regressive tax system, Government expenditures multiplier, Full employment budget, Actual budget, Automatic stabilizers, Full employment balanced budget reference line, Balanced budget amendments, Austerity measures
Problems for Review
1. Suppose that individual income tax revenues are $1.2 trillion, corporate income taxes are $0.5 trillion, and excise taxes are $0.2 trillion. Social Security payroll taxes (off-budget) are $0.75 trillion. National defense spending is $0.6 trillion, spending on Human Resources is equal to $1.75 trillion, and net interest equals $0.4 trillion. Finally, assume that off-budget Social Security expenditures are $0.8 trillion and spending by the Postal Service amounts to $0.1 trillion. Calculate the following:
a. On-budget receipts
b. On-budget outlays
c. Off-budget receipts
d. Off-budget outlays
e. On-budget deficit or surplus
f. Off-budget deficit or surplus
g. Unified budget deficit or surplus
2. Suppose that the marginal propensity to consume (mpc) is 0.8. Also assume that government spending increases by $200 billion and lump sum taxes fall by $100 billion. What is the total change in the equilibrium real GDP, if the price level is fixed in the short run?
3. Suppose you are a single taxpayer and that your annual income is $175,000. Calculate the total taxes that you owe the federal government for the year using Table 18.5. Also calculate your average tax rate (ATR). How does it compare with the marginal tax rate in your income tax bracket? What is the reason for this relationship? 4. Suppose you know the following information about the economy: a. Autonomous consumer spending is $200 billion.
b. Autonomous investment spending is $100 billion. c. Autonomous government spending is $150 billion.
d. Autonomous net export spending is $75 billion. e. The flat tax rate is 15% or 0.15. f. The marginal propensity to consume is 0.7. Solve for the equilibrium output. Then place the aggregate expenditures function on a graph with A on the vertical axis and real GDP (Y) on the horizontal axis. Include the 45-degree line (A = Y) on your graph. Identify the numerical values for the equilibrium output and the vertical intercept on the graph. 5. Suppose real GDP rises from $2 trillion to $3.5 trillion, and tax revenues rise from $0.25 trillion to $0.6 trillion. Calculate the average tax rate before and after the change. What happens to it? What kind of tax system is it? 6. Is it possible for a government to have a full employment budget surplus but then also to be running an actual deficit? What would need to happen to bring about that situation? Consider Figure 18.22 when answering this question. 7. Use the information from question 4 for these problems. a. Suppose that the full employment output is $1500 billion. What level of government spending will generate this level of output as the equilibrium output? What will tax revenues be at this output level? What will be the state of the government budget at this output level?
b. Suppose the government implements austerity measures in the hopes of balancing the budget. What should it do? Find the new equilibrium output. Does the result surprise you? How is this outcome possible?
8. Consider Figure 18.26. Answer the following questions:
a. What kind of change might cause a parallel shift of the tax rate line to the right?
b. How would a change in the distribution of income between profits and wages affect the tax rate line? Consider, for example, a rise in profits and a fall in wages. Assume that the sum of wages and profits remains the same. Redraw the graph in Figure 18.26, and then add the new tax rate line on the same graph.
1. Office of Management and Budget. Tables 2.1, 2.5, 3.1, and 3.2. Historical Tables. Web. Accessed on February 21, 2018. https://www.whitehouse.gov/omb/historical-tables/
2. Office of Management and Budget. Tables 2.1, 2.5, 3.1, and 3.2. Historical Tables. Web. Accessed on February 21, 2018. https://www.whitehouse.gov/omb/historical-tables/
3. See Bade and Parkin (2013), p. 827, for an explanation of how changes in government spending, transfer payments, and taxes may be used separately or in combination with each other as part of a nation's fiscal policy. McConnell and Brue (2008), p. 209-211, also discuss the effects of changes in government spending and taxes, separately and then in combination with one another, where the policies reinforce one another in their impact on real output. Samuelson and Nordhaus (2001), p. 504, discuss the combined impact of changes in taxes and government spending also, but they consider the case where the two policies offset each other in their impact on real output. Chisholm and McCarty (1981), p. 175-178, discuss expansionary and contractionary spending policies and then expansionary and contractionary tax policies before considering the synthesis case.
4. Wolff and Resnick (2012), p. 114-115, emphasize this point.
5. As Chisholm and McCarty (1981), p. 187-188, explain, debt retirement may be “fiscally counterproductive” since the aim of the budget surplus is to “diminish total demand.”
6. McConnell and Brue (2008), p. 212-213, evaluate the three systems of taxation based on their degrees of built-in stability. This section greatly expands upon their discussion.
7. Eventually, the slope of the T line becomes negative, which implies a negative marginal tax rate. This case might seem unrealistic as it seems to imply negative marginal income taxes for the rich (i.e., subsidies) as their income grows. The MTR here, however, only shows what happens to total taxes as income grows. If enough taxpayers move to higher income levels where the marginal tax rates are lower (albeit still positive), then the aggregate MTR falls as shown.
8. Graphs that show how the budget gap changes relative to changes in aggregate output may be found in Solomon (1964), p. 107, and in Oakland (1969), p. 350.
9. Neoclassical textbooks tend to emphasize the difficulties associated with balanced budget amendments. For example, see Hubbard and O’Brien (2019), p. 970-971, and Chiang and Stone (2014), p. 564.
10. I am deeply grateful to Prof. Gleicher for granting me permission to include a summary of his model in this section, which uses a different numerical example than the one he originally used to illustrate these points. The original source is: Gleicher, David. "A Novel Method of Teaching Keynesian Demand versus Neoliberal Austerity Policies within the Simple Keynesian Model." Union for Radical Political Economics (URPE) Newsletter. Volume 44, Number 3, Spring 2013: 6-7. The final, definitive version is available at urpe.org/content/media/UA_URPE_Past_Newsletters/spring2013newsletter.pdf.
11. Campbell, Tom. “When Soaring Deficits Come Home to Roost.” Los Angeles Times. Sunday, Home Edition. 25 Aug. 2019.
Goals and Objectives:
In this chapter, we will do the following:
1. Explain the neoclassical theory of comparative advantage
2. Analyze the limits to the terms of trade and the determination of the terms of trade
3. Explore criticisms of the theory of comparative advantage
4. Investigate the so-called New Trade Theory
5. Examine the neoclassical approach to tariffs and quotas
6. Consider radical theories of world trade
In this final part of the book, we explore key principles that will help us understand the operation of the world economy. This chapter concentrates on theories of international trade. The final chapter of the book considers theories of international finance. As in other chapters, much of the focus in the present chapter will be on the dominant neoclassical perspective. Unlike other economics principles textbooks, however, we will also consider criticisms of the mainstream theory of trade as well as radical theories that serve as alternatives to neoclassical trade theory. As before, these alternative theories of trade will provide us with a different perspective of the same material, but they will also deepen our understanding of mainstream trade theory.
The Ricardian Theory of Comparative Advantage
The theory of international trade that dominates mainstream neoclassical discourse today has its roots in a theory developed by David Ricardo in his 1817 book On the Principles of Political Economy and Taxation. According to Ricardo, two nations can always gain from trade as long as each nation has a comparative advantage in the production of at least one commodity. That is, he argued that a richer nation might still gain from trade with a poorer nation even if it is a better producer of every commodity that the two countries produce. If the rich nation is relatively much better at producing one commodity than another, then it makes sense for the rich nation to specialize in the production of the commodity that it is relatively much better at producing. It can then engage in trade with the poor country, and both nations will be better off.
When Ricardo presented his theory, he did so using the classical labor theory of value. He assumed that one nation was absolutely better at producing both commodities. Within that framework, a nation has an absolute advantage if it can produce one unit of a commodity with less labor time than another nation. A nation has a comparative advantage in the production of a commodity, on the other hand, if its absolute advantage is relatively greater for that commodity. In that case, it should specialize in the commodity in which it has a comparative advantage. The other country should specialize in the production of the other commodity in which it will have a comparative advantage. Even though the poor country has an absolute disadvantage in the production of both commodities (more labor time is required to produce a unit of each commodity), it nevertheless has a smaller absolute disadvantage in the production of one commodity. The commodity in which it has a smaller absolute disadvantage is the one in which it has a comparative advantage.
Table 19.1 provides an example (like Ricardo’s original example) of two nations and the amounts of labor required to produce one unit of cloth and one unit of wine.
Table 19.1 shows that England has an absolute advantage in the production of both wine and cloth. That is, it requires absolutely less labor time in England to produce each commodity than in Portugal. At the same time, it requires 1 and 2/3 times as much labor time to produce wine in Portugal (= 5/3) as it does in England. Similarly, it requires 3 times the amount of labor time in Portugal to produce cloth (= 6/2). Therefore, Portugal’s absolute disadvantage is greatest in cloth production and so we should expect it to have a comparative advantage in wine production.
Similarly, England requires only 0.6 times as much labor time as Portugal to produce wine (= 3/5). England also requires only about 0.33 times as much labor time as Portugal to produce cloth (= 2/6). Again, we see that England can produce each commodity in less time than Portugal. Nevertheless, England’s absolute advantage is clearly greatest in the production of cloth. Therefore, England’s comparative advantage is in the production of cloth. In conclusion, England should specialize in cloth production, Portugal should specialize in wine production, and the two nations can trade to the benefit of each.
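A short sketch makes the comparison mechanical. The labor requirements below are the hours implied by the ratios quoted above (wine: 3 in England, 5 in Portugal; cloth: 2 in England, 6 in Portugal); Table 19.1 itself may state the example in different units.

```python
# Labor hours per unit of output, as implied by the ratios in the text.
hours = {
    "England":  {"wine": 3, "cloth": 2},
    "Portugal": {"wine": 5, "cloth": 6},
}

for country, h in hours.items():
    cloth_cost_in_wine = h["cloth"] / h["wine"]   # wine given up per unit of cloth
    wine_cost_in_cloth = h["wine"] / h["cloth"]   # cloth given up per unit of wine
    print(country, round(cloth_cost_in_wine, 2), round(wine_cost_in_cloth, 2))

# England:  0.67 wine per cloth, 1.5 cloth per wine
# Portugal: 1.2  wine per cloth, 0.83 cloth per wine
# -> England's opportunity cost of cloth is lower (comparative advantage in cloth);
#    Portugal's opportunity cost of wine is lower (comparative advantage in wine).
```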
The Modern Theory of Comparative Advantage: The Case of Constant Opportunity Cost
Of course, mainstream economists have long since abandoned the classical labor theory of value. As a result, the Ricardian presentation is not the best way to represent the mainstream theory of international trade. Instead, we will return to the production possibilities model and consider an example in which Thailand and the United States produce both sugar (S) and corn (C). Each nation has a given production technology and given stocks of land, labor, and capital. We will also modify our earlier assumption from Chapter 2 that each nation experiences increasing marginal opportunity costs. The reader might recall that increasing opportunity cost arises because societies generally possess heterogeneous resources. Instead, we will assume that each nation possesses homogeneous resources that are equally well-suited to all lines of production and thus are easily transferrable from one industry to another. The assumption of homogeneous resources generates a pattern of constant marginal opportunity costs. Figure 19.1 shows the production possibilities for the U.S. and Thailand.
Because the U.S. has an absolute advantage in the production of both corn and sugar, the conclusion might be drawn that the U.S. should not trade with Thailand. If we recall Ricardo’s insight, however, that a nation only needs to have a comparative advantage for trade to be beneficial to it, then trade might still be worthwhile. To provide a definite answer to this question, we must introduce a modified definition of comparative advantage that is compatible with the production possibilities framework. It will be said that a nation possesses a comparative advantage in the production of a commodity if the marginal opportunity cost of production is lower for that nation than for another nation.
As mentioned previously, the marginal opportunity cost, also known as the domestic terms of trade, is reflected in the slope of the PPF. In the U.S., therefore, the marginal opportunity cost of corn is the following:
$\frac{\Delta S}{\Delta C}=-\frac{50S}{50C}=-\frac{1S}{1C}$
In other words, the opportunity cost of producing 1 unit of corn is 1 unit of sugar in the U.S. In Thailand, however, the domestic terms of trade are rather different. The marginal opportunity cost of producing corn in Thailand is the following:
$\frac{\Delta S}{\Delta C}=-\frac{30S}{10C}=-\frac{3S}{1C}$
That is, the marginal opportunity cost of producing 1 unit of corn in Thailand is 3 units of sugar. It should be clear that the marginal opportunity cost of producing corn is lower in the United States. In the U.S. it only costs 1 unit of sugar rather than 3 units as in Thailand. Therefore, the U.S. has a comparative advantage in the production of corn.
Thailand must have a comparative advantage in sugar production, but to verify this result, let’s consider the reciprocal of the slope. The reciprocal of the slope will allow us to see the marginal opportunity cost of producing 1 unit of sugar. For example, in the U.S. the marginal opportunity cost of sugar production is the following:
$\frac{\Delta C}{\Delta S}=-\frac{50C}{50S}=-\frac{1C}{1S}$
In other words, the marginal opportunity cost of 1 unit of sugar is one unit of corn. Similarly, we can write the marginal opportunity cost of 1 unit of sugar in Thailand as follows:
$\frac{\Delta C}{\Delta S}=-\frac{10C}{30S}=-\frac{1/3C}{1S}$
In other words, the marginal opportunity cost of 1 unit of sugar in Thailand is 1/3 units of corn. Therefore, the marginal opportunity cost of sugar production is lower in Thailand than in the United States since it costs 1 unit of corn in the U.S. but only 1/3 units of corn in Thailand to produce 1 unit of sugar. Thailand thus has a comparative advantage in sugar production.
In such examples, it will always be the case that if one nation has a comparative advantage in the production of a commodity, then the other nation will have a comparative advantage in the production of the other commodity. The only case in which a nation will not have a comparative advantage in the production of either commodity is the one in which the marginal opportunity cost is the same in the two nations. Trade in that case will not achieve anything that cannot be achieved domestically.
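The same conclusion can be reached mechanically from the PPF endpoints in Figure 19.1 (50 tons of corn or 50 tons of sugar for the U.S.; 10 tons of corn or 30 tons of sugar for Thailand):

```python
# Maximum outputs under complete specialization (from Figure 19.1).
ppf = {"U.S.": {"corn": 50, "sugar": 50}, "Thailand": {"corn": 10, "sugar": 30}}

for country, limits in ppf.items():
    cost_of_corn = limits["sugar"] / limits["corn"]   # sugar given up per ton of corn
    cost_of_sugar = limits["corn"] / limits["sugar"]  # corn given up per ton of sugar
    print(country, cost_of_corn, round(cost_of_sugar, 2))

# U.S.: corn costs 1.0 sugar; Thailand: corn costs 3.0 sugar
# -> the U.S. has the comparative advantage in corn, and Thailand in sugar.
```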
In this case, however, trade can benefit both nations. To understand how, we assume that each nation completely specializes in the production of the commodity in which it has a comparative advantage. That is, it is assumed that the U.S. shifts all its resources to the production of corn since it has a comparative advantage in corn production. Similarly, Thailand shifts all its resources to sugar production since that is where its comparative advantage lies. In Figure 19.1, the U.S. will produce at the horizontal intercept in (a), and Thailand will produce at the vertical intercept in (b).
Now that each nation is producing only one commodity, it will certainly want to trade a part of its production for the other commodity that is now only produced by its trading partner. The amount of the other commodity that each country will wish to import and the amount of its own commodity that it wishes to export will depend on the preferences of the consumers in each nation. Because the PPF only tells us about production possibilities and does not tell us about consumption possibilities, we do not possess enough information to address this question.
We are in a position, however, to draw some conclusions regarding the international terms of trade that will be established in the world market. The international terms of trade tell us how much sugar Thailand is willing to exchange for 1 unit of American corn. Its reciprocal tells us how much corn the U.S. is willing to exchange for 1 unit of Thai sugar. This international rate of exchange will be determined through a process of negotiation between the two nations. That is, competition in the marketplace will ultimately determine the price that emerges for each commodity.
Certain limits to the terms of trade exist though. To understand why such limits exist, consider the situation from Thailand’s perspective first. Would Thailand ever trade more than 3 units of sugar for 1 unit of corn? The answer is definitely “no.” According to its domestic terms of trade, Thailand can produce 1 unit less of corn and 3 units more of sugar by shifting resources from corn production to sugar production. It should never pay more in the world market than 3 units of sugar for 1 unit of corn since it is capable of that tradeoff at home. Similarly, the U.S. will never pay more in the world market than 1 unit of corn for 1 unit of sugar. According to its domestic terms of trade, the U.S. can shift resources from corn production to sugar production. If it does so, then it will gain 1 unit of sugar at the cost of 1 unit of corn. Since it can obtain 1 unit of sugar by only sacrificing 1 unit of corn at home, the U.S. will never be willing to pay more than 1 unit of corn in the world market.
This reasoning suggests that the domestic terms of trade serve as the maximum limits for the international terms of trade. Any international exchange ratio that falls within these limits, including the domestic exchange ratios themselves, is a possible outcome for the international terms of trade. For example, the international terms of trade might be the following:
$\frac{\Delta S}{\Delta C}=-\frac{2S}{1C}$
In other words, in the world market, Thailand must pay 2 units of sugar for 1 unit of corn. This outcome is possible because the world price of corn lies in between the domestic prices (i.e., between 1 and 3 units of sugar). Similarly, taking the reciprocal allows us to consider the world price of a unit of sugar.
$\frac{\Delta C}{\Delta S}=-\frac{1/2C}{1S}$
That is, 1 unit of sugar costs the U.S. 1/2 units of corn. The world price of sugar also lies between the domestic prices (i.e., 1/3 and 1 units of corn).
To visualize these limits to the terms of trade and this additional possible outcome for the international terms of trade, consider Figure 19.2.
In Figure 19.2, the domestic marginal opportunity costs are represented as straight lines stemming from the origin. The negative signs of the slopes have been eliminated, but a tradeoff between the two commodities in each nation is implied. Because the domestic marginal opportunity costs are constant in each nation by assumption, these lines have constant slopes. The international terms of trade line must lie somewhere in between these extreme possibilities (although the extremes cannot be ruled out as possibilities for the established international terms of trade). If the international terms of trade line coincides with one of the domestic price lines, then that nation does not benefit from trade, and the other nation enjoys all the gains from trade. In this case, the international terms of trade line has a slope of 2 and thus it lies in between Thailand’s domestic marginal opportunity cost line, which has a slope of 3, and the U.S. domestic marginal opportunity cost line, which has a slope of 1.
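The "limits to the terms of trade" argument can be summarized in a tiny helper function: a world price of corn (measured in sugar) can benefit both countries only if it lies between the two domestic opportunity costs, and at either endpoint one country captures all of the gains.

```python
def within_limits(sugar_per_corn, us_cost=1.0, thai_cost=3.0):
    """True if the proposed world price of corn lies between the domestic opportunity costs."""
    low, high = sorted((us_cost, thai_cost))
    return low <= sugar_per_corn <= high

print(within_limits(2.0))   # True  - the terms of trade used in the text
print(within_limits(0.5))   # False - the U.S. could do better by producing sugar at home
print(within_limits(4.0))   # False - Thailand could do better by producing corn at home
```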
The final step in this section is to determine what effect these specific terms of trade have on the consumption possibilities of Thailand and the U.S. Each nation is engaged in complete specialization in the production of the commodity in which it has a comparative advantage. Given the international terms of trade that have been established, each nation can export a part of its production in exchange for a definite amount of the other nation’s commodity. For example, the U.S. could hypothetically trade all 50 tons of corn for 100 tons of sugar at these specific terms of trade. Multiplying the numerator and denominator of the international terms of trade fraction by 50, the calculation may be completed as follows:
$\frac{\Delta S}{\Delta C}=-\frac{2S}{1C}=-\frac{100S}{50C}$
Because the U.S. can hypothetically reach this point of 100 tons of sugar as well as any other point along the straight line connecting these two points, we can add a trading possibilities frontier (TPF) to our earlier graph from Figure 19.1 (a). This graph is shown in Figure 19.3 (a).
Similarly, Thailand can hypothetically export all its sugar for 15 tons of corn. Following a similar approach as the one used to obtain the TPF of the U.S., we can arrive at this result by multiplying the numerator and denominator of the international terms of trade fraction by 30:
$\frac{\Delta C}{\Delta S}=-\frac{1/2C}{1S}=-\frac{15C}{30S}$
Since Thailand can trade with the U.S. to reach any point on the straight line connecting these two points, it also has a trading possibilities frontier (TPF). Thailand’s TPF is represented in Figure 19.3 (b).
The major lesson from this analysis is that each nation can reach consumption combinations of the two commodities that lie outside their production possibilities frontiers. Neither nation has acquired more resources or more advanced production technologies, and yet each nation is able to consume more than before. Specialization and trade are what make this extraordinary result possible.
The careful reader might notice one little problem with the analysis. The maximum amount of sugar production in Thailand with complete specialization is 30 tons of sugar. Therefore, it is impossible for the U.S. to trade all its corn in exchange for 100 tons of sugar. Nevertheless, if it trades 15 tons of corn for 30 tons of sugar, then it achieves a much higher level of consumption than it can achieve without trade. The same holds true for Thailand. Aside from the complication of limited production in Thailand, each nation can achieve consumption possibilities that lie between its PPF and its TPF. These results suggest that international trade leads to mutual gains, which is the main reason why neoclassical economists argue so forcefully for free trade in the world market.
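The consumption gains can also be worked out directly. The following minimal sketch assumes the numbers used above: complete specialization of 50 tons of corn in the U.S. and 30 tons of sugar in Thailand, with international terms of trade of 2 units of sugar per unit of corn.

```python
# Minimal sketch of the consumption possibilities opened up by trade, using the
# assumed numbers from this example.
TOT_SUGAR_PER_CORN = 2.0
US_CORN_OUTPUT = 50.0
THAI_SUGAR_OUTPUT = 30.0

def us_consumption(corn_exported):
    """U.S. bundle (corn, sugar) if it exports 'corn_exported' tons of corn."""
    return US_CORN_OUTPUT - corn_exported, TOT_SUGAR_PER_CORN * corn_exported

def thai_consumption(sugar_exported):
    """Thailand's bundle (corn, sugar) if it exports 'sugar_exported' tons of sugar."""
    return sugar_exported / TOT_SUGAR_PER_CORN, THAI_SUGAR_OUTPUT - sugar_exported

# The trade that is actually feasible given Thailand's 30-ton limit on sugar output:
print(us_consumption(15.0))    # (35.0, 30.0): the U.S. consumes 35 corn and 30 sugar
print(thai_consumption(30.0))  # (15.0, 0.0): Thailand consumes 15 corn and no sugar
```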
The Modern Theory of Comparative Advantage: The Case of Increasing Opportunity Cost
Now that we have a clear understanding of how nations may enjoy mutual gains from trade in the case of constant marginal opportunity costs, we can consider the case of increasing marginal opportunity costs. That is, we will assume that nations have production possibilities frontiers like those we considered in Chapter 2. Again, the underlying assumption that gives rise to this pattern of increasing opportunity cost is the assumption of heterogeneous resources.
Let's assume that the U.S. and Thailand have the PPFs shown in Figure 19.4, and that each nation is producing at the points identified on their PPFs in the figure.
In this example, the U.S. is producing 15 tons of corn and 44 tons of sugar in autarky. Autarky refers to a closed economy, or an economy that does not engage in trade relationships. That is, it neither imports nor exports commodities. The slope of the tangent line drawn to the PPF at that point tells us the marginal opportunity cost of corn production. Let’s assume that this slope is -0.5S/C. That is, the opportunity cost of producing 1 unit of corn is 1/2 unit of sugar in the U.S. as shown in Figure 19.4 (a). Similarly, Thailand is producing 8 tons of corn and 16 tons of sugar. The marginal opportunity cost of producing corn is also determined by the tangent line that just touches the curve at that point. Let’s assume that the slope is -7S/1C. That is, the opportunity cost of producing 1 ton of corn in Thailand is 7 tons of sugar as shown in Figure 19.4 (b). It should be clear that the U.S. has a comparative advantage in corn production. Thailand would then have a corresponding comparative advantage in sugar production.
As before, each nation will specialize in the commodity in which it has a comparative advantage. That is, the U.S. will begin shifting resources towards corn production, and Thailand will begin shifting resources towards sugar production. Something interesting happens in this case, however, that is very different from the case of constant opportunity cost that we considered before. As the U.S. begins producing more corn, the marginal opportunity cost of corn production begins to rise. This increase occurs because less suitable resources must be increasingly relied upon as the best resources for sugar production are transferred to corn production. At the same time, Thailand’s increased production of sugar causes the marginal opportunity cost of sugar production to rise for the same reason. Eventually, the marginal opportunity cost of corn production in each nation will be the same. That is, the domestic terms of trade will be exactly the same in each nation. At this point, the specialization ceases because neither nation will have anything to gain from further specialization. This situation is depicted in Figure 19.5.
As Figure 19.5 shows, the two nations cease to increase their degrees of specialization once the domestic marginal opportunity costs in each nation are equal to 1 ton of sugar for each ton of corn. That is, once the slopes of the PPFs in each nation are equal to -1S/C, then the domestic prices in each nation are the same. The interesting result, in this case, is that each nation only pursues partial specialization. That is, each nation will not completely specialize in the production of the commodity in which it has a comparative advantage because its comparative advantage would then become a comparative disadvantage.
Once the marginal opportunity costs of the two nations have become equal, the limits to the terms of trade become identical for the two nations. Therefore, the international terms of trade must be the same as the domestic terms of trade once partial specialization is achieved. Unlike in the case of constant opportunity cost, the case of increasing opportunity cost leads to a unique outcome for the international terms of trade. Each nation partially specializes in one commodity and then exports some of that commodity in exchange for the commodity in which the other nation partially specializes.
As it turns out, we have only discussed part of the story. Demand also plays a role in determining the equilibrium terms of trade. To provide the complete picture, we would need an additional analytical tool to represent consumer preferences in each nation. Let's instead move to a graph that allows us to see how both supply and demand contribute to the determination of the equilibrium terms of trade. Figure 19.6 shows what are referred to as offer curves in mainstream trade theory, a device that has its roots in the theory of international prices developed by John Stuart Mill.[1] The starting point for the offer curve analysis is the requirement that each nation's trade be balanced. For the U.S., which exports corn and imports sugar, this condition may be written as follows:
$P_{C}C=P_{S}S$
In the above equation, PC represents the price of corn, C represents the quantity of corn, PS represents the price of sugar, and S represents the quantity of sugar. Hence, the value of corn exported must equal the value of sugar imported in the U.S., and similarly, the value of corn imported must equal the value of sugar exported from Thailand. If we rearrange the equation, we obtain the following result:
$\frac{S}{C}=\frac{P_{C}}{P_{S}}$
This equation shows that the price of corn relative to the price of sugar (i.e., the relative price ratio) may be represented as the slope of a ray drawn from the origin, which may be calculated as the ratio of S to C at any point along that ray. If the ray becomes steeper, then the relative price of corn has risen. If the ray becomes flatter, then the relative price of corn has fallen. In addition, the nation’s trade is balanced if it operates on this ray.
Now suppose that the international terms of trade are given by the line TOT1 in Figure 19.7.
In this case, if the U.S. produces and offers C1 amount of corn in the market in exchange for S1 amount of sugar and succeeds in doing so, then its trade will be balanced. The problem, however, is that Thailand will offer S2 amount of sugar and wish to import C2 amount of corn. In other words, a large excess demand for corn will exist. Similarly, a large excess supply of sugar will exist. As a result, the relative price of corn will rise, and the relative price of sugar will decline. As the relative price of corn rises, the terms of trade line becomes steeper and moves in the direction of the equilibrium terms of trade line (TOT*) at the intersection of the two offer curves.
Next suppose that the international terms of trade are given by the line TOT2 in Figure 19.8.
In this case, Thailand produces and offers S1 amount of sugar and wishes to import C1 of corn. If it succeeds then its trade will be balanced. The problem this time is that the U.S. is offering a great deal more corn (C2) and wishes to buy S2 tons of sugar, which is much more than Thailand wishes to export. As a result, an excess demand for sugar exists and an excess supply of corn exists in the world market. As a result, the relative price of sugar will rise and the relative price of corn will fall. As the relative prices change, the ray from the origin, which represents the international terms of trade, becomes flatter. It is only when the ray passes through the point of intersection of the two offer curves that each nation is balancing its trade, which is reflected in the fact that each nation is operating on the terms of trade line and its offer curve. Trade has become balanced due to international prices responding to the competitive pressures of supply and demand in the world market.
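This adjustment process can be illustrated with a simple price-adjustment loop. The offer schedules below are hypothetical functional forms chosen only for illustration (they are not the curves drawn in Figures 19.7 and 19.8); the point is merely that the relative price of corn rises whenever excess demand for corn exists and falls whenever excess supply exists, until trade is balanced where the two offer curves intersect.

```python
# Toy price-adjustment loop with hypothetical offer schedules (illustrative only).
# p denotes the relative price of corn, Pc/Ps, measured in sugar per corn.

def us_corn_exports(p):
    # The U.S. offers more corn as corn becomes relatively more valuable.
    return 20.0 * p

def thai_corn_imports(p):
    # Thailand demands less corn as corn becomes relatively more expensive.
    return 60.0 / p

p = 1.0  # an arbitrary disequilibrium starting price
for _ in range(200):
    excess_demand_for_corn = thai_corn_imports(p) - us_corn_exports(p)
    p += 0.001 * excess_demand_for_corn   # the corn price rises with excess demand

print(round(p, 3))  # approaches 1.732, where desired exports equal desired imports
```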
Criticisms of the Neoclassical Theory of Comparative Advantage
The theory of comparative advantage remains the foundation of the neoclassical theory of international trade to this day. It is more than just a theory of how nations engage in trade with one another. It is also an ideological defense of free trade. That is, neoclassical economists reject any kind of government restriction on international trade on the grounds that such restrictions undermine the potential for the mutual gains from trade to be realized. Because the theory of comparative advantage is the foundation of the free trade theory that most economists advocate, it is important to consider heterodox critiques of the theory to determine whether they have any merit.
One argument that is made in favor of free trade is that a nation’s exports generate sufficient revenue to finance the purchases of imports. Hence, trade should be balanced. It is important to note, however, that just because a nation recently sold commodities does not mean that it must immediately buy commodities from another nation. A trade surplus may persist if the nation does not purchase imported commodities. Meanwhile the other nation has purchased commodities but is unable to sell. It will, therefore, experience a trade deficit. If it finances its purchases using foreign currency reserves and those reserves begin to run out, then the result may be a currency crisis. The argument that imports can always be financed by exports is rather similar to the argument that whenever commodities are produced, enough income is paid to the factors of production to ensure the purchase of those commodities. We identified this claim in Chapter 13 as Say’s Law of Markets. The problem, of course, is that just because incomes have been received does not guarantee that they will be spent any time soon. If enough income is saved, then the result is a glut of commodities. In both examples, gaps between the time income is received and the time it is spent create the possibility of economic crises.
The problems with mainstream trade theory go far beyond this specific issue and relate to the assumptions underlying the theory of comparative advantage. In his book Free Trade Doesn’t Work (2009), Ian Fletcher provides a detailed account of eight hidden assumptions that form the basis of the theory.[2] Fletcher argues that exposing the assumptions reveals the flaws inherent in the theory. This section and the next section provide a brief overview of Fletcher’s insights.[3]
The first hidden assumption that Fletcher identifies is the assumption that trade is sustainable.[4] To finance its imports, a nation may not be exporting commodities but rather assets.[5] If it sells bonds, for example, then its debts may accumulate to the point where the future interest payments associated with that debt interfere with future capital investments and/or consumption. A nation may also specialize in the production of a non-renewable natural resource because it has a comparative advantage in its production.[6] Mutual gains from trade might exist in the short run, but in the long run, its exports will deplete its natural resource base to the point where it might experience a massive economic crisis. Fletcher offers Nauru as an example of an island nation that exported large quantities of guano, which is used for manufacturing fertilizer, from 1908 to 2002.[7] The economy boomed in the 1960s and 1970s. When the guano ran out, the economy collapsed. Middle Eastern nations that are heavily dependent on exports of petroleum might experience similar problems of economic dislocation as their oil reserves are depleted.[8]
The second hidden assumption that Fletcher identifies is the assumption that no externalities exist.[9] Ideally, the prices that emerge in the world market should reflect all costs and benefits associated with the production and consumption of the commodities traded. If third parties are affected by the production or consumption of the products, either positively or negatively, then positive or negative externalities exist and the commodities are either under-produced or over-produced, as discussed in Chapter 3. For example, if a production process leads to air pollution, then efficiency requires that the commodity produced be priced high enough to cover the costs to third parties who are then compensated for the harm done to them. If this corrective action is not taken, then the commodity will be produced in quantities that are too large, and the price will be too low.
The third hidden assumption is that factors of production are easily transferred from one industry to another.[10] Consider our earlier example involving constant opportunity costs in which the U.S. completely specialized in corn production and Thailand completely specialized in sugar production. Complete specialization requires a nation to shift all its resources away from the production of the commodity in which it has a comparative disadvantage. The problem is that transferring resources may not be an easy task to accomplish. Factories need to be converted to an entirely new type of production. Workers who may be considerably skilled at producing one type of commodity must now produce an entirely different commodity. The training involved may be expensive and time-consuming. The result very well may be unemployment of labor and production capacity that is not used for a long time. The theory of comparative advantage is entirely static. That is, the passage of time is not taken into account. Therefore, this problem is not obvious when considering movements along a production possibilities frontier, but the problem will be very real for workers and factory owners when cheaper imports begin to enter their nation.
The fourth hidden assumption is that international trade unambiguously raises the welfare of everyone in the nation.[11] After all, each nation can import the commodity it desires at a lower price. In our constant opportunity cost example, the U.S. acquires sugar at a lower price than it could domestically, and Thailand acquires corn at a lower price than it could domestically. We also saw that both nations enjoyed trading possibilities frontiers that lie beyond their production possibilities frontiers. The problem, however, is that with some industries contracting and other industries expanding, the gains are not equally shared. In fact, some workers and capital owners will lose income as a result of trade. A net gain for society as a whole does occur, however, because the winners gain more than the losers lose. According to the Kaldor-Hicks criterion, as long as the winners can hypothetically compensate the losers, the situation represents an improvement (even if the compensation does not actually occur). Nevertheless, the welfare gain for the nation may not be shared by all, and so this criticism is worth considering when evaluating the comparative advantage model.
The fifth hidden assumption is that capital is not internationally mobile.[12] If capital is able to leave the nation and seek more productive opportunities in other nations, then the comparative advantage model does not apply. Remember that the nation’s resource stocks determine the position of its PPF. If a nation begins to lose capital as trade begins, then its PPF will shift inwards. None of the results we discussed earlier would apply in such a situation. When Ricardo first developed the theory of comparative advantage, the assumption of capital immobility was more realistic. In the modern world, capital is highly mobile and so the argument is that the comparative advantage model does not apply very well today.
The sixth hidden assumption is that short-term efficiency is compatible with long-term economic growth.[13] As a static model, the theory of comparative advantage has nothing to say regarding the long-term growth prospects of the nation. Indeed, it is possible that a nation that chooses to specialize in the production of a commodity in which it has a comparative advantage might become permanently stuck producing a commodity that does not promote long-term economic growth. The original example that Ricardo used involved Britain’s specialization in textile production and Portugal’s specialization in wine production. As Fletcher points out, the British textile industry promoted technological change with the development of steam engines and sophisticated machine tools.[14] Wine, on the other hand, was produced using traditional methods that did not encourage innovation or productivity improvements.[15] Although Portugal may have benefited from specializing in wine production at the time, it is now the poorest nation in Western Europe.[16] Its goal of static efficiency appears to have been achieved at the expense of its long term growth prospects.[17]
The seventh hidden assumption is that trade does not induce adverse productivity growth abroad.[18] It is possible that trade might promote foreign industries to the point where their comparative advantage begins to change. In that case, the commodities that the foreign nation previously supplied are no longer supplied. The example that Fletcher gives is that of Japan in the 1950s and 1960s.[19] As the Japanese economy boomed, it transitioned from providing the U.S. with cheap manufactured commodities to those that required more sophisticated manufacturing processes.[20] This problem is not as serious as some of the others mentioned because the nations still gain from trade. The argument appears to be that the importing nation might not gain as much as previously.
Developments in New Trade Theory: The Potential Impact of Economies of Scale
Ian Fletcher's eighth hidden assumption is that no economies of scale exist in production.[21] This assumption leads to a discussion of what has become known as New Trade Theory. One of the major contributions to New Trade Theory was made by Ralph Gomory and William Baumol in their book Global Trade and Conflicting National Interests (2000).[22] In their book, Gomory and Baumol assume that economies of scale in production do exist.[23] Therefore, they depart from the traditional assumption implicit in the theory of comparative advantage that no scale economies exist.
To understand how scale economies lead to different theoretical results, we need to understand what Gomory and Baumol mean by “retainable industries.” Retainable industries are industries that a nation captures and retains due to a cost advantage stemming from economies of scale.[24] To capture such industries, the nation must be the first to achieve a large enough volume that such low per unit production costs cannot be easily matched by competitors in other nations.[25] What is interesting about retainable industries is that a nation might capture one even if another nation would turn out to be a superior producer if it managed to reach the same large volume of production as its competitor. The reason that the superior producer fails to do so is that scale economies serve as a barrier to entry, preventing the potential competitor from entering the market. Fletcher refers to this possibility as the lockout phenomenon.[26]
Figure 19.9 provides an example involving Japan and China where each nation enjoys economies of scale in production as reflected in the downward slope of their average cost (AC) curves.[27]
It should be clear from Figure 19.9 that China is the superior producer. At any level of output, China has a per unit production cost that is below Japan's per unit cost. Because Japan entered the industry much earlier, however, its larger production level of QJ allows it to produce at a much lower per unit cost than China, which produces an output level of QC. Clearly, if China expanded production to match Japan's output, then its unit cost would be lower and it would capture the industry. Because its unit cost is higher at its current small volume, however, Chinese firms cannot compete, and China's superior efficiency is never realized. In this case, the industry is a retainable industry for Japan, and the world economy loses out on an opportunity to produce the product at a lower unit cost. That is, free trade may not produce the best result for the world economy because many efficient producers will be locked out.
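A small numerical sketch helps to illustrate the lockout phenomenon. The cost figures below are hypothetical, not taken from Gomory and Baumol: each nation's average cost falls with volume, China's cost curve lies below Japan's at every output level, and yet Japan's realized unit cost is lower because it operates at a much larger volume.

```python
# Hypothetical scale-economy cost curves illustrating the lockout phenomenon.
def avg_cost(q, fixed_cost, marginal_cost):
    """Average cost falls with volume: fixed cost spread over output plus a constant unit cost."""
    return fixed_cost / q + marginal_cost

def japan_ac(q):
    return avg_cost(q, fixed_cost=1000.0, marginal_cost=5.0)

def china_ac(q):
    # China is the superior producer: its AC lies below Japan's at any common output.
    return avg_cost(q, fixed_cost=800.0, marginal_cost=4.0)

Q_JAPAN, Q_CHINA = 500.0, 50.0   # Japan entered first and operates at a far larger volume

print(japan_ac(Q_JAPAN))   # 7.0  -> Japan's realized unit cost
print(china_ac(Q_CHINA))   # 20.0 -> China's realized unit cost at its small volume
print(china_ac(Q_JAPAN))   # 5.6  -> what China could achieve at Japan's volume
```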
Fletcher draws upon this theory to explain why Bangladesh exports many T-shirts but few soccer balls while Pakistan exports many soccer balls but few T-shirts.[28] That is, economies of scale allowed these nations to capture specific industries, and those nations that achieve large production volumes first are the most likely to retain those industries.
It might seem that a nation’s best trade strategy in a world of scale economies is to capture as many retainable industries as possible by increasing production rapidly to drive down unit costs.[29] Fletcher explains that a nation should pursue such a strategy but only up to a point.[30] That is, if a home nation captures too many retainable industries, then it might actually prevent other nations from producing and exporting commodities to the point where it misses out on the superior efficiency of its competitors.[31] It might also leave other nations impoverished and unable to buy the exports of the home nation.[32] Furthermore, the home nation might find its own resources to be spread too thinly.[33] To see how a nation might go too far, consider the graph in Figure 19.10.[34]
Figure 19.10 shows what happens to a nation’s GDP as it captures a larger percentage of the world’s retainable industries. Its GDP increases with the capture of more industries but only up to a point. Beyond that point, its GDP begins to fall for the reasons stated.
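The inverted-U relationship in Figure 19.10 can be sketched with a simple concave function. The functional form and parameters below are assumed purely for illustration; only the general shape matters.

```python
# Purely illustrative: GDP as a concave function of the share of the world's
# retainable industries a nation captures (the functional form is assumed here).
def gdp(share, peak_share=0.6, base=100.0, scale=80.0):
    """GDP rises as more retainable industries are captured, peaks, then falls from overreach."""
    return base + scale * (2 * peak_share * share - share ** 2)

for s in (0.0, 0.3, 0.6, 0.9):
    print(s, round(gdp(s), 1))
# 0.0 100.0 | 0.3 121.6 | 0.6 128.8 | 0.9 121.6 -> gains, a peak, then decline
```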
The analysis has a startling implication for trade theory. It suggests that trade is not always a positive sum game.[35] A positive sum game refers to a situation in which all players may gain without anyone losing. The theory of comparative advantage suggests that competitive interaction in the international marketplace is a positive sum game. The economic pie is made larger as both nations experience expanded consumption possibilities. The potential for a negative impact on income distribution is not explicitly captured in the model and so it appears to be a positive sum game. A zero-sum game, on the other hand, occurs when one player may gain but only at the expense of another player. The suggestion of New Trade Theory is that international trade is sometimes a zero-sum game in which one nation gains at another nation’s expense. This perspective directly contradicts what mainstream economists have long argued about the mutual benefits from international exchange.
To see how the assumptions of New Trade Theory generate this interesting result, consider Figure 19.11.[36]
In Figure 19.11, we see how each nation’s GDP changes as it captures or loses retainable industries. As nation A increases its share of retainable industries in Area 1, its GDP increases. Notice that nation B also experiences an increase in GDP as it loses retainable industries. The reason is that nation B would be overreaching if it acquired such a large percentage of retainable industries. Giving up some of these industries to nation A actually benefits nation B. Both nations gain from trade, and this result is consistent with what neoclassical economists have long argued about the mutual gains from international trade.[37]
We discover a similar result in Area 3. In Area 3, nation A experiences a loss of GDP as a result of overreaching. It has captured so many retainable industries from nation B that its GDP begins to fall. Therefore, nation A will prefer to capture fewer industries. Nation B, on the other hand, will prefer to capture more retainable industries because it has not yet captured many. In doing so, its GDP will rise, so it will happily acquire the retainable industries that nation A is willing to sacrifice. We see another example of a positive sum game and of mutual gains from trade, consistent with the traditional neoclassical conclusions about free trade.
It is in Area 2 where we see a very different result. In Area 2, Nation A gains from capturing more retainable industries. Nation B, however, also wants retainable industries and so any gains in GDP for Nation A must come at the expense of Nation B. Similarly, any gains in GDP for Nation B must come at the expense of Nation A in this region. This area is characterized by mutual conflict rather than mutual benefit.[38] A theory of international trade that suggests trade is a zero-sum game is one that contrasts very sharply with conventional trade theory.
This New Trade Theory suggests that foreign productivity growth might lead to a loss of GDP, sending the gains from trade into negative territory.[39] It also suggests that the consequences of trade are rather complicated, sometimes leading to mutual benefit and sometimes leading to conflict.[40] Fletcher’s policy conclusion is that free trade should be the rule in Ricardian industries in which no economies of scale are present.[41] Because of the existence of retainable industries, however, he favors something called rational protectionism.[42] That is, free trade should be the rule in Ricardian industries, but protectionism should be used in the case of retainable industries. For example, subsidies should be given to industries that are relatively new but in which large scale economies exist. Such subsidies will allow firms to break into industries in which the low unit costs of foreign rivals are likely to keep them out.[43] Infant industry tariffs are another option to protect domestic industries from foreign competition so that they have an opportunity to grow and take advantage of scale economies.[44] Infant industry tariffs are taxes on imported commodities aimed at protecting fledgling domestic industries.
Although he advocates rational protectionism, Fletcher does acknowledge the difficulties associated with such a policy stance. One of the main challenges is knowing which industries to target.[45] Another difficulty is that many modern corporations have become multinational in scope. It may be difficult to identify the home nation in which case it is not clear how such firms would even fit into this analysis.[46]
The Neoclassical Critique of Protectionist Policies: Import Tariffs and Quotas
In addition to the neoclassical theory of comparative advantage, mainstream economists also rely on partial equilibrium analyses to defend the claim that protectionist policies reduce social welfare by undermining efficiency. To understand why, let’s consider the case of a small nation that has a domestic supply and demand for a specific good. Furthermore, let’s assume that it can import all it wants at the world market price (Pw). This situation is depicted in Figure 19.12.
In this market, the equilibrium autarky price is $3.00 per unit, and the equilibrium quantity exchanged is 60 units. Because trade is permitted, however, and the world price is only $1.50 per unit, the domestic producers will only produce and sell 20 units. At that price, the remaining 80 units of the 100 units demanded will be imported. Free trade, therefore, allows the small nation to consume a larger quantity of the good at a lower price than in the case of autarky. This result is basically consistent with the conclusion of comparative advantage theory.
The reader should also recall that the welfare of the consumers in the market can be measured using consumers' surplus. In Figure 19.12, consumers' surplus is equal to the area below the market demand curve and above the world price line at $1.50. In this case, no producers' surplus exists because the world supply curve coincides with the world price line and so the price received by producers is just enough to cover production costs. Therefore, the large triangle representing consumers' surplus also represents the total surplus or total welfare of all market participants.

The next question that we might consider is what will happen if the government decides to impose a tariff on imports. An import tariff is simply a tax on imported goods. In this case, the tariff is assumed to take the form of a tax per unit of physical product. For example, if the tariff is $0.75 per unit and the world price is $1.50 per unit, then the producers in the rest of the world must simply add $0.75 to the world price. The new world price then becomes $2.25 per unit. This situation is depicted in Figure 19.13.

In Figure 19.13, we see that the effect of the higher price is to reduce the quantity demanded of the product domestically from 100 units to 90 units. In addition, the higher price encourages domestic producers to produce and sell 30 units of the good rather than the 20 units previously sold. Imports thus fall to 60 units, which is the difference between the quantity demanded and the quantity domestically supplied.

In terms of the impact on social welfare, we need to consider what happens to consumers' surplus due to the tariff. In Figure 19.13, all the areas labeled represent lost consumers' surplus due to the higher world price. One of these regions, however, is captured by domestic producers in the form of producers' surplus, which they receive now that they are able to charge a higher price as well. This gain is the portion above the domestic supply curve (and between the old and new prices). The portion that is below the domestic supply curve from units 20 to 30 (and above the old price) represents excess production costs that the higher world price allows the domestic producers to incur. In other words, the higher world price allows the domestic producers to produce inefficiently. Because this portion of the lost consumers' surplus is not captured as producers' surplus, it represents a deadweight loss to society.

An additional deadweight loss to society that results from the tariff derives from the reduction in quantity demanded that occurs when the world price rises. Because consumers consume 10 fewer units as a result of the tariff, it is not possible for any consumers' surplus to be realized on these units. Furthermore, since the units are not produced, producers cannot capture these gains, and the welfare simply vanishes.

The final portion that must be considered is the rectangle in the middle, which represents tariff revenue. The base of the rectangle represents the quantity of imports, which in this case is 60 units. The height represents the tariff per unit of $0.75. If we multiply the tariff per unit times the quantity of imports, we obtain the total tariff revenue of $45 (= $0.75 per unit times 60 units). This portion of the lost consumers' surplus is captured by the government and so it is not a deadweight loss. The government has the power to use this revenue to provide services for the nation's citizens and so it does not necessarily lead to a shrinking of the economic pie.
Overall, however, the economic pie shrinks due to the excess production costs that protected firms are able to incur and the reduction in quantity demanded resulting from the higher world price.
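The welfare accounting for this example can be summarized numerically. The sketch below uses the quantities stated above and, for the two deadweight-loss triangles, treats the domestic supply and demand curves as approximately linear between the relevant points, which is an assumption of the sketch rather than something stated in the text.

```python
# Welfare accounting for the $0.75 tariff, using the quantities from the example.
world_price, tariff = 1.50, 0.75
price_with_tariff = world_price + tariff              # $2.25 per unit

supply_before, supply_after = 20, 30                  # domestic output rises
demand_before, demand_after = 100, 90                 # domestic consumption falls
imports_after = demand_after - supply_after           # 60 units imported

tariff_revenue = tariff * imports_after               # $45, captured by the government

# Deadweight losses, assuming approximate linearity between the points shown:
production_distortion = 0.5 * tariff * (supply_after - supply_before)   # $3.75
consumption_distortion = 0.5 * tariff * (demand_before - demand_after)  # $3.75

print(price_with_tariff, imports_after, tariff_revenue)
print(production_distortion + consumption_distortion)  # total deadweight loss of $7.50
```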
One final protectionist policy that we might consider is an import quota. An import quota is simply a quantitative limit on the amount of a good that may be imported into a nation. For example, the government might set an import quota of 60 units. Therefore, the maximum quantity that may be imported of the product is 60 units. What the quota means is that the total supply of the product will now be equal to the nation's domestic supply plus the quota amount. That is, the domestic supply curve will shift to the right by the amount of the quota, as shown in Figure 19.14.
The intersection between the new supply curve and the demand curve gives us an equilibrium price of $2.25 and an equilibrium quantity exchanged of 90 units. The result in this case looks very much like the result we obtained in the case of an import tariff. In fact, the new world price and the quantity of imports are the same as in the case of the tariff.

The welfare effects are also similar. We see that more domestic producers produce the commodity at a higher cost thanks to the restriction on imports, which makes possible the higher price. We also see that the quantity demanded drops, leading to a lower consumers' surplus. Both of these effects contribute to the deadweight loss of the policy. As before, the domestic producers do gain producers' surplus at the expense of consumers.

The one consequence of the import quota that makes this case rather different from the import tariff is the portion of lost consumers' surplus represented by the rectangle. Because the government does not collect a tax but simply limits the quantity of imports, who collects this extra revenue from the sale of the product? As Robert Carbaugh explains, the answer depends on whether foreign exporters or domestic importers have greater market power.[47] For example, if the foreign exporters collude, then they can demand a price of $2.25 per unit from domestic importers. The exporters will then gain the entire amount as a windfall profit. On the other hand, if the importers collude and refuse to pay any price higher than $1.50 per unit, then when they turn around and sell at $2.25 per unit to the consumers, the importers will gain the entire amount as a windfall profit. A final possibility that is worth considering is that the government sells import licenses to domestic importers, which grant them the legal right to import the good.[48] If it charges the maximum possible fee for these licenses, then it may end up capturing the entire windfall profit itself.[49] In this last case, the consequences of the import quota are identical to the consequences of the import tariff since the government captures the rectangle of lost consumers' surplus.

The major point to notice, of course, is that in both the import tariff and the import quota cases, overall social welfare is reduced due to the deadweight losses that result from the higher world price and the subsequent responses of domestic producers and consumers.
The Challenge of Dependency Theory
Another radical theory of the world capitalist economy is known as dependency theory. According to this theory, which first appeared in the 1960s, the world capitalist economy consists of rich, capitalist nation-states that exploit poor, developing nation-states by taking advantage of their abundant natural resources and cheap labor-power. The exploitative relationship that results creates a situation in which the poor nations become economically dependent on the rich nations and also become stuck in a chronic state of underdevelopment as a consequence of that relationship.
Andre Gunder Frank is one of the most important contributors to dependency theory. His Capitalism and Underdevelopment in Latin America (1969) has been highly influential in developing this theoretical framework. This section mainly concentrates on his perspective. According to Frank, because power and resources are unequally distributed throughout the world economy, some nations develop more rapidly than others.[50] The result has been underdevelopment in less developed nations and rapid economic growth in advanced industrialized nations. The theory asserts that the world capitalist economy is divided into a series of networks that link a few capitalist metropolises to many satellite nations. Sometimes dependency theorists refer to the center and the periphery rather than to the metropolis and its satellites. The metropolis acquires raw materials at low cost from the periphery and uses them to produce finished commodities in the center. The commodities are sometimes exported to the periphery, which allows the center to continue its appropriation of surplus production in the satellite nations.[51] The Marxist roots of the theory are apparent in the emphasis on surplus production and appropriation. Indeed, Frank's thesis is that the contradictions in the world capitalist economy have allowed metropolitan centers to appropriate the surpluses of peripheral satellites to the benefit of the former and to the detriment of the latter.[52]
Monopoly capital plays an important role in Frank’s theory. Indeed, monopoly characterizes the world capitalist economy in his view.[53] Frank asserts that the world capitalist system and the periphery have possessed an extremely monopolistic structure throughout their history.[54] The metropolis uses monopoly power in markets in which it is a seller, and monopsony power in markets in which it is a buyer, to extract the surplus product from its satellites. It then refuses to invest in the satellites, which stunts their growth.[55] Over time, the satellites become increasingly dependent on the metropolis, which creates distortions in the satellites’ economies.[56] A local ruling class in each satellite, referred to as the lumpenbourgeoisie, ensures that this system of exploitation persists.[57] It captures a part of the surplus as it is transferred in an upward direction and so the system serves the interests of the lumpenbourgeoisie.
In Frank’s theory, this network of metropolises and satellites contains several levels of surplus appropriation. Chains of metropolis-satellite connections make up the global capitalist order with each metropolis extracting surpluses from its satellites and with a metropolis sometimes serving a higher-order metropolis.[58] The chains also operate within nations and between them, creating “an extended continuum of exploitative relationships.”[59] Frank describes how the national metropolises appropriate the surpluses of regional centers and how the chain of surplus appropriation continues down the chain to local centers, then to large landowners or merchants, then to small peasants or tenants, and sometimes even to landless laborers.[60] Figure 19.15 shows how these different levels of surplus appropriation relate to one another.
The world capitalist system consists of several national metropolises and each exercises its monopoly power. This position of power makes it possible for the metropolis to transfer surplus production to itself and away from the periphery through its control of pricing. The solid arrows and the dashed arrows in Figure 19.15 indicate that the flows traveling in an upward direction are larger than the flows traveling in the downward direction at each level. The differences between the upward flows and the downward flows represent the surplus.
The hierarchical structure depicted in Figure 19.15 does not exist within a competitive capitalist framework. That is, in competitive capitalism, workers confront capitalists but no complicated chain of metropolises and satellites will arise. The reason is that monopoly power is required for these networks to become established. Indeed, two kinds of monopoly give rise to these networks within the world capitalist economy.[61] The first kind is monopolistic merchant’s capital where merchants purchase products from local producers and then export them.[62] These merchant capitalists are not directly involved in production.[63] The second type is modern monopoly capital, which is based on large-scale capitalist production and modern production technology.[64]
Frank’s major point is that underdevelopment results from these interactions. Without investment in sectors that will promote the growth of employment and production in the satellites, the satellite nations will remain at a low level of development. Unfortunately for the periphery, it is the metropolis that decides how to use the large surpluses appropriated from the satellites and collected at the center. This situation poses long-term development problems for developing nations.
The development of capitalism on a global scale since the 1960s provides an abundance of evidence to test the theory. According to Felipe Antunes de Oliveira, “Frank’s prediction that no real development was possible within capitalism was almost immediately challenged by the facts.”[65] Economic growth in Latin America in the late 1960s and early 1970s and the rapid growth of East Asian countries in the 1980s and 1990s “seemed to provide further and conclusive evidence against Frank’s dependency theory.”[66] The burden then is on dependency theorists to explain these cases in a manner that is consistent with their overall approach to the world capitalist economy. Nevertheless, Antunes explains that the economic stagnation in the 1980s and more recent economic crises in Latin America imply that Frank’s theory “may still capture a deeper truth about the limits to peripheral development.”[67] Despite some challenging evidence, dependency theory provides us with a way of thinking about how aggregate surplus production is appropriated and redistributed throughout the world capitalist economy.
The Theory of Unequal Exchange
One final alternative radical theory of world trade that we will consider is Arghiri Emmanuel's theory of unequal exchange, which he developed in the early 1970s in his Unequal Exchange: A Study of the Imperialism of Trade (1972). Emmanuel aimed to show that competitive interaction between rich and poor countries in the global marketplace could lead to a worsening of the terms of trade for poor countries. To obtain this result, Emmanuel built some key assumptions into his model. For example, he assumed that the world economy only consists of two nations. He also assumed that the profit rates in the rich and poor nations would equalize as a result of the free flow of global capital. If the profit rate of one nation exceeds the profit rate of the other nation, then capital will flow to the nation with the higher profit rate, driving that profit rate down and pushing the other nation's profit rate up until equalization occurs.
At the same time, Emmanuel assumes that wage rates in the rich and poor nations differ significantly because labor is not mobile. As Itoh explains, capital is relatively mobile, but labor is relatively immobile in Emmanuel’s framework due to immigration restrictions.[68] Howard and King also state that Emmanuel’s theory assumes “a powerful tendency for the rate of profit to be equalized on a world scale, while there remain huge differences in both wage rates and rates of exploitation between advanced and backward countries.”[69] As Emmanuel explains, he treats wages as the independent variable in his theoretical system.[70] Furthermore, he assumes that the rates of surplus value are “institutionally different” and not subject to competitive factor market equalization.[71] The fact that differences in rates of surplus value exist alongside wage differentials should make sense. If two nations have similar working day lengths but one nation has much lower wages, then the nation with much lower wages should have a higher rate of surplus value, other factors the same.
The persistent wage differentials are explained in terms of the immobility of labor, but what about the absolute levels of wages in the two nations? Emmanuel’s assumption about differential wages stems from Marx’s claim that the value of labor-power contains an “historical and moral element.”[72] The reader should recall that the value of labor-power depends on cultural and social factors that are specific to time and place. What constitutes a required bundle of commodities in a rich nation will be very different from the required bundle of commodities in a poor nation. Generally, the workers in a rich nation will be accustomed to having more and better quality commodities available to them than workers in poor nations. Hence, the value of labor-power and the wage rate will be higher in the rich nation and lower in the poor nation.
One might suspect that the wage differentials will lead to different prices across nations for the same product. This result does not necessarily follow though. For example, Emmanuel considered the possibility that two capitalist nations produce the same commodities but that one nation has a higher wage level.[73] For a common rate of profit and for a single production price to emerge, the higher wage nation must have higher productivity so that its other costs are lower and its overall costs are the same across the two nations.[74] The situation is easier to visualize by considering the diagram in Figure 19.16.
As Figure 19.16 shows, the higher wage nation has other production costs that are lower, which makes it possible for costs to be equal across the two nations. Indeed, the higher productivity is the cause of the higher wages. When adding the average profit to the equal costs, the production prices are the same. We thus see one example of how profit rates and production prices can be the same even when wages vary across nations.
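A small numerical illustration of this mechanism may help. The cost figures below are hypothetical and chosen only to show how equal total costs, and hence equal production prices, can coexist with very different wage levels.

```python
# Hypothetical cost breakdown: equal production prices despite unequal wages,
# because the high-wage nation's higher productivity lowers its other costs
# by an offsetting amount.
profit_rate = 0.20

nations = {
    "high-wage nation": {"wages": 60.0, "other_costs": 40.0},
    "low-wage nation":  {"wages": 30.0, "other_costs": 70.0},
}

for name, cost_items in nations.items():
    total_cost = cost_items["wages"] + cost_items["other_costs"]   # 100 in both nations
    production_price = total_cost * (1 + profit_rate)              # 120 in both nations
    print(name, total_cost, production_price)
```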
We next consider an example in which two nations produce different commodities. As we learned in Chapter 8, Marx transformed values into production prices to explain how a general rate of profit is formed in the economy. Setting aside the criticism of Marx’s procedure, which has given rise to the Transformation Problem debate, we follow Emmanuel in using Marx’s procedure to equalize the rate of profit throughout the world economy. Let’s consider only two countries: the United States and Mexico. Consistent with Emmanuel’s approach, we will assume that one nation, in this case the United States, pays a higher wage to workers than the other nation (Mexico). Howard and King use a simple numerical example involving the same organic compositions of capital in two nations with one nation having a lower wage rate and a higher rate of surplus value.[75] Here we consider a similar example using basic algebra to illustrate the main findings of the theory.
For notational purposes, the high-wage United States will be labeled nation A, and low-wage Mexico will be labeled nation B. The two nations produce different goods, but the organic composition of capital is the same in the two nations. The equal organic compositions of capital may be represented as follows:
$\frac{C_{A}}{V_{A}}=\frac{C_{B}}{V_{B}}$
In this equation, C and V refer to the constant capital and the variable capital in each nation. It is also assumed that wages are lower in Mexico, and the rate of surplus value is correspondingly higher in Mexico. These conditions are represented as follows:
$V_{A}>V_{B}\:and\:\frac{S_{B}}{V_{B}}>\frac{S_{A}}{V_{A}}$
In these inequalities, S refers to the surplus value in each nation. We can now write the total value produced in each nation as the product of the physical quantity produced (q) in that nation and the value per unit (w) in that nation. The following equations thus hold:
$q_{A}w_{A}=C_{A}+V_{A}+S_{A}$
$q_{B}w_{B}=C_{B}+V_{B}+S_{B}$
The reader should recall that to obtain the rate of profit, we need to add the entire surplus value (S) in both nations and divide by the aggregate capital (C+V). Without subscripts, these aggregates refer to the combined sums for the two nations. That is,
$r=\frac{S}{C+V}$
The total production price (qp) in each nation may now be computed as the product of the output (q) in that nation and the per unit production price (p) in that nation. The total production price in each nation is then equal to the sum of the capital advanced and the average profit in that nation as follows:
$q_{A}p_{A}=C_{A}+V_{A}+r(C_{A}+V_{A})$
$q_{B}p_{B}=C_{B}+V_{B}+r(C_{B}+V_{B})$
Using this information, we can calculate the value ratio and the production price ratio. The value ratio is the ratio of the individual unit values (based on embodied labor) in the two nations. The price ratio is the ratio of the per unit production prices in the two nations. The ratios are calculated as follows:
Value ratio: $\frac{w_{A}}{w_{B}}=\frac{\frac{C_{A}+V_{A}+S_{A}}{q_{A}}}{\frac{C_{B}+V_{B}+S_{B}}{q_{B}}}=\frac{C_{A}+V_{A}+S_{A}}{C_{B}+V_{B}+S_{B}}\cdot\frac{q_{B}}{q_{A}}$
Production price ratio: $\frac{p_{A}}{p_{B}}=\frac{\frac{C_{A}+V_{A}+r(C_{A}+V_{A})}{q_{A}}}{\frac{C_{B}+V_{B}+r(C_{B}+V_{B})}{q_{B}}}=\frac{C_{A}+V_{A}+r(C_{A}+V_{A})}{C_{B}+V_{B}+r(C_{B}+V_{B})}\cdot\frac{q_{B}}{q_{A}}$
To give these equations meaning, suppose that we have the following information for the U.S. and Mexico:
Nation               Constant Capital (C)   Variable Capital (V)   Surplus Value (S)   Quantity Produced (q)
United States (A)            600                    300                   200                  130
Mexico (B)                   300                    150                   300                  110
Aggregate                    900                    450                   500
It should be clear that the organic compositions of capital are the same in the two nations (C/V = 2). It should also be clear that the rate of surplus value (S/V) is higher in Mexico where wages (V) are lower. Using our equations to calculate the value ratio, the rate of profit, and the production price ratio, we obtain the following:
Value ratio: $\frac{w_{A}}{w_{B}}=\frac{C_{A}+V_{A}+S_{A}}{C_{B}+V_{B}+S_{B}}\cdot\frac{q_{B}}{q_{A}}=\frac{1100}{750}\cdot\frac{110}{130}=1.24$
Rate of profit: $r=\frac{S}{C+V}=\frac{500}{1350}=37.04\%$
Production price ratio: $\frac{p_{A}}{p_{B}}=\frac{C_{A}+V_{A}+r(C_{A}+V_{A})}{C_{B}+V_{B}+r(C_{B}+V_{B})}\cdot\frac{q_{B}}{q_{A}}=\frac{1233.33}{616.67}\cdot\frac{110}{130}=1.69$
The example shows that the production price ratio exceeds the value ratio. The implication is that the price of the commodity in the United States is relatively higher than what it should be given its relative value. Another way of stating the same result is to say that international competition causes the price of commodity A to be 1.69 units of commodity B. At the same time, if the commodities exchanged according to their values (i.e., labor content), then the price of commodity A would be 1.24 units of commodity B. The price of the U.S. commodity is thus higher than it would otherwise be. As Howard and King explain, the unequal exchange of commodities that results causes a transfer of value to the rich country, which worsens global inequality over time.[76] As the reader might recall, Marx showed how capitalist societies produce surplus value even when commodities exchange according to their values. He did acknowledge, however, that unequal exchanges can lead to the redistribution of value with one side to the exchange gaining and the other side losing. Emmanuel has shown how nations can be involved in such unequal exchanges leading to systematic value transfers from poor nations to rich nations.
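These calculations are easy to verify directly. The short sketch below reproduces the value ratio, the world rate of profit, and the production price ratio from the figures in the table.

```python
# Verifying the unequal exchange example with the figures from the table above.
C_A, V_A, S_A, q_A = 600, 300, 200, 130   # United States
C_B, V_B, S_B, q_B = 300, 150, 300, 110   # Mexico

r = (S_A + S_B) / (C_A + V_A + C_B + V_B)                          # world rate of profit

value_ratio = (C_A + V_A + S_A) / (C_B + V_B + S_B) * (q_B / q_A)
price_ratio = ((C_A + V_A) * (1 + r)) / ((C_B + V_B) * (1 + r)) * (q_B / q_A)

print(round(r, 4))                 # 0.3704, i.e., 37.04%
print(round(value_ratio, 2))       # 1.24
print(round(price_ratio, 2))       # 1.69
print(price_ratio > value_ratio)   # True: the U.S. commodity exchanges above its relative value
```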
A general claim may thus be stated and proven: If the production price ratio (pA/pB) exceeds the value ratio and the two nations' organic compositions of capital are the same, then the rate of surplus value must be higher in nation B. To prove this result, we proceed as follows:
$\frac{p_{A}}{p_{B}} > \frac{w_{A}}{w_{B}} \Rightarrow \frac{C_{A}+V_{A}+r(C_{A}+V_{A})}{C_{B}+V_{B}+r(C_{B}+V_{B})} > \frac{C_{A}+V_{A}+S_{A}}{C_{B}+V_{B}+S_{B}}$
With a bit of algebraic manipulation, this inequality may then be rewritten as follows:
$(\frac{C_{B}}{V_{B}}+1+\frac{S_{B}}{V_{B}})\cdot r(\frac{C_{A}}{V_{A}}+1)+\frac{S_{B}}{V_{B}}\cdot (\frac{C_{A}}{V_{A}}+1)>(\frac{C_{A}}{V_{A}}+1+\frac{S_{A}}{V_{A}})\cdot r(\frac{C_{B}}{V_{B}}+1)+\frac{S_{A}}{V_{A}}\cdot (\frac{C_{B}}{V_{B}}+1)$
Because the organic compositions of capital are the same in the two nations, we can substitute CA/VA for CB/VB to obtain the following result:
$(\frac{C_{A}}{V_{A}}+1+\frac{S_{B}}{V_{B}})\cdot r(\frac{C_{A}}{V_{A}}+1)+\frac{S_{B}}{V_{B}}\cdot (\frac{C_{A}}{V_{A}}+1)>(\frac{C_{A}}{V_{A}}+1+\frac{S_{A}}{V_{A}})\cdot r(\frac{C_{A}}{V_{A}}+1)+\frac{S_{A}}{V_{A}}\cdot (\frac{C_{A}}{V_{A}}+1)$
With some additional algebraic manipulation, the inequality may then be reduced to obtain the final result:
$\frac{S_{B}}{V_{B}}>\frac{S_{A}}{V_{A}}$
Therefore, the rate of surplus value in Mexico must exceed the rate of surplus value in the United States given these assumptions.
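The claim can also be checked numerically. The sketch below draws random values of constant capital, variable capital, and surplus value with equal organic compositions in the two nations and confirms that the production price ratio exceeds the value ratio exactly when the rate of surplus value is higher in nation B; the output ratio qB/qA multiplies both ratios and therefore cancels from the comparison.

```python
# Numerical check of the claim: with equal organic compositions of capital, the
# production price ratio exceeds the value ratio exactly when S_B/V_B > S_A/V_A.
import random

random.seed(0)
for _ in range(10_000):
    k = random.uniform(0.5, 5.0)                       # common organic composition C/V
    V_A, V_B = random.uniform(1, 100), random.uniform(1, 100)
    C_A, C_B = k * V_A, k * V_B
    S_A, S_B = random.uniform(1, 100), random.uniform(1, 100)

    r = (S_A + S_B) / (C_A + V_A + C_B + V_B)
    value_ratio = (C_A + V_A + S_A) / (C_B + V_B + S_B)
    price_ratio = ((C_A + V_A) * (1 + r)) / ((C_B + V_B) * (1 + r))

    assert (price_ratio > value_ratio) == (S_B / V_B > S_A / V_A)

print("The claim holds in all 10,000 random cases.")
```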
Emmanuel’s finding that rich nations benefit at the expense of poor nations is interesting because it is achieved without any reference to monopoly power or military intervention.[77] As Shaikh argues, Emmanuel demonstrated that “it is not necessary to abandon the law of competition in order to be able to understand the intrinsic determinants of modern imperialism.”[78] The result is obtained by means of an analysis of competitive interaction in the marketplace. It thus provides us with a way of thinking about how voluntary exchange in the international sphere can lead to a situation of worsening inequality between nations. Overall, international trade and the export of capital can be combined to explain how developing nations are exploited in the neocolonial era.[79]
Following the Economic News [80]
In a recent article in The Economic Times, Arvind Panagariya argues in favor of economic policies that will bring about a boom in exports in India. Panagariya claims that rapid economic growth in India will require a much greater presence in global markets. He explains that the Indian share of global merchandise exports is very low at only 1.7%. To achieve a greater influence in the international marketplace, Panagariya encourages the elimination of three strategies that have been a focus of Indian trade policy in the past. One strategy to eliminate is import substitution, which is a policy aimed at promoting the domestic production of products that will serve as substitutes for imported products. He also favors less promotion of small enterprises and a weaker rupee in the foreign exchange market.

Panagariya does not believe that the import substitution strategy will succeed, but even if it does succeed, the elimination of import substitution as a strategy also makes sense when considered through the lens of new trade theory. If India succeeds at increasing exports of goods in industries that are likely to become retainable industries and then proceeds to capture even more retainable industries through an import substitution strategy, then it risks capturing so many retainable industries from other nations that its GDP and other nations' GDPs begin to decline. This possibility involves India moving into the zone of mutual gain, where India can reduce the percentage of retainable industries it captures to the benefit of its trade partners and itself. As Panagariya argues, policies aimed at rapidly expanding exports should be joined with support for the rapid expansion of imports. Panagariya points to the rapid expansion of exports during the 2000s as an example. He explains that without the rapid expansion of exports, India could not have imported so many cell phones, which would have undermined the cell phone revolution in India. Therefore, India should aim to capture some retainable industries, but an aggressive approach that does not allow its trade partners to capture some retainable industries will hurt India and its trade partners.

Similarly, the emphasis on "micro and small enterprises" is a policy that Panagariya believes should be discarded. The capture of retainable industries depends upon growth in industries in which large economies of scale exist. An emphasis on small-scale production will make it impossible to reduce per unit production costs below those of competitors. The capture of retainable industries then becomes impossible.

Finally, Panagariya advocates "a realistic exchange rate." In other words, a weaker rupee will allow producers of exports to compete more effectively in international markets where their goods are cheaper as a result. The enhanced competitiveness that follows will make it possible for India to capture additional retainable industries. The trick is to capture many retainable industries but without overreaching to the detriment of India and its trading partners.
Summary of Key Points
1. According to neoclassical economists, even if a nation has an absolute advantage in the production of both commodities, it may still benefit from trade because the other nation will have a comparative advantage in one of the commodities.
2. When resources are heterogeneous, a nation experiences increasing marginal opportunity costs, but when resources are homogeneous, a nation experiences constant marginal opportunity costs.
3. In the case of trade between two nations, the limits to the international terms of trade are the domestic terms of trade in each nation.
4. When two nations specialize in the commodities in which they have comparative advantages, each can consume output combinations along a trading possibilities frontier that lies beyond the production possibilities frontier.
5. In the case of increasing marginal opportunity costs, each nation will only pursue partial specialization because eventually the marginal opportunity costs become equal.
6. The equilibrium terms of trade in the case of increasing marginal opportunity costs is given by the slope of the ray from the origin that passes through the intersection of the two nations’ offer curves.
7. Ian Fletcher’s critique of comparative advantage rests on the identification of eight hidden assumptions that underlie the theory.
8. Retainable industries are industries in which economies of scale are so large that the first nation to reach a large production volume captures and retains the industry.
9. New trade theory suggests that free trade leads to a positive sum game of mutual gains at times and to a zero-sum game of bitter conflict at other times.
10. According to neoclassical economists, import tariffs and import quotas lead to losses of social welfare because higher prices reduce consumers’ quantity demanded and expand production by relatively inefficient domestic producers.
11. According to dependency theorists, a national metropolis appropriates the surpluses of multiple satellites by using its considerable market power.
12. In the theory of unequal exchange, unequal exchanges in the international marketplace allow a high-wage nation to appropriate excess value from a low-wage nation when both nations possess the same organic compositions of capital and a higher rate of surplus value exists in the low-wage nation.
List of Key Terms
Comparative advantage
Absolute advantage
Increasing marginal opportunity costs
Heterogeneous resources
Homogeneous resources
Domestic terms of trade
International terms of trade
Limits to the terms of trade
Trading possibilities frontier (TPF)
Autarky
Partial specialization
Offer curves
Free trade
Kaldor-Hicks criterion
Retainable industries
Lockout phenomenon
Positive sum game
Zero-sum game
Rational protectionism
Import tariff
Import quota
Dependency theory
Underdevelopment
Center
Periphery
Metropolises
Satellites
Lumpenbourgeoisie
Monopolistic merchant’s capital
Modern monopoly capital
Unequal exchange
Value ratio
Production price ratio
Problems for Review
1. Given the information in the table below, determine which nation has an absolute advantage in the production of each commodity and which nation has a comparative advantage in the production of each commodity.
2. Given the information in Figure 19.17, answer the following questions:
• Which nation has an absolute advantage in sugar production?
• Which nation has an absolute advantage in steel production?
• Which nation has a comparative advantage in sugar production?
• Which nation has a comparative advantage in steel production?
• What are the limits to the international terms of trade? That is, what is the maximum and minimum price of sugar in terms of steel? What is the maximum and minimum price of steel in terms of sugar?
• If the international terms of trade are 1 unit of sugar/1 unit of steel, then add the new trading possibilities frontiers to the graphs for each nation. Label the intercepts.
3. Given the information in Figure 19.18, answer the following questions:
• If a tariff of \$0.60 is imposed as implied in the graph, then calculate the deadweight loss that results from the excess production costs of domestic producers. Hint: Recall the formula for the area of a triangle.
• Calculate the deadweight loss resulting from the reduction in quantity demanded.
• Calculate the tariff revenue.
• If an import quota of 70 units was imposed instead in this case, then which parties might acquire the windfall profit? Why?
4. Suppose the two nations represented in the table below produce different commodities and trade in the international marketplace. Complete the table, and calculate the general rate of profit (r). Then calculate the value ratio and the production price ratio. How do these ratios compare? What role do the assumptions of this model play in producing this result?
Nation Constant Capital Variable Capital Surplus Value Quantity Produced (q)
Nation A 250 275 135 125
Nation B 200 220 75 110
Aggregate
1. See Hunt (2002), p. 191-192, for a summary of Mill’s theory of international prices.
2. See Fletcher (2009), p. 105-118.
3. I am deeply grateful to Ian Fletcher for granting me permission to include this summary of his perspective. The original source is: Fletcher, Ian. Free Trade Doesn't Work: Why America Needs a Tariff. U.S. Business & Industry Council: Washington, DC, 2009. p. 105-118 and 215-231. Information about the most recent edition of his book may be found at: http://www.freetradedoesntwork.com/index.html
4. Ibid. p. 105-106.
5. Ibid. p. 105.
6. Ibid. p. 105.
7. Ibid. p. 105.
8. Ibid. p. 106.
9. Ibid. p. 106-107.
10. Ibid. p. 107-109.
11. I rephrased this hidden assumption. Fletcher (2009), p. 109-110, states it somewhat differently.
12. Ibid. p. 110-113.
13. Ibid. p. 113-115.
14. Ibid. p. 114.
15. Ibid. p. 114.
16. Ibid. p. 115.
17. Noam Chomsky offers a similar critique of Ricardo’s original example involving England and Portugal in Chomsky (2002), p. 254.
18. Fletcher (2009), p. 115-118.
19. Ibid. p. 116.
20. Ibid. p. 116.
21. Ibid. p. 215.
22. Ibid. p. 215.
23. Ibid. p. 215.
24. Ibid. p. 216.
25. Ibid. p. 216.
26. Ibid. p. 216.
27. Carbaugh (2011), p. 88, makes the same point using a single, similarly shaped AC curve for two different nations. Although the firms have the same AC curves, the firm that produces the larger volume first ends up with a lower AC.
28. Fletcher (2009), p. 217.
29. Ibid. p. 220.
30. Ibid. p. 221.
31. Ibid. p. 221.
32. Ibid. p. 222.
33. Ibid. p. 222.
34. This graph is a modified version of the one that Fletcher (2009), p. 221, provides. A similar graph can be found in Gomory and Baumol (2000), p. 31.
35. Ibid. p. 224.
36. This graph is a modified version of the one that Fletcher (2009), p. 224, provides. A similar graph can be found in Gomory and Baumol (2000), p. 37.
37. Ibid. p. 223.
38. Ibid. p. 223-225.
39. Ibid. p. 228.
40. Ibid. p. 228.
41. Ibid. p. 229.
42. Ibid. p. 229.
43. Ibid. p. 229.
44. Ibid. p. 229.
45. Ibid. p. 229.
46. Ibid. p. 230-231.
47. Carbaugh (2011), p. 158.
48. Ibid. p. 158.
49. Ibid. p. 158.
50. Rose (2016).
51. Rose (2016) provides a simple diagram to demonstrate these relationships.
52. Frank (1969), p. 3.
53. Howard and King (1992), p. 177.
54. Frank (1969), p. 7.
55. Brewer (1992), p. 164.
56. Ibid. p. 196.
57. Ibid. p. 164.
58. Howard and King (1992), p. 177.
59. Howard and King (1992), p. 177.
60. Frank (1969), p. 7.
61. Brewer (1990), p. 166.
62. Ibid. p. 166.
63. Ibid. p. 166.
64. Ibid. p. 166.
65. Antunes (2017).
66. Antunes (2017).
67. Antunes (2017).
68. Itoh (2009), p. 207.
69. Howard and King (1992), p. 189.
70. Emmanuel (1972), p. 64.
71. Ibid. p. 64.
72. Brewer (1990), p. 209.
73. Brewer (1990), p. 203.
74. Ibid. p. 203.
75. Howard and King (1992), p. 190-191.
76. Howard and King (1992), p. 191.
77. Brewer (1990), p. 200.
78. Shaikh (1980), p. 210.
79. Itoh (2009), p. 207.
80. Panagariya, Arvind. “View: How to make exports boom” [Foreign-Trade]. The Economic Times; New Delhi. 27 June 2019.
Goals and Objectives:
In this chapter, we will do the following:
1. Develop the fundamentals of balance of payments accounting
2. Explain how to work with foreign exchange rates
3. Analyze the determination of equilibrium exchange rates using supply and demand
4. Investigate the Theory of Purchasing Power Parity
5. Distinguish between fixed and floating exchange rate regimes
6. Explore Marxist theories of imperialist finance and exchange rate determination
This final chapter of the book considers theories of international finance. As the reader may have noticed, not much was said about the role of money in the previous chapter. Curiously, neoclassical economists have a tradition of separating the field of international economics into these two parts. Part of this chapter concentrates on the accounting method, referred to as balance of payments accounting, which is used to describe the financial flows between one nation and the rest of the world. The neoclassical model of supply and demand, as applied to the foreign exchange market or currency market, is also developed in this chapter to explain how exchange rates are determined in competitive markets. Using this framework, we will explore the role of exchange rate adjustments in equalizing purchasing power across nations. We will also consider the role of the central bank in foreign exchange markets and see why currency crises sometimes occur. Finally, we will examine Marxist theories of imperialist finance to provide us with an alternative perspective on the same topic.
Balance of Payments Accounting
When a nation interacts with the rest of the world, it engages in a wide variety of transactions. A nation buys and sells goods and services in international commodity markets. It receives interest payments and pays interest in international financial markets. Domestic employers in that nation also pay foreign employees just as foreign employers sometimes pay domestic employees. Public and private transfer payments often flow from a nation to other nations or are received from them. Investors in a nation also buy financial assets like stocks and bonds from investors in other nations or sell such assets to them. Economists find it necessary to have a way of recording all these transactions so that they can study how these interactions influence the overall health and direction of a nation’s economy.
Balance of payments accounting refers to the method of accounting for all international transactions that occur between one nation and the rest of the world. Its counterpart at the domestic level is national income accounting, which we thoroughly analyzed in Chapter 12. The method involves recording each transaction between a nation and the rest of the world as a debit or a credit, depending on the nature of the transaction. For example, a transaction that leads to a payment to the United States from another nation is recorded as a credit in the U.S. Balance of Payments and is thus given a positive value. Any transaction that leads to a payment from the United States to another nation is recorded as a debit in the U.S. Balance of Payments and is thus given a negative value.
Table 20.1 shows a summary of the U.S. Balance of Payments for 2014.
Source: U.S. Bureau of Economic Analysis[1]
It is helpful to think about which types of transactions lead to credits or debits in the U.S. Balance of Payments. For example, when the U.S. exports goods and services, these transactions are recorded as credits because these sales lead to payments to U.S. sellers. When an American investor receives an interest payment from a foreign borrower, the transaction is recorded as a credit as well. When an American worker receives compensation for work performed for a foreign firm, the payment is recorded as a credit. If an American family receives private transfer payments (referred to as remittances) from a family member working in another nation, the transfer payment is recorded as a credit. Each of these transactions is recorded in a subaccount of the Balance of Payments referred to as the current account. The current account only includes transactions that do not involve the purchase or sale of an income-earning asset.
Just as credits may be recorded in the current account, debits may also be recorded. For example, when the U.S. imports goods and services, these transactions are recorded as debits in the U.S. current account because these purchases lead to payments from U.S. buyers to foreign sellers. If an American borrower makes an interest payment to a foreign lender, then the payment is recorded as a debit in the current account. If a U.S. employer pays a salary to a foreign worker, then the payment is treated as a debit in the current account. If the U.S. government grants monetary aid to a foreign nation, then the public transfer payment is treated as a debit in the U.S. current account. Once all the credits and debits are added up in the U.S. Current Account, the final calculation is referred to as the Current Account Balance. When this figure is positive, it is said that a current account surplus exists. When this figure is negative, it is said that a current account deficit exists. Table 20.1 shows that a current account deficit of over $389.5 billion existed in 2014.

One subaccount of the Balance of Payments is worth mentioning because it is frequently cited in the financial news. The trade balance, as it is commonly known, refers to the difference between exports of goods and services and imports of goods and services. In this case, imports exceed exports of goods and services by about $508 billion. Therefore, a negative U.S. trade balance of approximately $508 billion existed in 2014. When a negative trade balance exists, it is referred to as a trade deficit. If exports exceed imports of goods and services, then a positive trade balance exists, which is referred to as a trade surplus. If the balance happens to be exactly zero, which would be extremely unlikely in practice, then balanced trade exists.

In addition to the current account, another important subaccount is called the capital account. Transactions that are recorded in the capital account involve the purchase and sale of nonfinancial assets like copyrights, patents, and trademarks. When these transactions involve sales of nonfinancial assets, they are recorded as credits. When they involve purchases of nonfinancial assets, they are recorded as debits. For example, if a copyright owned by an American firm is sold to a Chinese firm, then the payment received by the American firm is recorded as a credit in the U.S. Capital Account. If a patent owned by a German firm is sold to an American firm, then the payment made by the American firm is recorded as a debit in the U.S. Capital Account.

The final subaccount of interest is the financial account. Transactions that are recorded in the financial account involve the purchase and sale of financial assets like stocks and bonds. Examples include purchases and sales of corporate and government bonds, foreign direct investment, and central bank purchases and sales of securities. When these transactions involve sales of financial assets, they are recorded as credits. When they involve purchases of financial assets, they are recorded as debits. For example, if an American bondholder sells a U.S. government bond to a foreign investor, then the payment that the bondholder receives is recorded as a credit in the U.S. Financial Account. If a foreign investor purchases newly issued shares of stock in an American company, then the transaction is recorded as a credit in the financial account because an American firm receives a payment. If the U.S.
Federal Reserve sells securities to the Bank of Japan in exchange for Japanese yen, then the transaction is recorded as a credit in the U.S. Financial Account. In the last case, official reserve holdings increase. Official reserves include items like foreign currencies and gold held by a nation’s central bank. On the other hand, if an American investor buys a bond from a foreign bondholder, then the payment to the foreign bondholder is recorded as a debit in the U.S. Financial Account. Similarly, if an American investor purchases a controlling stake in a foreign corporation, the payment to the corporation for the shares is recorded as a debit in the U.S. Financial Account. Finally, if the Bank of England sells U.S. Treasury bonds to the U.S. Federal Reserve, then the payment that the Fed makes to the Bank of England is recorded as a debit in the U.S. Financial Account. If the Fed makes the purchase using British pounds, then U.S. official reserves fall by an equal amount.

Overall, if the credits in the capital account and financial account exceed the debits in the capital account and financial account, then a capital and financial account surplus exists. On the other hand, if the debits in the capital account and financial account exceed the credits in the capital account and financial account, then a capital and financial account deficit exists. We are now in a much stronger position to understand the information presented in Table 20.1. The table shows that a current account deficit existed in the U.S. in 2014. Similarly, the table reveals that a capital and financial account surplus also existed in the U.S. in 2014.

In theory, the current account and the capital and financial account should always add up to zero. That is, the overall U.S. Balance of Payments should be balanced with neither a surplus nor a deficit. The reason is that balance of payments accounting uses a double entry bookkeeping method. For every credit that is entered, a corresponding debit must be entered. Similarly, for every debit that is entered, a corresponding credit must be entered. For example, suppose that a U.S. company sells computers to a firm in France. As we have seen, exports are recorded as credits in the balance of payments because a payment is received for the exported goods. What is the offsetting debit in the U.S. Balance of Payments? Suppose that the French company pays for the computers with Euros. The American firm will then deposit the Euros in a Euro-denominated bank account in, let’s say, France. This transaction requires a payment to the French bank in exchange for that financial asset (i.e., the bank deposit). Therefore, the transaction is recorded as a debit in the financial account in an amount that exactly offsets the credit associated with the exports in the current account.

All transactions that occur between the home nation and the rest of the world can be analyzed in the same manner. As a result, all debits and credits should cancel out in the aggregate, even though each subaccount may show a surplus or deficit. This result can be proven as follows. Suppose that we add up all the credits in the current account and all the credits in the capital and financial account. Now suppose that we add up all the debits in the current account and all the debits in the capital and financial account. Given what has been said about every credit having a corresponding debit and vice versa, it must be true that the sum of all credits equals the sum of all debits.
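To see the double-entry logic in a more concrete form, the following Python sketch records each transaction as an offsetting credit and debit and confirms that the entries sum to zero. It is only an illustration: the amounts, the account labels, and the second transaction are hypothetical and are not drawn from Table 20.1.

```python
# Minimal sketch of double-entry balance of payments accounting.
# Credits are recorded as positive amounts and debits as negative amounts.

ledger = []

def record(description, credit_account, debit_account, amount):
    """Record one transaction as a credit in one subaccount
    and an equal debit in another."""
    ledger.append((description, credit_account, amount))    # credit (+)
    ledger.append((description, debit_account, -amount))    # debit (-)

# U.S. computer exports to France, paid for with a euro-denominated bank deposit:
# a credit in the current account and an offsetting debit in the financial account.
record("Computer exports to France", "current", "financial", 2_000_000)

# A hypothetical purchase of a foreign bond by an American investor, settled with
# a deposit claim acquired by the foreign seller: both entries fall in the
# financial account and cancel within it.
record("Purchase of a foreign bond", "financial", "financial", 500_000)

balances = {}
for _, account, amount in ledger:
    balances[account] = balances.get(account, 0) + amount

print(balances)                 # {'current': 2000000, 'financial': -2000000}
print(sum(balances.values()))   # 0 -- every credit has an offsetting debit
```

The subaccounts may individually show surpluses or deficits, but the overall sum is zero by construction, which is exactly the point of the double-entry method.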
If CA refers to the current account, KA+FA refers to the capital and financial account, and the subscripts refer to credits (c) and debits (d) in that specific subaccount, then the following must hold:

$CA_{c}+(KA+FA)_{c}=CA_{d}+(KA+FA)_{d}$

Rearranging the terms in the above equation yields the following result:

$CA_{c}-CA_{d}=(KA+FA)_{d}-(KA+FA)_{c}$

In other words, a current account surplus (an excess of credits over debits in the current account) necessarily implies a capital and financial account deficit (an excess of debits over credits in the capital and financial account). Similarly, a current account deficit (debits > credits in the current account) necessarily implies a capital and financial account surplus (credits > debits in the capital and financial account).

Although the balance of payments cannot show a surplus or deficit overall, in practice, statisticians must estimate the total credits and debits in each subaccount. Because statistical estimates do not guarantee completely accurate results, the current account balance is never exactly offset by the capital and financial account balance. Therefore, the difference must be included, which is referred to as the statistical discrepancy. Once we add this figure in Table 20.1, the surplus in the capital and financial account exactly offsets the current account deficit.

The Net International Investment Position

The balance of payments is a record of all international flows of goods and services, interest payments, transfers, and financial and non-financial assets between one nation and the rest of the world. Because the items on the statement are all flows, they are measured during a given period, such as a year or a quarter. Economists are also interested in keeping a record of the stocks of all foreign financial assets and foreign financial liabilities that a nation possesses at any one time. The difference between a nation’s foreign financial assets and its foreign financial liabilities at a point in time is called its net investment position. The statement that contains this information is called the Net International Investment Position. It is a bit like a balance sheet for the entire nation, and the net investment position is somewhat like the net worth of the nation. The U.S. Net International Investment Position for 2014 is shown in Table 20.2.

Source: U.S. Bureau of Economic Analysis[2]

As Table 20.2 shows, the net international investment position for the U.S. in 2014 was a negative value exceeding $7 trillion. This amount was the net foreign debt of the United States in 2014. This figure means that U.S. liabilities with respect to the rest of the world exceeded U.S. ownership of foreign assets. The U.S. was considered a net debtor in 2014 because it owed more to the rest of the world than was owed to it. If its foreign assets exceeded its foreign liabilities, then it would be regarded as a net creditor.
An interesting relationship exists between the Balance of Payments and the Net International Investment Position of a nation. Specifically, a nation that receives more from the rest of the world than it spends runs a current account surplus. Its corresponding capital and financial account deficit implies that it lends this amount to the rest of the world by purchasing foreign assets. The increase in its stock of foreign assets increases its net investment position and moves it in the direction of net creditor status.
Similarly, a nation that spends more than it receives from the rest of the world runs a current account deficit. Its corresponding capital and financial account surplus implies that it borrows this amount from the rest of the world by selling assets to foreign investors. The increase in its foreign liabilities worsens its net investment position and moves it in the direction of net debtor status. As a result, the current account deficit that the U.S. experienced in 2014 worsened the U.S. net international investment position by this amount. In other words, the current account deficit in 2014 increased America’s net foreign debt.
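The link between the current account and the net international investment position can be illustrated with a minimal sketch. The figures below simply reuse the approximate 2014 orders of magnitude mentioned in the text, and valuation changes (price and exchange rate effects), which also move the net position in practice, are ignored for simplicity.

```python
# Sketch of how a current account balance feeds into the net international
# investment position (NIIP). A deficit corresponds to net borrowing from the
# rest of the world and therefore deepens a net debtor position.

niip = -7_000_000_000_000                     # starting net position: net debtor
current_account_balance = -389_500_000_000    # current account deficit for the year

niip_next = niip + current_account_balance
print(f"New NIIP: {niip_next:,.0f}")          # New NIIP: -7,389,500,000,000
```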
It is possible to use the national income accounts identity from Chapter 12 to show how a current account balance relates to a nation’s government budget and private sector gaps. The identity below shows how a nation’s GDP (Y) may be calculated as the sum of the different components of spending on final goods and services.
$Y=C+I+G+NX$
In the above identity, C refers to personal consumption expenditures, I refers to gross private domestic investment, G refers to government expenditures, and NX refers to expenditures on net exports (i.e., exports minus imports).
Subtracting taxes (T) and consumer spending (C) from both sides of the equation and substituting X – M for NX yields the following result.
$Y-T-C=I+G-T+X-M$
The left-hand side of the equation represents total saving, which is equivalent to disposable income (Y-T) minus consumer spending. We can thus substitute saving (S) into the equation to obtain the following result.
$S=I+G-T+X-M$
Solving for the trade balance (i.e., net exports), we can write the equation as follows:
$X-M=(S-I)+(T-G)$
If we ignore primary and secondary income payments and receipts (i.e., investment payments, compensation, and transfers), then the nation’s trade balance is the same as the current account balance. Therefore, the current account balance equals the sum of the private sector gap and the government budget gap. For example, if a current account surplus exists, then the nation is a net lender for the year, strengthens its net international investment position, and may achieve this goal with positive net saving in the private sector (S > I) and a government budget surplus (T > G). Net saving in the private and public sectors thus corresponds to net lending in the world economy. On the other hand, if a current account deficit exists, then the nation is a net borrower for the year, worsens its net international investment position, and may bring about this result with negative net saving in the private sector (S < I) and a government budget deficit (T < G). Net borrowing in the private and public sectors thus corresponds to net borrowing in the world economy.
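A quick numerical check of the identity may help. The figures below are hypothetical and are chosen only so that the spending components add up to GDP.

```python
# Numerical check of the identity X - M = (S - I) + (T - G).
# All figures are hypothetical (billions of dollars).

Y, C, I, G, X, M, T = 1000.0, 600.0, 200.0, 180.0, 120.0, 100.0, 170.0

# GDP must equal total spending for the example to be internally consistent.
assert Y == C + I + G + (X - M)

S = Y - T - C                     # private saving = disposable income - consumption
left = X - M                      # trade balance (here, the current account balance)
right = (S - I) + (T - G)         # private sector gap + government budget gap

print(left, right)                # 20.0 20.0 -- the identity holds
```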
Working with Foreign Exchange Rates
Now that we have an accounting framework for thinking about a nation’s international transactions, we will shift gears and begin thinking about how foreign exchange markets work. Once we have a theory that helps us understand how these markets function, we will circle back and connect that theory to balance of payments accounting.
Before we explore the inner workings of foreign exchange markets, we need to consider what foreign exchange rates are and what they mean. On any given day, it is possible to turn to the financial news section of The Wall Street Journal and see the current exchange rates. For example, Figure 20.1 provides information about current spot rates (foreign exchange rates) for two consecutive days in January 2008.[3]
It is no coincidence that each of these foreign currencies depreciated against the U.S. dollar while the U.S. dollar appreciated against them. In fact, it must be the case. The reader may have noticed that the calculations in the two columns on the left are in terms of U.S. dollars per unit of foreign exchange ($/FX) whereas the calculations in the two columns on the right are in terms of foreign exchange per U.S. dollar (FX/$). The one calculation is the reciprocal of the other. Therefore, if an increase occurs from one day to the next in the first two columns, then a decrease must occur from one day to the next in the final two columns, and vice versa.
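The reciprocal relationship can be verified with a couple of lines of arithmetic. The exchange rates below are hypothetical rather than the January 2008 quotations shown in Figure 20.1.

```python
# The two quotation conventions are reciprocals: if the dollar price of a foreign
# currency falls, the foreign-currency price of the dollar must rise.

usd_per_euro_day1 = 1.25                      # $/FX on day one
usd_per_euro_day2 = 1.20                      # $/FX on day two: the euro depreciates

euro_per_usd_day1 = 1 / usd_per_euro_day1     # FX/$ = 0.8
euro_per_usd_day2 = 1 / usd_per_euro_day2     # FX/$ = 0.8333...

print(euro_per_usd_day1, euro_per_usd_day2)
# 0.8 0.8333... -- the dollar appreciates whenever the euro depreciates
```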
For these exchange rates to change from one day to the next is not exceptional. In fact, foreign exchange rates typically change from minute to minute. Consider, for example, the U.S. dollar-U.K. pound exchange rate during the period from May 26 to July 2, 2015 shown in Figure 20.2.
The pound fluctuated during that short period from less than $1.52 to almost $1.59. Much larger fluctuations can sometimes occur, as we will discuss later in the chapter.
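As a rough calculation based on the endpoints mentioned above, the size of this fluctuation can be expressed as a percentage change.

```python
# Approximate percentage change in the dollar-pound exchange rate over the period
# described in the text (from roughly $1.52 to roughly $1.59 per pound).

old_rate, new_rate = 1.52, 1.59
percent_change = (new_rate - old_rate) / old_rate * 100
print(round(percent_change, 2))   # about 4.61 -- the pound appreciated by roughly 4.6%
```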
Explaining Foreign Exchange Rate Fluctuations
It is natural to wonder why these fluctuations of exchange rates occur. The answer is that exchange rates are determined competitively in international currency markets or foreign exchange markets. As buyers and sellers change their purchases and sales for a huge variety of reasons, the market prices of foreign currencies change in response. To explain market prices in Chapter 3, we used the supply and demand model. In this chapter, we will once again turn to the supply and demand model to understand how equilibrium exchange rates and equilibrium quantities exchanged are determined.
In the market for any currency, we can speak about the supply and demand for that currency. The sellers of the currency represent the supply side of the market, and the buyers of the currency represent the demand side of the market. Their competitive interaction determines the exchange rate and the amount exchanged. Figure 20.3 (a) offers an example of supply and demand in the market for Euros.
Also of importance is the relationship between the supply and demand curves in each market. The sellers supplying Euros in the Euro market must also be demanding U.S. dollars in the dollar market. Similarly, the buyers of Euros in the Euro market must also be supplying U.S. dollars in the dollar market. Think about an American who goes to Europe on vacation. She wants to buy some souvenirs but needs Euros to do so. She buys Euros and must sell U.S. dollars to do so. Similarly, a European tourist in the United States must sell Euros to buy U.S. dollars. To sell one is to buy the other and to buy one is to sell the other. In summary, the two markets are mirror reflections of one another.
The mirror reflection argument applied to foreign exchange markets is consistent with what we said earlier about the reciprocal relationship between the exchange rates. For example, suppose that the demand for Euros increases, shifting the demand curve for Euros to the right in Figure 20.4 (a).
The rightward shift of demand in the market for Euros corresponds to a rightward shift of supply in the market for U.S. dollars. As mentioned previously, buyers of Euros are sellers of U.S. dollars when we are discussing the dollar-euro and euro-dollar markets. The consequence is an appreciation of the Euro and a depreciation of the U.S. dollar, reflecting the reciprocal connection between these exchange rates. The equilibrium quantities exchanged of both Euros and U.S. dollars increase as well.
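A simple linear supply and demand sketch can make this comparative statics result concrete. The intercepts and slopes below are hypothetical; the point is only that a rightward shift of the demand for Euros raises both the equilibrium exchange rate and the equilibrium quantity exchanged, mirroring the appreciation of the Euro described above.

```python
# Linear supply and demand for euros before and after a rightward demand shift.
# Quantities are in billions of euros; the exchange rate e is in $/euro.

def equilibrium(a_d, b_d, a_s, b_s):
    """Solve Qd = a_d - b_d*e and Qs = a_s + b_s*e for the equilibrium e and Q."""
    e = (a_d - a_s) / (b_d + b_s)
    q = a_s + b_s * e
    return e, q

# Initial demand and supply for euros.
e1, q1 = equilibrium(a_d=300, b_d=100, a_s=60, b_s=100)

# American demand for euros increases (the demand intercept a_d rises).
e2, q2 = equilibrium(a_d=340, b_d=100, a_s=60, b_s=100)

print(round(e1, 2), round(q1, 1))   # 1.2 180.0
print(round(e2, 2), round(q2, 1))   # 1.4 200.0 -- the euro appreciates and quantity rises
```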
Building the Supply and Demand Model of Foreign Exchange Markets
It is easy to declare that supplies and demands for foreign currencies exist, but we really need to build the supply and demand model of foreign exchange markets from scratch to understand the underlying determinants of exchange rate movements. We will begin with the demand side of a foreign currency market. As stated previously, we can consider the market for foreign exchange or the market for the domestic currency (e.g., U.S. dollars) since each is just a mirror reflection of the other. Throughout the remainder of this chapter, we will adopt the perspective of a market for foreign exchange. The reason for doing so is that we are accustomed to thinking of products and services as having dollar prices. For example, the price of an automobile is stated in terms of dollars per automobile ($/auto). We can think of the price of Japanese Yen in the same way. That is, the price of the Yen (¥) is in terms of so many dollars per Yen ($/¥). If the exchange rate (e) rises, then the Yen appreciates, just as a rise in the price of an automobile would mean that it has appreciated in value. Whenever we consider exchange rates, therefore, we will refer to dollars per unit of foreign exchange ($/FX). Figure 20.5 shows a downward sloping demand curve for foreign exchange. As the exchange rate falls, the foreign currency becomes cheaper and the quantity demanded of foreign exchange rises. It is worth asking why the demand curve slopes downward in this case.[4] When we learned about product markets, it seemed obvious that a lower price for a good, like apples, would lead to a rise in the quantity demanded of apples, ceteris paribus. As the price falls, other things the same, consumers are willing and able to buy more apples. The problem is that the explanation does not seem quite so obvious in the case of a foreign currency. After all, we do not eat Euros and so a lower price may not seem like it would automatically lead to increased purchases. Although we do not eat foreign currencies, they are useful as a means to an end. That is, if we wish to buy foreign goods and services, then we need to buy foreign currencies. Therefore, when the price of a foreign currency declines, we buy more of it because it allows us to purchase more foreign goods and services. That is, foreign goods and services become cheaper when the exchange rate ($/FX) falls and so we purchase more of the foreign currency.
In addition to imported goods and services becoming cheaper when the exchange rate falls, it is also true that foreign assets become cheaper when the exchange rate falls. That is, foreign stocks, bonds, and businesses become less expensive and so investors buy more foreign currency when its price falls. In this case, foreign currency is viewed as a means to buy foreign assets.
A final reason why the demand curve for foreign exchange slopes downward deals with currency speculation. That is, when the price of a foreign currency falls, speculators may expect it to rise later to some normal level. Therefore, speculators will purchase more as the exchange rate falls because they will anticipate a capital gain from the later appreciation of the currency.[5]
These three factors combine to create a downward sloping demand for foreign exchange. We could easily reverse the logic for all three factors to explain what happens when the exchange rate rises. When the exchange rate rises, the foreign currency becomes more expensive. Therefore, imports of goods and services become more expensive as do foreign assets. Furthermore, speculators will expect the exchange rate to fall later and so they expect a capital loss in the future. As a result, speculators will buy less foreign exchange.
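The first of these factors, the cheapening of foreign goods and services, can be illustrated with a short calculation. The foreign-currency price and the two exchange rates are hypothetical.

```python
# The dollar cost of a foreign good at two exchange rates. When the $/FX rate
# falls, the same foreign-currency price translates into fewer dollars, so
# foreign goods, services, and assets become cheaper to American buyers.

foreign_price = 200.0   # price of a foreign good in units of foreign currency

for usd_per_fx in (1.50, 1.20):
    dollar_cost = foreign_price * usd_per_fx
    print(f"Exchange rate {usd_per_fx:.2f} $/FX -> dollar cost {dollar_cost:.2f}")
# 300.00 at 1.50 $/FX versus 240.00 at 1.20 $/FX -- a lower rate raises the
# quantity demanded of foreign exchange
```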
We have now explained movements along the downward sloping demand curve for foreign exchange, but shifts of the demand curve for foreign exchange are possible too. Figure 20.6 shows a rightward shift of the demand curve for foreign exchange.
In Figure 20.6 a rise in the quantity demanded of foreign exchange occurs at every possible exchange rate. Which factors might cause such a shift of the demand curve for foreign exchange?[6] The first factor that we will consider is a change in consumers’ preferences in the home nation. For example, let’s suppose that American consumers develop a stronger preference for a specific nation’s automobiles. In that case, the demand for that foreign currency will increase. That is, the quantity demanded of the foreign currency will rise at every exchange rate, resulting in a rightward shift of the demand curve. If for some reason, the preferences of Americans changed such that they wanted fewer automobiles at each exchange rate, then the demand curve would shift to the left.
A second factor that might shift the demand curve for foreign exchange is a change in consumers’ incomes in the home nation. For example, suppose that Americans experience a rise in their incomes due to an economic expansion. They will demand more goods and services as well as more assets. This greater demand for goods, services, and assets, includes a greater demand for the goods, services, and assets of foreign nations. In this case, we would expect a higher demand for foreign exchange and a rightward shift of the demand curve for foreign exchange. On the other hand, if Americans experience a drop in their incomes due to a recession, then they will demand fewer goods, services, and assets, including the goods, services, and assets of other nations. As a result, the demand for foreign exchange will decline, and the demand curve for foreign exchange will shift to the left.
A third factor that might shift the demand curve for foreign exchange is a change in the relative price levels of the two nations. For example, suppose that the price level in the U.S. rises while the price level in the foreign nation remains the same. In that case, Americans will view foreign goods, services, and assets as relatively cheaper. Therefore, they will demand a great amount of foreign exchange to buy these relatively less expensive foreign commodities and assets. The same result would occur if the U.S. price level remained the same and the foreign price level fell. On the other hand, a drop in the U.S. price level and/or a rise in the foreign price level would lead to a reduction in the demand for foreign exchange because U.S. goods, services, and assets would now be relatively cheaper than foreign commodities and assets.
A fourth factor that might shift the demand curve for foreign exchange is a change in the relative interest rates of the two nations. For example, suppose that interest rates in the foreign nation rise while interest rates in the U.S. remain the same. American investors will view foreign assets as better investments because they now pay more interest. The reader should also recall that as foreign interest rates rise, foreign asset prices will fall. Therefore, foreign assets will be viewed as a bargain and the demand for foreign exchange will increase. As a result, the demand curve for foreign exchange will shift to the right. The same result would occur if U.S. interest rates fell while foreign interest rates remained the same. American investors would view foreign interest-bearing assets as the better investments and would thus demand more foreign exchange. On the other hand, if U.S. interest rates rose as foreign interest rates remained the same or fell, then the demand for foreign exchange would fall and the demand curve for foreign exchange would shift to the left as American investors shied away from foreign investments.
A fifth and final reason that we will consider for shifts of the demand curve for foreign exchange deals with currency speculation. For example, suppose that the expectations of speculators change such that the exchange rate is expected to rise soon. In other words, they expect the foreign currency to appreciate soon. In that case, the demand for foreign exchange will rise as speculators decide to buy the currency before it appreciates. As a result, the demand curve for foreign exchange will shift to the right. Alternatively, suppose that speculators decide that the foreign currency is expected to depreciate soon. In that case, speculators will demand less foreign exchange, and the demand curve for foreign exchange will shift to the left. That is, speculators do not want to buy the foreign exchange because they expect it to lose value soon.
In these examples, it is important to keep in mind the distinction between a movement along the demand curve for foreign exchange and a shift of the demand curve. Movements along the demand curve for foreign exchange are caused by changes in the exchange rate. Shifts of the demand curve for foreign exchange, on the other hand, result from changes in any other factors that might affect the buyers of foreign exchange. This distinction is the same one discussed in Chapter 3 between a change in quantity demanded and a change in demand. Understanding the difference helps prevent possible confusions that might otherwise arise. For example, the reader may have noticed that speculative behavior has been used to explain both a movement along the demand curve and a shift of the demand curve for foreign exchange. In the case of a movement along the demand curve, the speculative response is a direct response to a falling exchange rate. In the case of a shift of the demand curve, speculators experience a change in their expectations that is not related to a change in the current exchange rate.
The situation is very similar on the supply side of the foreign exchange market. Indeed, the same factors should be at play on the supply side since the supply of foreign exchange is the mirror reflection of the demand for dollars. Because the demand for dollars is affected by the factors that we just described, the supply of foreign exchange must be affected by these same factors.
Figure 20.7 shows an upward sloping supply curve for foreign exchange.
As the exchange rate rises, the foreign currency becomes more expensive and the quantity supplied of foreign exchange rises. It is worth asking why the supply curve slopes upward in this case. When we learned about product markets, it seemed obvious that a higher price for a good, like apples, would lead to a rise in the quantity supplied of apples, ceteris paribus. As the price rises, other things the same, producers are willing and able to sell more apples. In the case of foreign exchange markets, sellers are willing and able to sell more foreign exchange because when they sell foreign exchange, they acquire U.S. dollars, which are useful as a means to an end. That is, if foreigners wish to buy American goods and services, then they need to buy U.S. dollars. Therefore, when the price of a foreign currency rises, they sell more of it because it allows them to purchase more American goods and services. That is, American goods and services become cheaper when the exchange rate ($/FX) rises and so they sell more of the foreign currency. In addition to U.S. exports of goods and services becoming cheaper to foreign consumers when the exchange rate rises, it is also true that American assets become cheaper to foreign investors when the exchange rate rises. That is, American stocks, bonds, and businesses become less expensive and so foreign investors sell more foreign currency when its price rises. In this case, U.S. dollars are viewed as a means to buy American assets. A final reason why the supply curve for foreign exchange slopes upward deals with currency speculation. That is, when the price of a foreign currency rises, speculators may expect it to fall later to some normal level. Therefore, speculators will sell more of the foreign currency as the exchange rate rises because they will anticipate a capital loss from the later depreciation of the currency if they do not sell now. These three factors combine to create an upward sloping supply for foreign exchange. We could easily reverse the logic for all three factors to explain what happens when the exchange rate falls. When the exchange rate falls, the foreign currency becomes less expensive. Therefore, U.S. exports of goods and services become more expensive to foreigners as do foreign assets, and so less foreign exchange will be sold. Furthermore, speculators will expect the exchange rate to rise later and so they expect a capital gain in the future. As a result, speculators will decide to sell less foreign exchange. We have now explained movements along the upward sloping supply curve for foreign exchange, but shifts of the supply curve for foreign exchange are possible too. Figure 20.8 shows a rightward shift of the supply curve for foreign exchange. In Figure 20.8 a rise in the quantity supplied of foreign exchange occurs at every possible exchange rate. Which factors might cause such a shift of the supply curve for foreign exchange? Just as in the case of the demand for foreign exchange, it is possible that consumer preferences in the foreign nation change. For example, let’s suppose that foreign consumers develop a stronger preference for American automobiles. In that case, the supply of the foreign currency will increase. That is, the quantity supplied of the foreign currency will rise at every exchange rate, resulting in a rightward shift of the supply curve. 
If for some reason, the preferences of foreigners changed such that they wanted fewer American automobiles at each exchange rate, then the supply curve would shift to the left. A second factor that might shift the supply curve for foreign exchange is a change in consumers’ incomes in the foreign nation. For example, suppose that foreigners experience a rise in their incomes due to an economic expansion. They will demand more goods and services as well as more assets. This greater demand for goods, services, and assets, includes a greater demand for the goods, services, and assets of the United States. In this case, we would expect a higher supply of foreign exchange and a rightward shift of the supply curve for foreign exchange. On the other hand, if foreigners experience a drop in their incomes due to a recession, then they will demand fewer goods, services, and assets, including the goods, services, and assets of the United States. As a result, the supply of foreign exchange will decline and the supply curve for foreign exchange will shift to the left. A third factor that might shift the supply curve for foreign exchange is a change in the relative price levels of the two nations. For example, suppose that the price level in the foreign nation rises while the price level in the U.S. remains the same. In that case, foreigners will view U.S. goods, services, and assets as relatively cheaper. Therefore, they will sell a greater amount of foreign exchange to obtain dollars with which they can buy these relatively less expensive American commodities and assets. The same result would occur if the foreign price level remained the same and the American price level fell. On the other hand, a drop in the foreign price level and/or a rise in the American price level would lead to a reduction in the supply of foreign exchange because foreign goods, services, and assets would now be relatively cheaper than American commodities and assets. A fourth factor that might shift the supply curve for foreign exchange is a change in the relative interest rates of the two nations. For example, suppose that interest rates in the United States rise while interest rates in the foreign nation remain the same. Foreign investors will view U.S. assets as better investments because they now pay more interest. Furthermore, if U.S. interest rates rise, U.S. asset prices will fall. Therefore, U.S. assets will be viewed as a bargain and the supply of foreign exchange will increase. As a result, the supply curve for foreign exchange will shift to the right. The same result would occur if foreign interest rates fell while U.S. interest rates remained the same. Foreign investors would view American interest-bearing assets as the better investments and would thus supply more foreign exchange. On the other hand, if foreign interest rates rose as U.S. interest rates remained the same or fell, then the supply of foreign exchange would fall and the supply curve for foreign exchange would shift to the left as foreign investors shied away from American investments. A fifth and final reason that we will consider for shifts of the supply curve for foreign exchange deals with currency speculation. For example, suppose that the expectations of speculators change such that the exchange rate is expected to fall soon. In other words, they expect the foreign currency to depreciate soon. In that case, the supply of foreign exchange will rise as speculators decide to sell the currency and avoid potential losses from its future depreciation. 
As a result, the supply curve for foreign exchange will shift to the right. Alternatively, suppose that speculators decide that the foreign currency is expected to appreciate soon. In that case, speculators will supply less foreign exchange, and the supply curve for foreign exchange will shift to the left. That is, speculators will not want to sell the foreign currency because they expect it to gain value soon. Once again, the distinction between a movement along the supply curve and a shift of the supply curve is important to keep in mind. Movements along the supply curve are always caused by changes in the exchange rate. Shifts of the supply curve, on the other hand, are always caused by changes in any other factors that influence sellers of foreign exchange aside from exchange rate changes. This distinction is the same one discussed in Chapter 3 between a change in quantity supplied and a change in supply. We are now able to combine the supply and demand sides of the foreign exchange market to provide a more thorough explanation of the equilibrium exchange rate and the equilibrium quantity exchanged of foreign exchange. Furthermore, we use the model to explain how exchange rates change in response to exogenous shocks. That is, using the method of comparative statics, we can analyze how changes in consumers’ preferences, consumers’ incomes, relative price levels, relative interest rates, and the expectations of speculators can cause the equilibrium exchange rate and the equilibrium quantity exchanged to change in a specific foreign exchange market. For example, suppose that American consumers learn about a new European automobile that is spacious, comfortable, and environmentally friendly. The popularity of this product causes American consumers to increase their demand for Euros so that they can purchase these automobiles. The demand for Euros will shift to the right as shown in Figure 20.9 (a). The higher demand for Euros creates a shortage or an excess demand for Euros. As a result, competition drives up the exchange rate and the quantity exchanged of the Euro to their new equilibrium values of e2 and Q2. In this case, the Euro appreciates relative to the dollar. Consider a different example. Suppose that interest rates in Japan fall significantly relative to interest rates in the United States. American investors will find Japanese assets to be less attractive investments, and they will want to purchase fewer of them. As a result, they will demand fewer Yen. The demand for Yen will shift to the left as shown in Figure 20.9 (b). The lower demand for Yen creates a surplus or an excess supply of Yen. Competition then drives the exchange rate and the quantity exchanged of Yen down to their new equilibrium values of e2 and Q2. In this case, the Yen depreciates relative to the dollar. As another example, consider what will happen if speculators conclude that the Mexican peso will depreciate soon. They will want to sell pesos quickly before they lose value, and the supply of pesos will rise in the foreign exchange market. As a result, the supply curve shifts to the right, as shown in Figure 20.10 (a). The resulting surplus or excess supply of pesos puts downward pressure on the peso. The exchange rate falls and the quantity exchanged of pesos rises to their new equilibrium values of e2 and Q2. In this case, the peso has depreciated. The belief that the peso will depreciate turns out to be accurate, but it is also a self-fulfilling prophecy. 
That is, speculators’ belief that the peso will soon depreciate brings about its depreciation. Such self-fulfilling prophecies are not unusual in foreign currency markets and can often create considerable volatility.

Finally, let’s consider a case where the price level in the United States rises relative to the price level in Britain. In this case, British consumers will find American exports of goods and services to be more expensive. As a result, they will demand fewer American exports and so they will supply fewer pounds in the foreign exchange market, as shown in Figure 20.10 (b). The reduced supply of pounds will create a shortage of pounds. Competition for the limited supply of pounds will then drive up the exchange rate and cause the quantity exchanged of pounds to fall overall until they reach their new equilibrium values of e2 and Q2. In this case, the pound appreciates relative to the dollar.

It is important to keep in mind that each of these examples is relatively simple. Frequently, many different factors will be changing at the same time, leading to simultaneous shifts in both the supply and demand sides of the market. In fact, even if only one factor changes, both sides of the market will be affected because each supply curve is the mirror reflection of a demand curve. In other words, the same factors affect both sides of the market. Complex cases make it difficult to draw conclusions about the overall impact on the equilibrium outcome, as we learned in Chapter 3. As we discussed in the case of product markets, when both sides of the market are affected, unless we know the extent of the shifts, the direction of the overall change in either the exchange rate or the quantity exchanged will be indeterminate. The reader might want to review the examples of complex cases in Chapter 3 when thinking about how simultaneous shifts of supply and demand might influence the foreign exchange market.

Floating Versus Fixed Exchange Rate Regimes

Now that we have a solid understanding of the supply and demand model of the foreign exchange market, it will be easier to see how this analysis connects to balance of payments accounting. Suppose, for example, that the demand for Euros rises in the U.S. dollar-Euro market, as shown in Figure 20.11. The increase in the demand for Euros creates a shortage of Euros, but we can also give another interpretation to the shortage. The quantity demanded of Euros (€3) at the initial exchange rate of e1 exists because Americans wish to buy European imports of goods and services, European assets, and Euros for speculation. These demands are potential debits in the U.S. Balance of Payments. At the same time, the quantity supplied of Euros (€1) at the initial exchange rate of e1 exists because Europeans wish to purchase American exports of goods and services, American assets, and U.S. dollars for speculation. These supplies are potential credits in the U.S. Balance of Payments. Therefore, the difference between the quantity demanded of Euros and the quantity supplied of Euros may be interpreted as the potential excess of debits over credits in the U.S. Balance of Payments. That is, it reflects a potential balance of payments deficit. Of course, the overall balance of payments must be balanced. Therefore, we need an explanation for how this discrepancy is resolved. To understand how the overall balance in the balance of payments is achieved, it is helpful to consider two different types of exchange rate regime.
That is, we will consider two methods that governments might adopt in their approach to foreign exchange markets. The first type is a floating exchange rate regime. In this case, the exchange rate moves to its equilibrium level. For a currency that is widely traded, like the U.S. dollar or Japanese Yen, competition will lead to quick changes in exchange rates whenever the equilibrium level is disturbed by an external shock that affects the supply or demand of the currency. Therefore, when a balance of payments deficit exists, the exchange rate will quickly rise to eliminate the payments deficit and ensure an overall balance in the balance of payments.[7] Another possible exchange rate regime is referred to as a fixed exchange rate regime. When governments and central banks adopt a policy of fixed exchange rates, they commit themselves to a specific value of the exchange rate. If market competition drives the exchange rate up or down, the central bank intervenes by selling or buying the foreign currency to ensure that the exchange rate returns to its original level. This policy affects the central bank’s reserve assets. In the case of the potential balance of payments deficit that is represented in Figure 20.11, the shortage will put upward pressure on the Euro. Because the U.S. Federal Reserve is committed to a fixed exchange rate at e1, however, it will sell Euros in the foreign exchange market. This increase in the supply of Euros will shift the supply curve of Euros to the right, as shown in Figure 20.12. In the case of the potential balance of payments surplus that is represented in Figure 20.13, the surplus will put downward pressure on the Yen. If the U.S. Federal Reserve is committed to a fixed exchange rate at e1, however, it will reduce its supply of Yen in the foreign exchange market. This reduction in the supply of Yen will shift the supply curve of Yen to the left, as shown in Figure 20.14. This decrease in supply will allow the central bank to maintain an exchange rate of e1. It will also eliminate the potential for a balance of payments surplus because credits and debits will be equal. The difference has been made up with an increase in U.S. official reserves. That is, Yen have been withdrawn from the foreign exchange market and added to official reserve assets. Because the supply of Yen represents potential credits in the U.S. Balance of Payments, when they are withdrawn, potential credits fall until they match debits. The result is an overall payments balance and a fixed exchange rate. In other words, since exports of U.S. goods, services, and assets exceed U.S. imports of Japanese goods, services, and assets, the surplus of Yen must be added to foreign exchange reserves if the exchange rate is to be kept at the same level. A system of fixed exchange rates was the norm in the years after World War II until the early 1970s. When a government or central bank with a fixed exchange rate decides to reduce its value, it is referred to as a devaluation of the currency. When a government or central bank decides to increase its value, it is referred to as a revaluation of the currency. Since the early 1970s, a system of managed floating exchange rates has become the norm for many nations. This kind of a system allows exchange rates to find their equilibrium levels, but central banks will sometimes intervene to stabilize their currencies. Other nations have simply adopted a floating exchange rate regime. 
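The mechanics of intervention under a peg can be sketched as follows. The demand and supply functions and the pegged rate are hypothetical; the sketch only shows that the quantity of reserves the central bank must sell equals the excess demand for foreign exchange at the pegged rate.

```python
# Sketch of central bank intervention under a fixed exchange rate. At the pegged
# rate, the private quantity demanded of foreign exchange exceeds the private
# quantity supplied, and the central bank sells reserves to cover the gap.

peg = 1.20   # pegged rate in $/FX

def quantity_demanded(e):
    return 340 - 100 * e      # billions of units of foreign exchange

def quantity_supplied(e):
    return 60 + 100 * e

excess_demand = quantity_demanded(peg) - quantity_supplied(peg)
print(excess_demand)   # 40.0 -> the central bank must sell 40 billion units of
                       # foreign exchange from official reserves to hold the peg
```

If the demand for foreign exchange keeps rising, the required reserve sales grow, which is the mechanism behind the reserve depletion discussed in the next section.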
Currency Crises

In some situations, investors might rapidly sell a currency, causing its foreign exchange value to plummet. This situation is referred to as a currency crisis. Efforts to stabilize the exchange rate may fail if chronic balance of payments deficits lead to the rapid depletion of foreign exchange reserves. Eventually the central bank may lose control of the exchange rate, and the foreign exchange value of the currency will then continue its downward movement. In his book Globalization and Its Discontents (2002), Nobel Prize-winning economist Joseph Stiglitz explains that the Thai baht collapsed on July 2, 1997, when its value fell overnight by 25% against the U.S. dollar. This event was the trigger for the East Asian financial crisis, which spread to several other nations with effects felt around the world.[8] Let’s analyze this situation by considering the Thai baht-U.S. dollar foreign exchange market, shown in Figure 20.15. In this case, the demand for the U.S. dollar rises, which creates a shortage of U.S. dollars. The result is a balance of payments deficit in Thailand. If Thailand wishes to prevent a drop in the value of the baht, then its central bank must sell dollars, which pushes the value of the dollar back down and the baht back up. If additional increases in the demand for U.S. dollars occur, then the central bank in Thailand must sell even more U.S. dollars to prevent the fall of the baht. Eventually, it will run short of U.S. dollar reserves and may be forced to abandon the peg to the dollar, which is what occurred during the 1997 currency crisis in Thailand.[9]

The Theory of Purchasing Power Parity

In this section, we will consider an additional aspect of neoclassical theory that is referred to as the theory of purchasing power parity. According to the theory of purchasing power parity, a single price for a commodity or asset will emerge in a global market economy even though many different currencies exist. To understand how it works, let’s first consider a simple case of two regions in the United States where a commodity is bought and sold. Assume that the equilibrium prices in the two regional markets differ. In Figure 20.16, for example, the initial equilibrium price in market A is $4 per unit, and the initial equilibrium price in market B is $6 per unit. Arbitrageurs will buy the commodity in the low-priced market and sell it in the high-priced market, and this activity will continue until the two prices are equal. Now suppose instead that the commodity is bought and sold in two different nations, the United States and the United Kingdom. Because of the different currencies, it is necessary to convert one currency into the other so that we can compare prices. This conversion can only be carried out using the current exchange rate. Let’s assume that the pound-dollar exchange rate is £0.50/$ and that the dollar-pound exchange rate is $2/£. The reader should recall that each is simply the reciprocal of the other. Therefore, if the commodity is priced at £3 in the U.K., then its dollar price must be $6 (= £3 times $2/£). Because the price of the commodity in the U.S. is $4, arbitrageurs will try to gain by exploiting these price differences. If they purchase the commodity in the U.S., transport it at zero cost to the U.K., and sell it in the U.K., then the rising demand in the U.S. and the rising supply in the U.K. will ensure that the prices become equal. For example, let’s suppose that the price in the U.S. rises to $5 per unit and the price in the U.K. falls to £2.50 per unit. Once we convert pounds into dollars in the U.K., we can see that the U.K. price is now $5 (= £2.50 times $2/£), which is the same as the U.S. price.
The analysis we just completed sidestepped a very important issue. That is, when international arbitrageurs demand more of the low-priced U.S. commodity, they must first buy dollars in the foreign exchange market. They then purchase the commodity in the U.S. using dollars and transport the commodity to the U.K. where they sell it for pounds at a high price. The pounds are then sold in the foreign exchange market for dollars, which are then used to buy the commodity in the U.S. again, and the cycle repeats until the prices equalize. It is important to notice that the purchase of dollars and the sale of pounds in the foreign exchange market will cause the dollar-pound exchange rate to change under a floating exchange rate regime. That is, the dollar will appreciate, and the pound will depreciate.
The adjustments that take place in the foreign exchange market are very rapid, so let’s assume that these adjustments occur before the U.S. demand for the product and the U.K. supply of the product have a chance to change. Suppose that the pound-dollar exchange rate becomes £0.75/$ and the dollar-pound exchange rate becomes $1.3333/£ (i.e., 4/3 dollars per pound to be exact). Now the U.K. price is equal to $4 (= £3 times $1.3333/£). That is, the prices have equalized, and they became equal entirely due to exchange rate adjustments, as shown in Figure 20.18.
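The arithmetic of this example can be verified with a brief calculation. The sketch below uses only the numbers given in the text ($4 in the U.S., £3 in the U.K., and an initial rate of $2/£); the variable names are illustrative.

# Prices and the initial exchange rate from the example in the text.
us_price = 4.00       # dollars per unit in the U.S.
uk_price = 3.00       # pounds per unit in the U.K.
initial_rate = 2.00   # dollars per pound

# The U.K. price converted into dollars at the initial exchange rate.
uk_price_in_dollars = uk_price * initial_rate
print(uk_price_in_dollars)      # 6.0, which exceeds the U.S. price, so arbitrage is profitable

# The exchange rate that would equalize the two prices (the purchasing power parity rate).
ppp_rate = us_price / uk_price  # dollars per pound
print(round(ppp_rate, 4))       # 1.3333, the rate at which £3 converts to exactly $4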
For commodities that are easily tradeable across nations, we should expect an outcome like what we have just described. For commodities that are not easily traded (e.g., services like haircuts), we should expect to observe more variation in purchasing power. That is, we should expect the prices to be quite different after we make the conversion into a common currency. Therefore, while this theory provides a useful starting point for thinking about international price adjustments, limits to its application exist, particularly where excessive transport costs interfere with arbitrage.
Marxist Approaches to Imperialist Finance
In this section, we will consider Marxist approaches to foreign exchange markets and imperialist finance. Theories of imperialism have a long history in Marxian economics, stretching back to the works of Rudolf Hilferding, J.A. Hobson, Rosa Luxemburg, and V.I. Lenin in the early part of the twentieth century.[10] Because these different theories of imperialism have much in common, this section provides a brief overview of Hobson’s theory of imperialism, which emphasizes several key aspects of this body of thought.
This section draws upon E.K. Hunt’s account of Hobson’s theory from his valuable History of Economic Thought (2002).[11] During the late nineteenth century, many of the least developed places on Earth were colonized as the capitalist powers grabbed colonial possessions in Africa and Asia.[12] According to Hobson, the many reasons given for overseas military adventures by the advanced capitalist nations during this period had little to do with the publicly stated reasons that were offered at the time. The suggestion that the primary purpose was to spread Christianity to uncivilized people in backward lands was a distortion that hid the true motives of the imperialist nations.[13] Hobson dismissed other suggestions about the root cause of imperialism, including the claim that militaristic tendencies were an inherent part of human nature.[14] After all, the surge of imperialism in the late nineteenth century must have a cause that is connected to historical circumstances. Hobson also rejected the notion that the imperialist aggression of the period simply stemmed from the “irrational nature” of politicians.[15] In Hobson’s view, even though the activities might appear irrational from the perspective of the nation, they benefitted certain classes within the nation a great deal.[16]
Hobson’s explanation for the imperialist tendencies of the late nineteenth century concentrates on the massive accumulation of capital that became concentrated in the hands of large banks and financial houses during the late nineteenth century.[17] So much capital had accumulated and the disparity between workers and capitalists had become so great that profitable domestic outlets for the surplus capital could not be found even as capitalists spent enormous amounts on luxuries.[18] As a result, financial capitalists began to look elsewhere for a way to invest surplus capital and prop up the demand for commodities.
The capitalist classes in the imperialist nations found a new way to relieve their economic problems at home. Financial capitalists bought government bonds, which helped finance military production. They also bought shares to help finance the activities of international cartels and global monopolies. These activities allowed large corporations to invest in production in the nation’s colonial possessions. To carry on this overseas production, it was necessary to develop the infrastructure in the colonies, which was achieved by purchasing capital goods from the imperialist nation.[19] The expanded infrastructure in the colonies created a large network of roads, bridges, and railroads that made it possible to transform pre-capitalist societies into capitalist societies.[20] These changes made it possible to acquire vast quantities of cheap raw materials and transport them for use in capitalist production using the cheap wage labor of the colonies.[21] The increase in the wage labor forces in the colonies also created a large demand for the consumer goods of the imperialist nations,[22] which helped alleviate the insufficient aggregate demand problem at home and which boosted the profits of the global capitalist enterprises. These relationships are captured in Figure 20.19.
To make all this imperialist activity possible, it was necessary to promote feelings of patriotism and nationalism within the population of the imperialist nation.[23] That is, the public needed to be convinced of the righteousness of the imperialist cause. That required capitalist support for pro-war messages in the newspapers. With enough willing military recruits, the wars to seize and hold colonial possessions could be fought and won, as shown in Figure 20.19. Without this sort of imperialist activity, the result would be more severe business cycles and depressions in the home nation because capitalists would fail to find profitable outlets for the surplus capital they controlled.[24]
Although Hobson’s theory applied to late nineteenth century imperialism, it can be modified easily to apply to the present neo-colonial period in which the capitalist nations use military aggression in support of regimes in developing nations. Such regimes are willing to grant the imperialist nations production contracts and access to cheap raw materials and labor-power. In return for their cooperation, the imperialist power provides infrastructure development, including oil pipelines and military bases to protect the regimes of the subjugated nations. These regimes are frequently puppet governments that operate under the guidance of the imperialist nation.
A Marxian Approach to Exchange Rate Determination
Because much of this chapter has focused on the determination of exchange rates and the factors that influence them, it is worth considering what Marxian economists might have to say about exchange rates. In Chapter 17, a Marxian theory of fiat money was developed. In that chapter, it was shown that Marxian economists can explain the determination of the value of fiat money by dividing the product of the money supply and the velocity of money in circulation (MV) by the aggregate labor time embodied in the circulating commodities (L) during a given period. The calculation produces the monetary expression of labor time (MELT), which is in units of currency per hour of socially necessary abstract labor time (e.g., $16 per hour). The MELT makes it possible to convert any labor time magnitude into its monetary equivalent. Any nation taken by itself will have a MELT in terms of its home currency per unit of socially necessary abstract labor time. It is relatively easy to determine the foreign exchange rate between two currencies if you happen to know the MELTs in each nation. For example, suppose that the MELT in the United States is $16 per hour and the MELT in the United Kingdom is £8 per hour. The dollar-pound exchange rate (e), or the foreign exchange value of the pound, can then be calculated as follows:
$e=\frac{MELT_{U.S.}}{MELT_{U.K.}}=\frac{\$16/hour}{\pounds 8/hour}=\frac{\$2}{\pounds}$
In theory, we should expect this exchange rate to emerge in the international marketplace. However, deviations from this value are to be expected. In Chapter 4, it was pointed out that supply and demand can cause deviations of price from commodity value in Marxian economics. The same applies in the foreign exchange market. A change in the supply or demand of pounds (dollars) can cause the actual market exchange rate to deviate from its expected value. Such fluctuations may be rather large, but over time exchange rates will gravitate towards their MELT-determined values, as shown in Figure 20.20.
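As a simple illustration of the MELT calculation, the sketch below computes each nation’s MELT as MV/L and the implied exchange rate. The money supply, velocity, and labor time figures are hypothetical assumptions chosen only so that the resulting MELTs match the $16 and £8 values used in the text.

def melt(money_supply, velocity, labor_time):
    # MELT = MV / L: currency units per hour of socially necessary abstract labor time
    return (money_supply * velocity) / labor_time

# Hypothetical M, V, and L values chosen only to reproduce the MELTs in the text.
melt_us = melt(money_supply=4000, velocity=2.0, labor_time=500)  # $16 per hour
melt_uk = melt(money_supply=1000, velocity=2.0, labor_time=250)  # £8 per hour

# The dollar-pound exchange rate implied by the two MELTs.
e = melt_us / melt_uk
print(melt_us, melt_uk, e)  # 16.0 8.0 2.0, i.e., $2 per pound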
It should be noted that Marxian economists are interested in understanding the social relationships that give rise to the appearances they observe in the marketplace. This theory of exchange rate determination should be viewed only as a complement to the theory of imperialist finance that was developed in the previous section. The class conflict inherent in the vast network connecting the imperialist power with its colonies (or neo-colonies) is what ultimately concerns the Marxist. It is the focus because a solid understanding of it will assist in the revolutionary overthrow of the capitalist economic system that makes such relations possible.
Following the Economic News [25]
A recent news article in Gulf News offers reasons for the recent movements in the exchange rate between the Japanese yen and the U.S. dollar. Mitra Rajarshi explains that the Japanese central bank, the Bank of Japan, implemented a quantitative easing program in 2012, which increased the money supply and reduced interest rates. Rajarshi reports that this action led to a sharp depreciation of the yen against the U.S. dollar. A depreciation is expected in this case because lower interest rates in Japan make yen-denominated interest-bearing assets less attractive to foreign and Japanese investors. Therefore, foreign investors will reduce their demand for yen and Japanese investors will increase their supply of yen. Both factors will put downward pressure on the yen, although the impact on the quantity exchanged of yen will be indeterminate. Rajarshi also explains that the exchange rate measured in yen per U.S. dollar rose from 79 in 2012 to 110 in 2018. The increase in yen per dollar indicates an appreciation of the U.S. dollar (which is in the denominator) and a depreciation of the yen. Nevertheless, Rajarshi predicts an appreciation of the yen over the next year for a couple reasons. For example, Rajarshi explains that the Japanese government has been taking steps to increase tourism in the country, which would lead to a higher demand for the yen and to its appreciation. He also mentions the Olympic Games, which are scheduled to be held in Tokyo in July 2020. This event will also put upward pressure on the demand for the yen and cause it to appreciate. Rajarshi then identifies an additional factor that may lead to an appreciation of the yen. Because foreigners who live and work in Japan are likely to anticipate this future appreciation of the yen, they will choose to retain their savings longer before converting them into dollars. This decision will reduce the supply of yen, causing it to appreciate. Here we see a good example of a self-fulfilling prophecy in the foreign exchange market. Holders of a currency anticipate an appreciation and their response to it causes an appreciation of the currency. In this case, the appreciation is likely to occur for the additional reasons that Rajarshi mentions. Rajarshi hints, however, that if the Bank of Japan changes interest rates in response to an appreciation of the yen, then some of these effects might be reversed. One of the challenges of predicting exchange rate movements is that so many variables can change that our predictions can turn out to be wrong.
Summary of Key Points
1. In the balance of payments accounts, each payment that a nation receives from the rest of the world is recorded as a credit and each payment that a nation makes to the rest of the world is recorded as a debit.
2. The current account is a subaccount of the balance of payments statement that records transactions that do not involve the purchase and sale of income-earning assets.
3. The capital account is a subaccount of the balance of payments statement that records transactions involving non-financial assets, whereas the financial account records transactions involving financial assets.
4. If a current account surplus exists, then a capital and financial account deficit of the same amount must exist. If a current account deficit exists, then a capital and financial account surplus of the same amount must exist.
5. When a nation’s net international investment position is positive (its foreign assets exceed its foreign liabilities), then it is a net creditor. When a nation’s net international investment position is negative (its foreign assets fall short of its foreign liabilities), then it is a net debtor.
6. A nation’s current account balance may be expressed as the sum of its private sector gap and its government budget gap.
7. If a nation’s currency appreciates relative to another currency, then the other currency depreciates relative to the nation’s currency.
8. Changes in the quantity demanded or quantity supplied of a foreign currency may only be caused by a change in the exchange rate. If any other factors change that might affect buyers and sellers of a foreign currency, then changes in demand and supply occur.
9. When potential balance of payments deficits exist, official reserves are used to guarantee an overall payments balance. When potential balance of payments surpluses exist, official reserves are accumulated to guarantee an overall payments balance.
10. For easily tradeable commodities, international arbitrage ensures that exchange rates adjust to equalize the purchasing power of different currencies.
11. Marxist theories of imperialism emphasize capitalists’ inability to find profitable domestic outlets for surplus capital as the reason for foreign conquest and the subjugation of foreign people in less developed parts of the world.
12. Marxian economists explain the average level of a foreign exchange rate over time using the ratios of the monetary expressions of labor time (MELTs) of two different nations.
List of Key Terms
Balance of payments accounting
National income accounting
Credit
Debit
Current account
Current account balance
Current account surplus
Current account deficit
Trade balance
Trade deficit
Trade surplus
Balanced trade
Capital account
Financial account
Official reserve holdings
Capital and financial account surplus
Capital and financial account deficit
Statistical discrepancy
Net investment position
Net International Investment Position
Net debtor
Net creditor
Depreciated
Appreciated
Equilibrium exchange rate
Equilibrium quantities exchanged
Capital gain
Capital loss
Change in quantity demanded
Change in demand
Change in quantity supplied
Change in supply
Self-fulfilling prophecy
Exchange rate regime
Floating exchange rate regime
Fixed exchange rate regime
Devaluation
Revaluation
Managed floating exchange rates
Currency crisis
Theory of purchasing power parity
Arbitrage
Imperialism
Monetary expression of labor time (MELT)
Foreign exchange value
Market exchange rate
Problems for Review
1. Suppose that a current account deficit of $400 billion exists, that a capital account surplus of $100 billion exists, and that the statistical discrepancy is $75 billion. What is the state of the financial account?
2. Suppose that the net international investment position of the U.S. at the start of 2014 is negative $7.1 trillion. Also assume that a current account surplus of $400 billion exists at the end of 2014. What is the net international investment position at the end of 2014?
3. Suppose that the Yen-U.S. dollar exchange rate is ¥110/$. What is the U.S. dollar-Yen exchange rate in this case?
4. Suppose that the Yen-U.S. dollar exchange rate changes from ¥110/$ to ¥115/$. What has happened to the foreign exchange value of the U.S. dollar? What has happened to the foreign exchange value of the Yen?
5. Suppose that incomes rise significantly in India with conditions in the rest of the world remaining roughly the same. What will happen in the U.S. dollar-rupee market? Analyze the situation using the three steps shown in Figures 20.9 and 20.10. Only consider the sellers’ side of the market for rupees in your answer.
6. Suppose that interest rates fall in Europe with conditions in the rest of the world remaining roughly the same. What will happen in the U.S. dollar-Euro market? Analyze the situation using the three steps shown in Figures 20.9 and 20.10. Only consider the buyers’ side of the market for Euros in your answer.
7. Suppose that an automobile costs $12,000 in the United States and ¥1,300,000 in Japan. If the Yen-U.S. dollar exchange rate is ¥100/$, then does purchasing power parity hold in this case? If not, then what needs to happen to the dollar (appreciate or depreciate) for purchasing power parity to hold?
8. Suppose that the MELT in the United States is $22 per hour of socially necessary abstract labor time (SNALT) and the MELT in Russia is 0.3793 rubles per hour of SNALT. What is the foreign exchange value of the ruble in this case?
1. See Bureau of Economic Analysis, U.S. International Transactions Accounts Data, at http://www.bea.gov.
2. See Bureau of Economic Analysis, U.S. International Transactions Accounts Data, at http://www.bea.gov.
3. The loose inspiration for this brief section stems from Melvin (2000), p. 1-6.
4. The explanations of the shapes of the demand and supply curves of foreign exchange are found in many introductory neoclassical textbooks. For example, see Bade and Parkin (2013), p. 880-884, and McConnell and Brue (2008), p. 700-701.
5. Bade and Parkin (2013), p. 880-884, refer to the expected profit effect when discussing this effect in their discussion of the derivation of the demand and supply curves in the foreign exchange market.
6. Lists of the factors discussed in this section (that affect both the supply and demand sides of the market) are commonly found in neoclassical textbooks. For example, McConnell and Brue (2008), p. 701-703, analyze the same factors. Carbaugh (2011), p. 407-421, analyzes the same factors except that he emphasizes productivity differences rather than income differences. Carbaugh (2011), p. 410, also mentions trade barriers as a factor.
7. As Samuelson and Nordhaus (2001), p. 622, explain, movements of exchange rates serve as “a balance wheel to remove disequilibria in the balance of payments.”
8. Stiglitz (2002), p. 89.
9. Ibid. p. 94-95.
10. See Hunt (2002), p. 351.
11. Ibid. p. 351-356.
12. Hunt (2002), p. 348-351, provides an overview of this period.
13. Ibid. p. 351.
14. Ibid. p. 352.
15. Ibid. p. 352.
16. Ibid. p. 352.
17. Ibid. p. 353.
18. Ibid. p. 354.
19. Some of these elements are given greater emphasis in Rosa Luxemburg’s account of imperialism. See Hunt (2002), p. 361.
20. Ibid. p. 361.
21. Ibid. p. 359.
22. Ibid. p. 361.
23. Hobson argues that finance “manipulates the patriotic forces” of the population. See Hunt (2002), p. 354.
24. Ibid. p. 356.
25. Rajarshi, Mitra. “Japanese yen versus the US dollar.” Gulf News. Dubai. 07 Aug. 2019.
• 1.1: Introduction to the Study of Economics
Food and agricultural markets are in the news and on social media every day. Numerous fascinating and complex issues are the subject of this course: food prices, food safety, diet and nutrition, agricultural policy, globalization, immigration, agricultural labor markets, obesity, use of antibiotics and hormones in meat production, hog confinement, and many more. As we work through the course material this semester, please find examples of the economics of food and agriculture in the news.
• 1.2: Supply and Demand
The study of markets is a powerful, informative, and useful method for understanding the world around us, and interpreting economic events. The use of supply and demand allows us to understand how the world works, how changes in economic conditions affect prices and production, and how government policies and programs affect prices, producers, and consumers. A huge number of diverse and interesting issues can be usefully analyzed using supply and demand.
• 1.3: Markets - Supply and Demand
The market mechanism is a useful and powerful analytical tool. The market model can be used to explain and forecast movements in prices and quantities of goods and services. The market impacts of current events, government programs and policies, and technological changes can all be evaluated and understood using supply and demand analysis. Markets are the foundation of all economics!
• 1.4: Welfare Economics - Consumer and Producer Surplus
• 1.5: The Motivation for and Consequences of Free Trade
Thumbnail: Charging Bull, a bronze statue by Arturo Di Modica at Bowling Green, Manhattan, New York City. Image used with permission (CC BY-SA 2.0; Aseba).
01: Introduction to Economics
Economics is Important and Interesting!
The Economics of food and agriculture is important and interesting! Food and agricultural markets are in the news and on social media every day. Numerous fascinating and complex issues are the subject of this course: food prices, food safety, diet and nutrition, agricultural policy, globalization, immigration, agricultural labor markets, obesity, use of antibiotics and hormones in meat production, hog confinement, and many more. As we work through the course material this semester, please find examples of the economics of food and agriculture in the news. Application of economic principles to food and agricultural issues in real time will enhance the relevance, timeliness, and importance of learning economics.
Scarcity
Economics can be defined as, “the study of choice.” The concept of scarcity is the foundation of economics. Scarcity reflects the human condition: fixed resources and unlimited wants, needs, and desires.
Scarcity = Unlimited wants and needs, together with fixed resources.
Since we have unlimited desires, and only a fixed amount of resources available to meet those desires, we can’t have everything that we want. Thus, scarcity forces us to choose: we can’t have everything. Since scarcity forces us to choose, and economics is the study of choice, scarcity is the fundamental concept of all economics. If there were no scarcity, there would be no need to choose between alternatives, and no economics!
Microeconomics and Macroeconomics
The subject of economics is divided into two major categories: microeconomics and macroeconomics.
Microeconomics = The study of individual decision-making units, such as firms and households.
Macroeconomics = The study of economy-wide aggregates, such as inflation, unemployment, economic growth, and international trade.
This course studies microeconomics, the investigation of firm and household decision making. Our basic assumption is that firms desire to maximize profits, and households seek to maximize utility, also called satisfaction.
Economic Models and Theories
The real world is enormously complex. Think of how complicated your daily life is: just waking up and getting ready for class has a huge number of possible complications! Since our world is complicated, we must simplify the real world to understand it. A Model is a simplified representation of the world, not intended to be realistic.
Model = A theoretical construct, or representation of a system using symbols, such as a flow chart, schematic, or equation.
We frequently use models in physical sciences such as biology, chemistry, and physics. Think of the model of an atom, with the atomic particles: neutrons, protons, and electrons. No one has ever seen an atom, but there is significant evidence for this model. It is easy to be critical of economic models, since we are in many cases more familiar with economic events than with scientific observations. When we simplify supply and demand into a model, we can think of many oversimplifications and limitations of the theory… the real world is complicated. However, this is how all science works: we must simplify the complex real world in order to understand it.
The Scientific Method
Our economic models are built and used following the Scientific Method.
Scientific Method = A body of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge.
The major characteristic of the scientific method is to use measurable evidence to support or detract from a given model or theory. Following this method, economists will keep a theory as long as evidence backs it up. If the evidence does not support the model, the theory will be modified or replaced. Science, or knowledge, advances in this imperfect manner. To repeat, “We have to simplify the real world in order to understand it.” Science is limited, and the human condition continues to be one of imperfect knowledge, finite lives, and an enduring search for solutions to poverty, pain, and suffering.
Positive Economics and Normative Economics
As social scientists, economists seek to be unbiased and objective in their study of the world. Economists have developed two terms to separate factual statements from value judgments, or opinions.
Positive Economics = Statements that include only factual information, with no value judgments. “What is.”
Normative Economics = Statements that include value judgments, or opinions. “What ought to be.”
In our study of food and agriculture, we will strive to purge our discussions, analysis, and understanding from opinions and value judgments. Our background and experience can make this challenging. For example, a corn producer might say, “The price of corn is higher, which is a good thing.” But, the buyer of the corn, a livestock feedlot operator, might see things differently. All price changes have winners and losers, so economists try to avoid describing price movements in terms of “good” or “bad.”
Economists who study food and agriculture seek to be neutral, unbiased, and professional in their work. This can be challenging at times, when we present our findings and observations to individuals or groups who may not like the outcomes. For example, an economist might be asked to study organic, natural, or local foods and report the results to farmers and ranchers who produce conventional food products. Economists could be asked to study and report Chipotle’s impact on the demand for beef, or the profit margins on cage-free eggs. Although some individuals may not like the results of these studies, economists try to be unbiased and objective in reporting their scientific work.
The study of markets is a powerful, informative, and useful method for understanding the world around us, and interpreting economic events. The use of supply and demand allows us to understand how the world works, how changes in economic conditions affect prices and production, and how government policies and programs affect prices, producers, and consumers. A huge number of diverse and interesting issues can be usefully analyzed using supply and demand.
Supply
The Supply of a good represents the behavior of firms, or producers. Supply refers to how much of a good will be produced at a given price.
Supply = The relationship between the price of a good and quantity supplied, ceteris paribus.
Notice the important term, “ceteris paribus” at the end of the definition of supply. Recall the complexity of the real world, and how economists must simplify the world to understand it. Use of the concept, ceteris paribus, allows us to understand the supply of a good. In the real world, there are numerous forces affecting the supply of a good: weather, prices, input prices, just to name a few.
Ceteris Paribus = Holding all else constant (Latin).
When studying supply, we seek to isolate the relationship between the price and quantity supplied of a good. We must hold everything else constant (ceteris paribus) to make sure that the other supply determinants are not causing changes in supply. An example is the supply of organic cotton. Patagonia spearheaded the movement into using organic cotton in the production of clothing. Nike and other clothing manufacturers are increasing organic clothing production to meet the growing demand for this good. Interestingly, conventional (non-organic) cotton is the most chemical-intensive field crop, and can result in agricultural chemical runoff in the soil and groundwater. A small but committed group of consumers is willing to pay high premiums for clothing made with organic cotton, to reduce the potential environmental damage from agricultural chemicals used in cotton production. Notice that this graph has two items on each axis: (1) a label, and (2) units. Every graph drawn must have both labels and units on each axis to effectively communicate what the graph is about.
The supply curve seen in Figure $1$ is a market supply curve, as it represents the entire market of organic cotton (note that cotton is sold in bales). The market supply curve was derived by horizontal summation all of the individual firm supply curves. This is indicated by the notation $Q^s = ΣMC_i$ in Figure $1$. The individual firm supply curve is the firm’s marginal cost curve $(MC)$ for all prices above the shut down point, and equal to zero for all prices below the shut down point. The shut down point is the minimum point on the firm’s average variable cost curve $(AVC)$, as shown in Figure $2$
Since the market supply curve is the sum of all of the individual firms’ marginal cost curves $(ΣMC_i)$, the market supply curve represents the cost of production: the total amount that a business firm must pay to produce a given quantity of a good.
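The horizontal summation described above can be illustrated with a short sketch. The two firms’ marginal cost curves and shutdown prices below are hypothetical assumptions; the point is only that, at any given price, the market quantity supplied is the sum of the quantities supplied by the firms whose price is at or above their shutdown points.

# Each firm is summarized by a linear marginal cost curve MC = a + b*q and a
# shutdown price (the minimum of its average variable cost curve); all values are hypothetical.
firms = [
    {"a": 2.0, "b": 0.5, "shutdown_price": 4.0},
    {"a": 1.0, "b": 1.0, "shutdown_price": 3.0},
]

def firm_quantity(firm, price):
    # Below the shutdown point the firm supplies nothing; at or above it, the firm
    # supplies the quantity where price equals marginal cost: q = (P - a) / b.
    if price < firm["shutdown_price"]:
        return 0.0
    return (price - firm["a"]) / firm["b"]

def market_quantity(price):
    # The market supply curve is the horizontal sum of the individual firm supply curves.
    return sum(firm_quantity(f, price) for f in firms)

for p in [2.0, 3.5, 5.0, 8.0]:
    print(p, market_quantity(p))  # market quantity supplied rises with the price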
There are three properties of a market supply curve.
Properties of Supply
1. Upward-sloping: if price increases, quantity supplied increases,
2. $Q^s= f(P)$, and
3. Ceteris Paribus, Latin for “holding all else constant.”
The first property reflects the Law of Supply, which states that there is a direct relationship between price and quantity supplied.
Law of Supply = There is a direct, positive relationship between the price of a good and the quantity supplied, ceteris paribus.
The second property demonstrates that price $(P)$ is the independent variable, and quantity supplied $(Q^s)$ is the dependent variable. Graphs of supply and demand are drawn “backward” with the independent variable $(P)$ on the vertical axis. In all other fields of mathematics and science, when a function such as $y=f(x)$ is graphed, the independent variable $(x)$ appears on the horizontal axis, and the dependent variable $(y)$ is drawn on the vertical axis. Supply and demand graphs are drawn, “backwards” due to economist Alfred Marshall, who drew the original supply and demand graphs this way in his Principles of Economics book in 1890. The third property reflects the need to simplify all of the determinants of supply to isolate the relationship between price and quantity supplied, using the ceteris paribus assumption.
The Determinants of Supply
There are numerous determinants of supply, so we will focus on five important ones. The most important supply determinant, or driver, is price $(P)$. Other determinants include input prices $(Pi)$, the prices of related goods $(Pr)$, technology $(T)$, and government taxes and subsidies $(G)$.
$Q^s = f(P, Pi, Pr, T, G) \label{1.1}$
To draw a supply curve, we focus on the most important determinant of supply: the good’s own price. We hold all of the other determinants constant. To show this in equation form, we use a vertical bar to designate ceteris paribus: all variables that appear to the right of the vertical bar are held constant. Equation \ref{1.2} shows the relationship between quantity supplied and price, holding all else constant. This relationship is the market supply curve in Figure $1$ and in supply and demand graphs.
$Q^s = f(P| Pi, Pr, T, G) \label{1.2}$
Input prices $(Pi)$ are important determinants of supply, since the supply curve represents the cost of production. Prices of related goods $(Pr)$ represent prices of substitutes and complements in production. Substitutes in production are goods that are produced either/or, such as corn and soybeans. One land parcel can be used to grow either corn or soybeans. Complements in production are goods that are produced together in a fixed ratio. Beef and leather are complements in production. Technology $(T)$ is major driver of supply, as new methods and techniques become available, they increase the amount of food produced. Technological change allows more output to be produced with the same level of inputs. Restated, the same level of output can be produced with fewer inputs. Government policies and programs $(G)$ can shift the supply of a good through taxes or subsidies.
Movements Along vs. Shifts In Supply
The supply curve represents the mathematical relationship between the price and quantity supplied of a good. Therefore, when a good’s own price changes, it is as a movement along the supply curve. When any of the other supply curve determinants change, it will shift the entire curve.
A movement along a supply curve, caused by a change in the good’s own price, is called a change in quantity supplied (left panel, Figure $3$). A shift in the supply curve, caused by a change in any supply determinant other than the good’s own price, is called a change in supply (right panel, Figure $3$). The change in supply shown in Figure $3$ is an increase in supply, since it increases the quantity supplied at any given price.
Notice that the supply curve has shifted down, yet this represents an increase in supply. The supply change is measured on the horizontal axis, so a movement from left to right represents an increase in supply. The shift shown could be the impact of technological change on organic cotton supply: suppose that biotechnology allows for higher yielding varieties of organic cotton.
Demand
The Demand of a good represents the behavior of households, or consumers. Demand refers to how much of a good will be purchased at a given price.
Demand = The relationship between the price of a good and quantity demanded, ceteris paribus.
Figure $4$ shows the market demand curve for beef $(Q^d)$, derived by the horizontal summation of all individual consumers’ demand curves $(Σq_i)$. Note that beef is measured in units of one hundred pounds, or a “hundredweight” (cwt).
Demand represents the willingness and ability of consumers to purchase a good. As with supply, there are three properties of demand.
Properties of Demand
1. Downward-sloping: if price increases, quantity demanded decreases,
2. $Q^d= f(P)$, and
3. Ceteris Paribus, Latin for holding all else constant.
The first property reflects the Law of Demand, which states that if the price of a good increases, the quantity demanded of that good decreases, holding all else constant.
Law of Demand = There is an inverse relationship between the price of a good and the quantity demanded, ceteris paribus.
The Law of Demand is one of the major “take home messages” of economic principles. Price increases lead to smaller quantities of goods purchased. The Law of Demand does not say that all consumers will stop buying a good, it says that at least some consumers will decrease consumption of the good. The magnitude of the decrease will depend on the price elasticity of demand for the good, as will be discussed in Section 1.4 below.
The Determinants of Demand
There are numerous demand shifters, or determinants of demand. Six of the most important determinants are included in the demand equation in Equation \ref{1.3}. The good’s own price $(P)$ is the most important determinant. Demand is also influenced by: the price of related goods $(Pr)$, futures prices $(Pf)$, income $(I)$, tastes and preferences $(T)$, and government programs and policies $(G)$.
$Q^d = f(P, Pr, Pf, I, T, G) \label{1.3}$
Related goods include substitutes and complements in consumption. Substitutes in consumption are goods that are purchased either/or, such as hot dogs and hamburgers. If the price of hot dogs increases, at least some consumers will shift out of hot dogs and into hamburgers. Complements in consumption are goods that are consumed together, for example hot dogs and hot dog buns. If the price of hot dogs increases, consumers will purchased fewer hot dogs and fewer buns.
Expectations of future prices $(Pf)$ have a large influence on consumption decisions today. If the price of corn was expected to increase in the future, corn demand would increase today, as corn buyers would seek to buy prior to the price increase. This would allow traders to “buy low and sell high,” providing profit from arbitrage across time.
Income $(I)$ can have a large impact on purchase decisions. Cars, houses, and other expensive items will be affected by changes in income. Inexpensive items such as used clothes or ramen noodles are also influenced greatly by income changes. During the great recession of 2008-2010, Walmart had high profit levels, while boat manufacturers and country clubs lost profits due to significant decreases in income.
Tastes and preferences $(T)$ shift the demand for goods and services based on the diverse wants, needs, and desires of consumers in the market. Taxes and subsidies, as well as other government programs, policies, and regulations $(G)$ influence demand, sometimes significantly. Government programs and policies will be explored in Sections 1.4 through 1.6, and in Chapter 2 below.
To draw a demand curve, the most important determinant of demand is isolated: the good’s own price. We hold all of the other determinants constant, ceteris paribus.
$Q^d = f(P | Pr, Pf, I, T, G) \label{1.4}$
Movements Along vs. Shifts In Demand
The demand curve represents the mathematical relationship between the price and quantity demanded of a good. Therefore, when a good’s own price changes, it is depicted as a movement along the demand curve. When any of the other demand curve determinants change, it will shift the entire curve.
As with supply, if the good’s own price changes, it results in a movement along the demand curve, called a change in quantity demanded. If any other demand determinant changes, it causes a shift in demand, called a change in demand. The shift shown in the right panel of Figure $5$ is an increase in demand, since the demand curve has shifted upward and to the right.
Supply and demand form the foundation for the study of markets. Markets are defined as the interaction of supply and demand. Market analysis is the core concept and foundation of all of economics, and will be explored in the next section.
In the previous section, supply and demand were introduced and explored separately. In what follows, the interaction of supply and demand will be presented. The market mechanism is a useful and powerful analytical tool. The market model can be used to explain and forecast movements in prices and quantities of goods and services. The market impacts of current events, government programs and policies, and technological changes can all be evaluated and understood using supply and demand analysis. Markets are the foundation of all economics!
A market equilibrium can be found at the intersection of supply and demand curves, as illustrated for the wheat market in Figure $1$. An equilibrium is defined as, “a point from which there is no tendency to change.” Wheat is traded in units of metric tons (MT), or 1000 kilograms, equal to approximately 2,204.6 pounds.
Equilibrium = a point from which there is no tendency to change.
Point $E$ is the only equilibrium in the wheat market shown in Figure $1$. At any other price, market forces would come into play, and bring the price back to the equilibrium market price, $P^*$. At any price higher than $P^*$, such as $P’$ in Figure $2$, producers would increase the quantity supplied to $Q_1$ million metric tons of wheat, and consumers would decrease the quantity demanded to $Q_0$ million metric tons of wheat. A surplus would result, since quantity supplied is greater than quantity demanded $(Q_1 > Q_0)$.
A wheat surplus such as the one shown in Figure $2$ would bring market forces into play since $Q^s \neq Q^d$. Wheat producers would lower the price of wheat in order to sell it. It would be preferable to earn a lower price than to let the surplus go unsold. Consumers would increase the quantity demanded along $Q^d$ and producers decrease the quantity supplied along $Q^s$ until the equilibrium point $E$ was reached. In this way, any price higher than the market equilibrium price will be temporary, as the resulting surplus will bring the price back down to the equilibrium price $P^*$.
Market forces also come into play at prices lower than the equilibrium market price, as shown in Figure $3$. At the lower price $P’’$, producers reduce the quantity supplied along $Q^s$ to $Q_0$, and consumers increase the quantity demanded to $Q_1$. A shortage occurs, since the quantity demanded is greater than the quantity supplied $(Q_1 > Q_0)$. The shortage will bring market forces into play, as consumers will bid up the price in order to purchase more wheat and producers will produce more wheat along $Q^s$. This process will continue until the market price returns to the equilibrium market price, $P^*$.
The market mechanism that results in an equilibrium price and quantity performs a truly amazing function in the economy. Markets are self-regulating, since no government intervention or coercion is needed to achieve desirable outcomes. If there is a drought, the price of wheat will rise, causing more resources to be devoted to wheat production, which is desirable, since wheat is in short supply during a drought. If good weather causes a surplus, the price will fall, causing wheat producers to shift resources out of wheat and into more profitable opportunities. In this fashion, the market mechanism allows voluntary trades between willing parties to allocate resources to the highest return. Efficiency of resource use and high incomes are a feature of market-based economies.
Although markets provide huge benefits to society, not everyone wins from free market economies, and market changes over time. Price increases help producers, but hurt consumers. Technological change has provided lowered food prices enormously over time, but has led to farm and ranch consolidation, and the large migration of farmers and their families out of rural regions and into urban areas.
The market graphs of supply and demand are based on the assumption of perfectly competitive markets. Perfect competition is an ideal state, different from actual market conditions in the real world. Once again, economists simplify the complex real world in order to understand it. We will begin with the extreme pure case of perfect competition, and later introduce realism into our analysis.
Competitive Market Properties
A competitive market has four properties:
1. homogeneous product,
2. numerous buyers and sellers,
3. freedom of entry and exit, and
4. perfect information.
The first property of perfect competition is a homogeneous product. This means that the consumer can not distinguish any differences in the good, no matter which firm produced it. Wheat is an example, as it is not possible to determine which farmer produced the wheat. A John Deere tractor is an example of a nonhomogeneous good, since the brand is displayed on the machine, not to mention the company’s well known green paint and deer logo.
The assumption of numerous buyers and sellers means something specific. The word, “numerous” refers to an industry so large that each individual firm can not affect the price. Each firm is so small relative to the industry that it is a price taker.
Freedom of entry and exit means that there are no legal, financial, or regulatory barriers to entering the market. A wheat market allows anyone to produce and sell wheat. Attorneys and physicians, however, do not have freedom of entry. To practice law or medicine, a license is required.
Perfect information is an assumption about industries where all firms have access to information about all input and output prices, and all technologies. There are no trade secrets or patented technologies in a perfectly competitive industry. These four properties of perfect competition are stringent, and do not reflect real-world industries and markets. Our study of market structures in this course will examine each of these properties, and use them to define industries where these properties do not hold. Competitive markets have a number of attractive properties.
Outcomes of Competitive Markets
Competitive markets result in desirable outcomes for economies. A competitive market maximizes social welfare, or the total amount of well-being in a market. Competitive markets use voluntary exchange, or mutually beneficial trades, to achieve this result. In a market-based economy, no one is forced, or coerced, to do anything that they do not want to do. In this way, all trades are mutually beneficial: a producer or consumer would never make a trade unless it made him or her better off. This idea will be a theme throughout this course: free markets and free trade lead to superior economic outcomes.
It should be emphasized that free markets and free trade are not perfect, since there are negative features associated with markets and capitalism. Income inequality is an example. Markets do not solve all of society’s problems, but they do create conditions for higher levels of income and wealth than other economic organizations, such as a command economy (as found in a communist or fascist nation). There are winners and losers to market changes. An example is free trade. Free trade lowers prices for consumers, but often causes hardships for producers in importing nations. Similarly, open borders allow immigrants to improve their conditions and earnings by moving from low-income nations to high-income nations such as the United States (US) or the European Union (EU). Workers in the US and the EU will face competition from a larger labor supply, causing reductions in wages and salaries. A simple example of markets is an increase in the price of corn. Corn producers are made better off, but livestock producers, the major buyers of corn, are made worse off. Thus, the market shifts that allow prosperity also create winners and losers in a free market economy.
Supply and Demand Shift Examples
Given our knowledge of markets and the market mechanism, current events and policies can be better understood.
Demand Increase
China was a command economy until 1986. At that time, the government introduced the Household Responsibility System, which allowed farmers to earn income based on how much agricultural output they produced. The new policy worked very well, and China moved from being a net food importer to a net food exporter. Soon, the policy was extended to all industries, and China was on its way to a market-based economy. The result has been a truly unprecedented increase in income. China has gone from a low income nation to a middle income nation, and the rates of economic growth are higher than any nation in history. And, these growth rates are for the world’s most populous nation: 1.4 billion people (for comparison, the United States (USA) has approximately 326 million people).
This historical income growth in China has been good for US farmers and ranchers. As incomes increase, consumers shift out of grain-based diets such as rice and wheat, and into meat. There has been a large increase in beef consumption in China as incomes increased. This is an increase in the demand for US beef, as shown in Figure $4$. The units for beef are hundredweight (cwt), or one hundred pounds. This is called an increase in demand (do you remember why this is not an increase in quantity demanded?). The outward shift in demand results in a movement from equilibrium $E_0$ to $E_1$. The movement along the supply curve for beef is called an increase in quantity supplied. The equilibrium market price increases from $P_0$ to $P_1$, and the equilibrium market quantity increases from $Q_0$ to $Q_1$. An increase in demand results in higher prices and higher quantities. As a result, the best way to increase profitability for a firm is to increase demand.
Interestingly, income growth in China is beneficial to not only US beef producers, who face an increased demand for beef, but also for grain farmers in the USA. The major input into the production of beef is corn, sorghum (also called milo), and soybeans. These grains are fed to cattle in feedlots. Seven pounds of grain are required to produce one pound of beef. Therefore, any increase in the global demand for beef will result in an increase in demand for beef, and a large increase in the demand for feed grains.
Demand Decrease
In the United States, the demand for beef offals (tripe, tongue, heart, liver, etc.) has decreased in the past few decades. As incomes increase, consumers shift out of these goods and into more expensive meat products such as hamburger and steaks. The demand for offals has decreased as a result, as in Figure $5$. This is a decrease in demand (shift inward), and a decrease in quantity supplied (movement along the supply curve). The outcome is a decrease in the equilibrium market price and quantity of beef offals.
Supply Decrease
A large share of citrus fruit in the US is grown in Florida and California. If there is bad weather in either State, the market for oranges, lemons, limes, and grapefruit is affected. An early freeze can damage the citrus fruit, resulting in a decrease in supply (Figure $6$).
The supply decrease is a shift in the supply curve to the left, resulting in a movement along the demand curve: a decrease in quantity demanded. The equilibrium price increases, and the equilibrium quantity decreases.
Supply Increase
Technological change is a constant in global agriculture. Science and technology has provided more output from the same levels of inputs for many decades, and especially since 1950. Biotechnology in field crops has been a recent enhancement in the world food supply. Biotechnology is also referred to as genetically modified organisms (GMOs). Although GMOs are often in the news media as a potential health risk or environmental risk, they have been produced and consumed in the US for many years, with no documented health issues. However, the herbicide glyphosate has been determined to be a carcinogen in recent studies. Glyphosate is the ingredient in “RoundUp,” a widely used herbicide in corn and soybean production. Genetically modified corn and soybeans are resistant to this herbicide, so it has been used extensively since the introduction of GM crops. Biotechnology has increased the availability of food enormously, and is considered the largest technological change in the history of agriculture. The impact of biotechnology is shown in Figure $7$.
Biotechnology results in an increase in supply, the rightward shift in the supply curve. This supply shift results in a movement along the demand curve, an increase in quantity demanded. The equilibrium quantity increases, and the equilibrium price decreases. It may seem that the decrease in price is bad for corn producers. However, in a global economy, this keeps the US competitive in global grain markets. Since a large fraction of US grain crops are exported, this provides additional income to the corn industry.
Mathematics of Supply and Demand
The above market analyses are qualitative, or non-numerical. Numbers can be added to the supply and demand graphs to provide quantitative results. The numbers used here are simple, but can be replaced with actual estimates of supply and demand to yield important and interesting quantitative results to market events.
As an example, consider the phone market. Let the inverse demand for phones be given by Equation \ref{1.5}. The equation is called, “inverse” because the independent variable $(P)$ appears on the left-hand side and the dependent variable $(Q^d)$ appears on the right hand side. Traditionally, the independent variable $(x)$ is on the right, and the dependent variable $(y)$ is on the left. We use inverse supply and demand equations for easier graphing, since $P$ is on the vertical axis, typically used for the dependent variable (can you remember why these graphs are backwards?).
$P = 100 – 2Q^d \label{1.5}$
In the inverse demand equation, $P$ is the price of phones in USD/unit, and $Q$ is the quantity of phones in millions. The inverse supply equation is given in Equation \ref{1.6}.
$P = 20 + 2Q^s\label{1.6}$
These examples of inverse supply and demand functions are called “price-dependent” for ease of graphing. The equations can be quickly and easily inverted to “quantity-dependent” form. To do this, use simple algebra to isolate $Q^d$ or $Q^s$ on the left-hand side of the equations.
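For example, applying this algebra to Equations \ref{1.5} and \ref{1.6} gives the quantity-dependent forms:

$Q^d = 50 – 0.5P \quad \text{and} \quad Q^s = 0.5P – 10.\nonumber$

Both forms contain the same information; the inverse (price-dependent) versions are simply more convenient for graphing.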
To find equilibrium, set $Q^s = Q^d = Q^e$. This is the point where the market “clears,” and supply is equal to demand. By inspection of the market graph (Figure $8$), there is only one price where this can occur: the equilibrium price: $P^e$.
\begin{align*} P &= 100 – 2Q^e = 20 + 2Q^e\\ 80 &= 4Q^e\\ Q^e &= 20 \end{align*}
To find the equilibrium price, plug $Q^e$ into the inverse demand equation:
$P^e = 100 – 2Q^e = 100 – 2*20 = 100 – 40 = 60.$
This result can be checked by plugging $Q^e = 20$ into the inverse supply equation:
$P^e = 20 + 2Q^e = 20 + 2*20 = 20 + 40 = 60$
The equilibrium price and quantity of phones are:
$P^e = \text{ USD } 60\text{/phone, and } Q^e = 20 \text{ million phones.}$
Notice that these equilibrium values have both labels (phones) and units.
We will be using quantitative market analysis throughout the rest of the course. If you have any questions about how to graph the functions, or how to solve for equilibrium price and quantity, be sure to review the material in this chapter carefully. We will be using these graphs throughout our study of market structures!
Introduction to Elasticities
An elasticity is a measure of responsiveness.
Elasticity = How responsive one variable is to a change in another variable.
An elasticity $(E)$, or responsiveness, is measured by the percentage change of each variable. The change in a variable is the ending value $(X_1)$ minus the initial value $(X_0)$, or $ΔX = X_1 – X_0$. A percentage change in a variable is defined as the change in the variable divided by the initial value of the variable:
$\%ΔX = \dfrac{ΔX}{X_0}.$
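For example, if the price of a good rises from USD 4 to USD 5, the change is $ΔP = 1$ and the percentage change is $\%ΔP = \dfrac{1}{4} = 0.25$, or 25 percent.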
Using this formula, Equation \ref{1.7} shows the responsiveness of $Y$ to a change in $X$:
$E = \frac{\%ΔY}{\%ΔX} = \frac{(ΔY/Y)}{(ΔX/X)} = \left(\frac{ΔY}{ΔX}\right)\left(\frac{X}{Y}\right). \label{1.7}$
Elasticities can be calculated for any two variables. Elasticities are widely used in economics to measure how responsive producers and consumers are to changes in prices, income, and other economic variables. Elasticities have a very desirable property: they do not have units. Since the two variables are measured in percentage changes, the units of each variable are cancelled, and the resulting elasticity has no units. This allows elasticities to be compared to each other, when prices and quantities cannot be directly compared. For example, the quantity of apples cannot be directly compared to the quantity of orange juice, since they are in different units. However, the elasticities of apples and orange juice can be compared directly, since there are no units for elasticities.
Own Price Elasticity of Demand: Ed
The own-price elasticity of demand (most often called simply the “price elasticity of demand” or the “elasticity of demand”) measures the responsiveness of consumers to a change in price, as shown in Equation \ref{1.8}:
$E_d = \frac{\%ΔQ^d}{\%ΔP} = \left(\frac{ΔQ^d}{ΔP}\right)\left(\frac{P}{Q^d}\right).\label{1.8}$
Own Price Elasticity of Demand = the percentage change in quantity demanded given a one percent change in the good’s own price, ceteris paribus.
The own-price elasticity of demand is the most important thing that a business firm can know. The price elasticity informs the business about how a change in price will affect the quantity demanded. If consumers are responsive to price changes, the firm may think twice before raising the price and losing customers to the competition. On the other hand, if consumers are relatively unresponsive to price changes, the firm may increase the price, and most customers will continue to purchase the good at the higher price. Food is an example of an inelastic good, since we all need to eat.
The price elasticity of demand $(E_d)$ depends on the availability of substitutes. If there are no substitutes for a good (food, toilet paper, toothpaste), the good is called, “price inelastic.” Consumers will purchase the good even at a high price. If substitutes are available, the good is considered to be “price elastic:” a higher price will cause customers to decrease consumption of the good by buying the substitute good. Green shirts are an example: if the price of green shirts is increased, consumers will shift purchases to blue shirts, or shirts of a different color.
The price elasticity of demand is the most critical piece of information for a business firm, since it describes how its customers behave. Knowledge of the price elasticity of demand tells a business firm how consumers would react to price changes, allowing the firm to identify the profit-maximizing price to charge consumers.
Price Elasticity of Demand Example
Suppose that the price of wheat is equal to USD 4/bu of wheat, and increases to USD 6/bu. Due to the higher price, suppose that wheat millers reduce their purchases of wheat from 10 million bushels (m bu) to 8 million bushels. The price elasticity of demand for wheat can be calculated using Equation \ref{1.9}. By convention, the initial values of $P$ and $Q^d$ are used in the elasticity calculation for the variables $P$ and $Q^d$.
$E_d = \frac{\%ΔQ^d}{\%ΔP} = \left(\frac{ΔQ^d}{ΔP}\right)\cdot\left(\frac{P}{Q^d}\right) = \left(\frac{\text{8-10 m bu}}{\text{6-4 USD/bu}}\right)\cdot\left(\frac{\text{4 USD/bu}}{\text{10 m bu}}\right)\label{1.9}$
Notice that the units cancel: there are (m bu) in both the numerator and denominator, and (USD/bu) also appears in both numerator and denominator. This allows the math to be greatly simplified:
$E_d = \left(\frac{-2}{2}\right)\cdot\left(\frac{4}{10}\right) = (-1)(0.4) = -0.4$
The price elasticity of demand is always negative, due to the Law of Demand. By convention, economists take the absolute value to make $E_d$ positive. In this case, $E_d = -0.4$, so $\mid E_d \mid = 0.4$. The own price elasticity of demand provides important information about the wheat market: how responsive wheat buyers are to a change in price. To interpret the elasticity: for a one percent increase in price, the quantity demanded of wheat will decrease by 0.4 percent. This is a relatively inelastic response, since the change in quantity demanded is smaller than the price change.
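As a check on this calculation, the same answer can be obtained directly from the percentage changes themselves:

\begin{align*} \%ΔQ^d &= \frac{8 – 10}{10} = -0.20 = -20 \text{ percent}\\ \%ΔP &= \frac{6 – 4}{4} = +0.50 = +50 \text{ percent}\\ E_d &= \frac{-20}{+50} = -0.4 \end{align*}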
Elasticities are classified into three categories, based on consumer responsiveness to a one percent change in price.
\begin{align*} &\text{Price elastic } & \mid E_d \mid &> 1\\ &\text{Price inelastic} & \mid E_d \mid &< 1\\ &\text{Unitary elastic} & \mid E_d \mid &= 1\end{align*}
Goods that are price elastic have substitutes available, and the percentage change in quantity demanded is larger in absolute value than the percentage change in price $(\%ΔQ^d > \%ΔP$, therefore $\mid E_d\mid > 1)$. A price inelastic good, on the other hand, has a smaller percentage change in quantity demanded than the percentage change in price $(\%ΔQ^d < \%ΔP$, therefore $\mid E_d \mid < 1)$. For unitary elastic goods, the percentage change in quantity demanded is equal to the percentage change in price $(\%ΔQ^d = \%ΔP$, therefore $\mid E_d\mid = 1)$.
Elastic and Inelastic Demand Examples
To compare elastic and inelastic demands, think of a student who would like to purchase a pack of cigarettes during a late night study session for an exam. If the student arrives at the convenience store to find that the price of Marlboros, her usual brand, has doubled, she could switch to many other brands: Lucky Strikes, Winstons, etc. The demand for Marlboro cigarettes is price elastic (left panel, Figure $1$). The price elasticity of demand depends on the availability of substitutes. An elastic demand will have a relatively flat slope, since a small change in price results in a relatively larger change in quantity demanded.
On the other hand, if the convenience store increases the price of all cigarettes, the student will pay for a pack, since there are no substitutes for all cigarettes (right panel, Figure $1$). More narrowly defined goods will have larger absolute values of own price elasticities, since there are more substitutes for narrowly defined goods. For example, apples are more price elastic than all fruit, and green shirts are more price elastic than all shirts. An inelastic good will have a steep slope, since the change in quantity demanded is small relative to the change in price.
Figure $2$ shows a range of own price elasticities, from perfectly inelastic to perfectly elastic.
A good that is perfectly inelastic is one that consumers purchase no matter what the price is. Within a certain range of prices, this could be food or electricity. In this case, quantity demanded is completely unresponsive to changes in price: $\mid E_d\mid = 0$. An inelastic demand is one where the percentage change in price is larger than the percentage change in quantity demanded: $\%ΔQ^d < \%ΔP$, and $\mid E_d\mid < 1$. Goods that are price inelastic are characterized by consumers being unresponsive to price changes. Goods that are price elastic exhibit relatively high levels of consumer responsiveness to price movements. For elastic goods, the percentage change in quantity demanded is larger than the percentage change in price: $\%ΔQ^d > \%ΔP$, and $\mid E_d\mid > 1$. A perfectly elastic good is characterized by a horizontal demand curve. In this case, if the price of the good is increased even one cent, all customers decrease purchases of the good to zero. An individual wheat farmer’s crop is an example. If the farmer tries to raise the price by one cent more than the prevailing market price, no consumers would purchase her wheat. There are a large number of perfect substitutes available from other wheat farmers, so the price elasticity is infinite, and the good is called, “perfectly elastic.”
Own Price Elasticity of Supply: Es
Producer responsiveness to a change in price is measured with the own price elasticity of supply, often called the price elasticity of supply, or the elasticity of supply (Es). The formula for the price elasticity of supply is given in Equation \ref{1.10}:
$E_s = \frac{\%ΔQ^s}{\%ΔP}. \label{1.10}$
Own Price Elasticity of Supply = the percentage change in quantity supplied given a one percent change in the good’s own price, ceteris paribus.
The own price elasticity of supply is always positive, because of the Law of Supply: there is a direct, positive relationship between the quantity supplied of a good and the good’s own price, ceteris paribus. Similar to the price elasticity of demand, the price elasticity of supply is categorized into three elasticity classifications.
\begin{align*} &\text{Price elastic } & E_s &> 1\\ &\text{Price inelastic} & E_s &< 1\\ &\text{Unitary elastic} & E_s &= 1\end{align*}
A good with an elastic supply is one where the percentage change in quantity supplied is greater than the percentage change in price: $\%ΔQ^s > \%ΔP$, and $E_s > 1$. Since $E_s$ is always positive, the absolute value is not necessary (redundant). A good with an inelastic supply has a smaller percentage change in quantity supplied, given a percent change in price: $\%ΔQ^s < \%ΔP$, and $E_s < 1$. A good with unitary elasticity of supply has equal percent changes in quantity supplied and price: $\%ΔQ^s = \%ΔP$, and $E_s = 1$. Figure $3$ illustrates the different categories of the own price elasticity of supply.
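As a simple numerical illustration (the numbers here are made up for this example), suppose the price of corn increases from USD 4/bu to USD 5/bu, a 25 percent increase, and producers respond by increasing the quantity supplied from 100 to 110 million bushels, a 10 percent increase. Then:

$E_s = \frac{\%ΔQ^s}{\%ΔP} = \frac{10}{25} = 0.4,\nonumber$

so the supply of corn would be price inelastic in this hypothetical case: the percentage response of producers is smaller than the percentage change in price.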
Income Elasticity: Ei
The income elasticity $(E_i)$ measures how consumers of a good respond to a one percent increase in income $(I)$, as shown in Equation \ref{1.11}:
$E_i = \frac{\%ΔQ^d}{\%ΔI}.\label{1.11}$
The income elasticity is defined in a similar way as the price elasticities.
Income Elasticity = the percentage change in demand given a one percent change in income, ceteris paribus.
Income elasticities are also categorized into responsiveness classifications. A normal good is one whose demand increases with an increase in income $(E_i > 0)$. There are two subcategories of normal goods: necessities and luxury goods. Notice that necessity goods and luxury goods are normal goods. They represent subgroups of the normal category, since $E_i$ is positive in both cases.
\begin{align*} &\text{Normal Good } & E_i &> 0\\ &\text{Necessity Good} & 0 &< E_i < 1\\ &\text{Luxury Good} & E_i &> 1\\ &\text{Inferior Good} & E_i &< 0\end{align*}
The graphs of the relationship between income and demand are called, “Engel Curves,” named for Ernst Engel (1821-1896), a German statistician who first investigated the impact of income on consumption.
A necessity good is a normal good that has a positive, but small, increase in demand given a one percent increase in income. Food is an example, since consumers increase the consumption of food with an increase in income, but the total amount of food consumed reaches an upper limit. This is shown in Figure $4$, left panel.
A luxury good is one whose demand increases more than proportionally as income increases, as shown in the right panel of Figure $4$. Goods such as boats, golf club memberships, and expensive clothing are examples of luxury goods. Inferior goods (Figure $5$) are characterized by lower levels of consumption as income increases: ramen noodles and used clothes are examples.
It is important to point out that a good can be a normal good at low income levels, and an inferior good at higher income levels. Hamburger (ground beef) is an example. At low levels of income, hamburger consumption might increase when income rises (Figure $6$). However, at higher levels of income, consumers might shift out of ground beef and into more expensive meats such as steak. Figure $6$ shows that the same good can be both a normal good and an inferior good, at different levels of income.
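A simple hypothetical calculation illustrates these categories. Suppose income increases by 10 percent, the demand for food increases by 2 percent, the demand for restaurant meals increases by 15 percent, and the demand for ramen noodles decreases by 5 percent. Then:

\begin{align*} E_i^{food} &= \frac{2}{10} = 0.2 \quad \text{(normal good: necessity)}\\ E_i^{restaurant} &= \frac{15}{10} = 1.5 \quad \text{(normal good: luxury)}\\ E_i^{ramen} &= \frac{-5}{10} = -0.5 \quad \text{(inferior good)} \end{align*}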
Cross Price Elasticity of Demand: Edxy
The cross price elasticity of demand measures the responsiveness of demand for one good with respect to a change in the price of another good.
$E_{dxy} = \frac{\%ΔQ^d_y}{\%ΔP_x}\label{1.12}$
Cross Price Elasticity of Demand = the percentage change in the demand of one good given a one percent change in a related good’s price, ceteris paribus.
The cross price elasticity is important for two categories of related goods: substitutes and complements in consumption. Substitutes in consumption will have a positive cross price elasticity of demand, since consumers will decrease purchases of the good that has the price increase, and buy more substitute goods. Complements in consumption are goods that are consumed together, like macaroni and cheese. If the price of macaroni increases, then consumption of both macaroni and cheese decreases.
\begin{align*} &\text{Substitutes in Consumption } & E_{dxy} &= \frac{\%ΔQ^d_y}{\%ΔP_x} > 0\\ &\text{Complements in Consumption } & E_{dxy} &= \frac{\%ΔQ^d_y}{\%ΔP_x} < 0\\ &\text{Unrelated Goods in Consumption } & E_{dxy} &= \frac{\%ΔQ^d_y}{\%ΔP_x} = 0 \end{align*}
Unrelated goods have a cross price elasticity of demand equal to zero. This is because a change in the price of a good has no effect on the quantity demanded of an unrelated good.
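For example, with hypothetical numbers: if a 10 percent increase in the price of beef $(P_x)$ increases the demand for chicken $(Q^d_y)$ by 4 percent, then $E_{dxy} = \dfrac{+4}{+10} = +0.4 > 0$, and beef and chicken are substitutes in consumption. If the same price increase decreases the demand for hamburger buns by 3 percent, then $E_{dxy} = \dfrac{-3}{+10} = -0.3 < 0$, and the two goods are complements in consumption.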
Cross Price Elasticity of Supply: Esxy
The cross price elasticity of supply captures the responsiveness of the supply of one good, given a change in the price of another good.
$E_{sxy} = \frac{\%ΔQ^s_y}{\%ΔP_x} \label{1.13}$
Cross Price Elasticity of Supply = the percentage change in the supply of one good given a one percent change in a related good’s price, ceteris paribus.
Substitutes in production are goods that are produced “either/or,” such as corn and soybeans. The same resources (land, machinery, labor, etc.) could be used to produce either corn or soybeans, but the two crops cannot be grown on the same land at the same time. The cross price elasticity of supply of substitutes in production is negative. If the price of corn increases, for example, then producers will devote more land to corn and less to soybeans.
\begin{align*} &\text{Substitutes in Production } & E_{sxy} &= \frac{\%ΔQ^s_y}{\%ΔP_x} < 0\\ &\text{Complements in Production } & E_{sxy} &= \frac{\%ΔQ^s_y}{\%ΔP_x} > 0\\ &\text{Unrelated Goods in Production } & E_{sxy} &= \frac{\%ΔQ^s_y}{\%ΔP_x} = 0 \end{align*}
Complements in production are goods that are produced together, such as beef and leather. Complements in production have a positive cross price elasticity: if the price of beef increases, both more beef and more leather will be supplied to the market. Unrelated goods in production have a cross price elasticity of supply equal to zero, since the price of an unrelated good has no impact on the supply of the other unrelated good.
Price Elasticities and Time
The magnitude of the price elasticity of supply measures how easy it is for the firm to adjust to price changes. In the immediate run (a short time period), the firm cannot adjust the production process, so the supply is typically perfectly inelastic. In the short run, a time period when some inputs are fixed and some inputs are variable, the firm may be able to adjust some inputs, so supply is inelastic, but not perfectly inelastic. In the long run, all inputs are variable, and the firm can make adjustments to the production process. In this case, supply is elastic. As more time passes, the price elasticity of supply increases.
This relationship also holds for the price elasticity of demand. If the price of a good increases in the immediate run, there is little consumers can do other than purchase the good. Air travelers who have an emergency that they need to attend to will pay a high price for an airline ticket on the same day of the flight. As time passes, there are more options available to the consumer, and demand becomes more elastic with the passage of time.
Elasticity of Demand along a Linear Demand Curve
Interestingly, the elasticity of demand changes along a linear demand curve. This is due to the calculation of the own price elasticity of demand as percentage change in quantity demanded caused by a percentage change in the price of the good. In Figure $7$, the slope of the demand function is constant: it does not change over the entire demand curve.
For example, suppose that the inverse demand function is given by: $P = 10 – Q^d$, where $P$ is the price of the good and $Q^d$ is the quantity demanded. In this case, the vertical intercept (y-intercept) is equal to 10, and the slope is equal to negative one. It should be emphasized that, in this case, the slope is constant and equal to minus one for the entire demand curve.
The elasticity of demand, however, changes in value quite dramatically from the y-intercept to the x-intercept. It changes from a value of zero on the x-axis to a value of negative infinity on the y-axis. The cause is the calculation of percentage change.
Consider an example of a mouse and an elephant. If both gain one pound, the weight gain is identical, but the percentage change is vastly different. Suppose that the mouse weighs one-tenth of a pound, the elephant weighs 10,000 pounds, and the total weight gain for both the mouse and the elephant is one pound. The percentage weight gain is $\%ΔWG = \dfrac{ΔWG}{WG_0}$, where $ΔWG$ is the change in weight, and $WG_0$ is the initial weight. For the mouse,
$\%ΔWG \text{ mouse } = \frac{ΔWG}{WG_0} = \frac{1 \text{ lb}}{0.1 \text{ lbs}} = 10 = 1000 \text{ percent!} \label{1.14}$
For the elephant,
$\%ΔWG \text{ elephant} = \frac{ΔWG}{WG_0} = \frac{1 \text{ lb}}{10,000 \text{ lbs}} = 0.0001 = 0.01 \text{ percent!} \label{1.15}$
The take home message of the story is that the total weight gain was identical for both the elephant and the mouse (one pound), whereas the percentage weight gain was enormously different.
This is also true of the elasticity of demand along the linear demand curve. Consider the point where the linear demand curve crosses the x-axis. At this point, the price is equal to zero. Suppose that we raised the price by one unit to find out how responsive consumers are to an increase in price. The price elasticity of demand is:
$E_d = \frac{\%ΔQ^d}{\%ΔP}.\label{1.16}$
At the x-intercept, the percentage change in price $(\%ΔP)$ is equal to $\dfrac{ΔP}{P} = \dfrac{1}{0} =$ infinity. The elasticity of demand is equal to the percentage change in quantity demanded $(\%ΔQ)$ divided by the percentage change in price $(\%ΔP =$ infinity). Thus, $E_d = 0$ at the x-intercept, since dividing any number by infinity is equal to zero.
How responsive are consumers to a change in price at the vertical axis? At the y-intercept, the percentage change in quantity demanded $(\%ΔQ_d)$ is equal to $\dfrac{ΔQ_d}{Q_d} = \dfrac{1}{0} =$ infinity. Therefore, the elasticity of demand is equal to the percentage change in quantity demanded $(\%ΔQ =$ infinity) divided by the percentage change in price $(\%ΔP)$. Thus, $\mid E_d\mid =$ infinity at the y-intercept.
At the midpoint, the price elasticity of demand is equal to negative one.
$E_d = \frac{\%ΔQ^d}{\%ΔP} = \left(\frac{ΔQ^d}{ΔP}\right)\left(\frac{P}{Q^d}\right) = -1 \label{1.17}$
At the midpoint, the slope of the demand curve is equal to minus one $(\dfrac{ΔQ^d}{ΔP} = -1)$, and the price is equal to the quantity demanded $(P = Q^d)$. Therefore, the own price elasticity of demand at the midpoint of a linear demand curve is equal to minus one: $\left(\dfrac{ΔQ^d}{ΔP}\right)\left(\dfrac{P}{Q^d}\right) = -1$.
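These three results can be verified with the inverse demand function $P = 10 – Q^d$ used above, for which $\dfrac{ΔQ^d}{ΔP} = -1$ at every point:

\begin{align*} \text{x-intercept } (P = 0,\; Q^d = 10):\quad E_d &= (-1)\left(\frac{0}{10}\right) = 0\\ \text{midpoint } (P = 5,\; Q^d = 5):\quad E_d &= (-1)\left(\frac{5}{5}\right) = -1\\ \text{y-intercept } (P = 10,\; Q^d = 0):\quad \mid E_d\mid &= (1)\left(\frac{10}{0}\right) \rightarrow \text{infinity} \end{align*}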
A valuable lesson is learned in this example: be careful to distinguish between the slope of a demand curve and the elasticity of demand. When interpreting graphs, the slope is not a good determinant of elasticity, since a graph could be drawn steep or shallow depending on the units. The elasticity is related to the slope, but it is not equal to the slope!
Agricultural Policy Example of Elasticity of Demand
The impact of agricultural policies depends critically on the elasticity of demand. It was claimed earlier that the price elasticity of demand is the most important thing that a business firm can know. This section provides evidence of the importance of the price elasticity of demand. The demand for food is price inelastic in a domestic economy with no trade. Everyone must eat, and caloric intake will not be greatly influenced by the price of food.
This changes enormously in a global economy. In an open economy that has international trade, there are many overseas customers for food exports, and many competing nations that export food. For example, the US is a major wheat exporter. Other wheat exporting nations include: Canada, Australia, Argentina, the European Union (EU), and many of the former Soviet nations in Eastern Europe such as Ukraine. In this case, the US faces a highly elastic demand for wheat in the global economy: if the US increased the price of wheat above the world price, wheat importers would shift purchases from the US to other wheat exporters. A global economy changes the effectiveness of price policies enormously.
Prior to 1972, the United States agricultural sector could be characterized as a domestic economy, with less food and agricultural trade. In this case, the demand for food was primarily domestic, and thus relatively inelastic (Figure $8$, left panel).
Starting in 1933, agricultural price supports increased the price of wheat above the market equilibrium level. This policy worked well, as long as the surplus was eliminated. One way to eliminate the surplus was through acreage restrictions, which limited the number of acres planted to wheat ($ΔQ$ in the left panel of Figure $8$). Acreage limitations and production quotas were used to decrease the quantity of wheat in the market. These policies worked well prior to 1972, since the US agricultural economy was primarily domestic, characterized by an inelastic demand curve. The decrease in quantity led to a larger increase in price $(ΔP > ΔQ^d)$, given the inelastic demand in a domestic economy.
In 1972, major changes in international exchange rate policies, together with poor weather in Asia, led to the globalization of US food and agricultural markets. A larger percentage of the US wheat crop was exported, and the inelastic demand that prevailed prior to 1972 became more elastic in the globalized environment (right panel, Figure $8$). Although the wheat market became globalized, the policies did not. During the 1980s, the US maintained price supports and production controls in the seven basic commodities (defined by the USDA as: wheat, corn, sorghum, sugar, cotton, rice, and tobacco). These policies were counterproductive, as they priced the grain out of the world market. The US attempted to increase the market share of wheat trade, only to find that the US price was higher than the other major wheat exporters.
Figure $8$ shows why the policies implemented in 1933 were hurting more than they were helping. Production controls were decreasing the quantity of wheat. In the domestic economy (left panel of Figure $8$, pre-1972), this achieved the objectives of the policies: wheat producers were made better off, since the increase in price was greater than the decrease in quantity. This all changed in the globalized world after 1972 (right panel of Figure $8$, post-1972). With an elastic demand, the decrease in quantity did not result in large price increases. Price supports raised the US price above the world price of wheat. These policies were not working, and in 1996, they were changed to make the US grain industry more competitive in the global market. Today, a large fraction of all grain produced in the US is exported.
In summary, policies intended to help producers have greatly divergent outcomes, depending on the price elasticity of demand. In a domestic economy, the demand for food and agricultural products is typically inelastic. In this case, production controls and price supports will achieve the policy goal of helping producers: the price increases will be larger than the quantity decreases (left panel, Figure $8$). In a global economy, the demand for food and agricultural goods is elastic: there are many nations that export grains (right panel, Figure $8$). In this case the policy that helps producers the most is technological change, which will shift the supply curve to the right. With an elastic demand, the increase in quantity is larger than the decrease in price.
This is the same strategy that Walmart utilizes: everyday low prices. Sam Walton found that the increase in sales due to low prices more than offset the decrease in price $(ΔQ^d > ΔP)$. This is true in any market characterized by an elastic demand. Since most consumer goods in the United States have many substitutes, Walton’s lower-price strategy led to Walmart becoming the most successful retailer in the history of the world.
Calculation of Market Supply and Demand Elasticities
In Section 1.3.4 above, inverse supply and demand curves were used to calculate the equilibrium price and quantity of phones.
The inverse demand and supply functions were:
$P = 100 – 2Q^d \label{1.18}$
$P = 20 + 2Q^s \label{1.19}$
Where $P$ is the price of phones in USD/unit, and $Q$ is the quantity of phones in millions. By setting the two equations equal to each other, the intersection of the inverse supply and demand curves was found, yielding the equilibrium market price and quantity:
$P^e = \text{ USD } 60\text{/phones,}$
and
$Q^e = 20 \text{ million phones.}$
The graph of the phone market is replicated in Figure $9$.
To calculate the own price elasticities of supply and demand, simple calculus provides an easy solution. Recall the definition of the own price elasticity of demand:
$E_d = \frac{\%ΔQ^d}{\%ΔP} = \left(\frac{ΔQ_d}{ΔP}\right)\left(\frac{P}{Q_d}\right) = \left(\frac{∂Q_d}{∂P}\right)\left(\frac{P}{Q_d}\right)\label{1.20}$
The delta sign $(Δ)$ refers to a small change in a variable. This is the same as the derivative sign, “∂.” The difference is that the derivative indicates an infinitesimally small change, whereas the delta sign is a discrete change, which is the same idea, just larger. Therefore, the delta signs can be replaced with the derivative signs in the equation that defines the price elasticity of demand. For example, the slope of a function is $\dfrac{Δy}{Δx}$. The slope of the function at a given point on the function is $\dfrac{∂y}{∂x}$.
The last expression in the equation shows that to calculate $E_d$, use the derivative of quantity with respect to price, and the levels of $P$ and $Q$. At the equilibrium point, the equilibrium levels of $P$ and $Q$ are known. To find the derivative, begin by taking the derivative of the inverse demand equation (Equation \ref{1.18}).
$\frac{∂P}{∂Q^d} = – 2\label{1.21}$
This is simply the power function rule from calculus [if $y =ax^b$, then $\dfrac{∂y}{∂x} = abx^{b-1}$]. Notice something important: the derivative of the inverse demand equation is the inverse of what is needed to calculate the price elasticity. This is due to the inverse demand function being, “price-dependent,” with $P$ on the left hand side. To find the derivative $\dfrac{∂Q^d}{∂P}$, invert the derivative by dividing one by the derivative.
$\frac{∂Q^d}{∂P} = –\left(\frac{1}{2}\right) \label{1.22}$
$E_d = \frac{\%ΔQ^d}{\%ΔP} = \left(\frac{ΔQ^d}{ΔP}\right)\left(\frac{P}{Q^d}\right) = \left(\frac{∂Q^d}{∂P}\right)\left(\frac{P}{Q^d}\right) = -\left(\frac{1}{2}\right)\left(\frac{60}{20}\right) = -1.5\label{1.23}$
The absolute value of the own price elasticity of demand at the equilibrium point is:
$\mid E_d\mid = 1.5$. The demand for phones is elastic: if the price were increased one percent, the decrease in phone purchases would be 1.5 percent.
The own price elasticity of supply can be found using the same procedure:
$E_s = \frac{\%ΔQ^s}{\%ΔP} = \left(\frac{ΔQ^s}{ΔP}\right)\left(\frac{P}{Q^s}\right) = \left(\frac{∂Q^s}{∂P}\right)\left(\frac{P}{Q^s}\right)\label{1.24}$
First, take the derivative of the inverse supply function (Equation \ref{1.19}).
$\frac{∂P}{∂Q^s} = + 2 \label{1.25}$
Invert this derivative to find the derivative needed to calculate the price elasticity of supply:
$\frac{∂Q^s}{∂P} = + \left(\frac{1}{2}\right). \label{1.26}$
Then plug in the ingredients of the own price elasticity of supply:
$E_s = \frac{\%ΔQ^s}{\%ΔP} = \left(\frac{ΔQ^s}{ΔP}\right)\left(\frac{P}{Q^s}\right) = \left(\frac{∂Q^s}{∂P}\right)\left(\frac{P}{Q^s}\right) = \left(\frac{1}{2}\right)\left(\frac{60}{20}\right) = 1.5 \label{1.27}$
The price elasticity of supply is also elastic: a one percent increase in price results in a 1.5 percent increase in the quantity supplied of phones.
Two points are worth mentioning here. First, the price elasticities of supply and demand are not always symmetrical, as they are in this case (-1.5 and +1.5). The elasticities depend on the shape of the inverse supply and demand functions. The symmetry of the functions used here can be seen in Figure $9$. When the inverse supply and demand functions are not symmetrical, the absolute values of the elasticities will be of different magnitudes. The second important point concerns the use of inverse supply and demand functions. The inverse functions are used to align with the “backwards” nature of the supply and demand graphs: price is the independent variable, but appears on the vertical axis. To find the derivative needed to calculate the price elasticities, the procedure above first took the derivative of the inverse function, then inverted it to achieve $\dfrac{∂Q}{∂P}$. This derivative could also be found by first inverting the inverse function to isolate quantity on the left-hand side, and then taking the derivative. This alternative procedure will result in the same elasticity calculation as the one used above. To test your knowledge, try this procedure to double check your answers!
Introduction to Welfare Economics
Welfare economics is concerned with how well off individuals and groups are. Welfare economics is not about government programs to assist the needy… that is a different type of welfare. In economics, welfare economics is used to see how the welfare, or well-being of individuals and groups changes with a change in policies, programs, or current events.
Welfare Economics = The study and calculation of gains and losses to market participants from changes in market conditions and economic policies.
Consumer Surplus and Producer Surplus
The two most important groups that are studied in welfare economics are producers and consumers. The concepts of Consumer Surplus $(CS)$ and Producer Surplus $(PS)$ are used to measure the wellbeing of consumers and producers, respectively.
Consumer Surplus $(CS)$ = A measure of how well off consumers are. Willingness to pay minus the price actually paid.
Producer Surplus $(PS)$ = A measure of how well off producers are. Price received minus the cost of production.
The intuition of consumer surplus provides a good method of learning the concept. Suppose that I am on my way to the store to purchase a hammer, and I think to myself, “I am willing to pay six dollars for the hammer.” When I arrive at the store, I find that the price of the hammer is four dollars. My consumer surplus is equal to two dollars: the willingness to pay minus the actual price paid. In this manner, we can add up all consumers in the market to measure consumer surplus for all consumers. This can be seen in Figure $1$. If each point on the demand curve is considered an individual consumer, then $CS$ is the difference between each point on the demand curve and the price line. The demand curve represents the consumers’ willingness and ability to pay for a good. The $CS$ area is a triangle, and equal to the level of consumer surplus in the market (Figure $1$).
Similarly, the intuition of producer surplus is a good place to start. If a wheat producer can produce a bushel of wheat for four dollars, and she receives six dollars per bushel when she sells her wheat, then her level of producer surplus is equal to two dollars. Producer surplus is the price received minus the cost of production. In Figure $1$, this is the difference between the price line and the supply curve. The market supply curve was derived by summing all individual firms’ marginal cost curves. Therefore, the supply curve represents the cost of production. The $PS$ area is the area identified in Figure $1$.
These areas can be quantified, or measured, to find the dollar value of consumer surplus and producer surplus. These measures place a dollar value on the wellbeing of producers and consumers.
Mathematics of Consumer and Producer Surplus: Phone Market
Remember that the inverse supply and demand for phones was given by:
$P = 100 – 2Q^d, \label{1.28}$
and
$P = 20 + 2Q^s. \label{1.29}$
Where P is the price of phones in dollars/unit, and Q is the quantity of phones in millions. The equilibrium price and quantity of phones were calculated above in section 1.4.10:
\begin{align*}P^e &= \text{ USD } 60\text{/phone}\\ Q^e &= 20 \text{ million phones.}\end{align*}
These values, together with the supply and demand functions, allow us to measure the well-being of both consumers and producers. From geometry, the area of a triangle is one half base times height. To calculate $CS$ and $PS$, multiply the base of the triangle times the height of the triangle in Figure $2$, then multiply by one half, or 0.5 (Equations \ref{1.30} and \ref{1.31}).
\begin{align} CS &= 0.5(100 – 60)(20) = 0.5(40)(20) = 400 \text{ million USD}\label{1.30}\\ PS &= 0.5(60 – 20)(20) = 0.5(40)(20) = 400 \text{ million USD}\label{1.31}\end{align}
We will use the concepts of consumer surplus and producer surplus extensively in what follows, where we will explore the consequences of policies, international trade, and immigration in food and agriculture.
The units for both $CS$ and $PS$ are in terms of dollars (USD). These measures capture how well off consumers and producers are in dollars, or the dollar value of their happiness, or well-being. The units are price units multiplied by quantity units, or $(\text{USD/phone})\cdot(\text{million phones})$. Notice that the phone units are in both the numerator and denominator, so they are cancelled, leaving million dollars.
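The calculations in this section and the previous one can also be checked numerically. The short Python sketch below reproduces the phone market equilibrium, the own price elasticities, and the surplus measures; the function and variable names are illustrative only, and are not part of the textbook's notation.

```python
# Numerical check of the phone market example:
# inverse demand  P = 100 - 2*Qd,  inverse supply  P = 20 + 2*Qs

def demand_price(q):
    """Inverse demand: willingness to pay at quantity q (million phones)."""
    return 100 - 2 * q

def supply_price(q):
    """Inverse supply: marginal cost at quantity q (million phones)."""
    return 20 + 2 * q

# Equilibrium: 100 - 2Q = 20 + 2Q  ->  80 = 4Q  ->  Q = 20, P = 60
q_eq = 80 / 4
p_eq = demand_price(q_eq)

# Own price elasticities at equilibrium: (dQ/dP) * (P/Q)
elasticity_demand = (-1 / 2) * (p_eq / q_eq)   # -1.5
elasticity_supply = (1 / 2) * (p_eq / q_eq)    # +1.5

# Consumer and producer surplus: areas of the two triangles (million USD)
consumer_surplus = 0.5 * (100 - p_eq) * q_eq   # 400
producer_surplus = 0.5 * (p_eq - 20) * q_eq    # 400

print(q_eq, p_eq, elasticity_demand, elasticity_supply,
      consumer_surplus, producer_surplus)
# Expected output: 20.0 60.0 -1.5 1.5 400.0 400.0
```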
1.6 The Motivation for and Consequences of Free Trade
The Motivation for Free Trade and Globalization
Globalization and free trade result in enormous economic benefits to nations that trade. These benefits have led to high incomes in many nations throughout the world, particularly since 1950. As with all national policies, there are benefits and costs to international trade: there are winners and losers to globalization. When trade is voluntary, the gains are mutually beneficial, and the overall benefits are greater than the costs. There is a strong motivation to trade, and the nations of the world continue to become more globalized over time.
A nation’s consumption possibilities are vastly increased with trade. In nations North of the equator such as the USA, Japan, EU, and China, fresh fruit and vegetables can be purchased during the winter from nations in the Southern hemisphere. In the United States, tropical products including coffee, sugar, bananas, cocoa, and pineapple are imported, since the costs of producing these goods are much lower in tropical climates than in the USA.
The principle of comparative advantage provides large benefits to individuals, nations, and firms that specialize in what they do best, and trade for other goods. This process greatly expands the consumption possibilities of all nations, due to efficiency gains that arise from specialization and gains from trade. For example, if Canada specializes in wheat production, and Costa Rica produces bananas, both nations could be better off through specialization and trade.
A nation that does not trade with other nations is called a closed economy.
Closed Economy = A nation that does not trade. All goods and services consumed must be produced within the nation. There are no imports or exports.
Open Economy = A nation that allows trade. Imports and exports exist.
If a nation does not trade, then consumers in the closed economy must only consume what it produces. In this case, quantity supplied must equal quantity demanded $(Q^s = Q^d)$. Trade allows this equality to be broken, providing the opportunity for imports $(Q^s < Q^d)$ or exports $(Q^s > Q^d)$. The concepts of Excess Supply and Excess Demand will be introduced in the next section to aid in understanding the motivation and consequences of free trade.
1.6.2 Excess Supply and Excess Demand
We will use wheat as an example to see how and why trade occurs. We will investigate wheat trade between the USA and Japan. Japan is one of the largest international buyers of wheat from the United States. The USA is a wheat exporter. The left panel of Figure $3$ shows the USA wheat market. Define $P_e$ to be the price of wheat in the exporting nation. At price $P_e$, domestic consumption $(Q_d)$ is equal to domestic production $(Q_s)$.
Excess supply is defined to be the quantity of exportable surplus, or $Q^s – Q^d$. At prices higher than $P_e$, the quantity supplied becomes greater than the quantity demanded, and excess supply exists.
Excess Supply $(ES)$ = Quantity supplied minus quantity demanded at a given price, $Q^s – Q^d$.
At prices higher than $P_e$, wheat producers increase the quantity supplied along the supply curve $Q^s$ due to the Law of Supply. Wheat consumers decrease purchases of wheat along the demand curve $Q^d$, due to the Law of Demand. The result is a surplus, or excess supply, at the higher price. If excess supply existed in a closed economy, market forces would come into play to bring the higher price back down to the market equilibrium level, $P_e$. In an open economy, however, it is possible to maintain the high market price through exports. In the right-hand panel of Figure $3$, the $ES$ function represents excess supply, equal to the horizontal distance between $Q^s$ and $Q^d$ in the left-hand panel. Note that $ES = 0$ at $P_e$, and becomes larger as the price of wheat increases.
Free trade allows the USA to use its resources to produce more wheat than it consumes, and to export the surplus, enhancing producer revenues. Trade also provides the opportunity to buy imported goods from other nations.
Define $P_i$ to be the price of wheat in the importing nation (Japan in this case). An importing nation such as Japan is characterized by a price lower than the domestic market equilibrium price $(P_i)$, where $Q^s = Q^d$ (Figure $3$). In an importing nation, quantity demanded is greater than quantity supplied $(Q^s < Q^d)$, and the price is lower than the market equilibrium price. If the price is lower than $P_i$ in Figure $3$, consumers increase purchases of the good due to the Law of Demand, and producers decrease production of the good, following the Law of Supply. This results in an Excess Demand for the good.
Excess Demand $(ED)$ = Quantity demanded minus quantity supplied at a given price, $Q^d – Q^s$.
Note that any shift in either supply or demand of wheat in the importing nation will shift the $ED$ curve. Domestic events in the markets for traded goods have international consequences. What happens in China has a large impact on USA wheat producers. Next, the exporting and importing nations will be linked through international trade.
The $ED$ curve shown in the right-hand panel of Figure $3$ represents excess demand, equal to the horizontal distance between $Q^d$ and $Q^s$ in the left-hand panel. Note that $ED = 0$ at $P_i$, and becomes larger as the price of wheat decreases.
1.6.3 Three Panel Diagram of Trade between Two Nations
Now consider the wheat exporting nation (USA) and wheat importing nation (Japan) in the same diagram, Figure $4$. The wheat market in the exporting nation is shown in the left panel, and the wheat market in the importing nation is shown in the right panel (Figure $4$). The trade sector is in the middle panel. Excess demand $(ED)$ is downward sloping, and is derived from the domestic supply $(Q^s_i)$ and demand $(Q^d_i)$ in the importing nation, Japan in this case. Excess Supply $(ES)$ is upward sloping, derived from the supply $(Q^s_e)$ and demand $(Q^d_e)$ curves in the exporting nation, the USA. In reality, the right panel is composed of many nations: all countries that import wheat. For simplicity, the model here is for one importing nation and one exporting nation. As price $(P_i)$ decreases in Japan, quantity demanded $(Q^d_i)$ increases and quantity supplied $(Q^s_i)$ decreases, causing $ED$ to have a negative slope. Similarly, price $(P_e)$ increases in the exporting nation (USA) result in a higher quantity supplied of wheat $(Q^s_e)$ and a lower quantity demanded $(Q^d_e)$. Equilibrium in the global wheat market is found in the center panel at the point where $ED = ES$.
The quantity of wheat traded $(Q_T)$ is equal to $ED$ at the world price $(P_w)$, which is also equal to $ES$ at the world price. Note that it must be true that $ED=ES$: any imported goods in one nation must be exported by the other nation. Therefore, $Q_T = ED = ES = (Q^d_i – Q^s_i) = (Q^s_e – Q^d_e)$.
In a multi-nation model, this equilibrium would occur when the sum of all wheat supplied from all exporting nations $(= ΣES = Q^s_e – Q^d_e)$ is equal to the sum of all wheat demanded from all importing nations $(= ΣED = Q^d_i -Q^s_i)$. The equilibrium quantity traded $(Q_T)$ is equal to the quantity of wheat imported by Japan $(ED_i)$ and the quantity of wheat exported by the USA $(ES_e)$ since exports must equal imports ($Q_T$ = exports = imports). This equilibrium in the world market determines the world price $(P_w)$, which is the price of wheat for all trading partners, Japan and the USA in this model.
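A small hypothetical example (with made-up linear functions) shows how the world price is determined. Suppose the exporter's quantity-dependent supply and demand are $Q^s_e = P$ and $Q^d_e = 10 – P$, so the exporter's autarky price is $P_e = 5$, and the importer's are $Q^s_i = P – 4$ and $Q^d_i = 14 – P$, so the importer's autarky price is $P_i = 9$. Then:

\begin{align*} ES &= Q^s_e – Q^d_e = 2P – 10 \quad (P \geq 5)\\ ED &= Q^d_i – Q^s_i = 18 – 2P \quad (P \leq 9)\\ ES &= ED: \quad 2P – 10 = 18 – 2P \;\Rightarrow\; P_w = 7, \quad Q_T = 4 \end{align*}

At the world price $P_w = 7$, the exporter ships 4 units abroad $(Q^s_e = 7, Q^d_e = 3)$ and the importer buys exactly those 4 units $(Q^d_i = 7, Q^s_i = 3)$, so exports equal imports and the world price settles between the two autarky prices.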
The three-panel diagram demonstrates two important characteristics about free trade. First, the motivation for trade is simple: “buy low and sell high.” If a price difference exists between two locations, arbitrage provides profit opportunities for traders. A firm (or nation) that buys wheat at a lower price in the USA and sells the wheat at a higher price in Japan can earn profits. Note that this simple model ignores transportation costs and exchange rates. Second, anything that affects the supply or demand of wheat in either trading nation affects the global price and quantity of wheat. Therefore, all consumers and producers of a good are interconnected: the welfare of all wheat producers and consumers is affected by weather, growing conditions, food trends, and all other supply and demand determinants in all trading nations.
This point is enormously important in a globalized economy: the well-being of all producers and consumers depends on people, politicians, and current events all over the globe. The three-panel diagram is useful in understanding the determinants of food and agricultural exports: all supply and demand shifters in all wheat exporting and importing nations.
• 2.1: Price Ceiling
In some circumstances, the government believes that the free market equilibrium price is too high. If there is political pressure to act, a government can impose a maximum price, or price ceiling, on a market.
• 2.2: Price Support
• 2.3: Quantitative Restriction
• 2.4: Import Quota
• 2.5: Taxes
Taxes are often imposed to provide government revenue. The government also uses taxes to decrease the consumption of a good such as alcohol or tobacco. These taxes are called “sin taxes,” on goods that are not favored by society. These goods often have inelastic demands, which allows the government to apply a tax and earn revenues. Taxes can also be used to meet environmental objectives, or other societal goals: goods such as gasoline and coal emissions are taxed.
• 2.6: Subsidies
• 2.7: Immigration
Labor-intensive agriculture such as fruit and vegetable production in high income nations employs immigrant workers and pays low wages. These workers offer an enormous contribution to the agricultural economy through hard work in the production of food and fiber. However, it is possible that immigration can have a negative impact on rural towns, since the provision of public services such as medical facilities, schools, and housing for low-wage workers is often costly.
• 2.8: Welfare Impacts of International Trade
Thumbnail: A diagram to demonstrate welfare economics. (CC BY-SA 3.0; Patrickbeardmore via Wikipedia)
02: Welfare Analysis of Government Policies
In some circumstances, the government believes that the free market equilibrium price is too high. If there is political pressure to act, a government can impose a maximum price, or price ceiling, on a market.
Price Ceiling = A maximum price policy to help consumers.
A price ceiling is imposed to provide relief to consumers from high prices. In food and agriculture, these policies are most often used in low-income nations, where political power is concentrated in urban consumers. If food prices increase, there can be demonstrations and riots to put pressure on the government to impose price ceilings. In the United States, price ceilings were imposed on meat products in the 1970s under President Richard M. Nixon. Price ceilings were also used for natural gas during this period of high inflation. It was believed that the cost of living had increased beyond the ability of family earnings to pay for necessities, and the market interventions were used to make beef, other meat, and natural gas more affordable.
Price ceilings are often imposed on housing prices in US urban areas. Rent control has been a longtime feature in New York City, where rent-controlled apartments continue to have low rental rates relative to the free market rate. The boom in the software industry has increased housing prices and rental rates enormously in the San Francisco Bay Area, Seattle, and the Puget Sound region. Rent control is being considered in both places to make San Francisco and Seattle more affordable for middle-class workers.
Welfare Analysis
Welfare analysis can be used to evaluate the impacts of a price ceiling. In what follows, we will compare a baseline free market scenario to a policy scenario, and compare the benefits and costs of the policy relative to the baseline of free markets and competition. Consider the price ceilings imposed on the natural gas markets. The purpose, or objective, of this policy was to help consumers. We will see that the policy does help some consumers, but makes other consumers worse off. The policy also hurts producers.
This unanticipated outcome is worth restating: price ceilings help some consumers, but hurt other consumers. All producers are made worse off. This outcome is not the intent of policy makers. Economists play an important role in the analysis and communication of policy outcomes to policy makers.
The baseline scenario for all policy analysis is free markets. Figure $1$ shows the free market equilibrium for the natural gas market. The quantity of natural gas is in trillion cubic feet (tcf) and the price of natural gas in in dollars per million cubic feet (USD/mcf).
Social welfare is maximized by free markets, because the size of the welfare area $CS + PS$ is largest under the free market scenario. As we will see, any government intervention into a market will necessarily reduce the total level of surplus available to consumers and producers. All price and quantity policies will help some individuals and groups, hurt others, and have a net loss to society. Policy makers typically ignore or downplay individuals and groups who are negatively affected by a proposed policy. The two triangles CS and PS are as large as possible in Figure $1$.
The price ceiling policy is evaluated in Figure $2$, where $P’$ is the price ceiling. Here, the government has passed a law that does not allow natural gas to be bought or sold at any price higher than $P’ (P’ < P)$. For a price ceiling to have an impact, it must be “binding.” This occurs only when the price ceiling is set below the market price $(P’ < P)$. If the price ceiling were set above $P (P’ > P)$, it would have no effect, since the good is bought and sold at the market price, which is below the price ceiling, and legally permissible. Such a law would not be binding on market transactions.
If the price ceiling is set at $P’$, then the new equilibrium quantity under the price ceiling $(Q’)$ is found at the minimum of quantity demanded ($Q^d$) and quantity supplied ($Q^s$), as in Equation \ref{2.0}.
$Q^{\prime} = min(Q^s, Q^d) \label{2.0}$
This condition states that the quantity at any nonequilibrium price $(P)$ will be the smallest of production or consumption. At the low price P’, producers decrease quantity supplied, and consumers increase quantity demanded, resulting in $Q’ = Q^s$ (Figure $2$). This is the maximum amount of natural gas placed on the market, although consumers desire a much larger amount.
The first step in the welfare analysis is to assign letters to each area in the price ceiling graph. Next, the letters corresponding to the baseline free market scenario are recorded (initial, or baseline, values have a subscript 0), followed by the surpluses under the price ceiling (ending values have a subscript 1). Finally, the change from free markets to the price policy are calculated to conclude the qualitative analysis of a price ceiling. If the supply and demand curves have numbers (actual data) associated with them, a numerical analysis can be conducted.
The initial, baseline, free market values in the natural gas market at market equilibrium price $P$ are:
$CS_0 = A + B\nonumber$
and
$PS_0 = C + D + E.\nonumber$
Social welfare is defined as the total amount of surplus available in the market, $CS + PS$:
$SW_0 = A + B + C + D + E.\nonumber$
After the price ceiling is put in place, the price is P’, and the quantity is Q’. New surplus values are found in the same way as under free markets. Consumer surplus is the willingness to pay minus price actually paid, or the area beneath the demand curve and above the price line at the new price P’: ($A + C$). Producer surplus is the price received minus the cost of production, or the area above the supply curve and below the price line ($E$):
$CS_1 = A + C,\nonumber$
$PS_1= E,\nonumber$
and
$SW_1 = A + C + E.\nonumber$
Recall that social welfare (SW) is equal to the sum of all surpluses available in the market: $SW = CS + PS$. The welfare analysis outcomes are found by calculating the changes in surplus:
\begin{align*} ΔCS &= CS_1 – CS_0 = + C – B \\[4pt] ΔPS &= PS_1 – PS_0 = – C – D \\[4pt] ΔSW &= SW_1 – SW_0 = – B – D \end{align*}
The results are fascinating, since the sign of the change in consumer surplus is ambiguous: the sign of $\Delta CS$ depends on the relative magnitude of areas $C and B$. If demand is elastic, and supply is inelastic, the price ceiling is more likely to yield a positive change in consumer surplus $(C > B)$. The policy makes some consumers better off, and some consumers worse off. The consumers located on the demand curve between the origin (0, 0) and $Q’$ are made better off by area $C$, as they purchase natural gas at a lower price $(P’ < P)$. Consumers located on the demand curve between $Q’$ and $Q$ have a lower willingness to pay than consumers located between the origin and $Q’$, and are made worse off by the price ceiling $(-B)$ since they are unable to purchase natural gas at the lower price ceiling $(P’ <P)$. The price ceiling created a shortage of natural gas, as natural gas producers reduce the quantity supplied in reaction to the legislated lower price. The decrease in quantity supplied of natural gas makes these consumers unable to buy the good.
Natural gas producers are made unambiguously worse off by the price ceiling: both the price $(P)$ and the quantity $(Q)$ are decreased $(P’ < P; Q’ < Q)$, and the change in producer surplus due to the policy is unambiguously negative $(– C – D)$.
The term deadweight loss (DWL) is used to designate the loss in surplus to the market from government intervention, in this case a price ceiling. Deadweight loss is found by reversing the negative sign on the change in social welfare $(–ΔSW)$:
$DWL = –ΔSW = B + D.\nonumber$
The deadweight loss area $B + D$ is called the welfare triangle, and is typical of market interventions. Interestingly, and perhaps unexpectedly, all government interventions have deadweight loss to society. Free markets are voluntary, with no coercion. Any price or quantity restriction will necessarily reduce the surplus available to producers and/or consumers in a market.
In current debates over rent control in congested urban areas, economists continue to point out the potential impact of rent control policies: a reduction in affordable housing. These policies are often put in place in spite of economic views, with mixed results. Renters who can find a rent-controlled property win, but many renters are unable to find housing, and must relocate outside the urban center and commute to work from a distant home.
As indicated above, price ceilings on food and agricultural products are most often used in low-income nations, such as in Asia and Sub-Saharan Africa. Price supports for food and agricultural products are most often used in high-income nations such as the US, European Union (EU), Japan, Australia, and Canada.
Quantitative Analysis
In this example, suppose that beef consumers lobby the government to pass a price ceiling on beef products. This happened in the USA in the 1970s, during a period of high inflation. Beef consumers believe that prices are too high, and democratically elected officials give their constituents what they want. Suppose that the inverse supply and demand for beef are given by:
$P = 20 – 2Q^d, \label{2.1}$
and
$P = 4 + 2Q^s. \label{2.2}$
Where P is the price of beef in USD/lb, and Q is the quantity of beef in million lbs. The equilibrium price and quantity of beef can be calculated by setting the inverse supply and demand equations equal to each other to achieve:
$P^e = \text{ USD } 12\text{/lb beef,}\nonumber$
and
$Q^e = 4 \text{ million lbs beef.}\nonumber$
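To show the intermediate step explicitly (it is omitted above), equate the inverse demand and inverse supply prices and solve:
$20 – 2Q = 4 + 2Q \;\;\Rightarrow\;\; 4Q = 16 \;\;\Rightarrow\;\; Q^e = 4, \quad P^e = 20 – 2(4) = 12.\nonumber$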
These values, together with the supply and demand functions, allow us to measure the well-being of both consumers and producers before and after the price ceiling policy is implemented (Figure $3$).
The free-market equilibrium levels of CS and PS are designated with the subscript 0, calculated in equations \ref{2.3} and \ref{2.4}.
$CS_0 = A + B = 0.5(20 – 12)(4) = 0.5(8)(4) = \text{ USD } 16 \text{ million} \label{2.3}$
$PS_0 = C + D + E = 0.5(12 – 4)(4) = 0.5(8)(4) = \text{ USD } 16 \text{ million} \label{2.4}$
The level of social welfare is the sum of all surplus in the market, as in Equation \ref{2.5}.
$SW_0 = A + B + C + D + E = 0.5(20 – 4)(4) = 0.5(16)(4) = \text{ USD } 32 \text{ million} \label{2.5}$
Assume that the price ceiling is set by the government at $P’ = 10$ USD/lb beef. The quantity traded is the minimum of quantity supplied and quantity demanded. In the case of a binding price ceiling $(P’ < P)$, the quantity supplied will be the relevant quantity, since producers will produce only $Q’$ lbs of beef. Consumers will desire to purchase a much larger amount at $P’ < P$, but are unable to at the lower price $P’$, since production falls from $Q$ to $Q’$. The quantity $Q’$ is found by substituting the new price into the inverse supply equation.
$P’ = 4 + 2Q^s = 10,\nonumber$
therefore,
$Q’ = 3\nonumber$
The price ceiling $(P’)$ and reduced quantity $(Q’)$ can be seen in Figure $3$. Next, the levels of $CS, PS,$ and $SW$ are calculated at the price ceiling level. To find the surplus level of area $A$, split the shape into one triangle and one rectangle by substitution of $Q’ = 3$ into the inverse demand curve to get $P = 14$. Area $A$ is equal to: $0.5(20 – 14)3 + (14 – 12)3 = 9 + 6 = 15$ million USD. We are now ready to calculate the level of surplus for the price ceiling.
\begin{align} CS_1 &= A + C = 15 + (12 – 10)(3) = 15 + 6 = \text{ USD } 21 \text{ million} \\[4pt] PS_1 &= E = 0.5(10 – 4)(3) = 0.5(6)(3) = \text{ USD } 9 \text{ million} \\[4pt] SW_1 &= A + C + E = 21 + 9 = \text{ USD } 30 \text{ million}\end{align}
The changes in welfare due to the price ceiling are:
\begin{align} ΔCS &= CS_1 – CS_0 = + C – B = 21 – 16 = \text{ USD } + 5 \text{ million} \\[4pt] ΔPS &= PS_1 – PS_0 = – C – D = 9 – 16 = \text{ USD } – 7 \text{ million} \\[4pt] ΔSW &= SW_1 – SW_0 = – B – D = 30 – 32 = \text{ USD } – 2 \text{ million}\end{align}
The deadweight loss (DWL) of the price ceiling is the loss to social welfare, or the negative of the change in social welfare:
$DWL = – ΔSW = 2 \text{ USD million.}\label{2.13}$
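These surplus values can also be verified numerically. The short Python sketch below is illustrative only (it is not part of the original derivation, and the variable names are ours); it recomputes the welfare changes for the beef example directly from the inverse supply and demand equations.

```python
# Illustrative check of the beef price ceiling example.
# Inverse demand: P = 20 - 2Q, inverse supply: P = 4 + 2Q, price ceiling P' = 10.

def demand_price(q):
    return 20 - 2 * q

def supply_price(q):
    return 4 + 2 * q

# Free-market equilibrium: 20 - 2Q = 4 + 2Q  =>  Q = 4, P = 12
q_eq, p_eq = 4.0, 12.0

cs0 = 0.5 * (demand_price(0) - p_eq) * q_eq      # area A + B = 16
ps0 = 0.5 * (p_eq - supply_price(0)) * q_eq      # area C + D + E = 16

# Binding price ceiling: quantity is set by supply at P' = 10  =>  Q' = 3
p_ceiling = 10.0
q_ceiling = (p_ceiling - 4) / 2                  # Q' = 3

# Consumer surplus: area under demand, above P', out to Q' (triangle + rectangle)
cs1 = 0.5 * (demand_price(0) - demand_price(q_ceiling)) * q_ceiling \
      + (demand_price(q_ceiling) - p_ceiling) * q_ceiling
ps1 = 0.5 * (p_ceiling - supply_price(0)) * q_ceiling

print("CS0, PS0, SW0:", cs0, ps0, cs0 + ps0)     # 16, 16, 32
print("CS1, PS1, SW1:", cs1, ps1, cs1 + ps1)     # 21, 9, 30
print("dCS, dPS, DWL:", cs1 - cs0, ps1 - ps0, -(cs1 + ps1 - cs0 - ps0))  # +5, -7, 2
```

The printed values match the areas calculated above: consumers gain 5, producers lose 7, and the deadweight loss is 2 (all in USD million).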
The quantitative analysis of a price ceiling provides timely, important, and interesting results. First, only a subset of consumers are made better off due to a price ceiling. These consumers win because they pay a lower price for the good under the price ceiling than in the free market $(P’ < P)$. Second, some consumers are made worse off due to the price ceiling, since the quantity of the good available is reduced $(Q’ < Q)$. This is because producers reduce the quantity supplied if the price is lowered (the Law of Supply). Third, all producers of the good are made unambiguously worse off due to the price ceiling, since both price and quantity are reduced $(P’ < P; Q’ < Q)$.
The magnitudes of the consumer gains and losses are determined by the elasticities of supply and demand. Elastic demand and inelastic supply provide larger consumer benefits, since area $B$ in Figure $3$ is relatively small under these conditions. If demand is inelastic and supply is elastic, consumers are less likely to gain from the price ceiling, as area $C$ in Figure $3$ is relatively small in this case.
This section continues the welfare analysis of price policies by investigating the welfare analysis of a price support, also called a minimum price. Price supports are intended to help producers. The outcome of the welfare analysis demonstrates that price supports can increase producer surplus, but in many cases at a large cost to the rest of society. Figure $1$ shows the impact of a price support in the wheat market. This policy is more likely to be enacted in a high-income nation where agricultural producers are a small group that can be more easily subsidized by a large economy.
Price Support = A minimum price policy enacted to help producers.
The price support mandates that all wheat be bought or sold at a minimum price of $P^{\prime}$. If the price support were set at a level lower than the market equilibrium price $(P^{\prime} < P)$, it would have no effect (it would not be “binding”). The quantity of wheat on the market depends on how the policy works. There are three possibilities for how the price support is implemented: (1) no surplus exists, (2) the surplus exists, and (3) the government purchases the surplus. Each case will be described in detail in what follows.
Case One: Price Support with No Surplus
The first case is the simplest, but least realistic. In Case One, we assume that producers correctly forecast the quantity demanded, and produce only enough to meet demand. No surplus exists. In Figure $1$, the price support is set by the government at $P^{\prime}$. It is assumed in this case that producers forego increasing quantity supplied along the supply curve to price $P^{\prime}$, and instead produce only enough wheat to meet consumer needs, $Q^{\prime}$:
$Q^{\prime} = \min(Q^s, Q^d).\nonumber$
In Case One, no surplus exists. The producers produce and sell only enough wheat to meet the low level of quantity demanded, $Q^{\prime}$. The initial, free market surplus levels are:
\begin{align*} CS_0 &= A + B + C \\[4pt] PS_0 &= D + E \\[4pt] SW_0 &= A + B + C + D + E \end{align*}
After the price support is put in place, the new levels of surplus are:
$CS_1 = A,\nonumber$
$PS_1= B + D,\nonumber$
and
$SW_1 = A + B + D.\nonumber$
Changes in surplus from free markets to the price support with no surplus are:
$ΔCS = - B - C,\nonumber$
$ΔPS = + B - E,\nonumber$
$ΔSW = - C - E\nonumber$
and
$DWL = -ΔSW = C + E\nonumber$
Consumers are unambiguously worse off: price is higher $(P^{\prime} > P)$ and quantity is lower $(Q^{\prime} < Q)$, relative to the free market case. Producers may or may not be better off, depending on the relative size of areas B and E. If demand is inelastic and supply is elastic, it is more likely that producer surplus is higher with the price support $(B > E)$. This reflects the analysis of price support in the previous chapter. The producers with low production costs, located on the supply curve between the origin (0, 0) and $Q^{\prime}$, are made better off since they receive a higher price for the wheat that they produce $(P^{\prime} > P)$. The high-cost producers, located on the supply curve between $Q^{\prime}$ and $Q$, are made worse off, since they no longer produce wheat.
The deadweight loss (DWL) equals welfare triangle CE. The price support helps producers, if demand is sufficiently inelastic, but at the expense of the rest of society. In high-income nations such as the US, this policy transfers surplus from the average consumer to producers. Wheat producers have higher levels of income and wealth than the average consumers, so the policy represents a transfer of income to individuals who are better off than the consumers.
Case Two: Price Support When Surplus Exists
Case Two is more realistic than Case One. In Case Two, wheat producers increase quantity supplied to $Q^{\prime\prime}$, found at the intersection of $P^{\prime}$ and the supply curve. This is shown in Figure $2$. At the price support level $P^{\prime}$, consumers purchase only $Q^{\prime}$, so a surplus exists equal to $Q^{\prime\prime} – Q^{\prime}$.
Surplus = Quantity supplied minus quantity demanded = $Q^s – Q^d$.
Note that this quantity surplus shares the same word, “surplus,” with consumer surplus and producer surplus, but refers to an excess quantity instead of an excess value. The initial values of surplus at free market levels are:
$CS_0 = A + B + C,\nonumber$
$PS_0= D + E,\nonumber$
and
$SW_0 = A + B + C + D + E.\nonumber$
After the price support is put in place, the new levels of surplus reflect the large cost of producing the surplus, which has no buyers at the high price $(P’ > P)$. The cost of producing the surplus quantity $(Q’’ – Q’)$ is the area under the supply curve between $Q’$ and $Q’’$: area $GHI$. The surplus levels with the price support, assuming that the surplus exists, are:
$CS_1 = A,\nonumber$
$PS_1= B + D – G – H – I,\nonumber$
and
$SW_1 = A + B + D – G – H – I.\nonumber$
Changes in surplus from free markets to the price support with the surplus are:
$ΔCS = - B - C,\nonumber$
$ΔPS = + B - E - G - H - I,\nonumber$
$ΔSW = - C - E - G - H - I,\nonumber$
and
$DWL = - ΔSW = C + E + G + H + I.\nonumber$
The impact on consumers remains the same as in Case One, but the impact on producers is much costlier. The surplus is costly to produce, and does not have a buyer. If this policy were to be implemented, the government would have to do something with the surplus of wheat created by the policy, or market forces would put pressure on the wheat market to return to equilibrium.
If the surplus remained, market forces would put downward pressure on the price of wheat, making the price support more difficult and more expensive to maintain. This leads to Case Three, where the government purchases the surplus grain and removes it from the market, described in the next section.
Case Three: Price Support When Government Purchases Surplus
The surplus created by a price support is costly to producers, and if nothing is done to eliminate the surplus, the policy does not achieve the objective of helping producers. There are three methods for the government to eliminate the surplus:
1. Destroy the surplus,
2. Give the surplus away domestically, or
3. Give the surplus away internationally.
In all three cases, the government purchases the surplus at the price support level. This is the only way to maintain the price support. Without government purchases, the surplus would result in market forces that put downward pressure on the price of wheat. Destroying the surplus is not politically popular, since it involves eliminating food when there are hungry people in the world. The US did this in earlier decades by dumping surplus grain in the ocean, killing baby pigs, and dumping milk on the ground. This was not popular with consumers, producers, or politicians. The practice of destroying food to maintain higher food prices is not used today.
Domestic food programs make more sense as a method for eliminating the surplus. School breakfast and school lunch programs make use of the food surpluses by assisting those in need. Food aid uses surplus food from the USA to alleviate hunger in other nations. Food aid and other forms of international assistance are popular programs that help US producers and the recipient nation’s consumers. Food aid can be controversial, since it lowers food prices in receiving nations, and can cause “dependency” of the recipient on the donor nation.
Food aid results in lower prices, which cause a decrease in the quantity of food supplied in the recipient nation. Although food aid may alleviate hunger and/or starvation, it decreases the incentives for a nation to produce food. This is a true paradox, making food and agricultural policy a challenge for policy makers: there are winners and losers that result from all public policies.
Case Three is shown in Figure $3$, where the government purchases the surplus $(Q’’ – Q’)$.
As in the two previous cases, the initial values of surplus at free market levels are:
$CS_0 = A + B + C,\nonumber$
$PS_0= D + E,\nonumber$
$G_0 = 0,\nonumber$
and
$SW_0 = A + B + C + D + E.\nonumber$
Note that the government $(G)$ is included in this case of the price support. After the price support is put in place, the new levels of surplus reflect the large cost to the government of buying the surplus. The cost of purchasing the surplus $(Q’’ – Q’)$ at the price support level equals $P’$ multiplied by $(Q’’ – Q’)$, which is equal to area $CEFGHI$ (Figure $3$). The surplus levels with the price support, given that the government purchases the surplus, are:
$CS_1 = A,\nonumber$
$PS_1= B + C + D + E + F,\nonumber$
$G_1 = – C – E – F – G – H – I,\nonumber$
and
$SW_1 = A + B + D – G – H – I.\nonumber$
Changes in surplus from free markets to the price support with the surplus are:
$ΔCS = - B - C,\nonumber$
$ΔPS = + B + C + F,\nonumber$
$ΔG = - C - E - F - G - H - I,\nonumber$
$ΔSW = - C - E - G - H - I,\nonumber$
and
$DWL = - ΔSW = C + E + G + H + I.\nonumber$
The total societal surplus changes are identical in Cases Two and Three: $DWL = CEGHI$ in both cases. The distribution of benefits is quite different, however. In Case Two, producers bear the large costs of overproduction $(–GHI)$: the government has attempted to help producers, but has decreased producer surplus through the unintended consequence of wheat growers producing too much food at the high support price. If the government purchases the surplus, as in Case Three, these high costs are shifted to taxpayers, and producers are helped by the price support program.
The price support does meet the objective of helping producers in Case 3, but at a high cost to society. As in the case of the price ceiling, the price support results in losses to society $(DWL > 0)$. This is true of all government interventions into the market. The maximum level of surplus occurs with free markets and free trade. In food and agriculture, there are numerous cases of government intervention into markets, reflecting objectives other than maximizing social welfare.
In some circumstances, policy makers determine that the distributional consequences of a policy are more important than maximizing social welfare. In these cases, who gets what determines policy outcomes, rather than the overall efficiency of the market.
Quantitative Analysis of a Price Support
In this example, wheat producers are successful in their efforts to convince Congress to pass a law that authorizes a price support for wheat. Suppose that the inverse supply and demand for wheat are given by:
$P = 10 – Q^d, \label{2.1}$
and
$P = 2 + Q^s.\label{2.2}$
Where $P$ is the price of wheat in USD/MT, and $Q$ is the quantity of wheat in million metric tons (MMT). The equilibrium price $(P^e)$ and quantity of wheat $(Q^e)$ are calculated by setting the inverse supply and demand equations equal to each other to achieve:
$P^e = \text{USD } 6\text{/MT wheat,}\nonumber$
and
$Q^e = 4 \text{ MMT wheat.}\nonumber$
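The omitted algebra is analogous to the beef example above: equating the inverse demand and inverse supply prices,
$10 – Q = 2 + Q \;\;\Rightarrow\;\; 2Q = 8 \;\;\Rightarrow\;\; Q^e = 4, \quad P^e = 10 – 4 = 6.\nonumber$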
These values, together with the supply and demand functions, allow us to measure the changes in surpluses for both consumers and producers due to the price support (Figure $4$).
The calculations will proceed by directly determining the changes in surplus, rather than calculating the initial and ending values of surplus, as we did above for the price ceiling. To find the dollar values of the areas in Figure $4$, recall that you can always find a price or quantity by substitution of a $P$ or $Q$ into the inverse supply or inverse demand curve. There is always enough information provided to find prices, quantities, and the areas that represent surplus values.
Assume that the price support is set at $P’ = 8$ USD/MT wheat. The quantity is found by the minimum of quantity demanded $(Q^d)$ and quantity supplied $(Q^s)$: $\min(Q^s, Q^d)$. Therefore, the quantity is $Q’ = Q^d$, the quantity demanded. Surplus changes are:
\begin{align*} ΔCS &= – B – C = – 4 – 2 = – 6 \text{ USD million} \\[4pt] ΔPS &= + B – E = + 4 – 2 = + 2 \text{ USD million} \\[4pt] ΔG &= 0 \\[4pt] ΔSW &= – C – E = – 2 – 2 = – 4 \text{ USD million} \\[4pt] DWL &= – ΔSW = C + E = + 4 \text{ USD million} \end{align*}
The government does nothing in Case One, and wheat producers supply only enough wheat to the market to meet consumer demand. Case Two is shown in Figure $5$. In Case Two, producers produce a large amount of wheat $(Q’’)$ due to the high price $P’$. There is a large surplus created, but in this case there is no intervention by the government. Wheat producers have very large costs, since they produce 6 million metric tons $(Q’’)$, and consumers only purchase 2 million metric tons ($Q’$, Figure $5$).
\begin{align*} ΔCS &= – B – C = – 4 – 2 = – 6 \text{ USD million} \\[4pt] ΔPS &= + B – E – G – H – I = + 4 – 26 = – 22 \text{ USD million} \\[4pt] ΔG &= 0 \\[4pt] ΔSW &= – C – E – G – H – I = – 28 \text{ USD million} \\[4pt] DWL &= – ΔSW = C + E + G + H + I = + 28 \text{ USD million} \end{align*}
In Case Three, the government intervenes and buys the surplus. This allows the price to stay at the price support level, $P’$. Quantity supplied and quantity demanded are at the same levels as in Case Two, but the government’s expenses are large, and wheat producers benefit from the high price $(P’ > P^e)$ and larger quantity sold $(Q’’ > Q^e$, Figure $6$).
\begin{align*} ΔCS &= – B – C = – 4 – 2 = – 6 \text{ USD million} \\[4pt] ΔPS &= + B + C + F = 4 + 2 + 4 = + 10 \text{ USD million} \\[4pt] ΔG &= – C – E – F – G – H – I = – 32 \text{ USD million} \\[4pt] ΔSW &= – C – E – G – H – I = – 28 \text{ USD million} \\[4pt] DWL &= – ΔSW = C + E + G + H + I = + 28 \text{ USD million} \end{align*}
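As a cross-check on the three cases, the illustrative Python sketch below (not part of the original text; function and variable names are ours) recomputes each welfare change directly from the wheat supply and demand equations. The printed numbers should match the areas reported above.

```python
# Illustrative check of the wheat price support example (Cases One-Three).
# Inverse demand: P = 10 - Q, inverse supply: P = 2 + Q, support price P' = 8.

P_SUPPORT, P_EQ, Q_EQ = 8.0, 6.0, 4.0
q_d = 10 - P_SUPPORT          # quantity demanded at P' = 2 MMT
q_s = P_SUPPORT - 2           # quantity supplied at P' = 6 MMT

def supply_cost(q_lo, q_hi):
    """Area under the inverse supply curve P = 2 + Q between q_lo and q_hi."""
    return (2 * q_hi + 0.5 * q_hi**2) - (2 * q_lo + 0.5 * q_lo**2)

# Consumers: the same loss in all three cases (higher price, lower quantity)
d_cs = -((P_SUPPORT - P_EQ) * q_d + 0.5 * (P_SUPPORT - P_EQ) * (Q_EQ - q_d))   # -B - C = -6

# Case One: producers sell only q_d at P'
d_ps1 = (P_SUPPORT - P_EQ) * q_d - 0.5 * (P_EQ - (2 + q_d)) * (Q_EQ - q_d)     # +B - E = +2

# Case Two: producers also bear the cost of producing the unsold surplus (q_s - q_d)
d_ps2 = d_ps1 - supply_cost(q_d, q_s)                                          # -22

# Case Three: government buys the surplus at P'; producer surplus covers all of q_s
d_ps3 = 0.5 * (P_SUPPORT - 2) * q_s - 0.5 * (P_EQ - 2) * Q_EQ                  # +10
d_gov = -P_SUPPORT * (q_s - q_d)                                               # -32

print("Case 1: dCS, dPS, DWL =", d_cs, d_ps1, -(d_cs + d_ps1))                    # -6, +2, 4
print("Case 2: dCS, dPS, DWL =", d_cs, d_ps2, -(d_cs + d_ps2))                    # -6, -22, 28
print("Case 3: dCS, dPS, dG, DWL =", d_cs, d_ps3, d_gov, -(d_cs + d_ps3 + d_gov)) # -6, +10, -32, 28
```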
In Case Three, the price support has large costs, paid for by the government. One benefit that is not explicitly included is the food aid that could be provided to domestic and foreign consumers. These benefits would provide noneconomic gains, but no added surplus value to the program. Price supports can increase producer surplus, but at a cost. Government interventions often have unintended consequences, such as the surplus of grain in this case.
After World War II, European nations subsidized food and agriculture heavily. Since they had experienced massive food shortages during the War, Europeans did not want to rely on other nations for food. The large subsidies resulted in large surpluses of food that had to be exported at below-market prices to maintain the high food prices within Europe. In the next section, we will discuss quantitative restrictions as another means of increasing prices in food and agriculture.
Governments in high income nations often subsidize agricultural producers. A price support is one public policy intended to increase producer surplus. The unintended consequence of the price support is a large surplus that is costly to either producers, the government, or both. Another policy intended to help producers is a quantitative restriction, also called output control or supply control. The idea of supply control is to decrease output in order to increase the price. The analysis of elasticity in Chapter One demonstrated that this policy would work only if the demand for the good is inelastic.
A quantitative restriction in the wheat market is shown in Figure $1$. Wheat output is restricted to $Q’ < Q^e$, resulting in a higher price $P’ > P^e$. The welfare analysis of this policy is identical to that of a price support: if wheat output is reduced by an amount that raises the price to P’, the policy is equivalent to Case One of the Price Support analyzed in the previous section. Therefore, the welfare analysis of the quantitative restriction in Figure $1$ is:
$ΔCS = – B – C,\nonumber$
$ΔPS = + B – E,\nonumber$
$ΔSW = – C – E,\nonumber$
and
$DWL = – ΔSW = C + E.\nonumber$
The magnitudes of these welfare changes depend on the elasticities of supply and demand. Note that producers only gain if the demand curve is sufficiently inelastic. If wheat demand is sufficiently inelastic relative to the elasticity of supply, then $B > E$, and the change in producer surplus is positive. However, if the demand is sufficiently elastic relative to the elasticity of supply, then $B < E$, and producers lose. This result emphasizes one of the important agricultural policy conclusions of this course: in a global economy, the demand for agricultural goods is elastic due to global competition, and price supports and supply control will hurt producers more than they will help them. This was the result found in Section 1.4.9 above. The welfare analysis of the quantitative restriction highlights this important policy contribution.
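To make this elasticity condition concrete, the small Python sketch below applies the $ΔPS = B – E$ logic from the welfare analysis above to a pair of hypothetical demand curves (the curves, slopes, and restriction level are our own, chosen only for illustration): the same supply curve and the same output restriction can raise producer surplus when demand is inelastic and lower it when demand is elastic.

```python
# Hypothetical comparison (illustrative numbers only): one supply curve, one output
# restriction, paired with an inelastic vs. an elastic linear demand curve.
# Supply: P = 2Q. Both demand curves pass through the equilibrium (Q = 4, P = 8).

def delta_ps(b, d, q_eq, q_restricted):
    """Change in producer surplus from restricting output to q_restricted.
    b = |demand slope|, d = supply slope; returns area B minus area E."""
    gap = q_eq - q_restricted
    area_b = b * gap * q_restricted        # gain: higher price on the units still sold
    area_e = 0.5 * d * gap ** 2            # loss: surplus forgone on the units no longer sold
    return area_b - area_e

Q_EQ, Q_RESTRICTED, SUPPLY_SLOPE = 4.0, 2.0, 2.0

# Inelastic demand P = 20 - 3Q (elasticity -2/3 at the equilibrium): producers gain
print("inelastic demand:", delta_ps(3.0, SUPPLY_SLOPE, Q_EQ, Q_RESTRICTED))   # +8.0

# Elastic demand P = 10 - 0.5Q (elasticity -4 at the equilibrium): producers lose
print("elastic demand:  ", delta_ps(0.5, SUPPLY_SLOPE, Q_EQ, Q_RESTRICTED))   # -2.0
```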
The benefit of the quantitative restriction is the lack of a surplus, which is a costly weakness of price supports. One difficulty with supply control is administration and enforcement. Wheat producers will not be free to choose how much wheat they produce. Instead, the government will allow only a certain amount of wheat to be produced by each farmer, called a quota. This quantitative restriction can also be accomplished through acreage controls, where wheat producers can plant only a percentage of their total acreage to wheat. This is an imperfect policy, since producers could increase yield per acre on the acres that they are allowed to plant. If output is restricted, the policy is difficult to enforce, and if overproduction occurs, the surplus is difficult to remove.
Although government programs and policies are well intended, they often cause unintended consequences. Price supports and quantitative restrictions can help producers, but at the expense of consumers.
2.04: Import Quota
The large benefits of free trade have been emphasized in this book. Free markets and free trade are based on voluntary, mutually-beneficial transactions that make both trading partners better off. The global economic gains from free trade have been enormous, as they enhance efficiency of resource use. Comparative advantage and gains from trade allow each individual, firm, or nation to “do what they do best, and trade for the rest.”
Not everyone wins from trade, however. As was emphasized in Section 1.6.3 above, the overall net benefits are positive, but there are winners and losers from trade. Specifically, producers in importing nations and consumers in exporting nations lose due to price changes that negatively affect them. Like all public policies, free trade has winners and losers. The overall size of the economy is maximized under free markets and free trade, but there are distributional consequences that result in winners and losers.
Trade barriers are most often erected to protect domestic producers from imports. Sugar is produced in the United States, but at higher production costs than sugar production in tropical climates found in Cuba, the Dominican Republic, and Haiti. If free trade prevailed, all sugar consumed in the USA would be imported, since it is cheaper to buy sugar than to produce it domestically. Sugar producers are interested in maintaining sugar production in the USA, as this is how they make their living. Agricultural trade policy has limited sugar imports to a much smaller amount than the free trade level, through a sugar quota, demonstrated in Figure $1$. The import reduction makes sugar in the USA more scarce, and therefore more valuable.
In a closed economy, market forces ensure that supply and demand are equal $(Q^s = Q^d)$. If the USA were a closed economy, the price of sugar would be very high, well above the world market price of sugar $P_w$. Suppose that the USA is a “small nation” purchaser of sugar: this means that the USA is a “price taker,” facing a constant world price of sugar for all quantities purchased. The assumption of an importing nation being a small nation, or price taker, simplifies our analysis. In the real world, the importer may be large enough to influence the world price of sugar through large purchases of sugar on the global market.
This is represented as a horizontal line at $P_w$ in Figure $1$. At the world price of $P_w$, the USA would produce $Q^s$ domestically and consume $Q^d$. The difference between quantity demanded and quantity supplied is imports $(Q^d – Q^s)$. The equality of domestic supply and demand has been broken by the ability to import less expensive sugar from other nations. Note that in a situation with no imports, the domestic price in the USA would be at the intersection of $Q^s$ and $Q^d$.
Domestic sugar producers lobby the government for protection, and receive it in the form of a sugar quota, meaning a maximum amount of sugar imports. The right to import sugar is auctioned off to the highest bidders, who pay for the right to import sugar. Suppose that the quota is set at $Q^{d’} – Q^{s’}$. This level of quota is binding, since $(Q^{d’} – Q^{s’}) < (Q^d – Q^s)$.
This level of imports is the horizontal distance between $Q^{d’}$ and $Q^{s’}$ in Figure $1$. At this quota level, the price of sugar increases to $P’$, since the quantity of sugar in the market is reduced from free market levels. At this high price $(P’ > P_w)$, quantity supplied increases from $Q^s$ to $Q^{s’}$, and quantity demanded decreases from $Q^d$ to $Q^{d’}$. These changes are due to the quota, which decreases the amount of sugar allowed into the country. Sugar producers are pleased with this policy, since the price is higher and domestic quantity supplied $(Q^{s’} > Q^s)$ larger.
Welfare Analysis of an Import Quota
The welfare analysis of the import quota identifies the changes in economic surplus of producers, consumers, and the government. The government gains from selling the import quota permits to the sugar importers. These firms will compete with each other to win the right to import sugar. The firms will bid up the price in an auction until the price is equal to the market value of the quota. In this case, the market value is equal to $(P’ – P_w)$, since this is the gain from importing one pound of sugar into the USA. The complete welfare analysis is:
$ΔCS = – A – B – C – D,\nonumber$
$ΔPS = + A,\nonumber$
$ΔG = + C,\nonumber$
$ΔSW = – B – D,\nonumber$
and
$DWL = B + D.\nonumber$
Producers gain, but with large costs to consumers. The government gains from the sale of the quota permits (or licenses) to sugar importers. Sugar consumers are made much worse off from this policy. The area $B$ is called the “production loss” of the policy. This area is equal to the losses of using scarce resources to produce sugar in the USA instead of buying it at the world price. Area $B$ represents the production costs, since it is the area under the supply curve, and above the world price $(P_w)$. These resources could be more efficiently used producing something other than sugar. Area $D$ is called the “consumption loss” of the import quota. This is the area under the demand curve and above the world price $(P_w)$, which represents the consumer surplus lost on sugar purchases that no longer occur at the higher price. Areas $B$ and $D$ represent the loss in social welfare, or the deadweight loss, of the government intervention. Free markets and free trade would provide efficiency of resource use and lower costs to consumers.
In the USA, sugar prices are typically one to two times higher than the world price, resulting in billions of dollars in losses to sugar consumers. This policy also has an interesting unintended consequence. High fructose corn syrup (HFCS) is a perfect substitute in consumption for sucrose (sugar made from sugar cane or sugar beets). Corn producers lobby the government to maintain the sugar import quota, to keep the price of sugar high. When the sugar price is high, buyers of sugar (Coca Cola, Pepsi, Mars, etc.) switch out of sucrose and into fructose. Corn farmers are among the largest supporters of the sugar import quota! The positive impact on corn producers is a truly unique and unanticipated effect of protecting domestic sweetener producers from foreign competition.
Quantitative Welfare Analysis of an Import Quota
Suppose that the inverse demand and supply of sugar are given by:
$P = 100 – Q^d,\nonumber$
and
$P = 10 + Q^s,\nonumber$
Where $P$ is the price of sugar in USD/lb, and $Q$ is the quantity of sugar in million pounds. Suppose also that the world price of sugar is given by $P_w = 20$ USD/lb, as shown in Figure $2$. In a closed economy, there would be no imports or exports, so $Q^s = Q^d$ at this market equilibrium, where supply is equal to demand:
$Q^e = 45 \text{ million pounds of sugar}\nonumber$
and
$P^e = 55 \text{ USD/lb}.\nonumber$
This is a high price of sugar relative to the world price, occurring at the intersection of $Q^s$ and $Q^d$ in Figure $2$. If imports are allowed, the USA can break the equality of production and consumption $(Q^s \neq Q^d)$ through imports of less expensive sugar $(Q^s < Q^d)$. If we assume that the USA is a “small nation” in sugar trade, then the USA is a price taker, and can import as much or as little sugar as it desires at the world price. Note that a “large nation” means that a country is a price maker, and has enough market power to influence the price of the imported good.
The free market equilibrium in an open economy can be calculated by substitution of the world price into the inverse supply and demand functions. At the world price, $Q^s = 10$ m lbs sugar and $Q^d = 80$ m lbs sugar. Imports are equal to $Q^d – Q^s = 70$ m lbs sugar. Social welfare is maximized at this free trade equilibrium, since sugar is produced by the lowest cost producers. Ten million pounds of sugar are produced by domestic producers along the supply curve below the world price, and 70 m lbs are produced by foreign sugar producers at the world price of 20 USD/lb.
Now assume that a sugar import quota is implemented, equal to 50 m lbs of sugar. To be binding, the import quota must be less than the free-market level of imports. Since this is less than the free trade import level, it will decrease the amount of sugar available in the USA, and cause price to increase. The sugar price that results from the quota $(P’)$ can be calculated using the inverse supply and demand curves and the import quota:
$Q^{d’} – Q^{s’} = 50,\nonumber$
$P’ = 100 – Q^{d’},\nonumber$
and
$P’ = 10 + Q^{s’}.\nonumber$
Rearranging the first equation: $Q^{d’} = 50 + Q^{s’}$. Substituting this into the inverse demand equation yields:
$P’ = 100 – (50 + Q^{s’}) = 50 – Q^{s’}.\nonumber$
This equation can be set equal to the inverse supply equation:
$P’ = 50 – Q^{s’} = 10 + Q^{s’}.\nonumber$
Solving for $Q^{s’}$:
$2Q^{s’} = 50 – 10,\nonumber$
or
$Q^{s’} = 40/2 = 20 \text{ m lbs sugar}.\nonumber$
Substituting this into the import equation and the inverse supply function yields:
$P’ = 30 \text{ USD/ lb},\nonumber$
and
$Q^{d’} = 70 \text{ m lbs sugar}.\nonumber$
These values are all shown in Figure $2$. Notice that quantity supplied has increased $(Q^{s’} > Q^s)$ and quantity demanded has decreased $(Q^{d’} < Q^d)$ due to the import quota and the resulting higher price $(P’ > P_w)$. The welfare analysis can now be conducted by calculation of the areas in the graph.
\begin{align*} ΔCS &= – A – B – C – D = – 750 \text{ USD million} \\[4pt] ΔPS &= + A = + 150 \text{ USD million} \\[4pt] ΔG &= + C = + 500 \text{ USD million} \\[4pt] ΔSW &= – B – D = – 100 \text{ USD million} \\[4pt] DWL &= B + D = + 100 \text{ USD million} \end{align*}
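These quota numbers can be verified with a short, illustrative Python sketch (not part of the original text; variable names are ours) that works directly from the inverse supply and demand equations and the world price.

```python
# Illustrative check of the sugar import quota example.
# Inverse demand: P = 100 - Q, inverse supply: P = 10 + Q, world price Pw = 20.
# Closed-economy equilibrium (no trade): 100 - Q = 10 + Q  =>  Q = 45, P = 55.

P_WORLD, QUOTA = 20.0, 50.0

def consumer_surplus(price):
    q = 100 - price                       # quantity demanded at this price
    return 0.5 * (100 - price) * q

def producer_surplus(price):
    q = price - 10                        # quantity supplied at this price
    return 0.5 * (price - 10) * q

# Free trade: domestic supply 10, demand 80, imports 70 at the world price.
# With a 50 m lb quota: Qd' - Qs' = 50 with P = 100 - Qd' = 10 + Qs'  =>  P' = 30.
p_quota = (100 + 10 - QUOTA) / 2          # 30 USD/lb

d_cs = consumer_surplus(p_quota) - consumer_surplus(P_WORLD)        # -750
d_ps = producer_surplus(p_quota) - producer_surplus(P_WORLD)        # +150
quota_rent = (p_quota - P_WORLD) * QUOTA                            # +500 to the government
d_sw = d_cs + d_ps + quota_rent                                     # -100

print("dCS, dPS, dG, dSW, DWL:", d_cs, d_ps, quota_rent, d_sw, -d_sw)
```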
The government gains area $C$ by auctioning off the permits that allow firms to import sugar. The quantitative results confirm that import restrictions help domestic producers, but at high costs to domestic consumers. The high price for sugar also provides support to corn producers, since High Fructose Corn Syrup (HFCS) is a close substitute for sucrose. Taxes are analyzed in the next section.
Taxes are often imposed to provide government revenue. The government also uses taxes to decrease the consumption of a good such as alcohol or tobacco. These taxes are called “sin taxes,” on goods that are not favored by society. These goods often have inelastic demands, which allows the government to apply a tax and earn revenues. Taxes can also be used to meet environmental objectives, or other societal goals: goods such as gasoline and coal emissions are taxed.
There are two types of tax: (1) specific tax, and (2) ad valorem tax.
Specific Tax = A tax imposed per-unit of the good to be taxed.
Ad Valorem Tax = A tax imposed as a percentage of the good to be taxed.
Both types of tax have the same qualitative effects, so we will study a specific, or per-unit tax ($t =$ USD/unit). Taxes result in price changes for both buyers and sellers of the taxed good. The welfare analysis of a tax provides important results on who pays for the tax: the buyers or sellers? The term, “tax incidence,” refers to how a tax is divided between buyers and sellers. Let $P_b$ be the buyer’s price, and $P_s$ be the seller’s price. In markets without a tax, buyers and sellers both pay the same equilibrium market price $(P_b = P_s = P^e)$. In a market for a taxed good, however, this equality is broken. With a tax, the buyer’s price is higher than the seller’s price by the amount of the tax:
$P_b = P_s + t\nonumber$
Economists say that the tax drives a wedge between the buyer’s price and the seller’s price, as shown in Figure $1$. The specific tax $(t)$ is equal to the vertical distance between $P_b$ and $P_s$. The tax incidence, or who pays for the tax, depends of the elasticities of supply and demand.
Welfare Analysis of a Tax
The welfare analysis of the tax compares the initial market equilibrium with the post-tax equilibrium.
$ΔCS = – A – B,\nonumber$
$ΔPS = – C – D,\nonumber$
$ΔG = + A + C,\nonumber$
$ΔSW = – B – D,\nonumber$
and
$DWL = B + D.\nonumber$
In Figure $1$, the incidence of the tax is equal between buyers and sellers of gasoline $(P^b – P^e = P^e – P^s)$. This is because the supply and demand curves are drawn symmetrically. In the real world, the tax incidence will depend on the supply and demand elasticities. The pass through fraction is the percentage of the tax “passed through” from producers to consumers.
$\textbf{Pass Through Fraction } = \frac{E^s}{E^s – E^d}\nonumber$
We will calculate this for the gasoline market in the next section.
Quantitative Welfare Analysis of a Tax
Suppose that the inverse demand and supply of gasoline are given by:
$P_b = 8 – Q^d,\nonumber$
and
$P_s = 2 + Q^s,\nonumber$
Where $P$ is the price of gasoline in USD/gal, and $Q$ is the quantity of gasoline in million gallons. Market equilibrium is found where supply equals demand: $Q^e = 3$ million gallons of gasoline and $P^e = P_b = P_s = 5$ USD/gal of gasoline (Figure $2$).
With the tax, the price relationship is given by:
$P_b = P_s + t.\nonumber$
Assume that the government sets the tax equal to 2 USD/gal $(t = 2)$. Substitution of the inverse supply and demand equations into the price equation yields:
$8 – Q^d = 2 + Q^s + 2\nonumber$
Since $Q^d = Q^s = Q’$ after the tax:
\begin{align*} 4 &= 2Q’ \\[4pt] Q’ &= 2 \text{ million gallons of gasoline}.\end{align*}
The quantity can be substituted into the inverse supply and demand equations to find the buyer’s and seller’s prices.
$P_b = 6 \text{ USD/gal,}\nonumber$
and
$P_s = 4 \text{ USD/gal.}\nonumber$
These prices are shown in Figure $2$. The welfare analysis is:
\begin{align*}ΔCS &= – A – B = – 2.5 \text{ USD million} \\[4pt] ΔPS &= – C – D = – 2.5 \text{ USD million} \\[4pt] ΔG &= + A + C = + 4 \text{ USD million} \\[4pt] ΔSW &= – B – D = – 1 \text{ USD million} \\[4pt] DWL &= B + D = + 1 \text{ USD million}\end{align*}
Note that the change in social welfare equals the sum of the welfare changes due to the tax: $ΔSW = ΔCS + ΔPS + ΔG$.
The pass through fraction can now be calculated to find the tax incidence.
$PTF = \frac{E_s}{E_s – E_d}\nonumber$
The elasticity of demand at the market equilibrium is equal to $-\dfrac{5}{3}$, and the supply elasticity is $+\dfrac{5}{3}$. See Section 1.4.10 above for a review of how to calculate elasticities.
This yields:
$PTF = \frac{E_s}{E_s – E_d} = \frac{5/3}{5/3 – (–5/3)} = \frac{5/3}{10/3} = 0.5.\nonumber$
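The illustrative Python sketch below (our own variable names; not part of the original text) recomputes both the welfare changes and the pass-through fraction for this gasoline tax example.

```python
# Illustrative check of the gasoline tax example.
# Inverse demand: Pb = 8 - Q, inverse supply: Ps = 2 + Q, specific tax t = 2.

TAX = 2.0
Q_EQ, P_EQ = 3.0, 5.0                                   # no-tax equilibrium

# With the tax: 8 - Q = 2 + Q + t  =>  Q' = (6 - t)/2
q_tax = (6 - TAX) / 2                                   # 2 million gallons
p_buyer = 8 - q_tax                                     # 6 USD/gal
p_seller = 2 + q_tax                                    # 4 USD/gal

d_cs = 0.5 * (8 - p_buyer) * q_tax - 0.5 * (8 - P_EQ) * Q_EQ        # -2.5
d_ps = 0.5 * (p_seller - 2) * q_tax - 0.5 * (P_EQ - 2) * Q_EQ       # -2.5
d_gov = TAX * q_tax                                                  # +4.0
dwl = -(d_cs + d_ps + d_gov)                                         # +1.0

# Pass-through fraction from the elasticities at the no-tax equilibrium
e_supply = 1.0 * (P_EQ / Q_EQ)                          # +5/3
e_demand = -1.0 * (P_EQ / Q_EQ)                         # -5/3
ptf = e_supply / (e_supply - e_demand)                  # 0.5

print("dCS, dPS, dG, DWL:", d_cs, d_ps, d_gov, dwl)
print("pass-through fraction:", ptf)                    # consumers pay half of the tax
```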
This result shows that consumers pay for exactly one half of the tax, and producers pay for one half of the tax. The tax achieved the objective of increasing government revenues, but it lowered the quantity of the good produced and consumed, reducing social welfare. In free markets, consumers are able to pay the lower market price and consume more of the good. Producers receive a higher price, and produce and sell a larger quantity of the good than in the taxed case. Therefore, taxes imposed by the government decrease social welfare, but allow the government to provide goods and services such as national defense at the federal level; highways, schools, and jails at the State level; and roads and parks at the local level. The next section will discuss subsidies.
2.06: Subsidies
The policy objective of a subsidy is to help producers, or encourage the use of a good. The seller’s price is higher than the buyer’s price by the amount of the subsidy $(s)$.
$P_s = P_b + s\nonumber$
The subsidy is the vertical distance between the seller’s price and the buyer’s price, as shown in Figure $1$.
Welfare Analysis of a Subsidy
The welfare analysis of the subsidy compares the initial market equilibrium with the post-subsidy equilibrium.
$ΔCS = + C + D + E,\nonumber$
$ΔPS = + A + B,\nonumber$
$ΔG = – A – B – C – D – E – F,\nonumber$
$ΔSW = – F,\nonumber$
and
$DWL = F.\nonumber$
Both consumers and producers gain from the subsidy, but at a large cost to tax payers (the government).
Quantitative Welfare Analysis of a Subsidy
Suppose that the inverse demand and supply of corn are given by:
$P_b = 12 – 2Q^d,\nonumber$
and
$P_s = 2 + 2Q^s,\nonumber$
Where $P$ is the price of corn in USD/bu, and $Q$ is the quantity of corn in billion bushels. Market equilibrium is found where supply equals demand: $Q^e = 2.5$ billion bu of corn and $P^e = P_b = P_s = 7$ USD/bu of corn (Figure $2$).
With the subsidy, the price relationship is given by:
$P_s = P_b + s.\nonumber$
Assume that the government sets the corn subsidy equal to 2 USD/bu. Substitution of the inverse supply and demand equations into the price equation yields:
$2 + 2Q^s = 12 – 2Q^d + 2\nonumber$
Since $Q^d = Q^s = Q’$ after the subsidy:
\begin{align*} 4Q’ &= 12 \\[4pt] Q’ &= 3 \text{ billion bushels of corn.}\end{align*}
The quantity can be substituted into the inverse supply and demand equations to find the buyer’s and seller’s prices.
$P_b = 6 \text{ USD/bu},\nonumber$
and
$P_s = 8 \text{ USD/bu}.\nonumber$
These prices are shown in Figure $1$. The welfare analysis is:
\begin{align*}ΔCS &= + C + D + E = + 2.75 \text{ USD billion} \\[4pt] ΔPS &= + A + B = + 2.75 \text{ USD billion} \\[4pt] ΔG &= – A – B – C – D – E – F = – 6 \text{ USD billion} \\[4pt] ΔSW &= – F = – 0.5 \text{ USD billion} \\[4pt] DWL &= F = + 0.5 \text{ USD billion}\end{align*}
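The corn subsidy numbers can likewise be verified with a brief, illustrative Python sketch (not part of the original text; variable names are ours).

```python
# Illustrative check of the corn subsidy example.
# Inverse demand: Pb = 12 - 2Q, inverse supply: Ps = 2 + 2Q, subsidy s = 2.

SUBSIDY = 2.0
Q_EQ, P_EQ = 2.5, 7.0                                    # no-subsidy equilibrium

# With the subsidy: 2 + 2Q = 12 - 2Q + s  =>  Q' = (10 + s)/4
q_sub = (10 + SUBSIDY) / 4                               # 3 billion bushels
p_buyer = 12 - 2 * q_sub                                 # 6 USD/bu
p_seller = 2 + 2 * q_sub                                 # 8 USD/bu

d_cs = 0.5 * (12 - p_buyer) * q_sub - 0.5 * (12 - P_EQ) * Q_EQ       # +2.75
d_ps = 0.5 * (p_seller - 2) * q_sub - 0.5 * (P_EQ - 2) * Q_EQ        # +2.75
d_gov = -SUBSIDY * q_sub                                             # -6.0
dwl = -(d_cs + d_ps + d_gov)                                         # +0.5

print("dCS, dPS, dG, DWL:", d_cs, d_ps, d_gov, dwl)
```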
Note again that the change in social welfare equals the sum of the welfare changes due to the subsidy: $ΔSW = ΔCS + ΔPS + ΔG$. Although the deadweight loss is not large, the government cost is large, making subsidies effective in helping producers and encouraging consumption of the good, but expensive for society.
Labor-intensive agriculture such as fruit and vegetable production in high income nations employs immigrant workers and pays low wages. These workers offer an enormous contribution to the agricultural economy through hard work in the production of food and fiber. However, it is possible that immigration can have a negative impact on rural towns, since the provision of public services such as medical facilities, schools, and housing for low-wage workers is often costly.
Most farm workers in the USA are immigrants, in spite of the massive labor-saving technological change over many decades. Technological change has occurred in many crops through mechanization and the use of agricultural chemicals: over time, machines and chemicals have replaced farm workers in the USA and other high income nations. The number of persons employed on US farms has been stable for several decades, due to two offsetting forces: (1) a large increase in the production of hand-harvested fruits and vegetables, and (2) rapid labor-saving technological change. Most of these farm workers live in “farm work communities,” defined as cities with a population under 20,000 that are typically poor and growing rapidly.
In theory, the economic impact of immigration on rural communities could be either positive or negative. New immigrants can stimulate job and wage growth through induced economic activity from the increased demand for housing, food, clothing, and services. However, it is possible that immigration and growth in the local labor supply could result in lower wages and displaced employment opportunities for existing workers. The actual economic outcome is highly complex, dynamic, and difficult to measure. Immigration has resulted in the description of the USA as a “melting pot” of people and groups of all nationalities, ethnicities, races, and religions. Immigration is often controversial, as existing groups may clash with more recent immigrants.
Welfare Analysis of Immigration: Short Run
Welfare analysis can be usefully utilized to better understand the economic impact of immigration. Adjustments take time, so initial impacts can differ markedly from long run impacts. The economic impacts depend crucially on both the number of migrants and the skill level of new migrant workers. Economic theory suggests that the destination, or receiving nation has large economic benefits from immigration, but there are winners and losers. Who wins and who loses depends on the wage structure, and availability and mobility of capital, as explained below.
It is important to emphasize that if capital is mobile, and can adjust quickly, and technology can adapt to changing labor composition, then the economy with migrants is a larger version of the original economy before immigration. In this case, the native-born workers are neither winners nor losers. Economic adjustments to new immigrants require time, and it is during the transition to the new workers that winners and losers occur.
When immigration occurs, goods that are produced using migrant labor have an increase in production. In Figure $1$, the wage rate is the price of labor, the initial demand for labor is given by $Q^d_0$, and the labor supplied by native workers (original workers in the receiving nation) is $Q^s_0$, which is assumed to be perfectly inelastic at $L_0$ million workers. Real-world labor supply is not fixed as it is shown in Figure $1$, since higher wages will result in more work supplied to the market. However, the inelastic model illustrated in Figure $1$ is a good approximation of labor markets: the qualitative results of the model accurately depict the real world.
The initial, pre-immigration labor market equilibrium occurs at $E_0$, characterized by wage rate $W_0$ and labor supply $L_0$. The total social welfare in this market is the sum of producer surplus and consumer surplus $(SW = PS + CS)$. This is the area under the demand curve at $L_0 (=ABD)$. Recall that the workers are the suppliers of labor, thus producer surplus is the economic value of worker well-being. The consumer in this case is the firms, since the employers purchase (hire) labor. The consumer surplus in the labor market shown here is the economic value to the business firms, or employers.
Before immigration occurs, producer surplus is the entire rectangle $(BD)$. The supply curve is vertical in this case, causing the area under the supply curve to be nonexistent. The workers receive this amount of income, since area $BD$ is equal to the wage rate times the quantity of labor employed in the economy $(W_0\cdot L_0)$. Firms, or employers of workers, receive the consumer surplus, which before immigration occurs is equal to area $A$ in Figure $1$.
After immigration occurs, the labor force is increased by the number of migrants $(M)$: $L_1 = L_0 + M$. In the short run, no adjustments in the labor and capital market take place, and the result of an increase in the quantity of labor is a decrease in the price of labor: the wage rate falls from $W_0$ to $W_1$. Native workers lose area B in producer surplus, with a new level of economic surplus equal to $D (W_1\cdot L_0)$. Migrants receive wage rate $W_1$, and migrant earnings are equal to area $E (W_1\cdot M)$. Presumably, the wage rate earned in the receiving nation is larger than the wage rate available in the immigrants’ nation of origin. The wage rate is also likely to be large enough to induce workers to change locations, which can be a costly transition. Employers in the receiving nation are the winners, as consumer surplus (economic value of the firms who hire either native workers or migrants) increases from area $A$ before immigration to area $ABC$ after immigration.
The welfare analysis of immigration can be summarized in the usual way:
\begin{align*}ΔCS &= \text{ employer gains } = + B + C \\[4pt] ΔPS_L &= \text{ native worker gains } = − B \\[4pt] ΔPS_M &= \text{ migrant worker gains } = + E \\[4pt] ΔSW &= \text{ net gain to entire economy } = + C + E \end{align*}
Notice that there is a net gain in total economic activity due to immigration: the magnitude of economic activity in the receiving nation is larger after immigration occurs. This is due to the influx of new resources, bringing economic value and spending. This differs from government interventions in the free market economy, because government programs and policies all result in a loss of voluntary exchange between buyers and sellers, and deadweight loss. In the case of labor immigration into a nation, more voluntary exchange takes place, with large overall economic benefits to the receiving nation. The controversy surrounding immigration is about the distributional effects: in the short run, native workers lose, due to decreasing wages. As the economy adjusts to the new workers, the benefits become larger and the negative impacts are diminished, as will be explained in the next section.
Welfare Analysis of Immigration: Long Run
Workers and firms can make many adjustments once the new migrants join the economy. In an economy with many types of skilled and unskilled workers, native workers can take jobs in areas of their comparative advantage, and invest in human capital (education and training) to allow them to increase wages by moving out of low paying jobs and into high paying jobs. Given sufficient time, migrants can do this too, and will move into higher paying jobs as new waves of immigration occur.
Migrants who bring capital or work skills with them can enter growing sectors, such as technology, medicine, and services. The demand for labor in these areas is large and growing, so wages continue to increase together with new workers entering the economy.
In the long run, this type of adjustment in capital and labor markets, together with technological change, will result in economic growth, and broad-based wage and income growth in the receiving economy. The USA has had high levels of immigration simultaneous with high and growing levels of income for most of its history: immigration has catalyzed economic growth in the high income nations of the world. This desirable outcome does require change, adjustment, and in many cases labor migration, both occupational and locational. Growth mandates change, and change is often difficult. This is one of the major features of free markets and free trade. When economic agents are free to make decisions in their own interest, great things can happen. But improvement requires change. When workers and their families are free to locate where they desire to live and work, economic growth is likely to occur, but the transition can be challenging, and when cultures and values differ, controversy can occur.
The long run effects of immigration can be seen in Figure $2$. New workers joining the economy cause an increase in the aggregate demand for goods in the economy, and this economic growth entices firms to produce more goods. More production requires more workers, and the demand for labor increases from $Q^d_0$ to $Q^d_1$. The long run equilibrium is found at $E_1$. The increase in labor demand offsets the downward pressure on wage rates, resulting in wages returning to their original level, $W_0$. The economy grows, so consumer surplus (economic value of employers, or business firms) increases to include the area under the demand curve and above the new price line: $AFG$. Native worker earnings are restored to their initial level $(BD)$, and migrant worker surplus is increased to $CHE$.
The overall economy gains significantly once these adjustments have occurred. Adding more resources to an economy in the long run, given sufficient time for the transition to occur, will yield large economic growth, as the economy is growing by the size of the new migrant labor force.
\begin{align*}ΔCS &= \text{ employer gains } = + F + G \\[4pt] ΔPS_L &= \text{ native worker gains } = 0 \\[4pt] ΔPS_M &= \text{ migrant worker gains } = + C + H + E \\[4pt] ΔSW &= \text{ net gain to entire economy } = + C + E + F + G + H \end{align*}
The potential gains from immigration can be thwarted during periods of economic recession, when the overall demand for goods increases at a decreasing rate. This economic stagnation can lead to a decrease in the demand for labor. When native workers face poor economic conditions, they are less likely to favor new migrants.
In agriculture, recent immigrants perform many tasks that native workers would not do at the low wages offered to migrants. These tasks can include meatpacking, chemical application, and harvesting fruit and vegetables. The USA currently allows millions of workers to enter the country and work in farm jobs. If this supply of workers were to be eliminated, the cost of labor would rise enormously and the cost of food would increase. To examine the gains and benefits of migration of agricultural workers, the next section broadens the welfare analysis to include a model of two nations: the receiving nation and the nation of migrant origin.
Welfare Analysis of Labor Immigration into the USA from Mexico
To demonstrate the effects of the movement of labor from one nation to another, the three panel diagram of Section 1.6.3 can be usefully employed. The welfare analysis of agricultural labor migration from Mexico to the USA provides a summary of who wins, who loses, and by how much. The analysis demonstrates that both Mexico and the USA have net gains from labor migration. However, as in all economic changes, there are winners and losers. Figure $3$ shows labor movements for the receiving nation (USA) in the left panel, and the source nation (Mexico) in the right panel. The trade sector is represented in the middle panel.
If the two nations have isolated labor markets, wages in the USA $(W_{USA})$ are higher than wages in Mexico $(W_{MEX})$. This wage differential $(W_{USA} > W_{MEX})$ provides the motivation for workers to leave Mexican jobs and migrate to the United States. When the movement of labor is possible, the number of migrants is shown in the middle panel, equal to $Q_T$ million hours of work. If $Q_T$ hours of work are transferred from Mexico to the USA, the wage rates are equalized at $W^*$ in both nations. Note that this model ignores exchange rates and transportation costs of migration.
The graphical model demonstrated in Figure $3$ also assumes freedom of movement between the two nations. In agriculture, there is considerable freedom for farm workers to enter the USA from Mexico to supply labor to farms. The H-2A Temporary Agricultural Program allows foreign-born workers to legally enter the United States to perform seasonal farm labor on a temporary basis for up to 10 months. The seasonal needs of crop farmers (fruit, vegetables, and grains) can be met with this program, but most livestock producers (ranches, dairies, and hog and poultry operations) are not able to use the program. An exception is made for livestock producers on the range (sheep and goat producers), who can use H-2A workers year-round.
The welfare analysis in Figure $3$ shows the same results for labor as were obtained for commodities such as wheat in Section 1.6.3. Winners include consumers (employers) in the importing (receiving) nation, and producers (workers) in the exporting (source) nation. In this case, US farmers who employ migrant workers are made better off by area $(A + B)$, but native workers (USA workers employed prior to immigration) are made worse off by area $A$. The gains and losses are due to the decrease in wages from $W_{USA}$ to $W^*$. The movement of workers out of Mexico results in gains for Mexican workers (area $C + D$), but losses for employers of workers in Mexico (area $C$). This is due to the wage increase in Mexico from $W_{MEX}$ to $W^*$.
Both origin and receiving nations have net benefits: area $B$ in the USA and area $D$ in Mexico. This result explains why immigration has been a large, significant feature in US history (the United States is often referred to as a “Nation of Immigrants”). The gains and losses in each nation also demonstrate why immigration continues to be a controversial issue: there are large economic gains and losses within each nation.
In the long run, the gains to immigration are large for the recipient nation. This is for two reasons: (1) migrant workers are most often complementary to native workers: low-skill immigrants combine with high-skill native workers to enhance productivity for all workers in the receiving nation, and (2) increased population generates increased demand for all goods and services in the USA, resulting in enhanced economic conditions for all workers in the receiving nation.
The welfare analysis of international trade can be conducted using the three-panel diagram (Figure \(1\)). The welfare impacts on wheat consumers and producers can be calculated by measuring the changes in consumer surplus and producer surplus before trade (time \(t=0\)) and after trade (time \(t=1\)). The welfare changes for the exporting nation are shown in the left panel of Figure \(1\). Prior to trade, the closed economy price is \(P_e\) at the closed economy market equilibrium, where \(Q^s = Q^d\). After trade, export opportunities allow the price to increase to the world price \(P_w\). Quantity supplied increases and quantity demanded decreases. Consumers lose, since the price is now higher \((P_w > P_e)\) and the quantity consumed lower. The loss in consumer surplus is equal to area \(A\), the area between the two price lines and below the demand curve: \(ΔCS = -A\). Producers receive a higher price \((P_w > P_e)\) and a larger quantity, and an increase in producer surplus equal to the area between the two price lines and above the supply curve: \(ΔPS = + A + B\) (Figure \(1\)).
The net gain to all groups in the exporting nation, or change in social welfare \((SW)\), is defined to be \(ΔSW = ΔCS + ΔPS\). Thus, \(ΔSW = +B\), since area \(A\) represents a transfer of surplus (dollars) from consumers to producers in the exporting nation (USA). Interestingly and importantly, the exporting nation is better off with international trade \(ΔSW > 0\). However, not all individuals and groups are made better off with trade. Wheat producers in the exporting nation gain, but wheat consumers in the exporting nation lose. Trade has a positive overall net benefit.
In the importing nation, consumers win and producers lose from trade (right panel, Figure \(1\)). The pre-trade price in the importing nation is \(P_i\), and trade provides the opportunity for imports \((Q^d > Q^s)\). With imported wheat, the market price falls from \(P_i\) to the world price \(P_w\). Quantity demanded increases and quantity supplied decreases. Consumers gain at the lower price \((P_w < P_i)\): \(ΔCS = + C + D\). Producers lose at the lower price \((P_w < P_i)\): \(ΔPS = – C\). The net gain to the importing nation, or change in social welfare \((SW)\), is \(ΔSW = + D\). The area \(C\) represents a transfer of surplus from producers to consumers in the importing nation. As in the exporting nation, the net gains are positive, but not everyone is helped by trade. Producers in importing nations will oppose trade. This is a general result from our model of trade: producers in importing nations will oppose trade, since they face competition from imported goods.
The results of the three-panel model clarify and explain the politics behind trade agreements. Politicians representing the entire nation will support the overall benefits from trade, brought about by efficiency gains from globalization. However, representatives of constituent groups who are hurt by trade will oppose new free trade agreements. A large number of trade barriers are erected to protect domestic producers from import competition, including tariffs, quotas, and import bans.
The world is better off due to globalization and trade: the global economy gains areas \(B + D\) from producing wheat in nations that have superior grain production characteristics. These efficiency gains provide real economic benefits to both nations. However, globalization requires change, and many workers and resources will have to change jobs (and many times locations) to achieve the potential gains. Labor with specific skills and other inflexibilities will face high adjustment costs from globalization. However, trading nations have experienced huge increases in income over time from moving resources out of less efficient and into more efficient employment.
The three-panel diagram highlights who gains and who loses from trade. Producers in exporting nations and consumers in importing nations gain, in many cases enormously. Producers in importing nations and consumers in exporting nations lose, and in many cases lose a great deal. Industrial and textile workers in the USA and the EU were once employed in some of the largest sectors of those economies. Today, many of these jobs have moved to nations with low labor costs: China, Indonesia, Malaysia, and Viet Nam are examples.
Should a nation support free trade? The economic analysis provides an answer to this question: unambiguously yes. The overall benefits to society outweigh costs, with the net benefits equal to areas \(B\) and \(D\) in Figure \(1\). Economists have devised the Compensation Principle for situations when there are both gains and losses to a public policy.
Compensation Principle = A decision rule where if the prospective winners gain enough to compensate the prospective losers, then the policy should be undertaken, from an economic point of view.
The actual compensation can be difficult to achieve in the real world, but the net benefits of the program suggest that if society gains from the policy, it should be undertaken.
Agricultural producers in most high income nations are subsidized by the government. These subsidies can be viewed as compensation for the impacts of the adoption of labor-saving technological change over time. Technological change has made agriculture in the USA and the EU enormously productive. However, it has led to massive migration of labor out of agriculture. Subsidies can be viewed as the provision of compensation for the massive substitution of machines and chemicals for labor in agricultural production.
Thumbnail: A monopsonist employer maximizes profits by choosing the employment level L that equates the marginal revenue product (MRP) to the marginal cost (MC), at point A. The wage is then determined on the labour supply curve, at point M, and is equal to w. By contrast, a competitive labour market would reach equilibrium at point C, where labour supply S equals demand. This would lead to employment L' and wage w'. (CC BY 2.5; SilverStar via Wikipedia).
03: Monopoly and Market Power
This chapter will explore firms that have market power, or the ability to set the price of the good that they produce.
Market Power = Ability of a firm to set the price of a good. Also called monopoly power.
A monopoly is defined as a single firm in an industry with no close substitutes. An industry is defined as a group of firms that produce the same good.
Monopoly = A single firm in an industry with no close substitutes.
The phrase, “no close substitutes” is important, since there are many firms that are the sole producer of a good. Consider McDonald’s Big Mac hamburgers. McDonald’s is the only provider of Big Macs, yet it is not a monopoly because there are many close substitutes available: Burger King Whoppers, for example.
Market power is also called monopoly power. A competitive firm is a “price taker.” Thus, a competitive firm has no ability to change the price of a good. Each competitive firm is small relative to the market, so has no influence on price. On the other hand, firms with market power are also called “price makers.”
Price Taker = A competitive firm with no ability to set the price of a good.
Price Maker = A noncompetitive firm with market power, defined as the ability to set the price of a good.
A monopolist is considered to be a price maker, and can set the price of the product that it sells. However, the monopolist is constrained by consumer willingness and ability to purchase the good, also called demand. For example, suppose that an agricultural chemical firm has a patent for an agricultural chemical used to kill weeds, a herbicide. The patent is a legal restriction that permits the patent holder to be the only seller of the herbicide, as it was invented by the company through their research program. In Figure \(1\), an agricultural chemical firm faces an inverse demand curve equal to: \(P = 100 – Q^d\), where \(P\) is the price of the agricultural chemical in dollars per ounce (USD/oz), and \(Q^d\) is the quantity demanded of the chemical in million ounces (m oz).
The monopolist can set a price, but the resulting quantity is determined by the consumers’ willingness to pay, or the demand curve. For example, if the price is set at \(P_0\), consumers will purchase \(Q_0\). Alternatively, the monopolist could set quantity at \(Q_0\), and consumers would be willing to pay \(P_0\) for \(Q_0\) units of the chemical. Thus, a monopolist has the ability to set any price that it would like to, but with an important limitation: the monopolist is constrained by consumer willingness to pay for the product.
The profit-maximizing solution for the monopolist is found by locating the biggest difference between total revenues $(TR)$ and total costs $(TC)$, as in Equation \ref{3.1}.
$\max π = TR – TC \label{3.1}$
Monopoly Revenues
Revenues are the money that a firm receives from the sale of a product.
Total Revenue [TR] = The amount of money received when the producer sells the product. $TR = PQ$
Average Revenue [AR] = The average dollar amount received per unit of output sold. $AR = \dfrac{TR}{Q}$
Marginal Revenue [MR] = the addition to total revenue from selling one more unit of output.
\begin{align*} MR &= \frac{ΔTR}{ΔQ} = \frac{∂TR}{∂Q}.\[4pt] TR &= PQ \text{ (USD)}\[4pt] AR &= \frac{TR}{Q} = P\text{ (USD/unit)}\[4pt] MR &= \frac{ΔTR}{ΔQ} = \frac{∂TR}{∂Q} \text{ (USD/unit)} \end{align*}
For the agricultural chemical company:
\begin{align*} TR &= PQ = (100 – Q)Q = 100Q – Q^2 \[4pt] AR &= P = 100 – Q\[4pt] MR &= \frac{∂TR}{∂Q} = 100 – 2Q \end{align*}
Total revenues for the monopolist are shown in Figure $1$. Total Revenues are in the shape of an inverted parabola. The maximum value can be found by taking the first derivative of TR, and setting it equal to zero. The first derivative of TR is the slope of the TR function, and when it is equal to zero, the slope is equal to zero.
$\max TR = PQ = 100Q – Q^2 \label{3.2}$
Setting \(\frac{∂TR}{∂Q} = 100 – 2Q = 0\) yields \(Q^* = 50\).
Substitution of $Q^*$ back into the $TR$ function yields $TR = \text{ USD } 2500$, the maximum level of total revenues (Figure $1$).
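As a quick numerical check of this result, the revenue function can be evaluated over a grid of quantities. The sketch below is only an illustration of the calculation: the demand curve is the one given above, and the grid-search approach is a convenience rather than anything in the text.

```python
# Grid-search check of the revenue-maximizing quantity for P = 100 - Q (Q in million oz).
def total_revenue(q):
    return (100 - q) * q

grid = [q / 100 for q in range(0, 10001)]      # quantities from 0 to 100 in steps of 0.01
q_star = max(grid, key=total_revenue)

print(q_star, total_revenue(q_star))           # 50.0 2500.0
```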
It is important to point out that the revenue-maximizing level of chemical is not the optimal level for the firm. The optimal level of chemical to produce and sell is the profit-maximizing level, where the difference between revenues and costs is largest. If the monopolist were to maximize revenues instead of profits, the extra output might cost more to produce than it adds to revenue. Profits account for both revenues and costs.
Average revenue $(AR)$ and marginal revenue $(MR)$ are shown in Figure $2$. The marginal revenue curve has the same y-intercept and twice the slope as the average revenue curve. This is always true for linear demand (average revenue) curves. As emphasized above, one of the major take-home messages of economics is that maximizing revenues may cost too much to be worth it. For example, a corn farmer who maximizes yield (output per acre of land) may be spending too much on inputs such as fertilizer and chemicals to make the higher yield pay off. From an economic perspective, the corn farmer should consider both revenues and costs.
Monopoly Costs
The costs for the chemical firm include total, average, and marginal costs.
Total Cost [TC] = The sum of all payments that a firm must make to purchase the factors of production. The sum of Total Fixed Costs and Total Variable Costs. $TC = C(Q) = TFC + TVC.$
Total Fixed Costs [TFC] = Costs that do not vary with the level of output.
Total Variable Costs [TVC] = Costs that do vary with the level of output.
Average Cost [AC] = total costs per unit of output. $AC = \dfrac{TC}{Q}$. Note that the terms, Average Costs and Average Total Costs are interchangeable.
Marginal Cost [MC] = the increase in total costs due to the production of one more unit of output. $MC = \dfrac{ΔTC}{ΔQ} = \dfrac{∂TC}{∂Q}$.
\begin{align*} TC &= C(Q)\text{ (USD)}\[4pt] AC &= \frac{TC}{Q}\text{ (USD/unit)}\[4pt] MC &= \frac{ΔTC}{ΔQ} = \frac{∂TC}{∂Q} \text{ (USD/unit)}\end{align*}
Suppose that the agricultural chemical firm operates in a constant cost industry. This means that the cost of producing one more ounce of chemical is the same, no matter what quantity is produced. Assume that the cost per unit is ten dollars per ounce (10 USD/oz).
\begin{align*} TC &= 10Q\[4pt] AC &= \frac{TC}{Q} = 10\[4pt] MC &= \frac{ΔTC}{ΔQ} = \frac{∂TC}{∂Q} = 10\end{align*}
The total costs are shown in Figure $3$, and the per-unit costs are in Figure $4$.
Monopoly Profit-Maximizing Solution
There are three ways to communicate economics: verbally, graphically, and mathematically. The firm’s profit maximizing solution is one of the major features and important conclusions of economics. The verbal explanation is that a firm should continue any activity as long as the additional (marginal) benefits are greater than the additional (marginal) costs. The firm should continue the activity until the marginal benefit is equal to the marginal cost. This is true for any activity, and for profit maximization, the firm will find the optimal, profit maximizing level of output where marginal revenues equal marginal costs $(MR = MC)$.
The graphical solution takes advantage of pictures that tell the same story, as in Figures $5$ and $6$. Figure $5$ shows the profit maximizing solution using total revenues and total costs. The profit-maximizing level of output is found where the distance between $TR$ and $TC$ is largest: $π = TR – TC$. The solution is found by setting the slope of $TR$ equal to the slope of $TC$: this is where the rates of change are equal to each other $(MR = MC)$.
The same solution can be found using the marginal graph (Figure $6$). The firm sets $MR$ equal to $MC$ to find the profit-maximizing level of output $(Q^*)$, then substitutes $Q^*$ into the consumers’ willingness to pay (demand curve) to find the optimal price $(P^*)$. The profit level is an area in Figure $6$, defined by $TR$ and $TC$. Total revenues are equal to price times quantity $(TR = P^*Q)$, so $TR$ are equal to the rectangle from the origin to $P^*$ and $Q^*$. Total costs are equal to the rectangle defined by the per-unit cost of ten dollars per ounce times the quantity produced, $Q^*$. If the $TC$ rectangle is subtracted from the TR rectangle, the shaded profit rectangle remains: profits are the residual of revenues after all costs have been paid $(π = TR – TC)$.
The math solution for profit maximization is found by using calculus. The maximum level of a function is found by taking the first derivative and setting it equal to zero. Recall that the inverse demand function facing the monopolist is $P = 100 – Q^d$, and the per unit costs are ten dollars per ounce.
\begin{align*} \max π &= TR – TC\[4pt] &= P(Q)Q – C(Q)\[4pt] &= (100 – Q)Q – 10Q\[4pt] &= 100Q – Q^2 – 10Q\[4pt] \frac{∂π}{∂Q} &= 100 – 2Q – 10 = 0\[4pt] 2Q &= 90\[4pt] Q^* &= 45 \text{ million ounces of agricultural chemical}\end{align*}
The profit-maximizing price is found by substituting $Q^*$ into the inverse demand equation:
$P^* = 100 – Q^* = 100 – 45 = 55 \text{ USD/ounce of agricultural chemical}.\nonumber$
The maximum profit level can be found by substitution of $P^*$ and $Q^*$ into the profit equation:
$π = TR – TC = P(Q)Q – C(Q) = 55\cdot 45 – 10\cdot 45 = 45\cdot 45 = 2025 \text{ million USD}.\nonumber$
This profit level is equal to the distance between the $TR$ and $TC$ curves at $Q^*$ in Figure $5$, and the profit rectangle identified in Figure $6$. The profit-maximizing level of output and price have been found in three ways: verbally, graphically, and mathematically.
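A numerical check of the same solution is also possible. The short sketch below evaluates the profit function \(π = TR – TC\) over a grid of quantities for the demand curve and constant cost given above; the grid-search method is simply an illustrative assumption.

```python
# Grid-search check of the monopoly solution for P = 100 - Q and MC = 10 USD/oz.
def profit(q):
    return (100 - q) * q - 10 * q              # TR - TC

grid = [q / 100 for q in range(0, 10001)]
q_star = max(grid, key=profit)
p_star = 100 - q_star

print(q_star, p_star, profit(q_star))          # 45.0 55.0 2025.0
```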
We have located the profit-maximizing level of output and price for a monopoly. How does the monopolist know that this is the correct level? How is the profit-maximizing level of output related to the price charged, and the price elasticity of demand? This section will answer these questions. The firm’s own price elasticity of demand captures how consumers of a good respond to a change in price. Therefore, the own price elasticity of demand captures the most important thing that a firm can know about its customers: how consumers will react if the good’s price is changed.
The Monopolist’s Tradeoff between Price and Quantity
What happens to revenues when output is increased by one unit? The answer to this question reveals useful information about the nature of the pricing decision for firms with market power, or a downward sloping demand curve. Consider what happens when output is increased by one unit in Figure $1$.
Increasing output by one unit from $Q_0$ to $Q_1$ has two effects on revenues: the monopolist gains area $B$, but loses area $A$. The monopolist can set price or quantity, but not both. If the output level is increased, consumers’ willingness to pay decreases, as the good becomes more available (less scarce). If quantity increases, price falls. The benefit of increasing output is equal to $ΔQ\cdot P_1$, since the firm sells one additional unit $(ΔQ)$ at the price $P_1$ (area $B$). The cost associated with increasing output by one unit is equal to $ΔP\cdot Q_0$, since the price decreases $(ΔP)$ for all units sold (area $A$). The monopoly cannot increase quantity without causing the price to fall for all units sold. If the benefits outweigh the costs, the monopolist should increase output: if $ΔQ\cdot P_1 > ΔP\cdot Q_0$, increase output. Conversely, if increasing output lowers revenues $(ΔQ\cdot P_1 < ΔP\cdot Q_0)$, then the firm should reduce the output level.
The Relationship between MR and Ed
There is a useful relationship between marginal revenue $(MR)$ and the price elasticity of demand $(E^d)$. It is derived by taking the first derivative of the total revenue $(TR)$ function. The product rule from calculus is used. The product rule states that the derivative of a product of two functions is equal to the derivative of the first function times the second function, plus the derivative of the second function times the first function, as in Equation \ref{3.3}.
$\frac{∂(yz)}{∂x} = \left(\frac{∂y}{∂x}\right)z + \left(\frac{∂z}{∂x}\right)y \label{3.3}$
The product rule is used to find the derivative of the $TR$ function. Price is a function of quantity for a firm with market power. Recall that $MR = \frac{∂TR}{∂Q}$, and the equation for the elasticity of demand:
$E^d = \frac{(∂Q/∂P)P}{Q}\nonumber$
This will be used in the derivation below.
\begin{align*} TR &= P(Q)Q\[4pt] \frac{∂TR}{∂Q} &= \left(\frac{∂P}{∂Q}\right)Q + \left(\frac{∂Q}{∂Q}\right)P\[4pt] MR &= \left(\frac{∂P}{∂Q}\right)Q + P\end{align*}
next, divide and multiply by $P$:
\begin{align*}MR &= [\frac{(∂P/∂Q)Q}{P}]P + P\[4pt] &= [\frac{1}{E_d}]P + P\[4pt] &= P\left(1 + \frac{1}{E_d}\right)\end{align*}
This is a useful equation for a monopoly, as it links the price elasticity of demand with the price that maximizes profits. The relationship can be seen in Figure $2$.
$MR = P\left(1 + \frac{1}{E_d}\right) \label{3.4}$
At the vertical intercept, the elasticity of demand is equal to negative infinity (Section 1.4.8). When this elasticity is substituted into the $MR$ equation, the result is $MR = P$. The $MR$ curve is equal to the demand curve at the vertical intercept. At the horizontal intercept, the price elasticity of demand is equal to zero (Section 1.4.8), resulting in $MR$ equal to negative infinity. If the $MR$ curve were extended to the right, it would approach minus infinity as $Q$ approached the horizontal intercept. At the midpoint of the demand curve, the price elasticity of demand is equal to $-1$, and $MR = 0$. The $MR$ curve intersects the horizontal axis at the midpoint between the origin and the horizontal intercept.
This highlights the usefulness of knowing the elasticity of demand. The monopolist will want to be on the elastic portion of the demand curve, to the left of the midpoint, where marginal revenues are positive. The monopolist will avoid the inelastic portion of the demand curve by decreasing output until $MR$ is positive. Intuitively, decreasing output makes the good more scarce, thereby increasing consumer willingness to pay for the good.
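Equation \ref{3.4} can be verified numerically. The sketch below reuses the inverse demand curve \(P = 100 – Q\) from the chemical example and compares a finite-difference estimate of \(MR\) with \(P(1 + 1/E_d)\); the particular quantities chosen are arbitrary illustration points.

```python
# Numerical check of MR = P(1 + 1/E_d) using the inverse demand P = 100 - Q.
def price(q):
    return 100 - q

def marginal_revenue(q, dq=1e-6):
    total_revenue = lambda x: price(x) * x
    return (total_revenue(q + dq) - total_revenue(q)) / dq    # finite-difference MR

for q in [10, 25, 50, 75]:
    p = price(q)
    e_d = -1.0 * p / q                  # E_d = (dQ/dP)(P/Q), with dQ/dP = -1 here
    mr_from_formula = p * (1 + 1 / e_d)
    print(q, round(marginal_revenue(q), 4), round(mr_from_formula, 4))
# MR is positive on the elastic portion (q < 50), zero at the midpoint, negative beyond it
```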
Pricing Rule I
The useful relationship between $MR$ and $E_d$ in Equation \ref{3.4} can be used to derive a pricing rule.
\begin{align*} MR &= P\left(1 + \frac{1}{E_d}\right)\[4pt] MR &= P + \frac{P}{E_d}\end{align*}
Assume profit maximization [$MR = MC$]
\begin{align*} MC &= P + \frac{P}{E_d}\[4pt] –\frac{P}{E_d} &= P – MC\[4pt] –\frac{1}{E_d} &= \frac{P – MC}{P}\[4pt] \frac{P – MC}{P} &= –\frac{1}{E_d}\end{align*}
This pricing rule relates the price markup over the cost of production $(P – MC)$ to the price elasticity of demand.
$\frac{P – MC}{P} = –\frac{1}{E_d} \label{3.5}$
A competitive firm is a price taker, as shown in Figure $3$. The market for a good is depicted on the left hand side of Figure $3$, and the individual competitive firm is found on the right hand side. The market price is found at the market equilibrium (left panel), where market demand equals market supply. For the individual competitive firm, price is fixed and given at the market level (right panel). Therefore, the demand curve facing the competitive firm is perfectly horizontal (elastic), as shown in Figure $3$.
The price is fixed and given, no matter what quantity the firm sells. The price elasticity of demand for a competitive firm is equal to negative infinity: $E_d = -\infty$. When substituted into Equation \ref{3.5}, this yields $\frac{P – MC}{P} = 0$, since dividing by infinity equals zero. This demonstrates that a competitive firm cannot increase price above the cost of production: $P = MC$. If a competitive firm increases price, it loses all customers: they have perfect substitutes available from numerous other firms.
Monopoly power, also called market power, is the ability to set price. Firms with market power face a downward sloping demand curve. Assume that a monopolist has a demand curve with the price elasticity of demand equal to negative two: $E_d = -2$. When this is substituted into Equation \ref{3.5}, the result is: $\dfrac{P – MC}{P} = 0.5$. Multiply both sides of this equation by price $(P)$: $(P – MC) = 0.5P$, or $0.5P = MC$, which yields: $P = 2MC$. The markup (the level of price above marginal cost) for this firm is two times the cost of production. The size of the optimal, profit-maximizing markup is dictated by the elasticity of demand. Firms with responsive consumers, or elastic demands, will not want to charge a large markup. Firms with inelastic demands are able to charge a higher markup, as their consumers are less responsive to price changes.
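The markup calculation generalizes easily. The sketch below evaluates \(P = MC/(1 + 1/E_d)\), which is Equation \ref{3.5} solved for price, using a hypothetical marginal cost and a few assumed elasticities; none of these numbers come from the text.

```python
# The markup pricing rule P = MC / (1 + 1/E_d), evaluated for a few assumed elasticities.
def markup_price(mc, e_d):
    return mc / (1 + 1 / e_d)

mc = 10.0                                   # assumed marginal cost, USD/unit
for e_d in [-1.5, -2.0, -4.0, -100.0]:
    print(e_d, round(markup_price(mc, e_d), 2))
# -2.0 reproduces P = 2*MC; a very elastic demand (-100) gives a price close to MC
```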
In the next section, we will discuss several important features of a monopolist, including the absence of a supply curve, the effect of a tax on monopoly price, and a multiplant monopolist.
The Absence of a Supply Curve for a Monopolist
There is no supply curve for a monopolist. This differs from a competitive industry, where there is a one-to-one correspondence between price $(P)$ and quantity supplied $(Q^s)$. For a monopoly, the price depends on the shape of the demand curve, as shown in Figure $1$. A mathematical “function” assigns exactly one value in the range $(y)$ to each point in the domain $(x)$. A supply curve, then, requires a single price $(P)$ for each quantity $(Q)$. This graph shows that there is more than one price associated with each quantity. At quantity $Q_0$, for demand curve $D_1$, the monopolist maximizes profits by setting $MR_1 = MC$, which results in price $P_1$. However, for demand curve $D_2$, the monopolist would set $MR_2=MC$, and charge a lower price, $P_2$. Since there is more than one price associated with a single quantity $(Q_0)$, there is no one-to-one correspondence between price and quantity supplied, and no supply curve for a monopolist.
The Effect of a Tax on a Monopolist’s Price
In a competitive industry, a tax results in an increase in price that is based on the incidence of the tax. The price increase is a fraction of the tax, less than the tax amount. The tax incidence depends on the magnitude of the elasticities of supply and demand. In a monopoly, it is possible that the price increase from a tax is greater than the tax itself, as shown in Figure $2$. This is an interesting and nonintuitive result!
Before the tax, the monopolist sets $MR = MC$ at $Q_0$, and sets price at $P_0$. After the tax is imposed, the marginal costs increase to $C + t$. The monopolist sets $MR = MC = C + t$, produces quantity $Q_1$, and charges price $P_1$. The increase in price $(P_1 – P_0)$ is larger than the tax rate $(t)$, the vertical distance between the $C + t$ and $MC$ lines. In this case, consumers of the monopoly good are paying more than 100 percent of the tax rate. This is because of the shape of the demand curve: it is profitable for the monopoly to reduce quantity produced to increase the price.
Multiplant Monopolist
Suppose that a monopoly has two or more plants (factories). How does the monopolist determine how much output should be produced at each plant? Profit-maximization suggests two guidelines for the multiplant monopolist. Suppose that the monopolist operates $n$ plants.
1. Set $MC$ equal across all plants: $MC_1 = MC_2 = … = MC_n$, and
2. Set $MR = MC$ in all plants.
A mathematical model of a multiplant monopolist demonstrates profit-maximization. The result is interesting and important, as it shows that multiplant firms will not always close older, less efficient plants. This is true even if the older plants have higher production costs than newer, more efficient plants.
Suppose that a monopolist has two plants, and total output $(Q_T)$ is the sum of output produced in plant 1 $(Q_1)$ and plant 2 $(Q_2)$.
$Q_1 + Q_2 = Q_T \label{3.6}$
The profit-maximizing model for the two-plant monopolist yields the solution. The costs of producing output in each plant differ. Assume that the old plant (plant 1) is less efficient than the new plant (plant 2): $C_1 > C_2$.
\begin{align*} \max π &= TR – TC\[4pt] &= P(Q_T)Q_T – C_1(Q_1) – C_2(Q_2)\[4pt] \frac{∂π}{∂Q_1} &= \frac{∂TR}{∂Q_1} – C_1’(Q_1) = 0\[4pt] \frac{∂π}{∂Q_2} &= \frac{∂TR}{∂Q_2} – C_2’(Q_2) = 0\end{align*}
The profit-maximizing solution is:
$MR = MC_1 = MC_2 \label{3.7}$
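A numerical sketch can make the condition in Equation \ref{3.7} concrete. The demand and plant-level cost curves below are hypothetical (they are not from the text); the point of the example is that the profit-maximizing firm keeps the older, higher-cost plant running rather than shutting it down.

```python
# Two-plant monopolist sketch with hypothetical curves:
#   inverse demand P = 100 - (Q1 + Q2), MC1 = 10 + 2*Q1 (older plant), MC2 = 5 + Q2 (newer plant)
def profit(q1, q2):
    q_total = q1 + q2
    revenue = (100 - q_total) * q_total
    cost1 = 10 * q1 + q1 ** 2               # integral of MC1
    cost2 = 5 * q2 + 0.5 * q2 ** 2          # integral of MC2
    return revenue - cost1 - cost2

pairs = ((q1 / 10, q2 / 10) for q1 in range(0, 501) for q2 in range(0, 501))
best = max(pairs, key=lambda pair: profit(*pair))

print(best, round(profit(*best), 1))        # (10.0, 25.0) 1637.5; both plants stay in use
```

At the optimum in this hypothetical example, \(MR = MC_1 = MC_2 = 30\), so some output is produced in each plant even though plant 1 is more costly.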
The multiplant monopolist solution is shown in Figure $3$. The marginal cost curve for plant 1 is higher than the marginal cost curve for plant 2, reflecting the older, less efficient plant. Rather than shutting the less efficient plant down, the monopolist should produce some output in each plant, and set the $MC$ of each plant equal to $MR$, as shown in the graph. Let $MC_T$ be the total (horizontal sum) of the marginal cost curves: $MC_T = MC_1 + MC_2$. The profit maximizing quantity $(Q_T)$ is found by setting $MR$ equal to $MC_T$. At the profit maximizing quantity $(Q_T)$, the monopolist sets price equal to $P$, found by plugging $Q_T$ into the consumers’ willingness to pay, or the demand curve $(D)$.
To find the quantity to produce in each plant, the firm sets $MC_1 = MC_2 = MC_T$ to find the profit-maximizing level of output in each plant: $Q_1$ and $Q_2$. The outcome of the multiplant monopolist yields useful conclusions for any firm: continue using any input, plant, or resource until marginal costs equal marginal revenues. Less efficient resources can be usefully employed, even if more efficient resources are available. The next section will explore the determinants and measurement of monopoly power, also called market power.
In this section, the determinants and measurement of monopoly power are examined.
The Lerner Index of Monopoly Power
Economists use the Lerner Index to measure monopoly power, also called market power. The index is the percent markup of price over marginal cost.
$L = \dfrac{P – MC}{P} \label{3.8}$
The Lerner Index is a positive number $(L \geq 0)$, increasing in the amount of market power. A perfectly competitive firm has a Lerner Index equal to zero $(L = 0)$, since price is equal to marginal cost $(P = MC)$. A monopolist will have a Lerner Index greater than zero, and the index will be determined by the amount of market power that the firm has. A larger Lerner Index indicates more market power. In Section 3.3.3, a Pricing Rule was derived: $\dfrac{P – MC}{P} = –\dfrac{1}{E^d}$, where $E^d$ is the price elasticity of demand. Substitution of this pricing rule into the definition of the Lerner Index provides the relationship between the percent markup and the price elasticity of demand.
$L = \dfrac{P – MC}{P} = – \dfrac{1}{E^d} \label{3.9}$
An example of a Lerner Index might be Big Macs. There are substitutes available for Big Macs, so if the price increases, consumers can buy a competing brand such as Whoppers. In the case of a good with close substitutes, the price elasticity of demand is larger (more elastic), causing the percent markup to be smaller: the Lerner Index is relatively small. A monopoly is defined as a single seller in an industry with no close substitutes. Therefore, a monopoly that produces a good with no close substitutes would have a higher Lerner Index.
A second pricing rule can be derived from Equation \ref{3.9}, if we assume that the firm maximizes profits $(MR = MC)$. In that case, the relationship between price and marginal revenue is equal to: $MR = P(1 + \frac{1}{E^d})$. If profit-maximization $(MR = MC)$ is assumed, then:
$MC = P \left(1 + \dfrac{1}{E^d} \right) \label{3.10}$
Rearranging:
$P = \dfrac{MC}{1 + \dfrac{1}{E^d}} \label{3.11}$
This is a useful equation, as it relates price to marginal cost. For example, a perfectly competitive firm has a perfectly elastic demand curve ($E^d =$ negative infinity). Substitution of this elasticity into the pricing rule yields $P = MC$. For a monopoly that has a price elasticity equal to –2, $P = 2MC$. The price is two times the production costs in this case. To summarize:
1. if $E^d$ is large, the firm has less market power, and a small markup
2. if $E^d$ is small, the firm has more market power, and a large markup.
A monopoly example is useful to review monopoly and the Lerner Index. Suppose that the inverse demand curve facing a monopoly is given by: $P = 500 – 10Q$. The monopoly production costs are given by: $C(Q) = 10Q^2 + 100Q$. Profit-maximization yields the optimal monopoly price and quantity.
\begin{align*} \max π &= TR – TC\[4pt] &= P(Q)Q – C(Q)\[4pt] &= (500 – 10Q)Q – (10Q^2 + 100Q)\[4pt] &= 500Q – 10Q^2 – 10Q^2 – 100Q\[4pt] \frac{∂π}{∂Q} &= 500 – 20Q – 20Q – 100 = 0\[4pt] 40Q &= 400\[4pt] Q^* &= 10 \text{ units}\[4pt] P^* &= 500 – 10Q^* = 500 – 100 = 400 \text{ USD/unit}.\end{align*}
To calculate the value of the Lerner Index, price and marginal cost are needed (Equation \ref{3.9}).
\begin{align*} MC &= C’(Q) = 20Q + 100.\[4pt] MC^* &= 20(10) + 100 = 300 \text{ USD/unit}\[4pt] L &= \frac{P – MC}{P} = \frac{400 – 300}{400} = \frac{100}{400} = 0.25\end{align*}
This result can be checked with the pricing rule: $\dfrac{P – MC}{P} = – \dfrac{1}{E^d}$.
$E^d = \left(\frac{∂Q}{∂P}\right)\left(\frac{P}{Q}\right)\nonumber$
For this monopoly, $\dfrac{∂P}{∂Q} = –10$. This is the first derivative of the inverse demand function. Therefore, $\dfrac{∂Q}{∂P} = – \dfrac{1}{10}$.
\begin{align*} E^d &= \left(\frac{∂Q}{∂P}\right)\left(\frac{P}{Q}\right) = \left(– \frac{1}{10}\right)\left(\frac{400}{10}\right) = – \frac{400}{100} = – 4.\[4pt] L &= \frac{P – MC}{P} = – \frac{1}{E^d} = \frac{–1}{–4} = 0.25.\end{align*}
The same result was achieved using both methods, so the Lerner Index for this monopoly is equal to 0.25.
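The same numbers can also be checked computationally. The sketch below grid-searches the profit function for this example and recomputes the Lerner Index; the grid-search approach is just a convenience, not part of the text.

```python
# Numerical check of the Lerner Index example: P = 500 - 10Q, C(Q) = 10Q^2 + 100Q.
def profit(q):
    return (500 - 10 * q) * q - (10 * q ** 2 + 100 * q)

grid = [q / 100 for q in range(0, 5001)]    # quantities from 0 to 50
q_star = max(grid, key=profit)              # 10.0
p_star = 500 - 10 * q_star                  # 400.0 USD/unit
mc_star = 20 * q_star + 100                 # 300.0 USD/unit
lerner = (p_star - mc_star) / p_star

print(q_star, p_star, mc_star, lerner)      # 10.0 400.0 300.0 0.25
```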
Welfare Effects of Monopoly
The welfare effects of a market or policy change are summarized as, “who is helped, who is hurt, and by how much.” To measure the welfare impact of monopoly, the monopoly outcome is compared with perfect competition. In competition, the price is equal to marginal cost $(P = MC)$, as in Figure $1$. The competitive price and quantity are $P_c$ and $Q_c$. The monopoly price and quantity are found where marginal revenue equals marginal cost $(MR = MC)$: $P_M$ and $Q_M$. The graph indicates that the monopoly reduces output from the competitive level in order to increase the price $(P_M > P_c$ and $Q_M < Q_c)$. The welfare analysis of a monopoly relative to competition is straightforward.
\begin{align*} ΔCS &= – A – B\[4pt] ΔPS &= +A – C \[4pt] ΔSW &= – B – C\[4pt] DWL &= B + C \end{align*}
Consumers are losers, and the gains to the monopolist depend on the magnitudes of areas $A$ and $C$. Since a monopolist faces a relatively inelastic demand curve (no close substitutes), area $A$ is likely to be larger than area $C$, making the net benefit of monopoly to the producer positive.
The monopoly example from the previous section 3.5.1 shows the magnitude of the welfare changes. From above, the inverse demand curve is given by $P = 500 – 10Q$, and the costs are given by $C(Q) = 10Q^2 + 100Q$. In this case, $P_M = 400$ USD/unit and $Q_M = 10$ units (see section 3.5.1 above). The competitive solution is found where the demand curve intersects the marginal cost curve.
\begin{align*} 500 – 10Q &= 20Q + 100\[4pt] 30Q &= 400\[4pt] Q_c &= 13.3 \text{ units}\[4pt] P_c &= 500 – 10(13.3) = 500 – 133 = 367 \text{ USD/unit}\[4pt] ΔCS &= – A – B = –(400 – 367)10 – (0.5)(400 – 367)(13.3 – 10) = – 330 – 54.5 = – 384.5 \text{ USD}\[4pt] ΔPS &= +A – C = +330 – (0.5)(367 – 300)(13.3 – 10) = +330 – 110.5 = +219.5 \text{ USD}\[4pt] ΔSW &= – B – C = –(0.5)(100)(3.3) = – 165 \text{ USD}\[4pt] DWL &= B + C = 165 \text{ USD}\end{align*}
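The welfare areas can also be computed directly from the example's demand and cost curves. The sketch below reproduces the calculation; note that it keeps the exact competitive quantity \(Q_c = 400/30\), so its results differ slightly from the rounded figures above.

```python
# Welfare comparison of the example monopoly with competition (P = 500 - 10Q, MC = 20Q + 100).
q_m, p_m = 10.0, 400.0                      # monopoly outcome from Section 3.5.1
q_c = 400.0 / 30.0                          # competitive quantity (about 13.33)
p_c = 500 - 10 * q_c                        # about 366.7 USD/unit
mc_at_qm = 20 * q_m + 100                   # 300 USD/unit

area_a = (p_m - p_c) * q_m                          # transfer from consumers to the monopolist
area_b = 0.5 * (p_m - p_c) * (q_c - q_m)            # consumer surplus lost to reduced output
area_c = 0.5 * (p_c - mc_at_qm) * (q_c - q_m)       # producer surplus lost to reduced output

print(round(-(area_a + area_b), 1))   # change in CS, about -388.9
print(round(area_a - area_c, 1))      # change in PS, about +222.2
print(round(area_b + area_c, 1))      # deadweight loss, about 166.7
# The text's -384.5, +219.5, and 165 are the same areas with Q_c rounded to 13.3.
```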
The welfare analysis of monopoly has been used by the government to justify breaking up monopolies into smaller, competing firms. In food and agriculture, many individuals and groups are opposed to large agribusiness firms. One concern is that these large firms have monopoly power, which results in a transfer of welfare from consumers to producers, and deadweight loss to society. It will be shown below that outlawing or banning monopolies would have both benefits and costs. There is some economic justification for the existence of large firms due to economies of scale and natural monopoly, as will be explored below. Next, the sources of monopoly power will be listed and explained.
Sources of Monopoly Power
There are three major sources of monopoly power:
1. the price elasticity of demand $(E^d)$,
2. the number of firms in a market, and
3. interaction among firms.
The price elasticity of demand is the most important determinant of market power, due to the pricing rule: $L = \frac{P – MC}{P} = – \frac{1}{E^d}$. When the price elasticity is large $(\mid E^d\mid > 1)$, demand is relatively elastic, and the firm has less market power. When the price elasticity is small $(\mid E^d\mid < 1)$, demand is relatively inelastic, and the firm has more market power.
The price elasticity of demand depends on how large the firm is relative to the market. The firm’s price elasticity of demand is always more elastic than the market demand:
$\mid E^d_{firm}\mid > \mid E^d_{market}\mid.$
If the price of the firm’s output is increased, consumers can substitute into outputs produced by other firms. However, if all firms in the market increase the price of the good, consumers have no close substitutes, so must pay the higher price (Figure $2$). Therefore, the firm’s price elasticity of demand is more elastic than the market demand. The firm’s price elasticity of demand depends on how large the firm is relative to the other firms in the market.
The second determinant of market power is the number of firms in an industry. This is related to Figure $2$. If a firm is the only seller in an industry, then the firm is the same as the market, and the price elasticity of demand is the same for both the firm and the market. The more firms there are in a market, the more substitutes a consumer has available, making the price elasticity of demand more elastic as the number of firms increases. In the extreme case, a perfectly competitive firm has numerous other firms in the industry, causing the demand curve to be perfectly elastic, $P = MC$, and $L = 0$. To summarize, the more firms there are in an industry, the less market power the firm has.
The number of firms in an industry is determined by the ease or difficulty of entry. This market feature is captured by the concept of, “Barriers to Entry.” Barriers to entry include:
1. patents,
2. copyrights,
3. contracts,
4. economies of scale (natural monopoly),
5. excess capacity, and
6. licenses.
Each of these barriers to entry increases the difficulty of entering a market when positive economic profits exist. Economies of scale and natural monopoly are defined and described in the next section. The number of firms is important, but the number of “major firms” is also important. Some industries are characterized by one or two dominant firms. These large firms often exert market power.
The third source of market power is interaction among firms. This will be extensively discussed in Chapter 5, “Oligopoly.” If firms compete aggressively with each other, less market power results. On the other hand, if firms cooperate and act together, the firms can have more market power. When firms join together, they are said to “collude,” or act as if they were a single firm. These strategic interactions between firms form the heart of the discussion in Chapter 5, and the foundation for game theory, explored in Chapters 6 and 7.
Natural Monopoly
A natural monopoly is a firm that has a high level of costs that do not vary with output.
Natural Monopoly = A firm characterized by large fixed costs.
Recall that total costs are the sum of total variable costs and total fixed costs $(TC = TVC + TFC)$. The fixed costs are those costs that do not vary with the level of output. When fixed costs are high, then average total costs are declining, as seen in Figure $3$.
Another way of describing high fixed costs is the term, “economies of scale.”
Economies of Scale = Per-unit costs of production decrease when output is increased.
Figure $3$ shows the defining characteristic of a natural monopoly: declining average costs $(AC)$. This means that the demand curve intersects the $AC$ curve while it is declining. At some point, the average costs will increase, but for firms characterized by economies of scale, the relevant range of the $AC$ curve is the declining portion, or left side, of a typical “U-shaped” cost function.
The reason for the name, “natural monopoly” can also be found in Figure $3$. The demand curve has a portion above the $AC$ curve, so positive profits are possible. Suppose that the monopoly was making positive economic profits, and attracted a competitor into the industry. The second firm would cause the demand facing each of the two firms to be cut in half. This possibility can be seen in Figure $3$: if two firms served the customers, each firm would have a demand curve equal to the $MR$ curve. This is because for a linear demand curve, the $MR$ curve has the same y-intercept and twice the slope. Notice the position of the $MR$ curve for a natural monopoly: it lies everywhere below the $AC$ curve. Therefore, positive profits are not possible for two firms serving this market. The demand is not large enough to cover the fixed costs.
The fixed costs are typically large investments that must be made before the good can be sold. For example, an electricity company must build both a huge generating plant and a distribution network that connects all residences and businesses to the power grid. These enormous costs do not vary with the level of output: they must be paid whether the firm sells zero kilowatt hours or one million kilowatt hours. The average fixed costs decline as they are spread out over larger quantities $(AFC = \dfrac{TFC}{Q})$. As the output $(Q)$ increases, average costs $(AC = \dfrac{TC}{Q})$ decline.
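A small numerical illustration of this point is given below; the fixed and variable cost figures are hypothetical and chosen only to show how sharply average cost falls as output grows.

```python
# Declining average cost with large fixed costs (hypothetical numbers for an electric utility).
tfc = 1_000_000.0        # fixed cost of the plant and distribution network, USD
avc = 0.05               # constant variable cost, USD per kilowatt hour

for q in [100_000, 500_000, 1_000_000, 5_000_000]:    # kilowatt hours sold
    ac = tfc / q + avc                                # AC = AFC + AVC
    print(q, round(ac, 3))
# AC falls from 10.05 to 0.25 USD/kWh as the fixed cost is spread over more output
```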
This feature is true for many large businesses, and provides economic justification for large firms: the per-unit costs of production are smaller, providing lower costs to consumers. There is a tradeoff for consumers who purchase goods from large firms: the cost is lower due to economies of scale, but the firm may have market power, which can result in higher prices. This tradeoff makes the economic analysis of large firms both fascinating and important to society. Current examples include the giant technology companies Microsoft, Apple, Google, and Amazon.
Natural monopolies have important implications for how large businesses provide goods to consumers, as is explicitly shown in Figure $3$. The industry in Figure $3$ is a natural monopoly, since demand intersects average costs while they are declining. If a single firm was in the depicted industry, it would set marginal costs equal to marginal revenues $(MR = MC)$, and produce and sell $Q_M$ units of output at a price equal to $P_M$. The price is high: consumers lose welfare and society is faced with deadweight losses.
If competition were possible, price would be set at marginal cost $(P = MC)$. The resulting price and quantity under competition would be $P_C$ and $Q_C$ (Figure $3$). This is a desirable outcome for the consumers. However, there is a major problem with this outcome: price is below average costs, and any business firm that charged the competitive price $P_C$ would be forced out of business. In this case, the firm does not have enough revenue to cover the fixed costs. The natural monopoly is considered a “market failure” since there is no good market-based solution. A single monopoly firm could earn enough revenue to stay in business, but consumers would pay a high monopoly price $P_M$. If competition occurred, the consumers would pay the cost of production $(P_C)$, but the firms would not cover their costs.
One solution to a natural monopoly is government regulation. If the government intervened, it could set the regulated price equal to average costs $(P_R = AC)$, and the regulated quantity equal to $Q_R$. This solves the problem of natural monopoly with a compromise: consumers pay a price just high enough to cover the firm’s average costs. This analysis explains why the government regulates many public utilities for electricity, natural gas, water, sewer, and garbage collection.
The next section will investigate monopsony, or a single buyer with market power.
A monopsony is defined as a market characterized by a single buyer.
Monopsony = single buyer of a good.
Monopsony power is market power of buyers. A firm with monopsony power is a buyer that is large enough relative to the market to influence the price of a good. Competitive firms are price takers: prices are fixed and given, no matter how little or how much they buy. In food and agriculture, beef packers are often accused of having market power, and of paying lower prices for cattle than the competitive price. This section will explore the causes and consequences of monopsony power.
Terminology of Monopsony
Consider any decision from an economic point of view. Thinking like an economist results in comparing the benefits and costs of any decision. This section will apply economic thinking to the quantity and price of a purchase. It will follow the same economic approach that has been emphasized, but will define new terminology to distinguish the buyer’s decision (monopsony) from the seller’s decision (monopoly). It is useful to recall the meaning of supply and demand curves. The demand curve represents the consumers’ willingness and ability to pay for a good. The demand curve is downward sloping, reflecting scarcity: larger quantities are less scarce, and thus less valuable. The supply curve represents the producers’ cost of production, and is upward sloping. As more of a good is produced, the marginal costs of production increase, since it requires more resources to produce larger quantities. These economic principles will be useful in what follows, an analysis of a buyer’s decision to purchase a good.
The economic approach to the purchase of a good is to employ marginal decision making by continuing to purchase a good as long as the marginal benefits outweigh the marginal costs. The following terms are defined to aid in our analysis of buyer’s market power.
Marginal Value (MV) = The additional benefits of buying one more unit of a good.
Marginal Expenditure (ME) = The additional costs of buying one more unit of a good.
Average Expenditure (AE) = The price paid per unit of a good.
A review of competitive buyers and sellers is a good starting point for our analysis.
Figure $1$ demonstrates the competitive solution for a competitive buyer and a competitive seller. The competitive buyer faces a price that is fixed and given $(P^*)$. The price is constant because the buyer is so small relative to the market that her purchases do not affect the price. Average expenditures $(AE)$ and marginal expenditures $(ME)$ for this buyer are constant and equal $(AE = ME)$. The buyer will continue purchasing the good until the marginal benefits, defined to be the marginal value $(MV)$ are equal to marginal expenditures $(ME)$ at $q^*$, the optimal, profit-maximizing level of good to purchase.
A competitive seller takes the price as fixed and given $(P^*)$. The price is constant because the seller is so small relative to the market that his sales do not affect the price. Average revenues $(AR)$ and marginal revenues $(MR)$ for this seller are constant and equal $(AR = MR)$. The seller will continue producing and selling the good until the marginal benefits, defined to be the marginal revenues $(MR)$ are equal to marginal costs $(MC)$ at $q^*$, the optimal, profit-maximizing level of good to produce.
A monopsony uses the same decision making framework, comparing marginal benefits and marginal costs. The distinction is that a monopsony is large enough relative to the market to influence the price. Thus, the monopsony faces an upward-sloping supply curve: as the monopsony purchases more of the good, it drives the price up (Figure $2$).
Since the firm is large, when it purchases more of a good, it drives the price higher. The average expenditure $(AE)$ curve is the supply curve of the good faced by a monopsony. An example might be Ford Motor Company. When Ford purchases more steel (or glass or tires), the firm is so large relative to the market for steel that it drives the price up. Steel companies will need to buy more resources to produce more steel, and it will cost them more, since Ford is so large a buyer in the steel market.
The profit-maximizing solution is found by setting $MV = ME$, and purchasing the corresponding quantity $Q_M$. Note that the monopsony is restricting quantity, as a monopoly restricts output to drive the price up. However, a monopsony restricts quantity in order to drive the price down to $P_M$. The monopsony is buying less than the competitive output $(Q_C)$ and paying a price $(P_M)$ lower than the competitive price $(P_M < P_C)$.
It is worth answering the question, “why is $ME > AE$?” The monopsony faces an upward-sloping supply curve $(AE)$. This reflects the higher cost of bringing more resources into the production of the good purchased by the monopsony. This can be seen in Figure $3$. Next, the relationship between $AE$ and $ME$ is derived. This derivation will be familiar, as it is the same as the relationship between $AR$ and $MR$ from section 3.3.2 above. The derivation of $ME$ uses the product rule.
\begin{align*} AE &= P(Q) \[4pt] TE &= P(Q)Q \[4pt] ME &= \frac{∂TE}{∂Q} = \left(\frac{∂P}{∂Q}\right)Q + P\[4pt] ME &= AE + (∂P/∂Q)Q\end{align*}
The first term in the expression for $ME$ is $AE$, which corresponds to area $B$ in Figure $3$ $(P_0ΔQ)$. Average expenditure is equal to $P_0$ at quantity $Q_0$. The second term, $\left(\dfrac{∂P}{∂Q}\right)Q$, is equal to area $A$ in the diagram. This area represents the change in price given a small change in quantity $\left(\dfrac{∂P}{∂Q}\right)$, multiplied by the quantity $(Q)$. For a competitive firm, $\left(\dfrac{∂P}{∂Q}\right) = 0$, since the competitive firm is a price taker. For a competitive firm, $AE = ME$, as shown in the left of Figure $1$. For a monopsony, the firm pays the initial price plus the increase in price caused by an increase in output. The monopsony must pay this new price ($P_1$ in Figure $3$) for all units purchased $(Q)$. This causes $ME$ to be above $AE$.
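The gap between \(ME\) and \(AE\), and its consequence for the monopsony outcome, can be illustrated with a small numerical sketch. The supply and marginal value curves below are hypothetical and are not taken from the text.

```python
# Monopsony sketch with hypothetical curves: supply AE = 10 + Q, marginal value MV = 100 - Q.
def ae(q):
    return 10 + q            # supply / average expenditure curve

def me(q):
    return 10 + 2 * q        # ME = AE + (dP/dQ)*Q, with dP/dQ = 1 here

def mv(q):
    return 100 - q           # marginal value (the buyer's demand curve)

q_m = 30.0                   # solves MV = ME: 100 - Q = 10 + 2Q
q_c = 45.0                   # competitive benchmark, MV = AE: 100 - Q = 10 + Q

print(mv(q_m) == me(q_m), ae(q_m))     # True 40.0  -> monopsony pays 40 on the AE curve
print(mv(q_c) == ae(q_c), ae(q_c))     # True 55.0  -> competition pays 55
```

In this hypothetical example the monopsony buys less (30 rather than 45) and pays less (40 rather than 55) than under competition.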
It is instructive to view the monopoly graph next to the monopsony graph (Figure $4$).
The monopoly in the left panel of Figure $4$ restricts output to drive up the price. The monopoly output is less than the competitive output $(Q_M < Q_C)$, and the monopoly price is higher than the competitive price $(P_M > P_C)$. The monopsony in the right panel of Figure $4$ restricts output to drive down the price $(P_M < P_C)$. Both firms are maximizing profit by using the market characteristics that they face.
Welfare Effects of Monopsony
To measure the welfare impact of monopsony, the monopsony outcome is compared with perfect competition. In competition, the price is equal to marginal cost $(P_C = MC)$, as in Figure $5$. The competitive price and quantity are $P_c$ and $Q_c$. The monopsony price and quantity are found where marginal value $(MV)$ equals marginal expenditure $(MV = ME)$: $P_M$ and $Q_M$. The graph indicates that the monopsony reduces output from the competitive level in order to decrease the price $(P_M < P_c$ and $Q_M < Q_c)$. The welfare analysis of a monopsony relative to competition is straightforward.
\begin{align*} ΔCS &= +A – B\[4pt] ΔPS &= –A – C\[4pt] ΔSW &= – B – C\[4pt] DWL &= B + C\end{align*}
Consumers (the buyers) are winners, and the benefits of monopsony depend on the magnitudes of areas $A$ and $B$: the buyer gains if area $A$ is larger than area $B$.
Sources of Monopsony Power
There are three major sources of monopsony power, analogous to the three determinants of monopoly power:
1. the price elasticity of market supply $(E^s)$,
2. the number of buyers in a market, and
3. interaction among buyers.
The price elasticity of supply is the most important determinant of monopsony power, and the monopsony benefits from an inelastic supply curve. When the price elasticity is large $(E^s > 1)$, the supply is relatively elastic, and the firm has less market power. When the price elasticity is small $(E^s < 1)$, the supply is relatively inelastic, and the firm has more market power. This is shown in Figure $6$.
The second determinant of monopsony power is the number of buying firms in an industry. If a firm is the only buyer in an industry, the firm is a monopsony, and has market power. The more buyers there are in a market, the more competition the firm faces, and the less monopsony power it has.
The third source of monopsony power is interaction among firms. If firms compete aggressively with each other, less monopsony power results. On the other hand, if firms cooperate and act together, the firms can have more monopsony power. The next Chapter will explore how firms with market power determine optimal prices.
• 4.1: Introduction to Pricing with Market Power
In economics, the firm’s objective is assumed to be to maximize profits. Firms with market power do this by capturing consumer surplus, and converting it to producer surplus. A monopoly finds the profit-maximizing price and quantity by setting MR equal to MC. This strategy maximizes profits for a firm setting a single price and charging all customers the same price. In some situations, it is possible for a monopolist to increase profits beyond the single price monopoly solution.
• 4.2: Price Discrimination
Price discrimination is the practice of charging different prices to different customers.
• 4.3: Intertemporal Price Discrimination
Intertemporal price discrimination provides a method for firms to separate consumer groups based on willingness to pay. The strategy involves charging a high price initially, then lowering price after time passes. Many technology products and recently-released products follow this strategy.
• 4.4: Peak Load Pricing
The demand for many goods is larger during certain times of the day or week. For example, roads are congested during rush hours during the morning and evening commutes. Electricity has larger demand during the day than at night. Ski resorts have large (peak) demands during the weekends, and smaller demand during the week.
• 4.5: Two-Part Pricing
A monopoly or any firm with market power can increase profits by charging a price structure with a fixed component, or entry fee, and a variable component, or usage fee. Two-Part Pricing (also called Two Part Tariff) is a form of pricing in which consumers are charged both an entry fee (fixed price) and a usage fee (per-unit price).
• 4.6: Bundling
Bundling is the practice of selling two or more goods together as a package. Bundling is a widely-practiced sales strategy that takes advantage of differences in consumer willingness to pay for different goods.
• 4.7: Advertising
Advertising is a huge industry, with billions spent every year on marketing products. Are these enormous expenditures worth it? The benefits of increased sales and revenues must be at least as large as the increased costs to make it a good investment. In this section, the profit-maximizing level of advertising will be identified and evaluated.
Thumbnail: www.pexels.com/photo/green-a...t-lot-1656665/
04: Pricing with Market Power
In economics, the firm’s objective is assumed to be to maximize profits. Firms with market power do this by capturing consumer surplus, and converting it to producer surplus. In Figure \(1\), a monopoly finds the profit-maximizing price and quantity by setting \(MR\) equal to \(MC\). This strategy maximizes profits for a firm setting a single price \((P_M)\) and charging all customers the same price. In some situations, it is possible for a monopolist to increase profits beyond the single price monopoly solution. Figure \(1\) shows that there are two sources of consumer willingness to pay that the monopoly has not taken advantage of by producing a quantity of \(Q_M\) and selling it at price \(P_M\).
Area \(A\) along the demand curve represents consumers with a higher willingness to pay than the monopoly price \(P_M\). Area \(B\) represents consumers who have been priced out of the market, since the monopoly price is higher than their willingness to pay. These two groups of consumers represent two areas of untapped consumer surplus for a monopoly.
The monopoly price \(P_M\) represents the profit-maximizing price if the monopolist is constrained to set only a single price, and charge all customers the same single price. However, if the monopolist could charge more than one price, it may be able to capture more consumer surplus (willingness to pay) and convert it into producer surplus (profits). This Chapter describes and explains several pricing strategies for firms with market power. These strategies enhance profits over and above the single price profit level shown in Figure \(1\). The strategies include price discrimination, peak-load pricing, and two-part pricing.
Price discrimination is the practice of charging different prices to different customers. There are three forms of price discrimination, defined and explained in what follows.
Price Discrimination = charging different prices to different customers.
First Degree Price Discrimination
First degree price discrimination is the extreme form of charging different prices to different consumers, and makes use of the concept of “reservation price.” A consumer’s maximum willingness to pay is defined to be their reservation price.
Reservation Price = The maximum price that a consumer is willing to pay for a good.
First Degree Price Discrimination = Charging each consumer her reservation price.
First degree price discrimination is shown in Figure $1$, where the initial levels of consumer surplus $(CS_0)$ and producer surplus $(PS_0)$ are defined for the competitive equilibrium. The competitive quantity is $Q_C$, and the competitive price is $P_C$. A monopoly could charge a price $P_M$ at quantity $Q_M$ to maximize profits with a single price.
Each individual’s willingness to pay is given by a point on the demand curve. If the firm knows each consumer’s maximum willingness to pay, or reservation price, it can transfer all consumer surplus to producer surplus. The firm extracts every dollar of surplus available in the market by charging each consumer the maximum price that they are willing to pay. First degree price discrimination results in levels of producer surplus and consumer surplus $PS_1$ and $CS_1$, as shown in Equation \ref{4.1}.
$PS_1 = PS_0 + CS_0; CS_1 = 0. \label{4.1}$
Every dollar of consumer surplus has been transferred to the firm. First degree price discrimination is also called, “Perfect Price Discrimination.”
In most circumstances, it is difficult for the firm to practice first degree price discrimination. First, it is difficult to charge different prices to different consumers. In many cases, it is illegal to charge different prices to different people. Second, it is difficult and costly to elicit reservation prices from every consumer. Therefore, first degree price discrimination is an extreme, idealized case of charging different prices to different consumers. It is rare in the real world.
“Imperfect Price Discrimination” is a term used to describe markets that approach perfect price discrimination. Examples of imperfect price discrimination include car sales and college tuition rates for students in college. Car dealerships often post a “sticker price” and then lower the actual price, depending on how much the consumer is willing to pay. Successful car sales people are often those who have exceptional abilities to discern exactly how much each consumer is willing to pay, or their reservation price. Colleges and universities use imperfect price discrimination by offering scholarships and financial aid packages to students based on their willingness to enroll and attend an institution.
Imperfect price discrimination is shown in Figure $2$, where different groups of consumers are charged different prices based on their willingness to pay. Price $P_1$ is a high price to capture consumers with high willingness to pay, price $P_2$ is the monopoly price $(P_M)$, and price $P_3$ is the competitive price. If a firm can distinguish different consumer groups’ willingness to pay, it can enhance profits through this form of price discrimination.
Second Degree Price Discrimination
Second Degree Price Discrimination is a quantity discount.
Second Degree Price Discrimination = Charging different per-unit prices for different quantities of the same good.
Second degree price discrimination is a common form of pricing and packaging. Consider an example of two different sized packages of salsa with different prices per unit. Suppose that consumers have different preferences for different sized salsa packages, and different demand curves reflect this.
For simplicity, assume that there are two consumers (consumer 1 and consumer 2) and two choices of package size ($A$ and $B$).
$A$: $8 \text{ oz jar, price } = 2 \text{ USD, price per unit } = 0.25 \text{ USD/oz}$
$B$: $32 \text{ oz jar, price } = 4.80 \text{ USD, price per unit } = 0.15 \text{ USD/oz}$
Figure $3$ shows consumer demand for each of the two consumers.
Consumer 1 has a preference for smaller quantities. This consumer could be a single person who desires to purchase a small jar of salsa. Consumer 1’s demand curve shows that she is willing to pay for the 8 ounce jar of salsa $(A)$, but not the 32 ounce jar $(B)$: point $A$ lies below demand curve $D_1$, while point $B$ does not. Consumer 2, on the other hand, desires the large jar of salsa; perhaps this consumer represents a family of four. Consumer 2 is willing to purchase the 32 ounce jar $(B)$, but not the 8 ounce jar $(A)$, because point $B$ lies below demand curve $D_2$, while point $A$ does not.
It can be shown that the salsa firm can enhance profits by offering both sizes $A$ and $B$. Assume that the costs of producing salsa are equal to ten cents per ounce:
$MC = 0.10 \text{ USD/oz}.\nonumber$
Situation One. Firm sells 8-ounce jar only.
Consumer 1 buys, Consumer 2 does not buy.
\begin{align*} Q &= 8 \text{ oz; } P = 0.25 \text{ USD/oz; } MC = 0.10 \text{ USD/oz}\\[4pt] π_1 &= (P – MC)Q = (0.25 – 0.10)8 = (0.15)8 = 1.20 \text{ USD}\end{align*}
Situation Two. Firm sells 32-ounce jar only.
Consumer 2 buys, Consumer 1 does not buy.
\begin{align*} Q &= 32 \text{ oz; } P = 0.15 \text{ USD/oz; } MC = 0.10 \text{ USD/oz}\\[4pt] π_2 &= (P – MC)Q = (0.15 – 0.10)32 = (0.05)32 = 1.60 \text{ USD}\end{align*}
Situation Three. Firm sells both 8-ounce and 32-ounce jars.
Consumer 1 buys 8 ounce jar, Consumer 2 buys 32 ounce jar.
$π_3 = (0.25 – 0.10)8 + (0.15 – 0.10)32 = (0.15)8 + (0.05)32 = 2.80 \text{ USD}$
Profits are larger if different sized packages are sold at the same time. Second degree price discrimination takes advantage of differences between consumers, and is usually more profitable than offering a good in only one package size. This explains the huge diversity of package sizes available for a large number of consumer goods.
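The three profit calculations above can be verified with a short numerical sketch. This is an illustrative check only; the package sizes, prices per ounce, and marginal cost are the values from the salsa example, and it assumes each package offered is purchased by the one consumer willing to buy it.

```python
# Second degree price discrimination: salsa example (values from the text)
MC = 0.10  # marginal cost, USD per ounce

packages = {"A": {"oz": 8, "price_per_oz": 0.25},
            "B": {"oz": 32, "price_per_oz": 0.15}}

def profit(sizes_sold):
    """Profit when the listed package sizes are offered and each size
    is purchased by the one consumer willing to buy it."""
    return sum((packages[s]["price_per_oz"] - MC) * packages[s]["oz"]
               for s in sizes_sold)

print(f"{profit(['A']):.2f}")        # Situation One:   1.20 USD
print(f"{profit(['B']):.2f}")        # Situation Two:   1.60 USD
print(f"{profit(['A', 'B']):.2f}")   # Situation Three: 2.80 USD
```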
Third Degree Price Discrimination
Third degree price discrimination is a practice of charging different prices to different consumer groups.
Third Degree Price Discrimination = Charging different prices to different consumer groups.
A firm that faces more than one group of consumers can increase profits by offering a good at different prices to groups of consumers with different levels of willingness to pay. The firm will maximize profits by setting the marginal revenue $(MR)$ for each consumer group equal to the marginal cost of production $(MC)$. This solution is shown in Equation \ref{4.2} for two consumer groups:
$MR_1 = MR_2 = MC. \label{4.2}$
Two things are interesting about this result. First, the firm practicing third degree price discrimination is simply following the profit-maximizing strategy of continuing any activity as long as the benefits outweigh the costs. The firm will stop when the marginal benefits from selling the good to both groups are equal to the marginal costs of producing the good. Second, this solution is similar to the solution for the multiplant monopoly: $MC_1 = MC_2 = MR$. Profit-maximizing firms use the same strategy for multiple plants and multiple consumer groups: set $MR$ equal to $MC$ in all circumstances.
Movie theaters often offer discounts to students, children, senior citizens, and military personnel. It may seem as if the theaters and other firms that offer these discounts are being nice to these groups. In reality, however, the firms are practicing third degree price discrimination to maximize profits! These groups of consumers have more elastic demands for movies, and would purchase a smaller number of movie tickets if the price were not discounted for them. A numerical example will demonstrate how third degree price discrimination works. Suppose that movie ticket quantities are measured in thousands.
\begin{align*} \text{ Movie ticket price } &= 12 \text{ USD/ticket}\\[4pt] \text{ Student ticket price } &= 7 \text{ USD/ticket }\end{align*}
Inverse Demand for movies: $P_1 = 20 – 4Q_1$
Inverse Demand for students: $P_2 = 10 – Q_2$
\begin{align*} MC &= 4 \text{ USD/ticket}\\[4pt] \max π &= TR – TC\\[4pt] &= TR_1 – TC_1 + TR_2 – TC_2\\[4pt] &= P_1Q_1 – 4Q_1 + P_2Q_2 – 4Q_2\\[4pt] &= (20 – 4Q_1)Q_1 – 4Q_1 + (10 – Q_2)Q_2 – 4Q_2\\[4pt] &= 20Q_1 – 4Q_1^2 – 4Q_1 + 10Q_2 – Q_2^2 – 4Q_2\\[4pt] \frac{∂π}{∂Q_1} &= 20 – 8Q_1 – 4 = 0\\[4pt] 8Q_1 &= 16\\[4pt] Q_1^* &= 2 \text{ thousand movie tickets}\\[4pt] P_1^* &= 20 – 4(2) = 12 \text{ USD/ticket}\\[4pt] \frac{∂π}{∂Q_2} &= 10 – 2Q_2 – 4 = 0\\[4pt] 2Q_2 &= 6\\[4pt] Q_2^* &= 3 \text{ thousand student movie tickets}\\[4pt] P_2^* &= 10 – (3) = 7 \text{ USD/ticket for students}\end{align*}
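The first-order conditions above can also be checked numerically. The sketch below is illustrative only; it uses the closed-form result that for a linear inverse demand $P = a – bQ$, marginal revenue is $MR = a – 2bQ$, with the demand and cost parameters from the movie ticket example.

```python
# Third degree price discrimination: movie ticket example (quantities in thousands)
# Inverse demands: P1 = 20 - 4*Q1 (general audience), P2 = 10 - Q2 (students); MC = 4
MC = 4

def optimum(a, b, mc):
    """For linear inverse demand P = a - b*Q, set MR = a - 2*b*Q equal to MC."""
    q = (a - mc) / (2 * b)
    return q, a - b * q

q1, p1 = optimum(20, 4, MC)   # general audience
q2, p2 = optimum(10, 1, MC)   # students

print(q1, p1)   # 2.0 thousand tickets at 12.0 USD/ticket
print(q2, p2)   # 3.0 thousand tickets at 7.0 USD/ticket
print((p1 - MC) * q1 + (p2 - MC) * q2)   # total profit = 25.0 (thousand USD)
```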
The third degree price discrimination strategy is graphed in Figure $4$.
A pricing rule for third degree price discrimination can be derived. Recall the pricing rule that was derived for a monopoly in Chapter 3:
$MR = P\left(1 + \frac{1}{E^d}\right) \label{4.3}$
This pricing rule can be extended to include two groups of consumers, as follows.
\begin{align*} MR_1 &= MR_2 = MC\\[4pt] P_1\left(1 + \frac{1}{E_1}\right) &= P_2\left(1 + \frac{1}{E_2}\right)\\[4pt] \frac{P_1}{P_2} &= \frac{1 + \frac{1}{E_2}}{1 + \frac{1}{E_1}}\end{align*}
The pricing rule for the third degree price discriminating firm shows that the highest price is charged to the consumer group whose price elasticity of demand $(E^d)$ is smallest in absolute value, that is, the group with the most inelastic demand. This follows what we have learned about the elasticity of demand: consumers with an elastic demand will switch to a substitute good if the price increases, whereas consumers with an inelastic demand are more likely to pay the price increase.
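As an illustrative check (not part of the original example), the elasticity form of the rule can be evaluated at the movie ticket optimum found above.

```python
# Check the pricing rule P1/P2 = (1 + 1/E2) / (1 + 1/E1) at the movie ticket optimum.
P1, Q1 = 12, 2     # general audience: inverse demand P1 = 20 - 4*Q1, so dQ1/dP1 = -1/4
P2, Q2 = 7, 3      # students:         inverse demand P2 = 10 - Q2,   so dQ2/dP2 = -1

E1 = (-1 / 4) * (P1 / Q1)   # price elasticity of demand for group 1: -1.5
E2 = (-1) * (P2 / Q2)       # price elasticity of demand for group 2: about -2.33

print(P1 / P2)                      # 1.714...
print((1 + 1 / E2) / (1 + 1 / E1))  # 1.714..., so the rule holds
```

Group 1 has the less elastic demand ($|E_1| = 1.5 < |E_2| \approx 2.33$) and pays the higher price, consistent with the rule.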
The next section will present intertemporal price discrimination, or charging different prices at different times.
Intertemporal price discrimination provides a method for firms to separate consumer groups based on willingness to pay. The strategy involves charging a high price initially, then lowering price after time passes. Many technology products and recently-released products follow this strategy.
Intertemporal Price Discrimination = charging a high price initially, then lowering price after time passes.
Intertemporal price discrimination is similar to second degree price discrimination, but charges a different price across time. Second degree price discrimination charges a different price for different quantities at the same time. Intertemporal price discrimination is shown in Figure \(1\).
The first group has a higher willingness to pay for the good, as shown by demand curve \(D_1\). This group will pay the higher initial price charged by the firm. A new iPhone release is a good example. Over time, Apple will lower the price to capture additional consumer groups, such as group two in Figure \(1\). In this fashion, the firm will extract a larger amount of consumer surplus than with a single price.
Intertemporal price discrimination can also be shown in a slightly different graph. The key feature of intertemporal price discrimination is a high initial price, followed by lower prices charged over time, as shown in Figure \(2\). In this graph, the firm initially charges price Pt to capture the high willingness to pay of some consumers. Over time, the firm lowers price to \(P_{t+1}\), and later to \(P_{t+2}\) to capture consumer groups with lower willingness to pay.
The concept of intertemporal price discrimination explains why new products are often priced at high prices, and the price is lowered over time. In the next section, peak-load pricing will be introduced.
4.04: Peak Load Pricing
The demand for many goods is larger during certain times of the day or week. For example, roads are congested during rush hours during the morning and evening commutes. Electricity has larger demand during the day than at night. Ski resorts have large (peak) demands during the weekends, and smaller demand during the week.
Peak Load Pricing = Charging a high price during demand peaks, and a lower price during off-peak time periods.
Figure \(1\) demonstrates the demand for electricity during the day. Demand curve \(D_1\) represents demand during the off-peak hours at night. The electricity utility company will charge a price \(P_1\) for the off-peak hours. The costs of producing electricity increase dramatically during peak hours. Electricity generation reaches the capacity of the generating plants, causing larger quantities of electricity to be expensive to produce. For large coal-fired plants, when capacity is reached, the firm will use natural gas generation to meet peak demand. To cover these higher costs, the firm will charge the higher price \(P_2\) during peak hours. The same graph represents a large number of other goods that have peak demand at different times during a day, week, or year (ski resorts, toll roads, parking lots, etc.).
Economic efficiency is greatly improved by charging higher prices during peak times. If the utility were required to charge a single price at all times, it would lose the ability to charge consumers an appropriate price during peak demand periods. Charging a higher price during peak hours provides an incentive for consumers to switch consumption to off-peak hours. This saves society resources, since costs are lower during those times.
An example is electricity consumption. If consumers are charged higher prices during peak hours, they are able to shift some electricity demand to night, the off-peak hours. Dishwashing, laundry, and bathing can be shifted to off-peak hours, saving the consumer money and saving society resources. Electricity companies also promote “smart grid” technology that automatically turns thermostats down when individuals and families are not at home, saving both the consumer and society money.
The next section will discuss a two-part tariff, or charging consumers a fixed fee for the right to purchase a good, and a per-unit fee for each unit purchased.
A monopoly or any firm with market power can increase profits by charging a price structure with a fixed component, or entry fee, and a variable component, or usage fee.
Two-Part Pricing (also called Two Part Tariff) = A form of pricing in which consumers are charged both an entry fee (fixed price) and a usage fee (per-unit price).
Examples of two-part pricing include a phone contract that charges a fixed monthly fee and a per-minute charge for use of the phone. Amusement parks often charge an admission fee and an additional price per ride. Golf clubs typically charge an initiation fee and then usage fees based on meals eaten and rounds of golf played. College football tickets usually require a “donation” to the athletic department, used for scholarships, plus a per-ticket charge.
Two-part pricing is shown in Figure $1$, where a monopoly graph is presented.
Suppose that the graph represents an individual consumer’s demand. In competitive equilibrium (subscript 0), price is equal to $MC$, output is equal to $Q_0$, and producer and consumer surplus are given by:
\begin{align*} PS_0 &= 0\\[4pt] CS_0 &= +ABCDE\end{align*}
The firm charges a price equal to the constant marginal cost $(P = MC)$, and there is no producer surplus. Consumers receive the total area between the demand curve (willingness to pay) and the price line (price paid), equal to area $ABCDE$.
A profit-maximizing firm (subscript 1) that charged a single price would maximize profits by producing $Q_1$ units of the good, and charging a price of $P_1$. Surplus levels would be:
\begin{align*} PS_1 &= +CD\\[4pt] CS_1 &= +AB\end{align*}
In this case, consumers have transferred areas $C$ and $D$ to producers, but still have surplus equal to area $AB$. Producers interested in increasing profits could devise a two-part pricing strategy that transfers more consumer surplus into producer surplus. Since $CS > 0$, consumers are willing to pay more than the monopoly price, and firms can extract a greater level of consumer surplus. The firm could charge an entry fee $(T)$, and consumers would be willing to pay as long as the fee was less than their consumer surplus at the monopoly level $(CS_1 = AB)$.
Consider the following two-part pricing scheme (subscript 2):
Usage fee: $P_2 = MC$
Entry fee: $T = A+B+C+D+E$ [$T$ is set equal to $CS_0$, the consumer surplus under competition]
\begin{align*} PS_2 &= +ABCDE\\[4pt] CS_2 &= 0\end{align*}
With a two-part pricing scheme, the firm has extracted every dollar of willingness to pay from consumers. The total amount of producer surplus under two-part pricing is given by:
$PS_2 = T + (P_2 – MC)Q_2 = ABCDE\nonumber$
Notice that the firm earns zero profit from the usage fee ($P_2$ = per-unit fee), since it sets the usage fee equal to the cost of production ($P_2 = MC$). All of the profits come from the entry fee ($T$ = fixed price) in this case.
To summarize, a two-part tariff for consumers with identical demands would (1) set the usage fee (price per unit) equal to marginal cost $(P = MC)$, and (2) set the membership fee (entry fee) equal to consumer surplus at this price $(T = CS$ at $P = MC)$. The two-part price will result in (1) $CS = 0$, and (2) $PS = T + (P – MC)Q = T$.
A numerical example will further elucidate the two-part price. Assume that an individual’s inverse demand curve is given by: $P = 20 – 2Q$, and the cost function is $C(Q) = 2Q$. The firm seeks to find the optimal, profit-maximizing two-part tariff. The situation is shown in Figure $2$.
The firm will set the usage fee (per-unit price) equal to marginal cost: $P^* = MC = 2$. At this price, the quantity sold is found by substitution of the price into the inverse demand function: $2 = 20 – 2Q$, or $2Q = 18$, $Q^* = 9$ units, as shown in Figure $2$. Next, the firm will determine the entry fee (fixed price) by calculating the area of consumer surplus at this price: $CS = 0.5(20 – 2)(9 – 0) = 0.5(18)(9) = 9\cdot 9 = 81$ USD. Therefore, the firm sets the entry fee: $T = 81$ USD. The resulting levels of surplus are $CS = 0$ and $PS = 81$ USD. To summarize, the optimal two-part tariff is to set the usage fee equal to marginal cost and the entry fee equal to the level of consumer surplus at that price: $P^* = 2$ USD/unit, $T^* = 81$ USD.
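The two-part tariff calculation can be reproduced with a short sketch. This is an illustrative check only, using the inverse demand and cost function from the example above.

```python
# Two-part pricing sketch: inverse demand P = 20 - 2Q, cost C(Q) = 2Q (so MC = 2)
MC = 2

P_star = MC                            # usage fee set equal to marginal cost
Q_star = (20 - P_star) / 2             # from 2 = 20 - 2Q  ->  Q = 9
T_star = 0.5 * (20 - P_star) * Q_star  # entry fee = consumer surplus triangle at P = MC

print(P_star, Q_star, T_star)            # 2 USD/unit, 9.0 units, 81.0 USD
print(T_star + (P_star - MC) * Q_star)   # producer surplus = 81.0 USD, all from the entry fee
```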
In our investigation of two-part pricing, identical consumer demands have been assumed. In the real world, consumer demands may differ quite markedly across individuals. Given this possibility, the two-part pricing strategy can be summarized as follows.
1. If consumer demands are nearly identical, a two-part pricing scheme could increase profits by charging a price close to marginal cost and an entry fee.
2. If consumer demands are quite different, the firm can either use a two-part pricing scheme with a usage fee set well above marginal cost and a lower entry fee to capture all consumers, or simply charge a single price.
In the next section, commodity bundling will be explained and explored.
The practice of bundling is that of selling two or more goods together as a package.
Bundling = The practice of selling two or more goods together as a package.
Bundling is a widely practiced sales strategy that takes advantage of differences in consumer willingness to pay for different goods. McDonald’s Happy Meals are an example of bundling, since the customer purchases a hamburger, French fries, beverage, and toy as a single purchase. McDonald’s was an innovator in bundling, and has expanded the practice to include “Value Meals.” Communication companies often bundle internet service, cable television, and phone service into a single package.
Bundling Examples
A simple example of bundling is a value meal at a fast food restaurant. To keep things simple, assume that there are two consumers ($A$ and $B$), two products (burger and fries), and marginal costs equal to zero. The zero-cost assumption is not realistic, but the model’s results do not change when costs are assumed to be zero.
Table 4.1 shows the reservation prices (willingness to pay) for both consumers for each good.
Recall that the reservation price is the maximum amount that a consumer is willing to pay for a good. The reservation price for the bundle (shown in the right column of Table 4.1) is simply the sum of the two reservation prices for the burger and fries. Next, a comparison is made between selling the two goods individually versus selling them as a bundle.
CASE ONE: Sell each product individually.
$Π_{burger}$
1. If set $P_{burger} = 6\text{ USD/unit}$, $A$ buys, $Π_{burger} = 6\cdot 1 = 6\text{ USD}$
2. If set $P_{burger} = 4\text{ USD/unit}$, $A$ and $B$ buy, $Π_{burger} = 4\cdot 2 = 8\text{ USD}$
$\rightarrow$ Set $P^*_{burger} = 4 \text{ USD/unit}$; $Π_{burger} = 8 \text{ USD}$
$Π_{fries}$
1. If set $P_{fries} = 2\text{ USD/unit}$, $A$ and $B$ buy, $Π_{fries} = 2\cdot 2 = 4\text{ USD}$
2. If set $P_{fries} = 3\text{ USD/unit}$, $B$ buys, $Π_{fries} = 3\cdot 1 = 3\text{ USD}$
$\rightarrow$ Set $P^*_{fries} = 2\text{ USD/unit}$; $Π_{fries} = 4\text{ USD}$
$Π_{\text{total individual}}$
$Π_{\text{total individual}} = P_b Q_b + P_f Q_f = 4\cdot 2 + 2\cdot 2 = 8 + 4 = 12\text{ USD}$
CASE TWO: Bundle burger and fries into a single package.
$Π_{bundle}$
1. If set $P_{bundle} = 8\text{ USD/unit}$, $A$ buys, $Π_{bundle} = 8\cdot 1 = 8\text{ USD}$
2. If set $P_{bundle} = 7\text{ USD/unit}$, $A$ and $B$ buy, $Π_{bundle} = 7\cdot 2 = 14\text{ USD}$
$\rightarrow$ Set $P^*_{bundle} = 7\text{ USD/unit}$; $Π_{bundle} = 14\text{ USD}$
Bundling increases profit from 12 to 14 USD. This result will occur if the reservation prices are inversely correlated. To see this, work out the profits for selling goods individually and as a bundle for the reservation prices that appear in Table 4.2.
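The comparison can also be worked out in a short sketch. This is illustrative only; the reservation prices are the Table 4.1 values implied by the calculations above (consumer $A$: burger 6, fries 2; consumer $B$: burger 4, fries 3), marginal cost is zero, and the Table 4.2 values can be substituted in to repeat the exercise.

```python
# Bundling sketch: reservation prices from the value meal example, MC = 0
reservations = {"A": {"burger": 6, "fries": 2},
                "B": {"burger": 4, "fries": 3}}

def best_single_price(good):
    """Best uniform price for one good: try each reservation price and keep
    the one with the highest revenue (revenue = profit since MC = 0)."""
    prices = [r[good] for r in reservations.values()]
    return max(p * sum(1 for r in reservations.values() if r[good] >= p)
               for p in prices)

def best_bundle_price():
    bundle_values = [r["burger"] + r["fries"] for r in reservations.values()]
    return max(p * sum(1 for v in bundle_values if v >= p)
               for p in bundle_values)

print(best_single_price("burger") + best_single_price("fries"))  # 12 USD
print(best_bundle_price())                                       # 14 USD
```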
Bundling enhances profits when consumers have negatively correlated reservation prices. In this way, bundling takes advantage of differences in consumer willingness to pay.
Many firms have used “Green Bundling” to tie goods with environmental or sustainable goods (natural, organic, local, etc.). As long as consumer preferences for the good and the sustainability attribute are negatively correlated, this strategy will increase profits.
Tying
A practice related to bundling is tying.
Tying = The practice of requiring a customer to purchase one good in order to purchase another.
Tying is a specific form of bundling. An example is Microsoft selling Windows software together with Internet Explorer, a web browser. A second example is printers and ink cartridges. Many hardware companies make a great deal of profit from selling ink cartridges for printers. The cartridges do not have a universal shape, so must be purchased specifically for each printer. The next section will discuss advertising.
Advertising is a huge industry, with billions spent every year on marketing products. Are these enormous expenditures worth it? The benefits of increased sales and revenues must be at least as large as the increased costs to make it a good investment. In this section, the profit-maximizing level of advertising will be identified and evaluated.
An important point about advertising concerns the costs associated with advertising expenditures. If advertising works, it increases sales of the product. There are two major costs: the direct costs of advertising and the additional production costs incurred if the advertising is effective. A typical analysis sets the marginal revenue of advertising equal to the marginal cost of advertising:
$MR_A = MC_A.$
This would be correct if the level of output remained constant. However, the output level will increase if advertising works, and the additional costs of increased output must be taken into account for a comprehensive and correct analysis, as will be shown below.
Graphical Analysis of Advertising
The graph for advertising is shown in Figure $1$. Notice the two major effects of advertising and marketing efforts:
1. an increase in demand, in this case from $D_0$ to $D_A$, and
2. an increase in costs, shown here as the movement from $ATC_0$ to $ATC_A$.
In the analysis shown here, advertising costs are considered to be fixed costs that do not vary with the level of output. This is true for a billboard or a television commercial. Note that the marginal costs do not change, since marginal costs are variable costs. The analysis could easily be extended to include variable advertising costs.
Economic analysis of advertising and marketing is straightforward: continue to advertise as long as the benefits outweigh the costs. In Figure $1$, the optimal level of advertising occurs at quantity $Q_A$ and price $P_A$. Profits with advertising are shown by the rectangle $π_A$. If profits with advertising are larger than profits without advertising $(π_A > π_0)$, then advertising should be undertaken.
In general, if the increase in sales $(D_A – D_0)$ is larger than the increase in costs, advertising should be undertaken. The optimal level of advertising can be found using marginal economic analysis, as described in the next section.
General Rule for Advertising
The profit-maximizing level of advertising can be derived, and the outcome is interesting and important, since it diverges from setting the marginal costs of advertising equal to the marginal revenues of advertising. Note that the graphical and mathematical analyses of advertising presented here could be used for any marketing program, not only advertising campaigns.
Assume that the demand for a product is given in Equation \ref{4.4}, where quantity demanded $(Q^d)$ is a function of price $(P)$ and the level of advertising $(A)$.
$Q^d = Q(P, A) \label{4.4}$
This demand equation differs from the usual approach of using an inverse demand equation. For this model, it is more useful to use the actual demand equation instead of an inverse demand equation $[P=P(Q^d)]$. The profit equation is shown in Equation \ref{4.5}, where the cost function is given by $C(Q)$.
\begin{align} \max π &= TR – TC \label{4.5}\\[4pt] \max π &= PQ(P, A) – C(Q) – A \end{align}
The profit-maximizing level of advertising $(A^*)$ is found by taking the first derivative of the profit function, and setting it equal to zero. This derivative is slightly more complex than usual, since the quantity that appears in the cost function depends on advertising, as shown in Equation \ref{4.4}. Therefore, to find the first derivative, we will need to use the chain rule from calculus, which is used to differentiate a composition of functions, such as the derivative of the function $f(g(x))$ shown in Equation \ref{4.6}.
$\text{If } f(g(x)), \text{ then } \frac{∂f}{∂x} = f’(g(x))\cdot g’(x) \label{4.6}$
The chain rule simply says that to differentiate a composition of functions, first differentiate the outer layer, leaving the inner layer unchanged [the term $f'(g(x))$], then differentiate the inner layer [the term $g'(x)$].
In Equation \ref{4.5}, the cost function is a composition of the cost function and the demand function: $C(Q(P, A))$. So the derivative
$\dfrac{∂C}{∂A} = C’(Q(A))\cdot Q’(A) = \left(\dfrac{∂C}{∂Q}\right)\cdot\left(\dfrac{∂Q}{∂A}\right).$
Thus, the first derivative of the profit equation with respect to advertising is given by:
$\frac{∂π}{∂A} = P\left(\frac{∂Q}{∂A}\right) – \left(\frac{∂C}{∂Q}\right)\cdot\left(\frac{∂Q}{∂A}\right) – 1 = 0$
Rearranging, the first derivative can be written as in Equation \ref{4.7}:
$P\left(\frac{∂Q}{∂A}\right) = MC\cdot\left(\frac{∂Q}{∂A}\right) + 1.\label{4.7}$
The term on the left hand side is marginal revenues of advertising $(MR_A)$, and the term on the right hand side is the marginal cost of advertising $(MC_A = 1)$, plus the additional costs associated with producing a larger output to meet the increased demand resulting from advertising [$MC\cdot\left(\dfrac{∂Q}{∂A}\right)$].
This result can be used to find an optimal “rule of thumb” for advertising, or a “General Rule for Advertising.” There are three preliminary definitions that will be useful in deriving this important result. First, the advertising to sales ratio is given by $\dfrac{A}{PQ}$, and reflects the percentage of advertising in total revenues (price multiplied by quantity, $PQ$). Second, the advertising elasticity of demand is defined.
Advertising Elasticity of Demand (EA) = The percentage change in quantity demanded resulting from a one percent change in advertising expenditure.
$E^A = \frac{\%ΔQ^d}{\%ΔA} = \left(\frac{∂Q}{∂A}\right)\left(\frac{A}{Q}\right). \label{4.8}$
Third, recall the Lerner Index $(L)$, a measure of monopoly power. We derived the relationship between the Lerner Index and the price elasticity of demand, shown in Equation \ref{4.9}:
$L = \frac{P – MC}{P} = – \frac{1}{E^d}.\label{4.9}$
With these three preliminary equations, we can derive a relatively simple and very useful general rule of advertising from the profit-maximizing condition for advertising, given in Equation \ref{4.7}.
\begin{align*} P\left(\frac{∂Q}{∂A}\right) &= MC\cdot\left(\frac{∂Q}{∂A}\right) + 1\\[4pt] (P – MC)\left(\frac{∂Q}{∂A}\right) &= 1\\[4pt] \frac{P – MC}{P}\cdot \left(\frac{∂Q}{∂A}\right)\left(\frac{A}{Q}\right) &= \left(\frac{A}{PQ}\right)\\[4pt] \frac{A}{PQ} &= -\frac{E^A}{E^d}\end{align*}
This simple rule states that the profit-maximizing advertising to sales ratio $(A/PQ)$ is equal to minus the advertising elasticity of demand $(E^A)$ divided by the price elasticity of demand $(E^d)$. The result is simple and powerful: (1) if the elasticity of advertising is large, increase the advertising to sales ratio, and (2) if the price elasticity of demand is large in absolute value, decrease the advertising to sales ratio. A firm with monopoly power, or a higher Lerner Index, will want to advertise more ($E^d$ small in absolute value), since the marginal profit from each additional dollar of advertising or marketing expenditure is greater.
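A small sketch illustrates how the rule might be applied. The elasticity values below are hypothetical placeholders, not estimates from the text.

```python
# Advertising rule sketch: A/(P*Q) = -E_A / E_d
E_A = 0.10   # hypothetical advertising elasticity of demand
E_d = -2.0   # hypothetical price elasticity of demand

print(-E_A / E_d)     # 0.05 -> spend about 5% of sales revenue on advertising

# A firm with more market power (demand less elastic, e.g. E_d = -1.25)
# should advertise a larger share of revenue:
print(-E_A / -1.25)   # 0.08 -> about 8% of sales revenue
```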
Most business firms have at least crude approximations of the two elasticities needed to use this simple rule. Many firms advertise less than the optimal rate, since marketing can appear to be expensive if it is a large percentage of sales. However, simple economic principles can be used to determine the optimal, profit-maximizing level of advertising and/or marketing expenditures using this simple rule.
• 5.1: Market Structures
Perfect competition is on one end of the market structure spectrum, with numerous firms. Monopoly is the other extreme of the market structure spectrum, with a single firm. Monopolies have monopoly power, or the ability to change the price of the good. Monopoly power is also called market power, and is measured by the Lerner Index. This chapter defines and describes two intermediary market structures: monopolistic competition and oligopoly.
• 5.2: Monopolistic Competition
Monopolistic competition is a market structure defined by free entry and exit, like competition, and differentiated products, like monopoly. Differentiated products provide each firm with some market power. Advertising and marketing of each individual product provide uniqueness that causes the demand curve of each good to be downward sloping. Free entry indicates that each firm competes with other firms and profits are equal to zero in long run equilibrium.
• 5.3: Oligopoly Models
Oligopoly is a market structure with few firms and barriers to entry. There is often a high level of competition between firms, as each firm makes decisions on prices, quantities, and advertising to maximize profits. Since there are a small number of firms in an oligopoly, each firm’s profit level depends not only on the firm’s own decisions, but also on the decisions of the other firms in the oligopolistic industry.
• 5.4: Oligopoly, Collusion, and Game Theory
Collusion occurs when oligopoly firms make joint decisions, and act as if they were a single firm. Collusion requires an agreement, either explicit or implicit, between cooperating firms to restrict output and achieve the monopoly price. This causes the firms to be interdependent, as the profit levels of each firm depend on the firm’s own decisions and the decisions of all other firms in the industry. This strategic interdependence is the foundation of game theory.
05: Monopolistic Competition and Oligopoly
Market Structure Spectrum and Characteristics
Table 5.1 shows the four major categories of market structures and their characteristics.
Table 5.1 Market Structure Characteristics
| Perfect Competition | Monopolistic Competition | Oligopoly | Monopoly |
|---|---|---|---|
| Homogeneous good | Differentiated good | Differentiated good | One good |
| Numerous firms | Many firms | Few firms | One firm |
| Free entry and exit | Free entry and exit | Barriers to entry | No entry |
Perfect competition is on one end of the market structure spectrum, with numerous firms. The word, “numerous” has special meaning in this context. In a perfectly competitive industry, each firm is so small relative to the market that it cannot affect the price of the good. Each perfectly competitive firm is a price taker. Therefore, numerous firms means that each firm is so small that it is a price taker.
Monopoly is the other extreme of the market structure spectrum, with a single firm. Monopolies have monopoly power, or the ability to change the price of the good. Monopoly power is also called market power, and is measured by the Lerner Index.
This chapter defines and describes two intermediary market structures: monopolistic competition and oligopoly.
Monopolistic Competition = A market structure characterized by a differentiated product and freedom of entry and exit.
Monopolistically Competitive firms have one characteristic that is like a monopoly (a differentiated product provides market power), and one characteristic that is like a competitive firm (freedom of entry and exit). This form of market structure is common in market-based economies, and a trip to the grocery store reveals large numbers of differentiated products: toothpaste, laundry soap, breakfast cereal, and so on.
Next, we define the market structure oligopoly.
Oligopoly = A market structure characterized by barriers to entry and a few firms.
Oligopoly is a fascinating market structure due to interaction and interdependency between oligopolistic firms. What one firm does affects the other firms in the oligopoly.
Since monopolistic competition and oligopoly are intermediary market structures, the next section will review the properties and characteristics of perfect competition and monopoly. These characteristics will provide the defining characteristics of monopolistic competition and oligopoly.
Review of Perfect Competition
The perfectly competitive industry has four characteristics:
1. Homogenous product,
2. Large number of buyers and sellers (numerous firms),
3. Freedom of entry and exit, and
4. Perfect information.
The possibility of entry and exit of firms occurs in the long run, since the number of firms is fixed in the short run.
An equilibrium is defined as a point where there is no tendency to change. The concept of equilibrium can be extended to include the short run and long run.
Short Run Equilibrium = A point from which there is no tendency to change (a steady state), and a fixed number of firms.
Long Run Equilibrium = A point from which there is no tendency to change (a steady state), and entry and exit of firms.
In the short run, the number of firms is fixed, whereas in the long run, entry and exit of firms are possible, based on profit conditions. We will compare the short and long run for a competitive firm in Figure \(1\). The two panels in Figure \(1\) are for the firm (left) and industry (right), with vastly different units. This is emphasized by using “q” for the firm’s output level, and “Q” for the industry output level. The graph shows both short run and long run equilibria for a perfectly competitive firm and industry. In short run equilibrium, the firm faces a high price \((P_{SR})\), produces quantity \(q_{SR}\) at \(P_{SR} = MC\), and earns positive profits \(π_{SR}\).
Positive profits in the short run \((π_{SR} > 0)\) lead to entry of other firms, as there are no barriers to entry in a competitive industry. The entry of new firms shifts the supply curve in the industry graph from \(S_{SR}\) to \(S_{LR}\). Entry will occur until profits are driven to zero, and long run equilibrium is reached at \(Q^*_{LR}\). In the long run, economic profits are equal to zero, so there is no incentive for entry or exit. Each firm is earning exactly what it is worth, the opportunity cost of all resources. In long run equilibrium, profits are zero \((π_{LR} = 0)\), and price equals the minimum average cost point \((P = \min AC = MC)\); marginal cost equals average cost at the minimum average cost point. At the long run price \(P_{LR}\), industry supply equals demand.
Review of Monopoly
The characteristics of monopoly include: (1) one firm, (2) one product, and (3) no entry (Table 5.1). The monopoly solution is shown in Figure \(2\).
Note that long-run profits can exist for a monopoly, since barriers to entry halt any potential entrants from joining the industry. In the next section, we will explore market structures that lie between the two extremes of perfect competition and monopoly. | textbooks/socialsci/Economics/The_Economics_of_Food_and_Agricultural_Markets_(Barkley)/05%3A__Monopolistic_Competition_and_Oligopoly/5.01%3A_Market_Structures.txt |
Monopolistic competition is a market structure defined by free entry and exit, like competition, and differentiated products, like monopoly. Differentiated products provide each firm with some market power. Advertising and marketing of each individual product provide uniqueness that causes the demand curve of each good to be downward sloping. Free entry indicates that each firm competes with other firms and profits are equal to zero in long run equilibrium. If a monopolistically competitive firm is earning positive economic profits, entry will occur until economic profits are equal to zero.
Monopolistic Competition in the Short and Long Runs
The demand curve of a monopolistically competitive firm is downward sloping, indicating that the firm has a degree of market power. Market power derives from product differentiation, since each firm produces a different product. Each good has many close substitutes, so market power is limited: if the price is increased too much, consumers will shift to competitors’ products.
Short and long run equilibria for the monopolistically competitive firm are shown in Figure \(1\). The demand curve facing the firm is downward sloping, but relatively elastic due to the availability of close substitutes. The short run equilibrium appears in the left hand panel, and is nearly identical to the monopoly graph. The only difference is that for a monopolistically competitive firm, the demand is relatively elastic, or flat. Otherwise, the short run profit-maximizing solution is the same as a monopoly. The firm sets marginal revenue equal to marginal cost, produces output level \(q^*_{SR}\) and charges price \(P_{SR}\). The profit level is shown by the shaded rectangle \(π\).
The long run equilibrium is shown in the right hand panel. Entry of other firms occurs until profits are equal to zero; total revenues are equal to total costs. Thus, the demand curve is tangent to the average cost curve at the optimal long run quantity, \(q^*_{LR}\). The long run profit-maximizing quantity is found where marginal revenue equals marginal cost, which also occurs at \(q^*_{LR}\).
Economic Efficiency and Monopolistic Competition
There are two sources of inefficiency in monopolistic competition. First, dead weight loss \((DWL)\) due to monopoly power: price is higher than marginal cost \((P > MC)\). Second, excess capacity: the equilibrium quantity is smaller than the lowest cost quantity at the minimum point on the average cost curve \((q^*_{LR} < q_{minAC})\). These two sources of inefficiency can be seen in Figure \(2\).
First, there is dead weight loss \((DWL)\) due to market power: the price is higher than marginal cost in long run equilibrium. In the right hand panel of Figure \(2\), the price at the long run equilibrium quantity is \(P_{LR}\), and marginal cost is lower: \(P_{LR} > MC\). This causes dead weight loss to society, since the competitive equilibrium would be at a larger quantity where \(P = MC\). Total dead weight loss is the shaded area beneath the demand curve and above the \(MC\) curve in Figure \(2\).
The second source of inefficiency associated with monopolistic competition is excess capacity. This can also be seen in the right hand panel of Figure \(2\), where the long run equilibrium quantity is lower than the quantity where average costs are lowest \((q_{minAC})\). Therefore, the firm could produce at a lower cost by increasing output to the level where average costs are minimized.
Given these two inefficiencies associated with monopolistic competition, some individuals and groups have called for government intervention. Regulation could be used to reduce or eliminate the inefficiencies by removing product differentiation. This would result in a single product instead of a large number of close substitutes.
Regulation is probably not a good solution to the inefficiencies of monopolistic competition, for two reasons. First, the market power of a typical firm in most monopolistically competitive industries is small. Each monopolistically competitive industry has many firms that produce sufficiently substitutable products to provide enough competition to result in relatively low levels of market power. If the firms have small levels of market power, then the deadweight loss and excess capacity inefficiencies are likely to be small.
Second, the benefit provided by monopolistic competition is product diversity. The gain from product diversity can be large, as consumers are willing to pay for different characteristics and qualities. Therefore, the gain from product diversity is likely to outweigh the costs of inefficiency. Evidence for this claim can be seen in market-based economies, where there is a huge amount of product diversity.
The next section will introduce and discuss oligopoly: strategic interactions between firms!
An oligopoly is defined as a market structure with few firms and barriers to entry.
Oligopoly = A market structure with few firms and barriers to entry.
There is often a high level of competition between firms, as each firm makes decisions on prices, quantities, and advertising to maximize profits. Since there are a small number of firms in an oligopoly, each firm’s profit level depends not only on the firm’s own decisions, but also on the decisions of the other firms in the oligopolistic industry.
Strategic Interactions
Each firm must consider both: (1) other firms’ reactions to a firm’s own decisions, and (2) the own firm’s reactions to the other firms’ decisions. Thus, there is a continuous interplay between decisions and reactions to those decisions by all firms in the industry. Each oligopolist must take into account these strategic interactions when making decisions. Since all firms in an oligopoly have outcomes that depend on the other firms, these strategic interactions are the foundation of the study and understanding of oligopoly.
For example, each automobile firm’s market share depends on the prices and quantities of all of the other firms in the industry. If Ford lowers prices relative to other car manufacturers, it will increase its market share at the expense of the other automobile companies.
When making decisions that consider the possible reactions of other firms, firm managers usually assume that the managers of competing firms are rational and intelligent. These strategic interactions form the study of game theory, the topic of Chapter 6 below. John Nash (1928-2015), an American mathematician, was a pioneer in game theory. Economists and mathematicians use the concept of a Nash Equilibrium $(NE)$ to describe a common outcome in game theory that is frequently used in the study of oligopoly.
Nash Equilibrium = An outcome where there is no tendency to change based on each individual choosing a strategy given the strategy of rivals.
In the study of oligopoly, the Nash Equilibrium assumes that each firm makes rational profit-maximizing decisions while holding the behavior of rival firms constant. This assumption is made to simplify oligopoly models, given the potential for enormous complexity of strategic interactions between firms. As an aside, this assumption is one of the interesting themes of the motion picture, “A Beautiful Mind,” starring Russell Crowe as John Nash. The concept of Nash Equilibrium is also the foundation of the models of oligopoly presented in the next three sections: the Cournot, Bertrand, and Stackelberg models of oligopoly.
Cournot Model
Augustin Cournot (1801-1877), a French mathematician, developed the first model of oligopoly explored here. The Cournot model is a model of oligopoly in which firms produce a homogeneous good, assuming that the competitor’s output is fixed when deciding how much to produce.
A numerical example of the Cournot model follows, where it is assumed that there are two identical firms (a duopoly), with output given by $Q_i (i=1,2)$. Therefore, total industry output is equal to: $Q = Q_1 + Q_2$. Market demand is a function of price and given by $Q^d = Q^d(P)$, thus the inverse demand function is $P = P(Q^d)$. Note that the price depends on the market output $Q$, which is the sum of both individual firm’s outputs. In this way, each firm’s output has an influence on the price and profits of both firms. This is the basis for strategic interaction in the Cournot model: if one firm increases output, it lowers the price facing both firms. The inverse demand function and cost function are given in Equation \ref{5.1}.
$P = 40 – Q, \qquad C(Q_i) = 7Q_i \label{5.1}$
with $i = 1,2$.
Each firm chooses the optimal, profit-maximizing output level given the other firm’s output. This will result in a Nash Equilibrium, since each firm is holding the behavior of the rival constant. Firm One maximizes profits as follows.
\begin{align*} \max π_1 &= TR_1 – TC_1\\[4pt] \max π_1 &= P(Q)Q_1 – C(Q_1) \text{ [price depends on total output } Q = Q_1 + Q_2]\\[4pt] \max π_1 &= [40 – Q]Q_1 – 7Q_1\\[4pt] \max π_1 &= [40 – Q_1 – Q_2]Q_1 – 7Q_1\\[4pt] \max π_1 &= 40Q_1 – Q_1^2 – Q_2Q_1 – 7Q_1\\[4pt] \frac{∂π_1}{∂Q_1} &= 40 – 2Q_1 – Q_2 – 7 = 0\\[4pt] 2Q_1 &= 33 – Q_2\\[4pt] Q_1^* &= 16.5 – 0.5Q_2\end{align*}
This equation is called the “Reaction Function” of Firm One. This is as far as the mathematical solution can be simplified, and represents the Cournot solution for Firm One. It is a reaction function since it describes Firm One’s reaction given the output level of Firm Two. This equation represents the strategic interactions between the two firms, as changes in Firm Two’s output level will result in changes in Firm One’s response. Firm One’s optimal output level depends on Firm Two’s behavior and decision making. Oligopolists are interconnected in both behavior and outcomes.
The two firms are assumed to be identical in this duopoly. Therefore, Firm Two’s reaction function will be symmetrical to the Firm One’s reaction function (check this by setting up and solving the profit-maximization equation for Firm Two):
$Q_2^* = 16.5 – 0.5Q_1\nonumber$
The two reaction functions can be used to solve for the Cournot-Nash Equilibrium. There are two equations and two unknowns $(Q_1$ and $Q_2)$, so a numerical solution is found through substitution of one equation into the other.
\begin{align*} Q_1^* &= 16.5 – 0.5(16.5 – 0.5Q_1)\\[4pt] Q_1^* &= 16.5 – 8.25 + 0.25Q_1\\[4pt] Q_1^* &= 8.25 + 0.25Q_1\\[4pt] 0.75Q_1^* &= 8.25\\[4pt] Q_1^* &= 11\end{align*}
Due to symmetry from the assumption of identical firms:
\begin{align*} Q_i &= 11 & i &= 1,2\\[4pt] Q &= 22\text{ units}\\[4pt] P &= 18 \text{ USD/unit}\end{align*}
Profits for each firm are:
$π_i = P(Q)Q_i – C(Q_i) = 18(11) – 7(11) = (18 – 7)11 = 11(11) = 121 \text{ USD}\nonumber$
This is the Cournot-Nash solution for oligopoly, found by each firm assuming that the other firm holds its output level constant. The Cournot model can be easily extended to more than two firms, but the math does get increasingly complex as more firms are added. Economists utilize the Cournot model because it is based on intuitive and realistic assumptions, and the Cournot solution is intermediary between the outcomes of the two extreme market structures of perfect competition and monopoly.
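As an illustrative check (not part of the original text), the Cournot-Nash equilibrium can be found by iterating the two reaction functions derived above until they converge.

```python
# Cournot duopoly sketch: P = 40 - Q, C(Qi) = 7*Qi, reaction function Qi* = 16.5 - 0.5*Qj
def best_response(q_other):
    return 16.5 - 0.5 * q_other

q1 = q2 = 0.0
for _ in range(50):          # iterate best responses until they settle down
    q1 = best_response(q2)
    q2 = best_response(q1)

Q = q1 + q2
P = 40 - Q
print(round(q1, 2), round(q2, 2))   # 11.0, 11.0
print(round(P, 2))                  # 18.0 USD/unit
print(round((P - 7) * q1, 2))       # 121.0 USD profit per firm
```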
This can be seen by solving the numerical example for competition, Cournot, and monopoly models, and comparing the solutions for each market structure.
In a competitive industry, free entry results in price equal to marginal cost $(P = MC)$. In the case of the numerical example, $P_C = 7$. When this competitive price is substituted into the inverse demand equation, $7 = 40 – Q$, or $Q_c = 33$. Profits are found by solving $(P – MC)Q$, or $π_c = (7 – 7)Q = 0$. The competitive solution is given in Equation \ref{5.2}.
$P_c = 7 \text{ USD/unit}, \quad Q_c = 33 \text{ units}, \quad π_c = 0 \text{ USD} \label{5.2}$
The monopoly solution is found by maximizing profits as a single firm.
\begin{align*} \max π_m &= TR_m – TC_m\\[4pt] \max π_m &= P(Q_m)Q_m – C(Q_m)\text{ [price depends on total output } Q_m]\\[4pt] \max π_m &= [40 – Q_m]Q_m – 7Q_m\\[4pt] \max π_m &= 40Q_m – Q_m^2 – 7Q_m\end{align*}
\begin{align*} \frac{∂π_m}{∂Q_m} &= 40 – 2Q_m – 7 = 0\\[4pt] 2Q_m &= 33\\[4pt] Q_m^* &= 16.5\\[4pt] P_m &= 40 – 16.5 = 23.5\\[4pt] π_m &= (P_m – MC_m)Q_m = (23.5 – 7)16.5 = 16.5(16.5) = 272.25 \text{ USD}\end{align*}
The monopoly solution is given in Equation \ref{5.3}.
\begin{align} P_m &= 23.5 \text{ USD/unit} \label{5.3} \\[4pt] Q_m &= 16.5 \text{ units}\nonumber \\[4pt] π_m &= 272.25 \text{ USD}\nonumber \end{align}
The competitive, Cournot, and monopoly solutions can be compared on the same graph for the numerical example (Figure $1$).
The Cournot price and quantity are between perfect competition and monopoly, which is an expected result, since the number of firms in an oligopoly lies between the two market structure extremes.
Bertrand Model
Joseph Louis François Bertrand (1822-1900) was also a French mathematician who developed a competing model to the Cournot model. Bertrand asked the question, “what would happen in an oligopoly model if each firm held the other firm’s price constant?” The Bertrand model is a model of oligopoly in which firms produce a homogeneous good, and each firm takes the price of competitors fixed when deciding what price to charge.
Assume two firms in an oligopoly (a duopoly), where the two firms choose the price of their good simultaneously at the beginning of each period. Consumers purchase from the firm with the lowest price, since the products are homogeneous (perfect substitutes). If the two firms charge the same price, one-half of the consumers buy from each firm. Let the demand equation be given by $Q^d = Q^d(P)$. The Bertrand model follows these three statements:
1. If $P_1 < P_2$, then Firm One sells $Q^d$ and Firm Two sells 0,
2. If $P_1 > P_2$, then Firm One sells 0 and Firm Two sells $Q^d$, and
3. If $P_1 = P_2$, then Firm One sells $0.5Q^d$ and Firm Two sells $0.5Q^d$.
A numerical example demonstrates the outcome of the Bertrand model, which is a Nash Equilibrium. Assume two firms sell a homogeneous product, and compete by choosing prices simultaneously, while holding the other firm’s price constant. Let the demand function be given by $Q^d = 50 – P$ and the costs are summarized by $MC_1 = MC_2 = 5$.
(1) Firm One sets $P_1 = 20$, and Firm Two sets $P_2 = 15$. Firm Two has the lower price, so all customers purchase the good from Firm Two.
\begin{align*} Q_1 &= 0\\[4pt] Q_2 &= 35\\[4pt] π_1 &= 0\\[4pt] π_2 &= (15 – 5)35 = 350 \text{ USD}.\end{align*}
After period one, Firm One has a strong incentive to lower its price $(P_1)$ below $P_2$. The Bertrand assumption is that both firms choose a price, holding the other firm’s price constant. Thus, Firm One undercuts $P_2$ slightly, assuming that Firm Two will maintain its price at $P_2 = 15$. Firm Two keeps the same price, assuming that Firm One will maintain $P_1 = 20$.
(2) Firm One sets $P_1 = 14$, and Firm Two sets $P_2 = 15$. Firm One has the lower price, so all customers purchase the good from Firm One.
\begin{align*} Q_1 &= 36\\[4pt] Q_2 &= 0\\[4pt] π_1 &= (14 – 5)36 = 324 \text{ USD},\\[4pt] π_2 &= 0.\end{align*}
After period two, Firm Two has a strong incentive to lower price below $P_1$. This process of undercutting the other firm’s price will continue and a “price war” will result in the price being driven down to marginal cost. In equilibrium, both firms lower their price until price is equal to marginal cost: $P_1 = P_2 = MC_1 = MC_2$. The price cannot go lower than this, or the firms would go out of business due to negative economic profits. To restate the Bertrand model, each firm selects a price, given the other firm’s price. The Bertrand results are given in Equation \ref{5.4}.
\begin{align} P_1 &= P_2 = MC_1 = MC_2 \label{5.4} \\[4pt] Q_1 &= Q_2 = 0.5Q_d\nonumber \\[4pt] π_1 &= π_2 = 0 \text{ in the } SR \text{ and } LR.\nonumber \end{align}
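A small simulation sketch (illustrative only, using the demand and cost numbers from the example above) shows how alternating undercutting drives price down to marginal cost.

```python
# Bertrand duopoly sketch: Qd = 50 - P, MC1 = MC2 = 5.
# Each round, the higher-priced firm undercuts its rival by one cent,
# but never cuts price below marginal cost.
MC = 5.0
p1, p2 = 20.0, 15.0
while min(p1, p2) > MC:
    if p1 >= p2:
        p1 = max(p2 - 0.01, MC)   # Firm One undercuts
    else:
        p2 = max(p1 - 0.01, MC)   # Firm Two undercuts

print(round(p1, 2), round(p2, 2))   # the price war ends with price at MC = 5
low = min(p1, p2)
print(round((low - MC) * (50 - low), 2))   # the low-price firm's profit is 0.0
```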
The Bertrand model of oligopoly suggests that oligopolies are characterized by the competitive solution, due to competing over price. There are many oligopolies that behave this way, such as gasoline stations at a given location. Other oligopolies may behave more like Cournot oligopolists, with an outcome somewhere in between perfect competition and monopoly.
Stackelberg Model
Heinrich Freiherr von Stackelberg (1905-1946) was a German economist who contributed to game theory and the study of market structures with a model of firm leadership, or the Stackelberg model of oligopoly. This model assumes that there are two firms in the industry, but they are asymmetrical: there is a “leader” and a “follower.” Stackelberg used this model of oligopoly to determine if there was an advantage to going first, or a “first-mover advantage.”
A numerical example is used to explore the Stackelberg model. Assume two firms, where Firm One is the leader and produces $Q_1$ units of a homogeneous good. Firm Two is the follower, and produces $Q_2$ units of the good. The inverse demand function is given by $P = 100 – Q$, where $Q = Q_1 + Q_2$. The costs of production are given by the cost function: $C(Q) = 10Q$.
This model is solved recursively, or backwards. Mathematically, the problem must be solved this way to find a solution. Intuitively, each firm will hold the other firm’s output constant, similar to Cournot, but the leader must know the follower’s best strategy to move first. Thus, Firm One solves Firm Two’s profit maximization problem to know what output it will produce, or Firm Two’s reaction function. Once the reaction function of the follower (Firm Two) is known, then the leader (Firm One) maximizes profits by substitution of Firm Two’s reaction function into Firm One’s profit maximization equation. All of this is shown in the following example.
Firm One starts by solving for Firm Two’s reaction function:
\begin{align*} \max π_2 &= TR_2 – TC_2\\[4pt] \max π_2 &= P(Q)Q_2 – C(Q_2)& &\text{[price depends on total output } Q = Q_1 + Q_2]\\[4pt] \max π_2 &= [100 – Q]Q_2 – 10Q_2\\[4pt] \max π_2 &= [100 – Q_1 – Q_2]Q_2 – 10Q_2\\[4pt] \max π_2 &= 100Q_2 – Q_1Q_2 – Q_2^2 – 10Q_2\\[4pt] \frac{∂π_2}{∂Q_2} &= 100 – Q_1 – 2Q_2 – 10 = 0\\[4pt] 2Q_2 &= 90 – Q_1\\[4pt] Q_2^* &= 45 – 0.5Q_1\end{align*}
This is the reaction function of the follower, Firm Two. Next, Firm One, the leader, maximizes profits holding the follower’s output constant using the reaction function.
\begin{align*} \max π_1 &= TR_1 – TC_1\\[4pt] \max π_1 &= P(Q)Q_1 – C(Q_1)& &\text{[price depends on total output } Q = Q_1 + Q_2] \\[4pt] \max π_1 &= [100 – Q]Q_1 – 10Q_1\\[4pt] \max π_1 &= [100 – Q_1 – Q_2]Q_1 – 10Q_1\\[4pt] \max π_1 &= [100 – Q_1 – (45 – 0.5Q_1)]Q_1 – 10Q_1 & &\text{[substitution of Two’s reaction function]}\\[4pt] \max π_1 &= [100 – Q_1 – 45 + 0.5Q_1]Q_1 – 10Q_1\\[4pt] \max π_1 &= [55 – 0.5Q_1]Q_1 – 10Q_1\\[4pt] \max π_1 &= 55Q_1 – 0.5Q_1^2 – 10Q_1\\[4pt] \frac{∂π_1}{∂Q_1} &= 55 – Q_1 – 10 = 0\\[4pt] Q_1^* &= 45\end{align*}
This can be substituted back into Firm Two’s reaction function to solve for $Q_2^*$.
\begin{align*} Q_2^* &= 45 – 0.5Q_1 = 45 – 0.5(45) = 45 – 22.5 = 22.5\\[4pt] Q &= Q_1 + Q_2 = 45 + 22.5 = 67.5\\[4pt] P &= 100 – Q = 100 – 67.5 = 32.5\\[4pt] π_1 &= (32.5 – 10)45 = 22.5(45) = 1012.5 \text{ USD}\\[4pt] π_2 &= (32.5 – 10)22.5 = 22.5(22.5) = 506.25 \text{ USD}\end{align*}
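The leader-follower logic can be sketched numerically as well. This is an illustrative check only; it uses the follower's reaction function derived above and lets the leader search a grid of output levels, anticipating the follower's response.

```python
# Stackelberg duopoly sketch: P = 100 - Q, C(Q) = 10*Q, Firm One moves first
def follower(q1):
    return 45 - 0.5 * q1            # Firm Two's reaction function (derived above)

def leader_profit(q1):
    q2 = follower(q1)
    price = 100 - q1 - q2
    return (price - 10) * q1

# Leader picks the output level that maximizes its profit, given the follower's response.
q1 = max((x / 10 for x in range(0, 901)), key=leader_profit)
q2 = follower(q1)
price = 100 - q1 - q2

print(q1, q2, price)                          # 45.0, 22.5, 32.5
print(leader_profit(q1), (price - 10) * q2)   # 1012.5 USD and 506.25 USD
```

The leader earns twice the follower's profit, illustrating the first-mover advantage that Stackelberg set out to study.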
We have now covered three models of oligopoly: Cournot, Bertrand, and Stackelberg. These three models are alternative representations of oligopolistic behavior. The Bertrand model is relatively easy to identify in the real world, since it results in a price war and competitive prices. It may be more difficult to identify which of the quantity models to use to analyze a real-world industry: Cournot or Stackelberg?
The model that is most appropriate depends on the industry under investigation.
1. The Cournot model may be most appropriate for an industry with similar firms, with no market advantages or leadership.
2. The Stackelberg model may be most appropriate for an industry dominated by relatively large firms.
Oligopoly has many different possible outcomes, and several economic models to better understand the diversity of industries. Notice that if the firms in an oligopoly colluded, or acted as a single firm, they could achieve the monopoly outcome. If firms banded together to make united decisions, the firms could set the price or quantity as a monopolist would. This is illegal in many nations, including the United States, since the outcome is anti-competitive, and consumers would have to pay monopoly prices under collusion.
If firms were able to collude, they could divide the market into shares and jointly produce the monopoly quantity by restricting output. This would result in the monopoly price, and the firms would earn monopoly profits. However, under such circumstances, there is always an incentive to “cheat” on the agreement by producing and selling more output. If the other firms in the industry restricted output, a firm could increase profits by increasing output, at the expense of the other firms in the collusive agreement. We will discuss this possibility in the next section.
To summarize our discussion of oligopoly thus far, we have two models that assume that a firm holds the other firm’s output constant: Cournot and Stackelberg. These two models result in positive economic profits, at a level between perfect competition and monopoly. The third model, Bertrand, assumes that each firm holds the other firm’s price constant. The Bertrand model results in zero economic profits, as the price is bid down to the competitive level, $P = MC$.
The most important characteristic of oligopoly is that firm decisions are based on strategic interactions. Each firm’s behavior is strategic, and strategy depends on the other firms’ strategies. Therefore, oligopolists are locked into a relationship with rivals that differs markedly from perfect competition and monopoly.
Collusion and Game Theory
Collusion occurs when oligopoly firms make joint decisions, and act as if they were a single firm. Collusion requires an agreement, either explicit or implicit, between cooperating firms to restrict output and achieve the monopoly price. This causes the firms to be interdependent, as the profit levels of each firm depend on the firm’s own decisions and the decisions of all other firms in the industry. This strategic interdependence is the foundation of game theory.
Game Theory = A framework to study strategic interactions between players, firms, or nations.
A game is defined as:
Game = A situation in which firms make strategic decisions that take into account each other’s actions and responses.
A game can be represented as a payoff matrix, which shows the payoffs for each possibility of the game, as will be shown below. A game has players who select strategies that lead to different outcomes, or payoffs. A Prisoner’s Dilemma is a famous game theory example where two prisoners must decide separately whether to confess or not confess to a crime. This is shown in Figure $1$.
The police have some evidence that the two prisoners committed a crime, but not enough evidence to convict for a long jail sentence. The police seek a confession from each prisoner independently to convict the other accomplice. The outcomes, or payoffs, of this game are shown as years of jail sentences in the format $(A, B)$ where $A$ is the number of years Prisoner $A$ is sentenced to jail, and $B$ is the number of years Prisoner $B$ is sentenced to jail. The intuition of the game is that if the two Prisoners “collude” and jointly decide to not confess, they will both receive a shorter jail sentence of three years.
However, if either prisoner decides to confess, the confessing prisoner would receive only a single year sentence for cooperating, and the partner in crime (who did not confess) would receive a long 15-year sentence. If both prisoners confess, each receives a sentence of 8 years. This story forms the plot line of a large number of television shows and movies. The situation described by the prisoner’s dilemma is also common in many social and business interactions, as will be explored in the next chapter.
The outcome of this situation is uncertain. If both prisoners are able to strike a deal, and “collude,” or act cooperatively, they both choose to NOT CONFESS, and they each receive three-year sentences, in the lower right-hand outcome of Figure $1$. This is the cooperative agreement: $(\text{NOT, NOT}) = (3,3)$. However, once the prisoners are in this outcome, they have a temptation to “cheat” on the agreement by choosing to CONFESS, and reducing their own sentence to a single year at the expense of their partner. How should a prisoner proceed? One way is to work through all of the possible outcomes, given what the other prisoner chooses.
A Solution to the Prisoner’s Dilemma: Dominant Strategy
(1) If $\text{B CONF, A}$ should $\text{CONF } (8 < 15)$
(2) If $\text{B NOT, A}$ should $\text{CONF } (1 < 3)$
…$A$ has the same strategy no matter what $B$ does: $\text{CONF}$.
(3) If $\text{A CONF, B}$ should $\text{CONF } (8 < 15)$
(4) If $\text{A NOT, B}$ should $\text{CONF } (1 < 3)$
…$B$ has the same strategy no matter what $A$ does: $\text{CONF}$.
Thus, $A$ chooses to $\text{CONFESS}$ no matter what. This is called a Dominant Strategy, since it is the best choice given any of the strategies selected by the other player. Similarly, $\text{CONFESS}$ is the dominant strategy for prisoner $B$.
Dominant Strategy = A strategy that results in the highest payoff to a player regardless of the opponent’s action.
The Equilibrium in Dominant Strategies for the Prisoner’s Dilemma is $(\text{CONF, CONF})$. This is an interesting outcome, since each prisoner receives an eight-year sentence: $(8, 8)$. If they could only cooperate, they could both be better off with much lighter sentences of three years.
A second example of a game is the decision of whether to produce natural beef or not. Natural beef is typically defined as beef produced without antibiotics or growth hormones. The definition is difficult, since it means different things to different people, and there is no common legal definition. This game is shown in Figure $2$, where Cargill and Tyson decide whether to produce natural beef.
There are two players in the game: Cargill and Tyson. Each firm has two possible strategies: produce natural beef or not. The payoffs in the payoff matrix are profits (million USD) for the two companies: $(π_{Cargill}, π_{Tyson})$.
Strategy = Each player’s plan of action for playing a game.
Outcome = A combination of strategies for players.
Payoff = The value associated with possible outcomes.
In this game, profits are made from the premium associated with natural beef. If only one firm produced natural beef, that firm would capture the entire premium and earn the highest profit in the game, as the payoffs used below show.
Dominant Strategy for the Natural Beef Game
(1) If $\text{TYSON NAT, CARGILL}$ should $\text{NAT } (10 > 8)$
(2) If $\text{TYSON NO, CARGILL}$ should $\text{NAT } (12 > 6)$
…$\text{CARGILL}$ has the same strategy no matter what $\text{TYSON}$ does: $\text{NAT}$.
(3) If $\text{CARGILL NAT, TYSON}$ should $\text{NAT } (10 > 8)$
(4) If $\text{CARGILL NO, TYSON}$ should $\text{NAT } (12 > 6)$
…$\text{TYSON}$ has the same strategy no matter what $\text{CARGILL}$ does: $\text{NAT}$.
Both firms choose to produce natural beef, no matter what, so this is a Dominant Strategy for both firms. The Equilibrium in Dominant Strategies is $(\text{NAT, NAT})$. The outcome of this game demonstrates why all beef processors have moved quickly into the production of natural beef in the past few years, and are all earning higher levels of profits. Beef producers have also moved rapidly into organic beef, local beef, grass-fed beef, and even plant-based “beef.”
Prisoner’s Dilemmas are very common in oligopoly markets: gas stations, grocery stores, and garbage companies are frequently in this situation. If all oligopolists in a market could agree to raise the price, they could all earn higher profits. Collusion, or the cooperative outcome, could result in monopoly profits. In the USA, explicit collusion is illegal. Price fixing is outlawed to protect consumers. However, implicit collusion (tacit collusion) could result in monopoly profits for firms in a prisoner’s dilemma. For example, if gas stations in a city such as Manhattan, Kansas all matched a higher price, they could all make more money. However, there is an incentive to cheat on this implicit agreement by cutting the price and attracting more customers away from the other firms to your own gas station. Firms in a cooperative agreement are always tempted to break the agreement to do better.
The Nash Equilibrium calculated for the three oligopoly models (Cournot, Bertrand, and Stackelberg) is a noncooperative equilibrium, as the firms are rivals and do not collude. In these models, firms maximize profits given the actions of their rivals. This is common, since collusion is illegal and price wars are costly. How real-world oligopolists deal with prisoner’s dilemmas is the topic of the next section.
Rigid Prices: Kinked Demand Curve Model
Oligopolists have a strong desire for price stability. Firms in oligopolies are reluctant to change prices, for fear of a price war. If a single firm lowers its price, it could lead to the Bertrand equilibrium, where price is equal to marginal cost, and economic profits are equal to zero. The kinked demand curve model was developed to explain price rigidity, or oligopolists’ desire to maintain price at the prevailing price, $P^*$.
The kinked demand model asserts that rivals respond asymmetrically to a firm’s price changes: other firms in the industry react differently to a price increase than to a price decrease, which results in different demand elasticities above and below the prevailing price.
(1) If a firm increases price, $P > P^*$, other firms will not follow
… the firm will lose most customers, the demand is highly elastic above $P^*$
(2) If a firm decreases price, $P < P^*$, other firms will follow immediately
…each firm will keep the same customers, demand is inelastic below $P^*$
The kinked demand curve is shown in Figure $3$, where the different reactions of other firms leads to a kink in the demand curve at the prevailing price $P^*$.
In the kinked demand curve model, $MR$ is discontinuous, due to the asymmetric nature of the demand curve. For linear demand curves, $MR$ has the same y-intercept and two times the slope… resulting in two different sections for the $MR$ curve when demand has a kink. The graph shows how price rigidity occurs: any changes in marginal cost result in the same price and quantity in the kinked demand curve model. As long as the $MC$ curve stays between the two sections of the $MR$ curve, the optimal price and quantity will remain the same.
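A small numerical illustration makes the gap in marginal revenue concrete; the demand segments below are hypothetical, chosen only to show the discontinuity. Suppose the relatively elastic segment of demand above the kink is $P = 80 – Q$ and the relatively inelastic segment below the kink is $P = 100 – 2Q$, so the two segments meet at the kink $Q^* = 20$ and $P^* = 60$.

\begin{align*} \text{Upper segment: } P = 80 – Q \quad &\Rightarrow \quad MR = 80 – 2Q \quad\Rightarrow\quad MR(20) = 40\\[4pt] \text{Lower segment: } P = 100 – 2Q \quad &\Rightarrow \quad MR = 100 – 4Q \quad\Rightarrow\quad MR(20) = 20\end{align*}

At $Q^* = 20$, marginal revenue jumps from 40 down to 20. Any marginal cost curve passing through this gap $(20 ≤ MC ≤ 40)$ yields the same profit-maximizing choice of $Q^* = 20$ and $P^* = 60$, which is exactly the price rigidity described above.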
One important feature of the kinked demand model is that the model describes price rigidity, but does not explain it with a formal, profit-maximizing model. The explanation for price rigidity is rooted in the prisoner’s dilemma and the avoidance of a price war, which are not part of the kinked demand curve model. The kinked demand model is criticized because it is not based on profit-maximizing foundations, as the other oligopoly models are.
Two additional models of pricing are price signaling and price leadership.
Price Signaling = A form of implicit collusion in which a firm announces a price increase in the hope that other firms will follow suit.
Price signaling is common for gas stations and grocery stores, where prices are posted publicly.
Price Leadership = A form of pricing where one firm, the leader, regularly announces price changes that other firms, the followers, then match.
There are many examples of price leadership, including General Motors in the automobile industry, US Steel in the steel industry, and local banks that follow a leading bank’s interest rates.
Dominant Firm Model: Price Leadership
A dominant firm is defined as a firm with a large share of total sales that sets a price to maximize profits, taking into account the supply response of smaller firms. The dominant firm model is also known as the price leadership model. The smaller firms are referred to as the “fringe.” Let $F =$ fringe, or many relatively small competing firms in the same industry as the dominant firm. Let $Dom =$ the dominant firm. The market demand for the good $(D_{mkt})$ is equal to the sum of the demand facing the dominant firm $(D_{dom})$ and the demand facing the fringe firms $(D_F)$.
$D_{dom} = D_{mkt} – D_F \nonumber$
Total quantity $(Q_T)$ is also the sum of output produced by the dominant and fringe firms.
$Q_T = Q_{dom} + Q_F \nonumber$
The dominant firm model is shown in Figure $4$. The supply curve for the fringe firms is given by $S_F$, and the marginal cost of the dominant firm is $MC_{dom}$. Recall that the marginal cost curve is the firm’s supply curve. The dominant firm has the advantage of lower costs due to economies of scale. In what follows, the dominant firm will set a price, allow the fringe firms to produce as much as they desire, and then find the profit-maximizing quantity and price with the remainder of the market.
To find the profit-maximizing level of output, the dominant firm first finds the demand curve facing the dominant firm (the dashed line in Figure $4$), then sets marginal revenue equal to marginal cost. The dominant firm’s demand curve is found by subtracting the supply of the fringe firms $(S_F)$ from the total market demand $(D_{mkt})$.
$D_{dom} = D_{mkt} – S_F \nonumber$
The dominant firm demand curve is found by the following procedure. The y-intercept of the dominant firm’s demand curve occurs where $S_F$ is equal to the $D_{mkt}$. At this point, the fringe firms supply the entire market, so the residual facing the dominant firm is equal to zero. Therefore, the demand curve of the dominant firm starts at the price where fringe supply equals market demand. The second point on the dominant firm demand curve is found at the y-intercept of the fringe supply curve $(S_F)$. At any price equal to or below this point, the supply of the fringe firms is equal to zero, since the supply curve represents the cost of production. At this point, and all prices below this point, the market demand $(D_{mkt})$ is equal to the dominant firm demand $(D_{dom})$. Thus, the dashed line below the y-intercept of the fringe supply is equal to the market demand curve. The dominant firm demand curve for prices above this point is found by drawing a line from the y-intercept at price $(S_F = D_{mkt})$ to the point on the market demand curve at the price of the $S_F$ y-intercept. This is the dashed line above the $S_F$ y-intercept.
Once the dominant firm demand curve is identified, the dominant firm maximizes profits by setting marginal revenue equal to marginal cost at quantity $Q_{dom}$. This level of output is then substituted into the dominant firm demand curve to find the price $P_{dom}$. The fringe firms take this price as given, and produce $Q_F$. The sum of $Q_{dom}$ and $Q_F$ is the total output $Q_T$.
In this way, the dominant firm takes into account the reaction of the fringe firms while making the output decision. This is a Nash equilibrium for the dominant firm, since it is taking the other firms’ behavior into account while making its strategic decision. The model effectively captures an industry with one dominant firm and many smaller firms.
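A hypothetical numerical example shows how the procedure works; the demand, fringe supply, and cost curves below are illustrative assumptions, not taken from the figure. Suppose market demand is $Q_{mkt} = 120 – P$, fringe supply is $Q_F = P – 40$ for prices above 40 (and zero otherwise), and the dominant firm has constant marginal cost $MC_{dom} = 20$.

\begin{align*} Q_{dom} &= Q_{mkt} – Q_F = (120 – P) – (P – 40) = 160 – 2P\\[4pt] P &= 80 – 0.5Q_{dom} \quad\text{[inverse residual demand]}\\[4pt] MR_{dom} &= 80 – Q_{dom} = MC_{dom} = 20 \quad\Rightarrow\quad Q_{dom} = 60\\[4pt] P_{dom} &= 80 – 0.5(60) = 50\\[4pt] Q_F &= 50 – 40 = 10, \qquad Q_T = Q_{dom} + Q_F = 70\end{align*}

At the price of 50 set by the dominant firm, the fringe supplies 10 units, the dominant firm supplies 60 units, and total output of 70 exactly satisfies market demand $(120 – 50 = 70)$.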
Cartels
A cartel is a group of firms that have an explicit agreement to reduce output in order to increase the price.
Cartel = An explicit agreement among members to reduce output to increase the price.
Cartels are illegal in the United States, as the cartel is a form of collusion. The success of the cartel depends upon two things: (1) how well the firms cooperate, and (2) the potential for monopoly power (inelastic demand).
Cooperation among cartel members is limited by the temptation to cheat on the agreement. The Organization of Petroleum Exporting Countries (OPEC) is an international cartel that restricts oil production to maintain high oil prices. This cartel is legal, since it is an international agreement, outside of the American legal system. The oil cartel’s success depends on how well each member nation adheres to the agreement. Frequently, one or more member nations increases oil production above the agreement, putting downward pressure on oil prices. The cartel’s success is limited by the temptation to cheat. This characteristic of cartels is that of a prisoner’s dilemma, and collusion is best understood in this way.
A collusive agreement, or cartel, results in a circular flow of incentives and behavior. When firms in the same industry act independently, they each have an incentive to collude, or cooperate, to achieve higher levels of profits. If the firms can jointly set the monopoly output, they can share monopoly profit levels. When firms act together, there is a strong incentive to cheat on the agreement, to make higher individual firm profits at the expense of the other members. The business world is competitive, and as a result oligopolistic firms will strive to hold collusive agreements together, when possible. This type of strategic decision can be usefully understood with game theory, the subject of the next two chapters.
• 6.1: Game Theory Introduction
Game theory is the study of strategic interactions between players. The key to understanding strategic decision making is to understand your opponent’s point of view, and to deduce his or her likely responses to your actions.
• 6.2: Cooperative Strategy (Collusion)
Cooperative Strategy is a strategy that leads to the highest joint payoff for all players. Thus, the cooperative strategy is identical to collusion, where players work together to achieve the best joint outcome.
06: Game Theory
Game theory was introduced in the previous chapter to better understand oligopoly. Recall the definition of game theory.
Game Theory = A framework to study strategic interactions between players, firms, or nations.
Game theory is the study of strategic interactions between players. The key to understanding strategic decision making is to understand your opponent’s point of view, and to deduce his or her likely responses to your actions.
A game is defined as:
Game = A situation in which firms make strategic decisions that take into account each other’s actions and responses.
A payoff is the outcome of a game that depends on the selected strategies of the players.
Payoff = The value associated with a possible outcome of a game.
Strategy = A rule or plan of action for playing a game.
An optimal strategy is one that provides the best payoff for a player in a game.
Optimal Strategy = A strategy that maximizes a player’s expected payoff.
Games are of two types: cooperative and noncooperative games.
Cooperative Game = A game in which participants can negotiate binding contracts that allow them to plan joint strategies.
Noncooperative Game = A game in which negotiation and enforcement of binding contracts are not possible.
In noncooperative games, individual players take actions, and the outcome of the game is described by the action taken by each player, along with the payoff that each player achieves. Cooperative games are different. The outcome of a cooperative game will be specified by which group of players becomes a cooperative group, and the joint action that the group takes. The groups of players are called “coalitions.” Examples of noncooperative games include checkers, the prisoner’s dilemma, and most business situations where there is competition for a payoff. An example of a cooperative game is a joint venture of several companies that band together to form a group (collusion).
The discussion of the prisoner’s dilemma led to one solution to games: the equilibrium in dominant strategies. There are several different strategies and solutions for games, including:
1. Dominant strategy
2. Nash equilibrium
3. Maximin strategy (safety first, or secure strategy)
4. Cooperative strategy (collusion).
Equilibrium in Dominant Strategies
The dominant strategy was introduced in the previous chapter.
Dominant Strategy = A strategy that results in the highest payoff to a player regardless of the opponent’s action.
Equilibrium in Dominant Strategies = An outcome of a game in which each firm is doing the best that it can regardless of what its competitor is doing.
Recall the prisoner’s dilemma from Chapter Five.
Prisoner’s Dilemma: Dominant Strategy
(1) If $\text{B CONF, A}$ should $\text{CONF } (8 < 15)$
(2) If $\text{B NOT, A}$ should $\text{CONF } (1 < 3)$
…$A$ has the same strategy $\text{(CONF)}$ no matter what $B$ does.
(3) If $\text{A CONF, B}$ should $\text{CONF } (8 < 15)$
(4) If $\text{A NOT, B}$ should $\text{CONF } (1 < 3)$
…$B$ has the same strategy $\text{(CONF)}$ no matter what $A$ does.
Thus, the equilibrium in dominant strategies for this game is $\text{(CONF, CONF) } = (8,8)$.
Nash Equilibrium
A second solution to games is a Nash Equilibrium.
Nash Equilibrium = A set of strategies in which each player has chosen its best strategy given the strategy of its rivals.
To solve for a Nash Equilibrium:
(1) Check each outcome of a game to see if any player wants to change strategies, given the strategy of its rival.
(a) If no player wants to change, the outcome is a Nash Equilibrium.
(b) If one or more player wants to change, the outcome is not a Nash Equilibrium.
A game may have zero, one, or more than one Nash Equilibria. The Prisoner’s Dilemma is shown in Figure $1$. We will determine if this game has any Nash Equilibria.
Prisoner’s Dilemma - Nash Equilibrium
(1) Outcome $= \text{ (CONF, CONF)}$
(a) Is $\text{CONF}$ best for $A$ given $\text{B CONF}$? Yes.
(b) Is $\text{CONF}$ best for $B$ given $\text{A CONF}$? Yes.
…$\text{(CONF, CONF)}$ is a Nash Equilibrium.
(2) Outcome $= \text{ (CONF, NOT)}$
(a) Is $\text{CONF}$ best for $A$ given $\text{B NOT}$? Yes.
(b) Is $\text{NOT}$ best for $B$ given $\text{A CONF}$? No.
…$\text{(CONF, NOT)}$ is not a Nash Equilibrium.
(3) Outcome $= \text{ (NOT, CONF)}$
(a) Is $\text{NOT}$ best for $A$ given $\text{B CONF}$? No.
(b) Is $\text{CONF}$ best for $B$ given $\text{A NOT}$? Yes.
…$\text{(NOT, CONF)}$ is not a Nash Equilibrium.
(4) Outcome $= \text{ (NOT, NOT)}$
(a) Is $\text{NOT}$ best for $A$ given $\text{B NOT}$? No.
(b) Is $\text{NOT}$ best for $B$ given $\text{A NOT}$? No.
…$\text{(NOT, NOT)}$ is not a Nash Equilibrium.
Therefore, $\text{(CONF, CONF)}$ is a Nash Equilibrium, and the only Nash Equilibrium in the Prisoner’s Dilemma game. Note that in the Prisoner’s Dilemma game, the Equilibrium in Dominant Strategies is also a Nash Equilibrium.
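The outcome-by-outcome check above is mechanical enough to automate. The short sketch below is added for illustration (the variable names and structure are not from the textbook); it enumerates every outcome of the Prisoner’s Dilemma and keeps only those in which each prisoner is already playing a best response to the other, reproducing the single Nash Equilibrium found above.

```python
from itertools import product

# Prisoner's Dilemma payoffs in years of jail (lower is better).
# Keys are (A's strategy, B's strategy).
strategies = ["CONF", "NOT"]
years_A = {("CONF", "CONF"): 8, ("CONF", "NOT"): 1,
           ("NOT", "CONF"): 15, ("NOT", "NOT"): 3}
years_B = {("CONF", "CONF"): 8, ("CONF", "NOT"): 15,
           ("NOT", "CONF"): 1, ("NOT", "NOT"): 3}

def best_for_A(b_choice):
    # A's best response: the strategy giving A the fewest years, holding B fixed.
    return min(strategies, key=lambda a: years_A[(a, b_choice)])

def best_for_B(a_choice):
    # B's best response: the strategy giving B the fewest years, holding A fixed.
    return min(strategies, key=lambda b: years_B[(a_choice, b)])

nash_equilibria = []
for a, b in product(strategies, strategies):
    # An outcome is a Nash Equilibrium if neither player wants to change
    # strategies, given the strategy of the rival.
    if a == best_for_A(b) and b == best_for_B(a):
        nash_equilibria.append((a, b))

print(nash_equilibria)   # [('CONF', 'CONF')]
```

Swapping in the profit payoffs of the Advertising game below (and replacing min with max, since profits are better when higher) finds both of that game’s Nash Equilibria.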
Advertising Game
In this advertising game, two computer software firms (Microsoft and Apple) decide whether to advertise or not. The outcomes depend on their own selected strategy and the strategy of the rival firm, as shown in Figure $2$.
Advertising: Dominant Strategy
(1) If $\text{APP AD, MIC}$ should $\text{AD } (20 > 5)$
(2) If $\text{APP NOT, MIC}$ should $\text{NOT } (14 > 10)$
…different strategies, so no dominant strategy for Microsoft.
(3) If $\text{MIC AD, APP}$ should $\text{AD } (20 > 5)$
(4) If $\text{MIC NOT, APP}$ should $\text{NOT } (14 > 10)$
…different strategies, so no dominant strategy for Apple.
Thus, there are no dominant strategies, and no equilibrium in dominant strategies for this game.
Advertising: Nash Equilibria
(1) Outcome $= \text{ (AD, AD)}$
(a) Is $\text{AD}$ best for $MIC$ given $\text{APP AD}$? Yes.
(b) Is $\text{AD}$ best for $\text{APP}$ given $\text{MIC AD}$? Yes.
…$\text{(AD, AD)}$ is a Nash Equilibrium.
(2) Outcome $= \text{ (AD, NOT)}$
(a) Is $\text{AD}$ best for $\text{MIC}$ given $\text{APP NOT}$? No.
(b) Is $\text{NOT}$ best for $\text{APP}$ given $\text{MIC AD}$? No.
…$\text{(AD, NOT)}$ is not a Nash Equilibrium.
(3) Outcome $= \text{ (NOT, AD)}$
(a) Is $\text{NOT}$ best for $\text{MIC}$ given $\text{APP AD}$? No.
(b) Is $\text{AD}$ best for $\text{APP}$ given $\text{MIC NOT}$? No.
…$\text{(NOT, AD)}$ is not a Nash Equilibrium.
(4) Outcome $= \text{ (NOT, NOT)}$
(a) Is $\text{NOT}$ best for $\text{MIC}$ given $\text{APP NOT}$? Yes.
(b) Is $\text{NOT}$ best for $\text{APP}$ given $\text{MIC NOT}$? Yes.
…$\text{(NOT, NOT)}$ is a Nash Equilibrium.
There are two Nash Equilibria in the Advertising game: $\text{(AD, AD)}$ and $\text{(NOT, NOT)}$. Therefore, in the Advertising game, there are two Nash Equilibria, and no Equilibrium in Dominant Strategies.
It can be proven that in game theory, every Equilibrium in Dominant Strategies is a Nash Equilibrium. However, a Nash Equilibrium may or may not be an Equilibrium in Dominant Strategies.
Maximin Strategy (Safety First; Secure Strategy)
A strategy that allows players to avoid the largest losses is the Maximin Strategy.
Maximin Strategy = A strategy that maximizes the minimum payoff for one player.
The maximin, or safety first, strategy can be found by identifying the worst possible outcome for each strategy. Then, choose the strategy where the lowest payoff is the highest.
Prisoner’s Dilemma: Maximin Strategy (Safety First)
We use Figure $1$ to find the Maximin Strategy for the Prisoner’s Dilemma.
(1) Player $A$
(a) If $\text{CONF}$, worst payoff $= 8$ years.
(b) If $\text{NOT}$, worst payoff $= 15$ years.
…$A$’s Maximin Strategy is $\text{CONF } (8 < 15)$.
(2) Player $B$
(a) If $\text{CONF}$, worst payoff $= 8$ years.
(b) If $\text{NOT}$, worst payoff $= 15$ years.
…$B$’s Maximin Strategy is $\text{CONF } (8 < 15)$.
Therefore, the Maximin Equilibrium for the Prisoner’s Dilemma is $\text{(CONF, CONF)}$. This outcome is also an Equilibrium in Dominant Strategies, and a Nash Equilibrium.
Advertising Game: Maximin Strategy (Safety First)
(1) $\text{MICROSOFT}$
(a) If $\text{AD}$, worst payoff $= 10$.
(b) If $\text{NOT}$, worst payoff $= 5$.
…MICROSOFT’s Maximin Strategy is $\text{AD } (5 < 10)$.
(2) $\text{APPLE}$
(a) If $\text{AD}$, worst payoff $= 10$.
(b) If $\text{NOT}$, worst payoff $= 5$.
…$\text{APPLE}$’s Maximin Strategy is $\text{AD } (5 < 10)$.
Therefore, the Maximin Equilibrium in the Advertising game is $\text{(AD, AD)}$. Recall that this outcome is one of two Nash Equilibria in the advertising game: $\text{(AD, AD)}$ and $\text{(NOT, NOT)}$. If both players choose Maximin, there is only one equilibrium: $\text{(AD, AD)}$.
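The maximin calculation can be expressed in the same style as the Nash Equilibrium sketch above. The payoffs below are Microsoft’s and Apple’s profits as read off the reasoning in this section; the code itself is an illustration added here, not part of the original text.

```python
# Advertising game profits, keyed by (Microsoft's strategy, Apple's strategy).
strategies = ["AD", "NOT"]
profit_mic = {("AD", "AD"): 20, ("AD", "NOT"): 10,
              ("NOT", "AD"): 5, ("NOT", "NOT"): 14}
profit_app = {("AD", "AD"): 20, ("NOT", "AD"): 10,
              ("AD", "NOT"): 5, ("NOT", "NOT"): 14}

def maximin(profits, player):
    # For each of the player's strategies, find the worst payoff it can yield,
    # then pick the strategy whose worst payoff is largest (safety first).
    def worst(own):
        if player == "MIC":
            return min(profits[(own, rival)] for rival in strategies)
        return min(profits[(rival, own)] for rival in strategies)
    return max(strategies, key=worst)

print(maximin(profit_mic, "MIC"))   # AD  (worst payoff of 10 beats worst payoff of 5)
print(maximin(profit_app, "APP"))   # AD  (by the same reasoning for Apple)
```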
The relationships between the game theory strategies can be summarized:
1. An Equilibrium in Dominant Strategies is always a Maximin Equilibrium.
2. A Maximin Equilibrium is NOT always an Equilibrium in Dominant Strategies.
3. An Equilibrium in Dominant Strategies is always a Nash Equilibrium.
4. A Nash Equilibrium is NOT always an Equilibrium in Dominant Strategies.
The cooperative strategy is defined as the best joint outcome for both players together.
Cooperative Strategy = A strategy that leads to the highest joint payoff for all players.
Thus, the cooperative strategy is identical to collusion, where players work together to achieve the best joint outcome. In the Prisoner’s Dilemma (Figure 6.1), the cooperative outcome is found by summing the two players’ outcomes together, and finding the outcome that has the smallest jail sentence for the prisoners together: $\text{(NOT, NOT) } = (3, 3)$.
This outcome is the collusive solution, which provides the best outcome if the prisoners could make a joint decision and stick with it. Of course, there is always the temptation to cheat on the agreement, where each player does better for themselves, at the expense of the other prisoner.
Similarly, the cooperative outcome in the advertising game (Figure 6.2) is $\text{(AD, AD) } = (20, 20)$. This outcome provides the highest profits $(= 40$ million USD) to both firms. Note that the advertising game is not a prisoner’s dilemma, since there is no incentive to cheat once the cooperative solution has been achieved.
Game Theory Example: Steak Pricing Game
A pricing game for steaks is shown in Figure $1$. In this game, two beef processors, Tyson and JBS, are determining what price to charge for steaks. Suppose that these two firms are the major players in this steak market, and the outcomes depend on the strategies of both firms, since consumers choose which company to purchase from based on price. If both firms choose low prices, the outcome is low profits. Additional profits are earned by choosing high prices. However, when both firms have high prices, there is an incentive to undercut the other firm with a low price, to increase profits at the expense of the other firm.
Steak Pricing Game: Dominant Strategy
(1) If $\text{TYSON LOW, JBS}$ should $\text{LOW } (2 > 0)$
(2) If $\text{TYSON HIGH, JBS}$ should $\text{LOW } (12 > 10)$
…the dominant strategy for $\text{TYSON}$ is $\text{LOW}$.
(3) If $\text{JBS LOW, TYSON}$ should $\text{LOW } (2 > 0)$
(4) If $\text{JBS HIGH, TYSON}$ should $\text{LOW } (12 > 10)$
… the dominant strategy for $\text{JBS}$ is $\text{LOW}$.
The Equilibrium in Dominant Strategies for the Steak Pricing game is $\text{(LOW, LOW)}$. This is an unexpected result, since it is a less desirable scenario than $\text{(HIGH, HIGH)}$ for both firms. We have seen that an Equilibrium in Dominant Strategies is also a Nash Equilibrium and a Maximin Equilibrium. These results will be checked in what follows.
Steak Pricing Game: Nash Equilibrium
(1) Outcome $= \text{(LOW, LOW)}$
(a) Is $\text{LOW}$ best for $\text{JBS}$ given $\text{TYSON LOW}$? Yes.
(b) Is $\text{LOW}$ best for $\text{TYSON}$ given $\text{JBS LOW}$? Yes.
…$\text{(LOW, LOW)}$ is a Nash Equilibrium.
(2) Outcome $= \text{(LOW, HIGH)}$
(a) Is $\text{LOW}$ best for $\text{JBS}$ given $\text{TYSON HIGH}$? Yes.
(b) Is $\text{HIGH}$ best for $\text{TYSON}$ given $\text{JBS LOW}$? No.
…$\text{(LOW, HIGH)}$ is not a Nash Equilibrium.
(3) Outcome $= \text{(HIGH, LOW)}$
(a) Is $\text{HIGH}$ best for $\text{JBS}$ given $\text{TYSON LOW}$? No.
(b) Is $\text{LOW}$ best for $\text{TYSON}$ given $\text{JBS HIGH}$? Yes.
…$\text{(HIGH, LOW)}$ is not a Nash Equilibrium.
(4) Outcome $= \text{(HIGH, HIGH)}$
(a) Is $\text{HIGH}$ best for $\text{JBS}$ given $\text{TYSON HIGH}$? No.
(b) Is $\text{HIGH}$ best for $\text{TYSON}$ given $\text{JBS HIGH}$? No.
…$\text{(HIGH, HIGH)}$ is not a Nash Equilibrium.
Therefore, there is only one Nash Equilibrium in the Steak Pricing game: $\text{(LOW, LOW)}$.
Steak Pricing Game: Maximin Equilibrium (Safety First)
(1) $\text{JBS}$
(a) If $\text{LOW}$, worst payoff $= 2$.
(b) If $\text{HIGH}$, worst payoff $= 0$.
…$\text{JBS}$’ Maximin Strategy is $\text{LOW } (0 < 2)$.
(2) $\text{TYSON}$
(a) If $\text{LOW}$, worst payoff $= 2$.
(b) If $\text{HIGH}$, worst payoff $= 0$.
…$\text{TYSON}$’s Maximin Strategy is $\text{LOW } (0 < 2)$.
The Maximin Equilibrium in the Steak Pricing game is $\text{(LOW, LOW)}$. Interestingly, if both firms cooperated, they could achieve much higher profits.
Steak Pricing Game: Cooperative Equilibrium (Collusion)
Both JBS and Tyson can see that if they were to cooperate, either explicitly or implicitly, profits would increase significantly. The cooperative outcome is $\text{(HIGH, HIGH) } = (10,10)$. This is the outcome with the highest combined profits. Both firms are better off in this outcome, but each firm has an incentive to cheat on the agreement to increase profits from 10 million USD to 12 million USD.
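The collusive outcome and the temptation to cheat can be checked mechanically. The sketch below is an illustration added here, with payoffs read off the reasoning in this section; it finds the outcome with the highest combined profit and then shows the gain available to a firm that deviates from it.

```python
# Steak pricing profits in million USD, keyed by (JBS's strategy, Tyson's strategy).
strategies = ["LOW", "HIGH"]
payoffs = {("LOW", "LOW"): (2, 2),   ("LOW", "HIGH"): (12, 0),
           ("HIGH", "LOW"): (0, 12), ("HIGH", "HIGH"): (10, 10)}

# Cooperative (collusive) outcome: the strategy pair with the largest joint profit.
cooperative = max(payoffs, key=lambda pair: sum(payoffs[pair]))
print(cooperative, payoffs[cooperative])          # ('HIGH', 'HIGH') (10, 10)

# Incentive to cheat: holding Tyson at HIGH, JBS gains by switching to LOW.
jbs_if_loyal = payoffs[("HIGH", "HIGH")][0]       # 10
jbs_if_cheats = payoffs[("LOW", "HIGH")][0]       # 12
print(jbs_if_cheats - jbs_if_loyal)               # 2 million USD gain from cheating
```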
• 7.1: Repeated and Sequential Games
A game that is played only once is called a “one-shot” game. Repeated games are games that are played over and over again, with actions taken and payoffs received in each round. Many oligopolists and real-life relationships can be characterized as a repeated game. Strategies in a repeated game are often more complex than strategies in a one-shot game, as the players need to be concerned about the reactions and potential retaliations of other players.
• 7.2: First Mover Advantage
The first mover advantage is similar to the Stackelberg model of oligopoly, where the leader firm had an advantage over the follower firm. In many oligopoly situations, it pays to go first by entering a market before other firms. In many situations, it pays to determine the firm’s level of output first, before other firms in the industry can decide how much to produce. Game theory demonstrates how many real-world firms determine their output levels in an oligopoly.
07: Game Theory Applications
Repeated Games
A game that is played only once is called a “one-shot” game. Repeated games are games that are played over and over again.
Repeated Game = A game in which actions are taken and payoffs received over and over again.
Many oligopolists and real-life relationships can be characterized as a repeated game. Strategies in a repeated game are often more complex than strategies in a one-shot game, as the players need to be concerned about the reactions and potential retaliations of other players. As such, the players in repeated games are likely to choose cooperative or “win-win” strategies more often than in one shot games. Examples include concealed carry gun permits: are you more likely to start a fight in a no-gun establishment, or one that allows concealed carry guns? Franchises such as McDonalds were established to allow consumers to get a common product and consistent quality at locations new to them. This allows consumers to choose a product that they know will be the same, given the repeated game nature of the decision to purchase meals at McDonalds.
Sequential Games
A sequential game is played in “turns,” or “rounds” like chess or checkers, where each player takes a turn.
Sequential Game = A game in which players move in turns, responding to each other’s actions and reactions.
Product Choice Game One
An example of a sequential game is the product choice game shown in Figure \(1\).
In this game, two cereal producers (Kelloggs and General Mills) decide whether to produce and sell cereal made from wheat or oats. If both firms select the same category, both firms lose five million USD, since they have flooded the market with too much cereal. However, if the two firms split the two markets, with one firm producing wheat cereal and the other firm producing oat cereal, both firms earn ten million USD. In this situation, it helps both firms if they can decide which firm goes first, to signal to the other firm. It does not matter which firm produces wheat or oat cereal, as long as the two firms divide the two markets. This type of game can be solved by one firm going first, or signaling to the other firm which product it will produce, and letting the other firm take the other market.
Product Choice Game Two
It might be that one of the two markets is more valuable than the other. This situation is shown in Figure \(2\).
This cereal market game is very similar to the previous game, but in this case the oat cereal market is worth much more than the wheat cereal market. As in the Product Choice One game, if both firms select the same market, both lose five million USD. Similarly, if each firm chooses a different market, then both firms make positive economic profits. The difference between the two product choice games is that the earnings are asymmetrical in the Product Choice Two game (Figure $2$): the firm that is in the oat cereal market earns 20 million USD, and the firm in the wheat cereal market earns 10 million USD. In this situation, both firms will want to choose OAT first. If Kelloggs is able to choose OAT first, then it is in General Mills’ best interest to select WHEAT. The player in this sequential game who goes first has a first-player advantage, worth ten million USD. Each firm would be willing to pay up to ten million USD for the right to select first. In a repeated game, the market stabilizes with one firm producing oat cereal, and the other firm producing wheat cereal. There is no advantage for either firm to switch strategies, unless the firm can play OAT first, causing the other firm to move into wheat cereal.
The first mover advantage is similar to the Stackelberg model of oligopoly, where the leader firm had an advantage over the follower firm. In many oligopoly situations, it pays to go first by entering a market before other firms. In many situations, it pays to determine the firm’s level of output first, before other firms in the industry can decide how much to produce. Game theory demonstrates how many real-world firms determine their output levels in an oligopoly.
First Mover Advantage Example: Ethanol
Ethanol provides a good example of the first-mover advantage. Consider an ethanol market that is a Stackelberg duopoly. To review the Stackelberg model, assume that there are two ethanol firms in the same market, and the inverse demand for ethanol is given by
$P = 120 – 2Q,$
where $P$ is the price of ethanol in USD/gallon, and $Q$ is the quantity of ethanol in million gallons. The cost of producing ethanol is given by $C(Q) = 12Q$, and total output is the sum of the two individual firm outputs:
$Q = Q_1 + Q_2.$
First, suppose that the two firms are identical, and they are Cournot duopolists. To solve this model, Firm One maximizes profits:
\begin{align*} \max π_1 &= TR_1 – TC_1\\[4pt] \max π_1 &= P(Q)Q_1 – C(Q_1)& &\text{[price depends on total output } Q = Q_1 + Q_2]\\[4pt] \max π_1 &= [120 – 2Q]Q_1 – 12Q_1\\[4pt] \max π_1 &= [120 – 2Q_1 – 2Q_2]Q_1 – 12Q_1\\[4pt] \max π_1 &= 120Q_1 – 2Q_1^2 – 2Q_2Q_1 – 12Q_1\\[4pt] \frac{∂π_1}{∂Q_1} &= 120 – 4Q_1 – 2Q_2 – 12 = 0\\[4pt] 4Q_1 &= 108 – 2Q_2\\[4pt] Q_1^* &= 27 – 0.5Q_2 \text{ million gallons of ethanol}\end{align*}
This is Firm One’s reaction function. Assuming identical firms, by symmetry:
$Q_2^{*} = 27 – 0.5Q_1$
The solution is found through substitution of one equation into the other.
\begin{align*} Q_1^{*} &= 27 – 0.5(27 – 0.5Q_1^{*})\\[4pt] Q_1^{*} &= 27 – 13.5 + 0.25Q_1^{*}\\[4pt] Q_1^{*} &= 13.5 + 0.25Q_1^{*}\\[4pt] 0.75Q_1^{*} &= 13.5\\[4pt] Q_1^{*} &= \text{18 million gallons of ethanol} \end{align*}
Due to symmetry from the assumption of identical firms:
\begin{align*} Q_i &= 18 \text{ million gallons of ethanol, } i = 1,2\\[4pt] Q &= 36 \text{ million gallons of ethanol}\\[4pt] P &= 48 \text{ USD/gallon ethanol}\end{align*}
Profits for each firm are:
$π_i = P(Q)Q_i – C(Q_i) = 48(18) – 12(18) = (48 – 12)18 = 36(18) = 648 \text{ million USD}$
This result shows that if each firm produces 18 million gallons of ethanol, each firm will earn 648 million USD in profits. This is shown in Figure $1$, where several different possible output levels are shown as strategies for Firm A and Firm B, together with payoffs.
Next, suppose that the two firms are not identical, and that one firm is a leader and the other is the follower. By calculating the Stackelberg model solution, the possible outcomes of the game can be derived, as shown in Figure $1$.
In the Stackelberg model, assume that Firm One is the leader and Firm Two is the follower. In this case, Firm One solves for Firm Two’s reaction function:
\begin{align*} \max π_2 &= TR_2 – TC_2\\[4pt] \max π_2 &= P(Q)Q_2 – C(Q_2)& &\text{[price depends on total output } Q = Q_1 + Q_2]\\[4pt] \max π_2 &= [120 – 2Q]Q_2 – 12Q_2\\[4pt] \max π_2 &= [120 – 2Q_1 – 2Q_2]Q_2 – 12Q_2\\[4pt] \max π_2 &= 120Q_2 – 2Q_1Q_2 – 2Q_2^2 – 12Q_2\\[4pt] \frac{∂π_2}{∂Q_2} &= 120 – 2Q_1 – 4Q_2 – 12 = 0\\[4pt] 4Q_2 &= 108 – 2Q_1\\[4pt] Q_2^* &= 27 – 0.5Q_1\end{align*}
Next, Firm One, the leader, maximizes profits holding the follower’s output constant using the reaction function:
\begin{align*} \max π_1 &= TR_1 – TC_1\\[4pt] \max π_1 &= P(Q)Q_1 – C(Q_1)& &\text{[price depends on total output } Q = Q_1 + Q_2]\\[4pt] \max π_1 &= [120 – 2Q]Q_1 – 12Q_1\\[4pt] \max π_1 &= [120 – 2Q_1 – 2Q_2]Q_1 – 12Q_1\\[4pt] \max π_1 &= [120 – 2Q_1 – 2(27 – 0.5Q_1)]Q_1 – 12Q_1 & &\text{[substitution of Two's reaction function]}\\[4pt] \max π_1 &= [120 – 2Q_1 – 54 + Q_1]Q_1 – 12Q_1\\[4pt] \max π_1 &= [66 – Q_1]Q_1 – 12Q_1\\[4pt] \max π_1 &= 66Q_1 – Q_1^2 – 12Q_1\\[4pt] \frac{∂π_1}{∂Q_1} &= 66 – 2Q_1 – 12 = 0\\[4pt] 2Q_1^* &= 54\\[4pt] Q_1^* &= 27 \text{ million gallons of ethanol}\end{align*}
This can be substituted back into Firm Two’s reaction function to solve for $Q_2^*$.
\begin{align*} Q_2^* &= 27 – 0.5Q_1 = 27 – 0.5(27) = 27 – 13.5 = 13.5 \text{ million gallons of ethanol}\\[4pt] Q &= Q_1 + Q_2 = 27 + 13.5 = 40.5 \text{ million gallons of ethanol}\\[4pt] P &= 120 – 2Q = 120 – 2(40.5) = 120 – 81 = 39 \text{ USD/gallon ethanol}\\[4pt] π_1 &= (39 – 12)27 = 27(27) = 729 \text{ million USD}\\[4pt] π_2 &= (39 – 12)13.5 = 27(13.5) = 364.5 \text{ million USD}\end{align*}
These results are displayed in Figure $1$. In a one-shot game, the Nash Equilibrium is (18, 18), yielding payoffs of 648 million USD for each ethanol plant in the market. Each firm desires to select 18 million gallons, and have the other firm select 13.5 million gallons, in which case profits would increase to 810 million USD. However, the rival firm will not unilaterally cut production to 13.5, since it would lose profits at the expense of the other firm.
In a sequential game, if Firm A goes first, it will select 27 million gallons of ethanol. In this case, Firm B will choose to produce 13.5 million gallons of ethanol, which is the Stackelberg solution. Firm A, as the first mover, has increased profits from 648 to 729 million USD by being able to go first. This is the first mover advantage.
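The payoff matrix in Figure $1$ can be reproduced directly from the demand and cost assumptions of this section. The sketch below is an illustration added here; it evaluates ethanol profits for any pair of output choices using $P = 120 – 2Q$ and $C(Q_i) = 12Q_i$, and uses the three output levels mentioned in the text (13.5, 18, and 27 million gallons) as candidate strategies, which may differ from the exact strategies shown in the figure.

```python
def profits(q1, q2):
    """Return (firm 1 profit, firm 2 profit) in million USD for outputs in million gallons."""
    price = 120 - 2 * (q1 + q2)          # inverse demand for ethanol (USD/gallon)
    return ((price - 12) * q1, (price - 12) * q2)

strategies = [13.5, 18, 27]              # candidate outputs (million gallons)
for q1 in strategies:
    for q2 in strategies:
        print(f"Q1={q1:5}, Q2={q2:5} -> profits = {profits(q1, q2)}")

# Spot checks against the text:
# profits(18, 18)   -> (648.0, 648.0)   Cournot outcome
# profits(27, 13.5) -> (729.0, 364.5)   Stackelberg outcome when firm 1 leads
# profits(18, 13.5) -> (810.0, 607.5)   the deviation payoff mentioned above
```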
Empty Threat
Figure $2$ shows a sequential game between two grain seed dealers: Monsanto, a large international agribusiness, and a Local Grower. Monsanto is the dominant firm, and chooses a pricing strategy first. If Monsanto selects a $\text{HIGH}$ price strategy, the Local Grower will select a $\text{LOW}$ price, and both firms are profitable. In this case, the Local Grower has the low price, so makes more money than Monsanto.
Could Monsanto threaten the Local Grower that it would set a $\text{LOW}$ price, to try to induce the Local Grower to set a $\text{HIGH}$ price, thereby increasing Monsanto’s profits from 80 million USD to 100 million USD? Monsanto could threaten to set a $\text{LOW}$ price, but the threat is not believable, since Monsanto would have very low payoffs in both outcomes that follow a $\text{LOW}$ price. In this case, Monsanto’s threat is an empty threat, since it is neither credible nor believable.
Pre-Emptive Strike
Suppose two big box stores are considering entering a small town market. If both Walmart and Target enter this market, both firms lose ten million USD, since the town is not large enough to support both firms. However, if one firm can enter the market first (a “pre-emptive strike”), it can gain the entire market and earn 20 million USD. The firm that goes first wins this game in a significant way. This explains why Walmart has opened so many stores in a large number of small cities.
Commitment and Credibility
Figure $3$ shows a sequential game between beef producers and beef packers. In this game, the packer is the leader, and decides to produce and sell $\text{LOW}$ or $\text{HIGH}$ quality beef.
If the packers go first, they will select $\text{LOW}$, since they know that by doing so, the producers would also select $\text{LOW}$. This results in 50 million USD for the packers and 20 million USD for the producers. The producers would prefer the outcome $\text{(HIGH, HIGH)}$, as their profits would increase from 20 to 40 million USD. In this situation, the beef producers can threaten the packers by committing to producing $\text{HIGH}$ quality beef only. The packers will select $\text{LOW}$ if they do not believe the threat, in the attempt to achieve the outcome $\text{(LOW, LOW)}$. However, if the producers can commit to the $\text{HIGH}$ quality strategy, and prove to the packers that they will definitely choose $\text{HIGH}$ quality, the packers would choose $\text{HIGH}$ also, and the producers would achieve 40 million USD.
The producers could come up with a strategy of visibly and irreversibly reducing their own payoffs to prove to the packers that they are serious about $\text{HIGH}$ quality, and cause the packers to choose $\text{HIGH}$ also. This commitment, if credible, could change the outcome of the game, resulting in higher profits for the producers, at the expense of the packers. Such a credible commitment is shown in Figure $4$, which replicates Figure $3$ with the exception of the $\text{LOW}$ outcomes for the producers. If the beef producers sell off their low-quality herd, and have no low-quality cattle, they change the sequential game from the one shown in Figure $3$ to the one in Figure $4$.
If the packers are the leaders in Figure $4$, they select the $\text{HIGH}$ quality strategy. If they select $\text{LOW}$, the producers would choose $\text{HIGH}$, yielding 10 million USD for the packers. When the packers select $\text{HIGH}$, the packers earn 20 million USD. Therefore, a producer strategy of shutting down or destroying the low-quality productive capacity results in the desired outcome for the producers: $\text{(HIGH, HIGH)}$. The strategy of taking an action that appears to put a firm at a disadvantage can provide the incentives to increase the payoffs of a sequential game. This strategy can be effective, but is risky. The producers need accurate knowledge of the payoffs of each strategy.
The commitment and credibility game is related to barriers to entry in monopoly. A monopolist often has a strong incentive to keep other firms out of the market. The monopolist will engage in entry deterrence by making a credible threat of price warfare to deter entry of other firms. In many situations, a player who behaves irrationally and belligerently can keep rivals off balance, and change the outcome of a game. Political leaders who appear irrational may be using their unpredictability to achieve long run goals.
A policy example of this type of strategy occurs during bargaining between politicians. If one issue is not going in a desired direction, a political group can bring in another issue to attempt to persuade the other party to compromise.
The “holdup game” is another example of commitment and credibility. Often, once significant resources are committed to a project, the contractor will ask for more resources. If the project is incomplete, the funder will often agree to pay more money to have the project completed. Large building projects are often subject to the holdup game.
For example, if a contractor has been paid 20 million USD to build a campus building, and the project is only 50 percent complete, the contractor could halt construction, letting the half-completed building sit unfinished, and ask for 10 million USD more, due to “cost overruns.” This strategy is often effective, even if a contract is carefully and legally drawn up ahead of time. The contractor has the university right where they want it: stuck with an unfinished building unless it increases the dollars allocated to the project. The contractor is effectively saying, “do it my way, or I quit.”
Jonathan J.-M. Calède
High-impact practices are associated with learning gains for students and benefits to the university community that include gains in GPA, increased student retention, improved student-instructor interactions, and more supportive campus environments (Kuh, 2008). Research experiences specifically have been demonstrated to benefit students’ knowledge gains, skill acquisition, motivation, identity as researchers, comfort, and engagement (e.g., Follmer et al., 2017; Hanauer et al., 2017; Hunter et al., 2007; Landrum & Nelsen, 2002; Lopatto, 2007; Newell & Ulrich, 2022), including, and sometimes particularly, for students from underrepresented minorities (Daniels et al., 2016; Genet, 2021; Hanauer et al., 2017, 2022; Lopatto, 2007; Malotky et al., 2020; Martin et al., 2021; Matyas et al., 2022; Rodenbusch et al., 2016; Shuster et al., 2019), contributing to reducing the equity gap (Shapiro et al., 2015). CUREs are one tool that contributes to implementing a more equitable model of education moving forward (Elgin et al., 2021).
There is an extensive literature on the benefits of research experiences and CUREs, particularly for students. However, much of this literature is composed of studies that do not control for student-level characteristics, which could explain differences between groups (e.g., GPA, level of preparedness, enrollment preferences, prior research experience). In fact, many studies consist of pre-post comparisons of single groups and do not include comparison groups. Additionally, most of the assessments (e.g., quizzes, tests, and exams) of content knowledge undertaken in published studies were subject-specific, preventing comparisons across learning experiences. Finally, analyses of skill gains and personal development overwhelmingly rely on self-reporting by students (Linn et al., 2015). In addition to cultural and identity issues associated with self-reporting and confidence, it can be difficult to determine whether students have indeed improved in a particular skill through the experience or merely became more confident about their skills (Dolan, 2016). Because CUREs are demonstrated to benefit self-efficacy (Martin et al., 2021), this type of data should be considered with care.
1. Benefits to students and learning gains
CUREs are widely recognized to benefit students. Most of the research is based on pre-post comparisons of self-reported levels of confidence, but numerous studies also include knowledge assessments using quizzes, comparisons with traditional laboratory or recitations sections employing “cookbook” labs and activities, and comparisons with mentored research experiences. There are some analyses that fail to identify strong signals of improved skills (e.g., Brownell et al., 2015), but for many aspects of the research experience, student gains in CUREs meet or exceed those observed in summer internship and mentored student research experience models (Bixby & Miliauskas, 2022; Evans et al., 2021; Frantz et al., 2006; Hanauer et al., 2012; Jordan et al., 2014; Lopatto et al., 2008; Overath et al., 2016; Shaffer et al., 2010; Shapiro et al., 2015; Smith et al., 2021). Student gains in CUREs are higher than those experienced by students in traditional labs (Blumling et al., 2022; Brownell et al., 2012; Evans et al., 2021; Hanauer et al., 2017; Jordan et al., 2014; Lopatto et al., 2008; Newell & Ulrich, 2022; Pavlova et al., 2021; Pontrello, 2015; Wolkow et al., 2014; Wu & Wu, 2022). Analyses of CUREs across institutions, disciplines, and course levels show that the greatest gains are found in CUREs that are implemented over the course of an entire semester, allow for student input in the research process, and focus on novel research with outcomes unknown to both instructors and students (DeChenne-Peters et al. 2023; Mader et al., 2017). Additionally, students enrolling in a course-sequence including more than one CURE see additional gains (Corwin et al., 2022). There have also been reports that student gains are higher or more critical in CUREs at the introductory level than the upper level (Hanauer et al., 2017; Handelsman et al., 2022; Ruttledge, 1998).
Broadly speaking, the impacts of CUREs on students can be assigned to five categories:
1. Gains in content knowledge and technical skills
2. Gains in broadly applicable skills
3. Changes in attitudes towards and understanding of research
4. Gains in confidence and self-efficacy
5. Changes in professional/career paths
Gains in content knowledge and technical skills
One of the strongest gains experienced by students in CUREs is an improved content knowledge. This impact is observed across disciplines including CUREs in molecular biology (Harvey et al., 2014; Makarevitch et al., 2015), plant biology (Ward et al., 2014), ecology (Valliere, 2022a, 2022b), microbiology (DeHaven et al., 2022), geosciences (Gonzales & Semken, 2009; Kortz & van der Hoeven Kraft, 2016), genomics (Drew & Triplett, 2008), ecology (Genet, 2021), and cell biology as well as genetics (Makarevitch et al., 2015; Siritunga et al., 2011). This increased content knowledge can translate to improved grades through time or compared to non-CURE course sections (Blumling et al., 2022; Ing et al., 2021; Jordan et al., 2014; Olimpo et al., 2016; Shaffer et al., 2010; Waynant et al., 2022; Winkelmann et al., 2015). It may also participate in increasing student retention within courses (Blumling et al., 2022) as well as between first and second years (Jordan et al., 2014). Importantly, the time-constraints of CUREs on content coverage does not have a detrimental effect on students’ understanding of biological concepts beyond the focus of the CURE (Jordan et al., 2014) or overall content knowledge in the discipline (Wolkow et al., 2014). Several assessments of CUREs across disciplines have also found strong gains in students’ technical skills (Kortz & van der Hoeven Kraft, 2016) including in computer modelling, software program use, and general computer use (Drew & Triplett, 2008; Pavlova et al., 2021; Williams & Reddish, 2018), enhanced statistical knowledge (Olimpo et al., 2018; Pavlova et al., 2021; Ward et al., 2014), and laboratory techniques (Bixby & Miliauskas, 2022; Evans et al., 2021; Jordan et al., 2014; Large et al., 2022; Siritunga et al., 2011; Stoeckman et al., 2019). CUREs also improve students’ ability and confidence in their ability to design experiments and interpret data (Bixby & Miliauskas, 2022; Blumling et al., 2022; Brownell et al., 2012; Genet, 2021; Kloser et al., 2013; Large et al., 2022; Martin et al., 2021; Pavlova et al., 2021; Shaffer et al., 2014).
Gains in broadly applicable skills
Students also widely report great gains in soft skills. Those skills are applicable to courses and experiences through the student’s academic career and beyond in their professional careers; they are not limited to the discipline associated with the CURE. Thus, students indicate a greater familiarity with the structure of scholarly papers and a greater ability to engage with the primary literature following engagement in a CURE (e.g., DeHaven et al., 2022; Drew & Triplett, 2008; Evans et al., 2021; Jordan et al., 2014; Martin et al., 2021; Shelby, 2019; Valliere, 2022b). Several studies also report improved written and oral communication, including the graphical representation of data and the presentation of both the research process and research findings (DeHaven et al., 2022; Genet, 2021; Jordan et al., 2014; Large et al., 2022; Makarevitch et al., 2015; Shaffer et al., 2014; Shelby, 2019; Stoeckman et al., 2019; Valliere, 2022a, 2022b; Ward et al., 2014; Wiley & Stover, 2014; Williams & Reddish, 2018). This improved ability to communicate is associated with increased willingness and confidence in communicating research (e.g., Kloser et al., 2013; Valliere, 2022a). There is also evidence that CUREs lead to improved time management and organization skills (Kortz & van der Hoeven Kraft, 2016) as well as increased problem-solving skills (Olimpo et al., 2016; Wu & Wu, 2022). Students engaged in CUREs gain an appreciation for the obstacles and challenges of research (Drew & Triplett, 2008). In fact, a major impact of CUREs is their ability to increase tolerance for obstacles in students (Corwin et al., 2022; Evans et al., 2021; Jordan et al., 2014; Large et al., 2022; Stoeckman et al., 2019; Williams & Reddish, 2018; Wu & Wu, 2022). Students report valuing the ability to learn from their mistakes in the CURE (Harrison et al., 2011). Even when research goals are not met, students engaged in CUREs increase their ability to navigate research obstacles (Gin et al., 2018). CUREs lead students to become more active learners who can better think independently, are motivated to learn, and better able to think in new ways (Evans et al., 2021; Harrison et al., 2011; Kortz & van der Hoeven Kraft, 2016; Shaffer et al., 2010). Research has also showed that students value the potential for publication of the research they engage in as part of the CURE (Wiley & Stover, 2014); these publications may facilitate future admission into graduate and professional programs.
Changes in attitudes towards and understanding of research
CUREs are an enjoyable experience for students (Carr et al., 2018; Drew & Triplett, 2008; Harvey et al., 2014; LaForge & Martin, 2022; Pontrello, 2015). Students like the experience of exploring an open-ended question with no known outcome (Brownell et al., 2012; Hanauer et al., 2012; Harrison et al., 2011; Williams & Reddish, 2018), making decisions in their work (Hanauer et al., 2012; Harrison et al., 2011), and the relevance of their work to the scholarly community and the world (Drew & Triplett, 2008; Hanauer et al., 2012; Jordan et al., 2014; LaForge & Martin, 2022; Tomasik et al., 2013). Students often find CUREs to be community-building (e.g., Kulesza et al., 2022; Large et al., 2022; Hanauer et al., 2017; Werth et al., 2022) and recognize the positive impact of CUREs on their professional paths (e.g., Amir et al., 2022). CUREs also contribute to a better understanding of and confidence in the research process and science (Bascom-Slack et al., 2012; Bixby & Miliauskas, 2022; Evans et al., 2021; Freeman et al., 2023; Hanauer et al., 2012; Harrison et al., 2011; Jordan et al., 2014; Kulesza et al., 2022; LaForge & Martin, 2022; Large et al., 2022; Shaffer et al., 2014; Stoeckman et al., 2019). Harrison et al. (2011) reported increased interests in science and research in students who took a CURE. Valliere (2022a) documented a significant increase in students’ perception that they can personally relate to a scientist. Kortz and van der Hoeven Kraft (2016) found an increased general appreciation for science and scientists in students engaged in CUREs. This positive attitude towards research is retained for years (Harvey et al., 2014). CUREs also lead students to identify as researchers or as members of the scholarly community and not merely students more than they do after traditional courses (Hanauer et al., 2017; Mraz-Craig et al., 2018). Additionally, students involved in a CURE were found to have developed a better understanding of the distinctions between hypotheses and theories as well as a deeper grasp of the importance of creativity in research compared to students who completed a traditional lab course (Russell & Weaver, 2011). Interestingly, Dewey et al. (2022) found that different CURE models lead students to perceive scientific research in different ways and identify different elements as central to the culture of research in the discipline. Designing an inclusive CURE experience is therefore critical to the experience of the students and their view of research.
Gains in confidence and self-efficacy
Because of the nature of the data collection undertaken in evaluating many CUREs, much of the information collected on the benefits of CUREs concerns the confidence of students with specific tasks and general self-efficacy. Kortz and van der Hoeven Kraft (2016) reported increased student confidence in talking with other people and greater open-mindedness. In fact, an increased willingness and ability to engage in conversations and collaborations is a common outcome of CUREs (Brownell et al., 2012; Jordan et al., 2014; Martin et al., 2021; Stoeckman et al., 2019; Vater et al., 2021). Yet, CUREs also promote the ability to work independently (Stoeckman et al., 2019; Jordan et al., 2014). More broadly, students report greater confidence in their ability to conduct research (Fendos et al., 2022; Jordan et al., 2014; Siritunga et al., 2011; Wu & Wu, 2022) and self-efficacy in general (e.g., Hanauer et al., 2017).
Changes in professional/career paths
CUREs have a significant effect on students' identity as scholars, particularly in the sciences, boosting their intention to pursue a career in STEM (Newell & Ulrich, 2022). In fact, for many students, CUREs help clarify their career path (Jordan et al., 2014; Shaffer et al., 2014; Stoeckman et al., 2019). CUREs increase matriculation in STEM majors (Rodenbusch et al., 2016). They also directly or indirectly lead to increased graduation rates (Rodenbusch et al., 2016; Waynant et al., 2022). Additionally, several studies have found that engagement in a CURE leads to increased interest in conducting research in other settings following the course (Bascom-Slack et al., 2012; Brownell et al., 2012; Carr et al., 2018; Fendos et al., 2022; Harvey et al., 2014; Overath et al., 2016; Shaffer et al., 2014; Ward et al., 2014). Students also mention feeling better prepared to undertake research projects following their engagement in a CURE (Bascom-Slack et al., 2012; Drew & Triplett, 2008; Jordan et al., 2014; Newell & Ulrich, 2022; Stoeckman et al., 2019; Williams & Reddish, 2018). This leads to greater engagement in traditional research experiences (Harvey et al., 2014). This interest in research also translates to changes in career paths with increased interest and matriculation in graduate school and medical school (Harrison et al., 2011; Bascom-Slack et al., 2012; Shaffer et al., 2014). In the long term, CUREs lead to increased retention in scientific careers (Harvey et al., 2014; Shaffer et al., 2014).
1. Benefits to instructors
CUREs provide instructors with opportunities to improve student learning, help undergraduates develop a portfolio of work that supports their career goals, and create chances for students to gain skills (Desai et al., 2008; Lopatto et al., 2014; see also section C1 above). As such, CUREs enable instructors, postdoctoral researchers, and graduate teaching assistants to pursue the mission of their institution as well as their own educational goals.
CURE instructors may be faculty members, instructors, postdoctoral researchers, or graduate student assistants (Goodwin et al., 2021; Heim & Holt, 2019). Cascella and Jez (2018) presented the argument that CUREs represent a great opportunity to train postdoctoral researchers and graduate students in the roles of instructor and principal investigator. They specifically argue that CUREs provide the opportunity to develop instructional materials, practice active learning methods of teaching, and build an identity and practice as a teacher beyond merely assisting in grading and delivering content (Cascella & Jez, 2018). The nature of CUREs also provides a platform for trainees to become familiar with the functioning and management of large research projects that involve personnel, deadlines, and a budget (see also Desai et al., 2008 for a similar argument in a learning community context). Because of the potential for publication of the results of the original research undertaken in CUREs, trainees also maintain or increase their productivity (in the form of presentations, publications, or the generation of data that can feed into proposals or manuscripts). Little research has been undertaken assessing the experience of graduate teaching assistants (GTAs) in CUREs. Heim and Holt (2019) provided limited data that support the conclusion that GTAs value the experience of mentoring undergraduate research, but many report that the experience is very challenging. Goodwin et al. (2021) similarly found that graduate student instructors almost universally recognize the pedagogical value and benefits of CUREs (for both students and GTAs), but that many report high costs to teaching CUREs, including time and emotional investment. In fact, in an important link between benefits to undergraduates and benefits to instructors in training, research shows that the individual experiences of students across course sections taught by different GTAs vary widely (Goodwin et al., 2022, 2023). These experiences appear to be affected by the beliefs, motivation, interests, training, and attitude of the GTAs more than their research and teaching experiences (Goodwin et al., 2022, 2023). Specifically, the ability of GTAs to provide the support necessary for undergraduate students to persevere through failures and repeat their work is likely an important predictor of the quality of the students' research experience (Goodwin et al., 2022). The motivation of undergraduate students taking the CURE, in particular, appears to be highly affected by GTA training and practice (Goodwin et al., 2023). Support from instructors of record, peers, and undergraduate teaching assistants is therefore critical to the success of GTAs teaching a CURE as well as the success of their students (Goodwin et al., 2021, 2022, 2023). Kern and Olimpo (2022) have developed an effective training program to help GTAs facilitate CUREs.
There are also many benefits of teaching a CURE for principal investigators and lecturers. A survey of 16 instructors across disciplines at the Ohio State University who have implemented a CURE shows a range of positive impacts. The instructors' open-ended answers were coded into eight categories. Instructors could identify more than one benefit of teaching a CURE, resulting in 24 coded impacts (Table 2). Three instructors did not identify a benefit (responding "unclear" or "not sure").
Below, I focus on the benefits of CUREs that positively impact instructors' careers and are best characterized as self-interests (Desai et al., 2008). Although much less research has been undertaken on instructor gains than on student gains, CUREs have been widely recognized to benefit instructors. Instructor impacts can be divided into three categories:
1. Engaging in a meaningful research-driven teaching practice
2. Boosting engagement in research and increasing productivity
3. Gaining access to resources and developing a network of colleagues
Engaging in a meaningful research-driven teaching practice
CUREs provide the opportunity for instructors to integrate their research and teaching missions (Fukami, 2013; Shortlidge & Brownell, 2016). This includes the pursuit of one's research program as well as the chance to go in new directions and satisfy one's intellectual curiosity (Desai et al., 2008; Roberts & Shell, 2023; Shortlidge et al., 2016). As such, CUREs are more enjoyable to teach than traditional labs (Shortlidge et al., 2016; DeChenne-Peters & Scheuermann, 2022) and enable instructors to meaningfully engage students in the discipline without the barrier of content coverage (Elgin et al., 2021). Instructors report that CUREs enable them to improve their pedagogical knowledge and develop an interest in the formal assessment of their own teaching (Craig, 2017). CUREs also enable instructors to teach in a way that promotes student enthusiasm and motivation (Lopatto et al., 2014). Some instructors report that CUREs improve their relationships with students (Shortlidge et al., 2016). Others report that CUREs improve their job satisfaction (Shortlidge et al., 2017). CUREs can also serve as broader-impacts components of grant proposals and thus support the success of both research and teaching in one more way (see Shortlidge et al., 2016).
Boosting engagement in research and increasing productivity
For instructors at primarily undergraduate institutions and lecturers at research universities whose faculty model involves high teaching loads, CUREs offer the opportunity to remain involved in research endeavors (Hewlett, 2018; DeChenne-Peters & Scheuermann, 2022) and provide research opportunities to students when uncommitted time available for mentored research experiences is lacking (Gentile et al., 2017). Instructors benefit from the opportunity to keep up with the field of research and the literature (Lopatto et al., 2014). CUREs also increase the confidence of some faculty members in their own research (Shaffer et al., 2010). Because CUREs enable instructors to explore significant research questions that are broadly relevant to the scholarly community, they can be beneficial in generating data for faculty research (Shortlidge et al., 2016). In fact, the significance of the research itself has been found to be a motivator for many instructors (Lopatto et al., 2014), just as it is for students. Numerous faculty members have reported productivity benefits from CUREs, particularly co-authorship on publications (Lopatto et al., 2014; Shortlidge et al., 2016). Publication is an important outcome and motivator of engagement in the development and implementation of CUREs for both faculty members and students (e.g., Hatfull et al., 2006; Jordan et al., 2014; Ward et al., 2014). For some researchers, CUREs also offer the opportunity to identify and recruit trained and motivated students for mentored research internships (Overath, 2016; Shortlidge et al., 2016; Stoeckman et al., 2019; Ott et al., 2020; Elgin et al., 2021), or even prepare students for mentored research experiences (Fendos et al., 2022). The many research and teaching gains of CUREs lead some instructors to report increased prestige or improved reputation (Lopatto et al., 2014; Shaffer et al., 2010). CUREs are also influential in promotion and tenure decisions (Shortlidge et al., 2016).
Gaining access to resources and developing a network of colleagues
Surveys of instructors who have implemented CUREs show that their impacts go beyond direct career benefits. CURE instructors value the chance to gain access to new technology on campus and beyond (Shaffer et al., 2010). Involvement in CUREs, particularly when instructors join national collaborative efforts like the Genomics Education Partnership (Lopatto et al., 2008), the Biological Collections in Ecology and Evolution Network (https://bceenetwork.org/), the Malate Dehydrogenase CURE Community (Provost, 2022), or the Science Education Alliance Phage Hunters Advancing Genomics and Evolutionary Science (SEA-PHAGES; Jordan et al., 2014), also enables instructors to gain colleagues, grow their network of collaborators, and connect to a larger research and teaching community (Lopatto et al., 2014; Shaffer et al., 2010; DeChenne-Peters & Scheuermann, 2022). Instructors involved in CUREs report professional growth from the experience (Lopatto et al., 2014).
1. Benefits to the institution
Many of the benefits of CUREs to the institution are encapsulated in the benefits to instructors and students. Institutions benefit from higher student retention and graduation rates, training a more diverse workforce, and developing a more inclusive learning environment. They also gain from having happier instructors with higher research outputs and potentially higher tenure rates, who are less likely to leave their positions (Elgin et al., 2021; Shortlidge et al., 2017). CUREs can in fact contribute to increased enrollment (Bell et al., 2017). As with service-learning courses, CUREs can provide institutions with the opportunity to partner with business and community organizations (Elgin et al., 2021; Malotky et al., 2020; Silvestri, 2018) and provide actionable information for the students' community (Smith et al., 2022; Valliere, 2022a) or partners around the world (Kay et al., 2023). CUREs have also been shown to inspire faculty members to seek grant support for research and education (Shaffer et al., 2010), which can bring the institution additional funds through indirect-cost recovery. The CUREs themselves or the outputs from these CUREs can also contribute to increasing the prestige of the institution with an increased number of publications, presentations, and awards (e.g., Ahmad & Al-Thani, 2022; Shaffer et al., 2010; Overath, 2016; Bell et al., 2017). Recent calls for institutions to support the development and implementation of CUREs have emphasized their importance in overcoming the opportunity gap in college (e.g., Handelsman et al., 2022).
Jonathan J.-M. Calède
Although CUREs share several characteristics (see Introduction), there exists a diversity of CURE designs and models. Shuster et al. (2019) proposed a taxonomy of CUREs with two main categories: researcher-independent CUREs and researcher-driven CUREs. The former category includes many discovery-based CUREs. Researcher-independent CUREs are not tied to the expertise or research interests of the instructor. As such, they can be supervised by different instructors, may have a greater lifespan, and can potentially foster student-driven questions (Shuster et al., 2019). However, these CUREs are supervised by non-expert researchers who may not be confident in the project and require training (Shuster et al., 2019). Researcher-independent discovery-based CUREs can be replicated across institutions and lead to national programs (e.g., Genné-Bacon & Bascom-Slack, 2018; Jordan et al., 2014; Shaffer et al., 2014). Some of those national programs have common research goals. For example, the Genomics Education Partnership (GEP) is annotating the genome of Drosophila (Shaffer et al., 2010, 2014). Instructors can apply to join these programs and receive centralized training, benefit from the support system put in place by these projects' networks, and interact with the associated communities of instructors and researchers. One advantage of such programs is the potential time savings for instructors in developing and initiating a CURE (Gentile et al., 2017). There are also national programs that have been developed to support students and their instructors in undertaking their own research projects. Examples of these programs include the Genome Consortium for Active Teaching (GCAT; Campbell et al., 2007; Walker et al., 2008) and its sister program, GCAT-SEEK (Buonaccorsi et al., 2011, 2014), as well as the Ecological Research as Education Network. Both GEP and GCAT have been extensively analyzed for their ability to support CUREs (Lopatto et al., 2014). Malotky et al. (2020) presented yet another model of researcher-independent CURE in their CEL-CURE. This CURE involves "Community-Engaged Learning"; the research questions investigated by the students each semester stem from conversations with community partners in an original combination of service-learning and research (Malotky et al., 2020). Similarly, Adkins-Jablonsky et al. (2020) developed a series of CUREs centered on environmental justice in an effort to increase community engagement as well as science efficacy and identity.
Researcher-driven CUREs are experiences in which the students contribute to the research program of the instructor. Researcher-driven CUREs are mentored by experts who are invested in the success of the CURE and its dissemination to the research community (Shuster et al., 2019). These CUREs are likely to be hypothesis-driven and lead to publication (Fukami, 2013; Shuster et al., 2019). There are some opportunities for researcher-driven CUREs to benefit from the support of national programs, such as the Keck Geology Consortium (De Wet et al., 2009), but these opportunities are rare and often support small numbers of students (six to nine students in the case of the Keck Consortium). Because researcher-driven CUREs inherently integrate their teaching and research missions, faculty members at research institutions may be more likely to invest in the development and success of such CUREs than in researcher-independent ones. In contrast, faculty members at community colleges and at institutions with limited research funding and little support for the course development efforts associated with creating a CURE can benefit from joining national programs that provide a framework for implementing CUREs.
Thus, there are three possible paths to designing a CURE:
1. Implementing a researcher-independent CURE for which there is a pre-existing structure available through national programs or the peer-reviewed discipline-based education research (DBER) literature
2. Developing a new researcher-independent CURE
3. Designing a unique researcher-driven CURE
When designing a new CURE, just like any other course, instructors should adopt a backward-design approach (Dolan, 2016; Graff, 2011; Hills et al., 2020; Shapiro et al., 2015; Wiggins & McTighe, 1998). This approach requires instructors to first identify the outcomes desired from the CURE, then determine the acceptable evidence that these outcomes have been met, and only afterwards, plan the learning experiences and instruction (Cooper et al., 2017). One important caveat to this structure when designing a researcher-driven CURE is the need to integrate the research goals of the researcher with the learning outcomes for the students. Such integration can and should follow a backward-design approach for the research component as well (Cooper et al., 2017: Table 1). It may be necessary to adjust the research project to meet the desired learning outcomes or revise the learning outcomes of the CURE to match the limitations of the research project (Cooper et al., 2017). The instruction and mentorship of the research experience itself will need to consider several critical issues (e.g., Cooper & Brownell, 2018; Kloser et al., 2011; Shaffer et al., 2014; Zelaya et al., 2020) presented in Table 3.
The issues of Table 3 can be rephrased as framing questions (questions 1-2, 4-5, 7-8, and 10 from Dolan, 2016) to guide the development of a CURE:
1. How will the CURE be integrated into the curriculum? Identifying the target audience of the course, the nature of the course, and its place within the curriculum as well as the CURE’s research goals.
2. To what extent will students have intellectual responsibility and [ownership] of the research? Determining the role the students will have in developing, implementing, and communicating the research.
3. Which components of the research process will be integrated into the CURE? Defining the research experience of the students and guiding them through the literature, data collection, and data analysis.
4. How will research progress be balanced with student learning and development? Designing an inclusive experience that includes peer-reviews and addresses the limitations imposed by the course structure.
5. How will the research learning tasks be structured to foster students’ development as [scholars]? Scaffolding the CURE and individual assignments, fostering reflection, and promoting successful group work.
6. How will students communicate the results of their research? Choosing the appropriate mode of communication to reflect authentic scholarship and ensure equity.
7. How will [the progress and experience of students] be assessed? Adopting a mode of grading that is true to the research process as well as transparent and inclusive for all students.
8. How will research learning tasks change as discoveries are made and initial research questions are answered? Including iteration within the course and ensuring the success of a CURE over the long term.
9. What are the logistical obstacles and solutions for the different steps of the CURE? Overcoming problems accessing and analyzing data and funding a CURE.
10. What are the roles of instructional [and support] staff? Transforming instructors into mentors to support students and teaching assistants.
11. How will the success of the CURE be assessed? Assessing the usefulness of the scaffold of the CURE and its individual activities and evaluating the success of the research and the students.
1. Successful CUREs across disciplines
Many successful CUREs have been developed across institutions, course-levels, and disciplines. Table 4 presents a selection of CUREs, sorted by discipline, focusing on examples that provide templates or curriculum elements for the replication of the experience or its implementation in a different context. These examples can be used to start reflecting on the questions presented above.
Examples of CUREs across disciplines and topic
References for CUREs at the introductory and/or upper level in Anthropology, Biology, Business, Chemistry, Criminal Justice, Engineering, Forensic Science, Geosciences, Human Resource Development, Information security, Linguistics, Mathematics, Psychology, Physics, as well as Writing and Composition.
Discipline Level Topic Reference
Anthropology Both Equity, Health, Obesity, etc. Ruth et al., 2021
Biology Both Botany Ward et al., 2014
Biology Both Conservation Biology Sorensen et al., 2018
Biology Both DBER Cooper & Brownell, 2018
Biology Both DBER Mohammed et al., 2021
Biology Both Ecology Russell et al., 2015
Biology Both Genomics – GEP Lopatto et al., 2008
Biology Both Genomics – GEP Shaffer et al., 2010
Biology Both Genomics – GEP Shaffer et al., 2014
Biology Both Microbiology Adkins-Jablonsky et al., 2020
Biology Both Microbiology Lyles & Oli, 2022
Biology Both Microbiology Zelaya et al., 2020
Biology Both Molecular Biology Russell et al., 2015
Biology Both Molecular Biology/Ecology… Poole et al., 2022
Biology Both Public Health Malotky et al., 2020
Biology Introductory Developmental Biology Sarmah et al., 2016
Biology Introductory Ecological Genetics Bucklin & Mauger, 2022
Biology Introductory Botany Murren et al., 2019
Biology Introductory Ecology Brownell et al., 2012
Biology Introductory Ecology Fukami, 2013
Biology Introductory Ecology Genet, 2021
Biology Introductory Ecology Kloser et al., 2011
Biology Introductory Ecology Kloser et al., 2013
Biology Introductory Ecology Young et al., 2021
Biology Introductory Field Ecology Stanfield et al., 2022
Biology Introductory Field Ecology Thompson et al., 2016
Biology Introductory Genetics Brownell et al., 2015
Biology Introductory Genetics Mills et al., 2021
Biology Introductory Genetics/Ecology/Evolution… Bakshi et al., 2016
Biology Introductory Genomics Bowling et al., 2016
Biology Introductory Genomics Burnette & Wessler, 2013
Biology Introductory Genomics Chen et al., 2005
Biology Introductory Genomics Evans et al., 2021
Biology Introductory Genomics Hatfull et al., 2006
Biology Introductory Genomics Makarevitch et al., 2015
Biology Introductory Genomics Wiley & Stover, 2014
Biology Introductory Genomics Wolkow et al., 2014
Biology Introductory Genomics – SEA-PHAGES Harrison et al., 2011
Biology Introductory Microbiology Peyton & Skorupa, 2021
Biology Introductory Molecular Biology Hekmat-Scafe et al., 2017
Biology Introductory Neuroscience Waddell et al., 2021
Biology Upper Cell Biology Shapiro et al., 2015
Biology Upper Cell Biology Siritunga et al., 2011
Biology Upper Conservation Biology Gastreich, 2020
Biology Upper Ecology Shapiro et al., 2015
Biology Upper Genetics Delventhal & Steinhauer, 2020
Biology Upper Genetics Li et al., 2016
Biology Upper Genetics McDonough et al., 2017
Biology Upper Genomics Drew & Triplett, 2008
Biology Upper Genomics Dunne et al., 2014
Biology Upper Genomics Harvey et al., 2014
Biology Upper Genomics Martin et al., 2020
Biology Upper Immunology Cooper et al., 2019
Biology Upper Metagenomics Baker et al., 2021
Biology Upper Microbiology DeHaven et al., 2022
Biology Upper Microbiology Jurgensen et al., 2021
Biology Upper Microbiology Pedwell et al., 2018
Biology Upper Microbiology Petrie, 2020
Biology Upper Microbiology Sewall et al., 2020
Biology Upper Microbiology Shapiro et al., 2015
Biology Upper Microbiology Zelaya et al., 2020
Biology Upper Molecular Biology Shanle et al., 2016
Biology Upper Molecular Biology Shapiro et al., 2015
Biology Upper Molecular Biology Shuster et al., 2019
Biology Upper Molecular Biology Siritunga et al., 2011
Biology Upper Physiological Ecology Ramírez-Lugo et al., 2021
Biology Upper Physiology Rennhack et al., 2020
Biology Upper Restoration Ecology Valliere et al., 2022b
Biology Upper Urban Ecology Valliere et al., 2022a
Biology Upper Virology Shapiro et al., 2015
Biology* Introductory Genomics – SEA-PHAGES Jordan et al., 2014
Biology* Introductory Microbiology Genné-Bacon & Bascom-Slack, 2018
Biology* Introductory Plant Microbiome Bascom-Slack et al., 2012
Biology† Introductory Molecular Biology Boltax et al., 2015
Biology† Introductory Molecular Biology Rowland et al., 2012
Biology† Introductory Organismal Biomechanics Full et al., 2015
Business Upper Retail Sternquist et al., 2018
Chemistry Both Biochemistry Roberts et al., 2019
Chemistry Both Biochemistry Shelby, 2019
Chemistry Both Biochemistry Vater et al., 2021
Chemistry Introductory Analytical Chemistry Silvestri, 2018
Chemistry Introductory Biochemistry Bell, 2011
Chemistry Introductory Biochemistry Chaari et al., 2020
Chemistry Introductory Biochemistry Knutson et al., 2010
Chemistry Introductory Bioremediation Silsby et al., 2022
Chemistry Introductory General Chemistry Blumling et al., 2022
Chemistry Introductory General Chemistry Miller et al., 2022
Chemistry Introductory General Chemistry Tomasik et al., 2013
Chemistry Introductory General Chemistry Weaver et al., 2006
Chemistry Introductory General Chemistry Winkelmann et al., 2015
Chemistry Introductory Organic Chemistry Alaimo et al., 2014
Chemistry Introductory Organic Chemistry Carr et al., 2018
Chemistry Introductory Organic Chemistry Cruz et al., 2020
Chemistry Introductory Organic Chemistry Pontrello, 2015
Chemistry Introductory Organic Chemistry Ruttledge, 1998
Chemistry Introductory Organic Chemistry Silverberg et al., 2018
Chemistry Introductory Organic Chemistry Weaver et al., 2006
Chemistry Introductory Organic Chemistry Wilczek et al., 2022
Chemistry Upper Analytical Chemistry Tomasik et al., 2014
Chemistry Upper Biochemistry Ayella & Beck, 2018
Chemistry Upper Biochemistry Colabroy, 2011
Chemistry Upper Biochemistry Large et al., 2022
Chemistry Upper Biochemistry Satusky et al., 2022
Chemistry Upper Biophysical Chemistry Hati & Bhattacharyya, 2018
Chemistry Upper Medicinal Chemistry Williams & Reddish, 2018
Chemistry Upper Polymer Chemistry Karlsson et al., 2022
Chemistry* Introductory General Chemistry Clark et al., 2016
Chemistry† Introductory Biochemistry Rowland et al., 2012
Chemistry† Introductory Organic Chemistry Boltax et al., 2015
Criminal Justice Introductory Crime statistics Kruis et al., 2022
Criminal Justice Introductory Crime statistics McLean et al., 2021
Engineering Introductory Biofluid Mechanics Clyne et al., 2019
Engineering Upper Software Development Abler et al., 2011
Engineering† Introductory Organismal Biomechanics Full et al., 2015
Forensic Science Upper Next Generation Sequencing Elkins & Zeller, 2020
Geosciences Introductory DBER Kortz & van der Hoeven Kraft, 2016
Geosciences Upper Field Geomorphology May et al., 2009
Geosciences Upper Field Glaciology Connor, 2009
Geosciences Upper Field Petrology Gonzales & Semken, 2009
Human Resource Development Upper Any Hwang & Franklin, 2022
Information Security Upper Vulnerability Scanning Xu et al., 2022
Linguistics Introductory Phonetics and Phonology Bjorndahl & Gibson, 2022
Mathematics Both Any Deka et al., 2022
Physics Introductory Solar Physics Werth et al., 2022
Psychology Upper Cognitive Neuroscience Wilson, 2022
Psychology Upper Music Psychology Hernandez-Ruiz & Dvorak, 2020
Writing & Composition Introductory Oral History Parsons et al., 2021
Writing & Composition Introductory Writing Pedagogy Kao et al., 2020
Table 4. Examples of CUREs across disciplines and topic. * CURE implemented at the Ohio State University, † Interdisciplinary CURE
In addition to the many published CUREs, there are also several CUREs that have been developed across the campuses of Ohio State in STEM, social sciences, and humanities. Table 5 provides examples of these courses at the time of writing including the instructor or contact person who may be able to provide documents from their course and/or insights into their experience developing and implementing the CURE.
Examples of unpublished CUREs across disciplines and topic at the Ohio State University
Information for select CUREs in STEM, Social sciences, and Humanities at the Ohio State University
Discipline Course Topic Contact Information
Microbiology Micro 2100 Yeasts and Fermentation Steven Carlson: [email protected]
Evolutionary Biology EEOB 4220 Mammal Ecology and Evolution Ryan Norris: [email protected]
Comparative Studies CS 5189-S Field Ethnography Katherine Borland: [email protected], Jasper Waugh-Quasebarth: [email protected]
Women’s, Gender, and Sexuality Studies WGSST 2550 History of Feminist Thought Mytheli Sreenivas, [email protected]
History HIST 2475 History of the Holocaust Robin Judd, [email protected]
English ENG 4523 Renaissance London: Literature, Culture, and Place, 1540-1660 Chris Highley, [email protected]
Table 5. Examples of unpublished CUREs across disciplines and topic at the Ohio State University.
2. Expected learning outcomes of CUREs
CUREs can help foster student success in different components of the curriculum. They can be implemented at the introductory level as well as in upper-level classes (Table 4) and even in first-year seminars (Vater et al., 2019); they can lead to thesis projects and fulfill other writing requirements; they can also involve extensive laboratory work (e.g., Pontrello, 2015), field work (e.g., Gonzales & Semken, 2009; Messager et al., 2022; Thompson et al., 2016), or community-based interventions (e.g., Malotky et al., 2020). As such, CUREs may be appropriate as entire classes or elements of classes in foundation courses, theme courses, honors classes, non-major courses, required introductory courses for the major, or upper-level electives.
The first step in the design of a CURE is to identify the desired pedagogical goals for the course. Keep in mind that pedagogical goals, or learning goals, describe what you, as the instructor, and your program aim for with the CURE. They give students a general idea of what they will gain from the CURE. Expected learning outcomes (ELOs) describe what students are able to do upon completion of the CURE. ELOs are specific statements that use action verbs to state what students should achieve; they should be measurable and realistic.
There are three important categories of learning goals to consider. The first one concerns discipline-specific knowledge and skills, including technical skills. The second one is concerned with soft skills, including broadly applicable competencies in communication and habits that promote success, like the ability to work well as part of a team. The third category of learning goals that is worth including in the framework of a CURE is personal goals. Encouraging students to develop their own learning goals for the CURE can be a powerful way to increase buy-in and ownership of the project (see [Activity 1]). It has also been linked to more positive impacts of the course for students (Lopatto et al., 2022). Whichever of these three categories they belong to, the goals should be aligned with appropriate expected learning outcomes.
Discipline-specific goals can be sourced from program guidelines and department resources. In the case of courses redesigned into CUREs, the learning goals articulated for the non-CURE format of the class should also be considered. Learning goals adapted to CUREs, particularly learning goals that incorporate technical skills, can be found in the CURE literature (e.g., Connor et al., 2022; Hanauer et al., 2017; Mishra et al., 2022; Table 4) and in select dedicated publications (e.g., Irby et al., 2018). An additional source of learning goals may be found in the discipline-based education research literature, particularly concept inventories. Concept inventories and concept assessments, more than lists of discipline-specific learning goals, enable the evaluation of student learning through validated sets of multiple-choice questions (Libarkin, 2008). Concept inventories exist for a large number of disciplines and topics, particularly in STEM (Libarkin, 2008; Table 6); there are also databases of learning objectives for some disciplines (e.g., Bioliteracy; Chemistry, Life, the Universe, and Everything).
Learning outcomes can be drawn from program requirements and general education curriculum expectations. At the Ohio State University, competencies and skills that are not discipline-specific are articulated in the expected learning outcomes of the foundations, themes, integrative practices, and embedded literacies of the general education curriculum (Ohio State GE Program). Like at many other institutions, additional expected learning outcomes exist for the Honors program (Honors Program Goals).
Foundations courses enable students to gain a well-rounded education across academic disciplines and modes of inquiry. At Ohio State, students are required to take a course in each of seven fields of study (GE Program Structure), each associated with five to eight ELOs. Theme courses have their own sets of ELOs based on the theme they fall under. Theme courses are particularly appropriate for CUREs because the implementation of a CURE in a theme course enables it to qualify as a high impact practice course. The integrative practice inventory itself dictates ten expectations for these courses; they should be carefully considered at the time of design. Within a major, courses at the introductory or upper level can be valuable contexts to develop CUREs. Courses within the major are driven in part by the ELOs of the embedded literacies. Honors courses should satisfy ELOs from the Honors program.
Instructors should select learning outcomes from the appropriate list based on the nature of the course being developed (see question 1 starting on p. 51). Based on the discipline of the course, a subset of the ELOs for the foundation courses, theme courses, or embedded literacies is relevant. Once the ELOs for the course have been selected, they should be translated to specific outcomes for the CURE being developed. Specific learning outcomes should be centered on students. All statements should start with "Students will be able to …" This opening should be followed by an action verb appropriate for the goal (see Verb wheel for examples). Each CURE outcome should also be associated with specific assignments that will enable the instructor to assess whether students have met the desired course outcome, in application of the backward-design approach (see p. 32). Examples of such alignment efforts are shown below for select learning outcomes. Instructors developing their own CUREs can contact the Office of Academic Enrichment (https://osas.osu.edu/oae/) as well as the Office of Undergraduate Research and Creative Inquiry (https://ugresearch.osu.edu/) for assistance in translating the learning outcomes of university-wide programs into course-specific ELOs. Best practices in writing and using learning goals are reviewed in Orr et al. (2022) at http://lse.ascb.org/learning-objectives/.
For each of the four curricular contexts identified above for Ohio State, Tables 7-10 show the alignment between select ELOs, their translation for example CUREs in different disciplinary contexts (STEM, social sciences, and humanities or any), and a proposed assignment enabling the evaluation of the ELO. The assessment of learning outcome competency is critical and should be integrated in the design process of the course (see Mishra et al., 2022 for an example).
Examples of ELO alignments for Foundation courses
Select Expected Learning Outcomes for Foundation courses associated with examples of translation to specific CUREs and proposed assessments
ELO: Generate ideas and informed responses incorporating diverse perspectives and information from a range of sources, as appropriate to the communication situation
• Translation: Compare results of analyses to published findings. Assessment: Discussion section of the manuscript will be assessed for explicit comparisons with the results of prior studies on different organisms.
• Translation: Integrate perspectives from different actors to explain social interactions. Assessment: Compare and analyze the transcripts of the interviews, coding for commonalities.
• Translation: Organize raw narrative data into usable research data. Assessment: Develop an indexing system (e.g., titles and keywords) for organizing oral histories into a searchable database.
ELO: Draw appropriate inferences from data based on quantitative analysis and/or logical reasoning
• Translation: Interpret results of statistical analyses of phenotypic data in light of ecology. Assessment: Results and discussion sections of the manuscript will be assessed for match between statistical test results, conclusions about significance, and interpretation in terms of locomotion.
• Translation: Correlate quantitative demographic data and questionnaire responses to test specific hypotheses. Assessment: Graphical analysis of the relationship between anthropometric data and views of others' bodies.
• Translation: Analyze word-frequency data to identify linguistic changes. Assessment: Quantitatively identify important language change events and their relationship with political, historical, and sociological phenomena.
ELO: Analyze and interpret significant works of visual, spatial, literary and/or performing arts and design
• Translation: Integrate historical context and iconographic analysis to critically assess the representation of past events. Assessment: Write a critical analysis of a colonial art painting from the MET collection.
ELO: Use historical sources and methods to construct an integrated perspective on at least one historical period, event or idea that influences human perceptions, beliefs, and behaviors
• Translation: Combine the writings of different authors representing opposite interest groups to understand historical upheavals. Assessment: Synthesize into an essay the works of revolutionary and anti-revolutionary proponents to qualify the atmosphere prior to the July revolution in France in the early 19th century.
ELO: Employ the processes of science through exploration, discovery, and collaboration to interact directly with the natural world when feasible, using appropriate tools, models, and analysis of data
• Translation: As a group, assemble a database from field specimens that will enable novel analyses. Assessment: Collaboratively build a thorough dataset of qualitative and quantitative morphological information from wild plants for the Ohio State Marion prairie.
ELO: Critically evaluate and responsibly use information from the social and behavioral sciences
• Translation: Assess the value of published data and the context in which these were collected. Assessment: Analyze a time-series of census data and rigorously determine the possible effects of biases on the data.
ELO: Explain how categories including race, ethnic and gender diversity continue to function within complex systems of power to impact individual lived experiences and broader societal issues
• Translation: Associate personal narratives with patterns in socioeconomic data. Assessment: Review journalistic articles presenting personal stories of people representing race, ethnic and gender diversity and link them with quantitative data from the social science literature.
Table 7. Examples of ELO alignments for Foundation courses
Examples of ELO alignments for Theme courses
Select Expected Learning Outcomes for Theme courses associated with examples of translation to specific CUREs and proposed assessments
ELO: Identify, reflect on and apply the knowledge, skills and dispositions required for intercultural competence as a global citizen
• Translation: Evaluate the component skills of intercultural competence. Assessment: Reflection on personal competency in the components of intercultural identity.
ELO: Demonstrate a developing sense of self as a learner through reflection, self-assessment and creative work, building on prior experiences to respond to new and challenging contexts
• Translation: Assess changes in metacognition over the course of the CURE. Assessment: Reflection workbook activities on information literacy.
ELO: Engage in an advanced, in-depth, scholarly exploration of the topic or idea of sustainability
• Translation: Summarize the state of the research on the management of natural resources. Assessment: Collaborative literature review activity summarizing papers of the students' choosing on the management of natural energy resources.
• Translation: Analyze the roles of different stakeholders in the adoption of sustainability policies and commitments worldwide. Assessment: Critical essay on the roles of political and economic stakeholders in a decision made at the COP26.
• Translation: Synthesize the historical perspective of the concept of sustainability. Assessment: Write an essay on the historical developments that have led to the modern concept of sustainable development.
ELO: Explore and analyze health and well-being from theoretical, socio-economic, scientific, historical, cultural, technological, policy and/or personal perspectives
• Translation: Synthesize health outcome data from multiple fields of study. Assessment: Integrate data from the medical literature and experimental work in model animals to inform the relationship between pollution and chronic health conditions.
• Translation: Evaluate how mental attitudes affect health efforts. Assessment: Test the hypothesis that positive mental attitudes lead to significantly greater engagement in healthy behaviors.
• Translation: Summarize historical data on population health. Assessment: Compare health metrics in England between Tudor and Victorian times.
Table 8. Examples of ELO alignments for Theme courses
Examples of ELO alignments for Embedded literacies
Select Expected Learning Outcomes for Embedded literacies associated with examples of translation to specific CUREs and proposed assessments
ELO: Apply methods needed to analyze and critically evaluate statistical arguments
• Translation: Re-analyze published data to evaluate the validity of proposed paradigms. Assessment: Graphic of the distributions of isotope masses and associated summary statistics for previously analyzed supernovae.
• Critical evaluation of published quantitative data of three select psychoanalysis studies.
• Use network analysis to investigate the World Religions Paradigm.
ELO: Interpret the results of qualitative data analysis to answer research question(s)
• Translation: Use visual analysis of rock samples to determine lithology in the field. Assessment: Lithological characterization of the rock units of a geological map.
• Translation: Employ questionnaires to investigate major cultural anthropology questions. Assessment: Survey immigrant populations in the Columbus area to shed light on the processes that help groups of people maintain their culture.
• Translation: Explain the role of theatre reviews in the dramatic arts. Assessment: Synthesize reviews of theatrical performances to characterize the status of theatre through time.
ELO: Develop scholarly, creative, or professional products that are meaningful to them and their audience
• Translation: Cooperatively develop a publication-quality manuscript in a given journal format. Assessment: Manuscript accompanied by self and peer-reflection of contribution to the final product.
ELO: Recognize how technologies emerge and change
• Translation: Articulate the significance of a recent technological advancement for future research in chemical engineering. Assessment: Group Pecha-Kucha presentations of case studies involving micro computed tomography in chemical engineering.
• Translation: Appreciate the importance of technological exchanges across the world. Assessment: Create a map of technological advancements that led to the development of the computer mouse.
• Translation: Explain the role of historical advancements in technology on human societies. Assessment: Essay on the consequences of the invention of the printing press on the dissemination of information in Europe during the 15th and 16th centuries.
Table 9. Examples of ELO alignments for Embedded Literacies
Examples of ELO alignments for Honors courses
Select Expected Learning Outcomes for Honors courses associated with examples of translation to specific CUREs and proposed assessments
ELO: Reflect on ways in which their learning furthered their aspirations
• Translation: Identify professional skills improved through the CURE. Assessment: Pre/post self-efficacy and aspirations survey (e.g., Martin et al., 2021).
ELO: Identify, assess, and compare how scholars from a diversity of perspectives in different fields and disciplines approach their most challenging problems
• Translation: Integrate knowledge from different subfields relevant to your research project. Assessment: Contrast the strengths and weaknesses of molecular and morphological approaches in resolving phylogenetic relationships within Mammalia.
• Translation: Contrast the policy solutions proposed by academics from different leanings. Assessment: Evaluate the differences between the policies suggested by economists from various schools of thought in response to the Great Recession.
• Translation: Undertake a comparative analysis of a topic treated by different writing modes. Assessment: Contrast a non-fiction graphic novel and a biography or historical/journalistic book discussing the same topic.
ELO: Take on a variety of roles within groups
• Translation: Use appropriate language and communication skills to fulfill a given role within a group. Assessment: Effectively engage in group discussion in the different capacities identified by the instructor.
ELO: Communicate using modalities that are effective and inclusive relative to the intended audience
• Translation: Model inclusive interactions during group activities. Assessment: Self and peer-evaluations of group work.
ELO: Articulate what success looks like for them in both personal and professional contexts
• Translation: Develop three personalized learning goals for the semester. Assessment: Self-assess progress on these goals and participate in an end of the semester conference.
ELO: Demonstrate a growth mindset to integrate new information and ways of thinking
• Translation: Evaluate self-efficacy and acquisition of new skills. Assessment: Reflect on self-efficacy and acquisition of new skills pre and post CURE.
Table 10. Examples of ELO alignments for Honors courses
Jonathan J.-M. Calède
Designing a CURE is a complex endeavor that requires coordination with staff members, administration, and other instructors. Fortunately, there is now an extensive literature on CUREs that enables the identification of best practices and recommendations for an effective experience. The elements identified in Table 11 should be revisited throughout the process of designing a CURE.
The goal of this section is to use the framing questions presented above (p. 33-34) to guide prospective CURE instructors through the development of the research experience. These questions are not just prompts for reflection; together, they also form a worksheet for the design process.
• For each question, the "Overview" summarizes the critical design elements that are identified and developed in this section of the worksheet.
• The “starts here” units guide you through these elements of design. Although the pedagogy-centered issues are presented before the research-centered ones, they are better thought of as parallel tracks.
• The “research and education together” section combines the elements of design from both aspects of the CURE. This is an opportunity to identify and resolve conflicts.
• The “boxes” help remind instructors of the important elements of the question and critical points of contact across (and beyond) campus.
1. How will the CURE be integrated into the curriculum?
Overview: Despite the desire to design the CURE solely around learning goals and research questions, reality requires first the consideration of the place of the CURE in the students’ slate of courses. This section enables you to determine:
1. The audience of your course
2. The level of preparation and prior knowledge you can expect from students
3. The duration of the experience
4. The integration of program requirements into the course
5. The scope and intent of the research
Education starts here: Many of the questions below are best answered in communication with the chair of the undergraduate curriculum committee of your department or unit.
• Will the CURE be a new course or integrated into an existing course?
• If developing a new course:
• Will this course be aimed at majors or non-majors?
• Will this course be an upper-level class with prerequisites or open to freshmen and sophomores?
• Will this course become a requirement for any minor or certificate?
• How many course credits will the CURE represent? Are there constraints on this number? How many contact hours does this correspond to? How will they be divided between lectures, recitations, and labs? What is the total time commitment you can expect from students, including homework?
• Will this course be taught as a summer course? A half-semester course? Through two courses over an entire academic year?
• What is the expected enrollment for the course?
• Will this course become a requirement for other courses or be an elective?
• If becoming a requirement:
• What is the knowledge base and skillset that students need to master to continue with the next course?
• If an elective:
• What are the goals and expected learning outcomes of electives in your department, college, or unit? More broadly, what are expectations in your field and related professional avenues?
• If revising an existing course:
• What is the place of this course in the curriculum? The major? Minors or certificates?
• Will the CURE be replacing traditional labs or are you also integrating the lectures and/or recitations into the experience?
• What are the expected learning outcomes of the current version of the course? Are they being revised?
• What is the current enrollment in the course?
• How does the course fit into the curriculum and how does it prepare students for the next steps in their education and career?
Scholarship starts here: When developing a researcher-driven CURE, this element of design is led by the researcher and their team. When developing a researcher-independent CURE, this element of design is driven by the goals of the CURE program that you are joining.
• What discipline/field of research does the research topic fall under?
• Is the research project novel? What is already known about the possible outcome(s) of the research?
• Are you planning on focusing on a single question/goal or will the class tackle several research questions?
• Is this research hypothesis-driven or exploratory?
• What is the current stage of the research?
• Who are the stakeholders of the research that should be involved? Any collaborators?
• How does the research align with the topic, scope, and knowledge goals of the class (particularly if revising a pre-existing course)?
Research and Education together: Any incompatibility between research and education goals at this point of the development process, including incompatibilities between research goals and class enrollment or number of contact hours, may be fatal. Although revisions to the focus of the research or learning goals of the course are sometimes possible without compromising the student experience and learning, the nature of the course and the scope of the research must come together for the CURE to be possible.
Box 1: Important points
• Contact the chair of the undergraduate curriculum committee of your department or unit.
• Verify the guidelines and requirements of the course approval process when proposing new courses.
• Engage colleagues (staff or faculty) involved in the implementation of existing courses if you are revising them.
• Identify the relevant expected learning outcomes of the department, program, major, minor, or certificate.
• Do not forget the research stakeholders. How can postdocs, graduate students, and undergraduate students in your lab contribute to and benefit from the CURE?
2. To what extent will students have intellectual responsibility and ownership of the research?
Overview: Research shows that student ownership of the research project undertaken as part of the CURE positively impacts their experience of the course and their learning gains (Hanauer et al., 2012; Harrison et al., 2011; Hatfull et al., 2006). Project ownership is a central component of the role of CURE instructors (Hanauer et al., 2022). Different degrees of project ownership may be possible depending on the scope, depth, and conditions of the CURE. They are intimately associated with the level of inquiry that students engage in.
Several levels of inquiry have been defined in the context of laboratory courses (Buck et al., 2008) and revised to fit the framework of CUREs (Brownell & Kloser, 2015). Because CUREs require the investigation of novel questions with no known outcomes and the communication of the research results, they may only fall under some of these categories (Table 12). The degree of responsibility of the students in the design of the research project is associated with the degree of inquiry desired by the instructor and should be guided by the expected learning outcomes of the course. Together, they will open a range of opportunities for the students' ownership of their research.
This section enables you to determine:
1. The level of inquiry you are aiming for in the CURE.
2. The choice between students exploring researcher-chosen questions/hypotheses and student-driven questions/hypotheses
3. The amount of input students will have on the design of the research project
4. The ability of students to communicate the research upon its completion
Education starts here: The ELOs for the course are critical in determining everything from the nature of the research activities included in the experience to the scope of the research questions. They should be referenced throughout the course design process.
• What level of inquiry is dictated by the ELOs?
• Are students expected/able to develop original questions directly from observations? Will students be choosing from a range of predetermined questions and topics?
• How will you foster peer discussions and class-wide activities if several questions/hypotheses will be studied by different groups or individuals?
• Will you be giving students an introduction to the existing knowledge on their research topic or are they expected to collect this information directly from the published literature? If the latter, how will you guide this work and/or assess it?
• Are students allowed to and capable of collecting their own data, given permitting requirements (e.g., IRB) and ethical best practices?
• Will appropriate analytical procedures be suggested to students or should they propose particular approaches based on their reading of the literature?
• Is the level of inquiry selected compatible with the amount of time given to students?
• Is the level of inquiry selected compatible with the expertise of the students?
• What is the efficacy of students with reading the primary literature? Will you be providing training on this topic?
• What is the familiarity of students with the field of research chosen for the CURE? How much background information is necessary to be able to comfortably initiate the research process?
• Can you expect the students to understand and implement the analytical procedures likely to be involved in the research? Will you be providing the necessary knowledge as part of the class?
• What are the computational skills of the students? Does the analytical protocol require coding skills or can it be implemented in a software program with a GUI?
• In what prior classes or context would the students have acquired the required expertise/competence for the CURE?
• If students will explore their own questions:
• Will you be placing constraints on the breadth and depth of the questions chosen by students?
• What is too narrow a question?
• What is too broad a question?
• Can you mentor the diversity of questions and datasets entailed by this structure?
• Are there topics that could be emotionally difficult for some students? Will these students have the opportunity to work on a different project or will these projects simply not be validated by the instructor?
• How will students select their hypothesis/question from all possible options?
• What is the process by which a student question/hypothesis is approved for the CURE?
• Can a mock panel (composed, for example, of the instructor, graduate student researchers, and colleagues) review student proposals?
• If students will be working in groups, how will you balance the group structure with individual questions?
• If students will be exploring researcher-chosen questions:
• Will the questions be prescriptive, or will there be an opportunity for students to refine their question, narrow it, or define a question from a broad topic?
• Is the question chosen compatible with the time frame of the CURE and the skill level of the students?
Scholarship starts here: It is important to recognize that there are constraints on scholarship stemming from regulatory and ethical requirements. Additionally, and no less important to the success of the CURE, its place within the broader research project of which it is a part may drive the nature of the deliverable, the breadth and depth of the work, and the level of prescription imposed by the instructor(s).
• What are the existing constraints on the project from other stakeholders (e.g., other lab members, external collaborators, and national programs or networks)?
• What are the existing constraints on the project from existing data, commitments to funding agencies, museum collections, permitting agencies, university rules (including EHS and IRB), formal and informal agreements, and ethical concerns?
• How do these different types of constraints narrow the nature, breadth, and depth of the questions that can be investigated by the students? How about the analytical protocols?
• If the CURE is researcher-driven:
• What question are you interested in exploring?
• What question/topics are you comfortable/uncomfortable mentoring?
• How much of the research process are you comfortable leaving up to the students?
• What is the deliverable you are hoping for upon completion of the CURE?
• How critical is the success of the research (i.e., obtaining the deliverable defined above) to you?
• How much time will you be dedicating to the CURE outside of contact hours?
• Is the research goal of the CURE compatible with the time and means given to students?
• Does the CURE represent a standalone project publishable upon its completion or a portion of a bigger project? How does this impact the format of research communication upon completion of the CURE (i.e. does the end product represent a manuscript? A poster presentation? An element of a grant proposal?)
Research and Education together: Legal, ethical, and professional requirements dictated by the field of research, necessary resources for the project, and the research program context of the CURE are the starting point for the level of inquiry that is possible for the CURE. However, there are several existing strategies to increase student engagement and ownership of the project that should be considered:
• Hanauer et al. (2022) presented a model of CURE mentoring that includes a project ownership strategy centered on fostering personal responsibility; it starts with teaching a scientific protocol and extends to promoting research ethics, facilitating peer collaboration, encouraging independence, encouraging engagement and enthusiasm, creating opportunities for presentations, and fostering future educational and career opportunities.
• Students who collect their own data are more invested in the research project and display a greater research identity than students working with pre-existing datasets they did not collect (Cooper et al., 2020a).
• Undergraduate students can work with graduate students or undergraduate researchers who have benefited from mentored research experience or previous iterations of the CURE to contribute to the research questions and methods within a framework (Hanauer et al., 2012).
• Emphasizing to students the significance of the research to the broader community of scholars in the field is important (Hanauer et al., 2012). Research shows that broadly relevant novel research leads to higher ownership of the project by students (Cooper et al., 2019).
• It is sometimes possible to pursue research questions relevant to the community or the students themselves (e.g., Malotky et al., 2020; Silvestri, 2018; Valliere 2022a). Students enjoy the opportunity to choose their own research topic (Amir et al., 2022) and those who are given opportunities to investigate questions relevant to themselves or their community show increased ownership of the research (Hanauer et al., 2012).
• Incorporating meetings that mimic research group meetings in the discipline and poster presentations can promote project ownership (Satusky et al., 2022).
• A CURE design that creates intellectual challenge and encourages problem solving deepens the engagement of students and the significance of the research experience (Hanauer et al., 2012). Challenges and iterations are critical to increasing the ability of students to navigate research obstacles (Gin et al., 2018; Light et al., 2020). In fact, students report valuing the ability to learn from their mistakes (Harrison et al., 2011) and view challenges and iterations as more representative of a real research experience (Goodwin et al., 2021). CUREs should deliberately incorporate iteration and discussions about the importance of iteration in research as part of student development (Light et al., 2020). Iteration enables the development of adaptive strategies by students that benefit them beyond the course (Cooper et al., 2022). A review of the best practices to engage students in problem solving is provided by Frey et al. (2022) and at https://lse.ascb.org/evidence-based-teaching-guides/problem-solving/.
• The CURE should be challenging without being overwhelming; this level of difficulty, paired with instructor support, has previously been shown to foster motivation (Dolan, 2016).
• If students will be developing their own questions, they should be guided through the process of identifying their own research question including the following critical issues:
• What questions have already been asked?
• What is the existing knowledge on the topic?
• What questions have not yet been answered or even asked?
• What are the gaps in the conversation?
• What questions are relevant to the field of research?
• An activity leading students through the process of identifying scholarly significant questions can be followed up with an activity helping them narrow big research questions into manageable CURE projects (see [Activity 2]).
Box 2: Important points
• Students should tackle questions and test hypotheses relevant to the broader (research) community.
• Students should be allowed to make decisions throughout the design and implementation of the research protocols.
• Instructors should rethink their place in the classroom to that of mentors and not merely supervisors.
• There should be multiple opportunities for students to develop their own hypotheses and defend them with evidence from the literature and their own work.
3. Which components of the research process will be integrated into the CURE?
Overview: The traditional view of the scientific method involves a series of steps starting with observation and the formulation of a hypothesis followed by the test of this hypothesis and the dissemination of the findings (Voit, 2019). In the social sciences, qualitative, quantitative, and mixed approaches may be adopted to probe questions pertaining to human beings and their societies, but the overall process of inquiry remains the same (Creswell, 2014). The nature of the data collection process and data analyses differs in the humanities, but the scholarly endeavor is still the “evidence-based exploration of a question or hypothesis that is important to those in the discipline in which the work is being done” (University of Washington English Department). Thus, across disciplines, the research process requires the identification of an appropriate research question, whether it be from direct observation or a review of the primary literature, the collection of data, their analysis, the interpretation of outputs, and the communication of the knowledge gained from the research. It is critical to identify which of these elements of the research process will be included in the CURE while considering the constraints of the undergraduate classroom. This section enables you to determine:
1. Your approach in defining research to students and framing their experience as scholars
2. The background information that needs to be provided to the students
3. The role of students in the data collection process
4. The engagement of students with the primary literature
5. The role of students in the data analysis process
6. The role of iteration and impact of failure on the CURE
Education starts here:
• How would your students define “research”?
• What does the research process look like to them?
• What is their understanding of the concept of “Research as Inquiry?”
• What is your understanding of the students’ view of research?
• What information is necessary to understand the basis of the question(s) the students will investigate?
• Are there recently published reviews of the field/topics that students will be researching? Are there other sources of information that can help draw students into the literature (including secondary and tertiary literature, videos, and journalistic writings)?
• Are there important conflicts or controversies in the field that students should know about? What is the current paradigm and are we on the edge of a new one?
• Will you provide a brief introduction to the study system in class drawing upon your knowledge, past iterations of the CURE, and ongoing research on campus (and beyond)?
• Will students be collecting the entire dataset they will analyze? Alternatively, will they be working from pre-existing datasets in whole or in part?
• Are there pilot or example datasets that enable students to visualize their objective while collecting most of the data they will analyze?
• Will specific data collection protocols be enforced by permitting requirements (e.g., IRB) or ethical best practices?
• Are there professional, legal, or ethical training requirements for students to collect data? To consult and analyze data?
• How can you incorporate responsible and ethical conduct of research training into the course to satisfy needs and train reflective and responsible professionals?
• Will students be required to read certain publications or a minimum number of freely chosen publications? At what stages of the research process will engagement with the literature be suggested/enforced?
• How will you guide students towards relevant publications and/or vet their choice of readings?
• How will you make sure that students engage with a variety of sources and read the work of multiple authors representing different points of view?
• Students should be prompted to reflect upon questions like: (1) How does the source contribute to the scholarly conversation? (2) How do others in the field perceive the value of the source? (3) How does the source guide or support my work?
• Will specific analyses be required of students in the form of a detailed protocol or as a list of milestones? If not, how will students choose the analyses they will use? How will they be guided through this process?
• What is the role of published studies as models for the students’ work?
• Will students identify analyses that will be run by scholars with greater computational skills or are they able to run some or all of these analyses themselves?
• How would experimental failure or faulty data collection impact the learning process?
• Are there backup datasets?
• Are there alternative protocols and analyses?
• What is the relevance/significance of null results or a failed protocol to the scholarly community?
• Is there enough time in the CURE to implement alternative protocols? Revise the data collection? Add to the dataset? Repeat an analysis?
Scholarship starts here:
• Is there a need for a formal review of the field of study the students will engage with?
• Can the CURE be the impetus to publish a peer-reviewed review of a topic?
• Would a more accessible summary benefit audiences beyond the CURE including undergraduates starting mentored research experiences in the lab or the general public?
• Is the CURE an opportunity for the instructor/researcher to engage or reengage with specific aspects of the primary literature?
• What is an appropriate sample (including quantity and quality parameters) to address the question of the CURE?
• Will student-collected data represent the entire set of relevant data for this project or are there other elements of the research project they will not engage in?
• Does the use of preexisting data dictate a specific data collection protocol to enable comparisons/integrations?
• What work is necessary ahead of the CURE to make data collection by students possible?
• What checks and balances will be implemented to validate the quality of the student-collected data?
• Do those checks and balances involve peer-review? Instructor review? Outside researcher review?
• Are there analyses that are standard/expected in the field?
• Are some of these analyses computationally too demanding for the CURE?
• Do some of these analyses require coding or statistical training beyond what can reasonably be expected of students?
• Are there critical failure points in the research protocol? Are there data or analyses whose failure would hobble or interrupt research progress?
• What would be the consequence of failure (of a specific analysis, experiment, or data collection) on proposal submissions, publication completions, and professional advancements (including promotion and tenure)?
• What is the role of the CURE in the development of the research program/project(s) of the researcher/instructors/collaborators/graduate students/mentored undergraduate researchers/ etc.?
Research and Education together:
• Do not underestimate the importance of framing the entire CURE experience with a conversation about the nature of research. Students may not always recognize research as information creation and inquiry. “Research as Inquiry” is a core concept underlying the practice of research. Defining research as inquiry means explaining to students that research is an open-ended and messy exploration process focused on information gaps, unanswered questions or problems, involving multiple sources and the interpretation of information, often with no clear right answer, leading to new questions, generating ambiguity, and requiring an open mind, persistence, and flexibility (e.g., [Activity 3]). Research shows that engaging in this type of conversation about the nature of research can lead to higher student outcomes (Lopatto et al., 2022).
• CUREs are designed as authentic engagements in research and as such should incorporate responsible conduct of research (RCR) training to form conscientious professionals (Diaz-Martinez et al., 2021). Many institutions and organizations require ethical conduct of research training. At Ohio State, RCR training is provided through the Collaborative Institutional Training Initiative (CITI) course, which can be assigned as homework.
• Discipline-specific best practices in ethical conduct of research should also be implemented as part of the CURE (e.g., Hills & Light, 2022). They can be integrated in the grading scheme for the course.
• Consider using mind-mapping exercises to help students identify the existing knowledge of the topic of the CURE and the current debates and controversies in the field of study [Activity 4].
• Established databases provide sources of data that can be used in CUREs (e.g., Gastreich, 2020). These include museum databases, citizen science databases, and professional databases (Table 13); a minimal query sketch follows the table.
Examples of databases that can be leveraged for research projects in CUREs
Research databases across disciplines including Art History, Biology, Earth Sciences, Machine Learning, Physics and Astronomy, as well as Political and Social Sciences
Name URL
Art History
Index of Medieval Art https://theindex.princeton.edu/
Wax collection of Islamic art https://minicomp.github.io/wax/collection/
Biology
AntMaps https://antmaps.org/
AntWeb https://www.antweb.org/
Arctos https://arctos.database.museum/
CalFlora https://www.calflora.org
CalPhotos https://calphotos.berkeley.edu/
COSMIC https://cancer.sanger.ac.uk/cosmic
Diatoms of North America https://diatoms.org/
eBird https://ebird.org/home
featherbase https://www.featherbase.info/en/home
iNaturalist https://www.inaturalist.org/
Madagascar Terrestrial Camera Survey https://doi.org/10.1002/ecy.3687
Movebank https://www.movebank.org/cms/movebank-main
NEON https://www.neonscience.org/
NEOTOMA https://www.neotomadb.org/
NOW https://nowdatabase.org/
Paleobiology Database https://paleobiodb.org/
Protein Data Bank https://www.rcsb.org/
VertNet http://vertnet.org/
Earth Sciences
Macrostrat https://macrostrat.org/
Soil grids https://soilgrids.org/
WorldClim https://www.worldclim.org/
USGS Current Water Data https://waterdata.usgs.gov/nwis/rt
LAGOS https://lagoslakes.org
EarthData https://search.earthdata.nasa.gov/search
Stone Lab Algal and Water Quality Laboratory https://ohioseagrant.osu.edu/research/live/water
Mauna Loa Trends in Atmospheric CO2 https://gml.noaa.gov/ccgg/trends/data.html
NASA GISS Surface Temperature Analysis https://data.giss.nasa.gov/gistemp/station_data_v4_globe/
North Temperate Lakes LTER https://lter.limnology.wisc.edu/data
Machine Learning
Machine Learning Repository https://archive.ics.uci.edu/ml/index.php
Physics and Astronomy
SIMBAD http://simbad.u-strasbg.fr/simbad/
Astrophysics Data System https://ui.adsabs.harvard.edu/
Political and Social Sciences
ICPSR https://www.icpsr.umich.edu/web/pages/
Various
Smithsonian Open Access https://www.si.edu/openaccess
Hathi Trust Digital Library https://www.hathitrust.org/datasets
GeoPlatform https://www.geoplatform.gov/
Omeka Project Directory https://omeka.org/classic/directory/
Table 13. Examples of databases that can be leveraged for research projects in CUREs.
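To illustrate how the databases listed in Table 13 can feed directly into student analyses, here is a minimal sketch assuming Python with the requests library and the public iNaturalist API (https://api.inaturalist.org/v1/observations); the parameter names and response fields shown are assumptions that should be verified against the current API documentation before classroom use.

```python
# Minimal sketch: querying a public biodiversity database (iNaturalist) for
# observations of a taxon. Endpoint and parameters are assumptions based on the
# public API and should be checked against its documentation.
import requests

API_URL = "https://api.inaturalist.org/v1/observations"  # public, no key required

def fetch_observations(taxon_name: str, per_page: int = 5) -> list:
    """Return a list of observation records for the given taxon name."""
    params = {"taxon_name": taxon_name, "per_page": per_page}
    response = requests.get(API_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json().get("results", [])

if __name__ == "__main__":
    for obs in fetch_observations("Danaus plexippus"):
        # .get() keeps the sketch robust if individual fields are missing.
        print(obs.get("observed_on"), "-", obs.get("place_guess"))
```

Wrapping the query in a small function like this lets students change the taxon or the sample size without touching the request logic; databases that do not expose a web API can instead be accessed through their bulk download options.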
• Iterative CUREs enable students to access data from previous years and other lab groups and, as such, provide the opportunity to analyze realistic sample sizes that may be difficult or impossible to gather within the time frame of a CURE (Kloser et al., 2011; see Satusky et al., 2022 and Sun et al., 2020 for examples). The assembly of large datasets also enables long-term outlooks (e.g., Potter et al., 2009). A minimal data-pooling sketch is provided at the end of this list.
• It is important to define "Scholarship as Conversation" for students. Scholars and researchers engage in ongoing conversations in which new ideas and research findings are continually being discussed (ACRL) (e.g., [Activities 5-6]).
• Being part of the scholarly conversation is an expectation of academic research and is therefore integral to a CURE.
• The scholarly conversation takes place in the peer-review literature, including books and journal articles.
• Scholarly conversations also take place at conferences.
• The classroom is an environment for scholarly conversations.
• There is an extensive literature on teaching the primary literature, and multiple activities and exercises enable instructors to introduce the primary literature to undergraduate students. Examples can be found in [Activity 7], Beck (2019), Hammons (2021), Howard et al. (2021), Chen (2018), Carson & Miller (2013), Hartman et al. (2017), Mitra & Wagner (2021), and Smith (2001), as well as the C.R.E.A.T.E. strategy (https://teachcreate.org/roadmaps/) and the Science Education Resource Center at Carleton College (Egger; Mogk). The collaborative reading and annotating of articles from the primary literature by students can also be helpful (Cafferty, 2021).
• Providing guidelines and help on exploring the primary literature through formal (or informal) bibliography exercises (e.g., [Activity 8]) helps students find additional resources on their own (e.g., [Activity 9]).
• Encourage students to identify the consensus that has developed, but also competing perspectives and approaches, and how new voices and evidence emerge (e.g., [Activities 9-11]).
• Do not overlook the importance of explaining to students how to evaluate and select sources as well as how to provide citations. Many students are not familiar with the process of searching for references (video).
• You should also engage students to reflect upon best practices of literature review, including:
• Using existing research to guide one’s own work (including on the issue of “tracing the scholarly conversation” [Activity 9]).
• Considering context when evaluating sources. One may not be able to understand the value of a particular piece of scholarship unless they consider the broader conversation (see [Activity 10]).
• Providing attributions.
• Needing to learn the “language” of the conversation before being able to fully participate.
• Acknowledging that joining in the conversation confers both rights and responsibilities.
• Recognizing that one is most likely entering into an ongoing conversation and not a finished one.
• Are there freeware programs or programs with user-friendly GUIs that can be used by students to engage in the data analysis process without the need for extensive coding or other computationally difficult tasks (see Acuna et al., 2020 for an example; Zelaya et al., 2022 for an example in a CURE context)? An extensive collection of freeware programs for data visualization and analysis, network analysis, mapping, text analysis, etc. is available at https://guides.osu.edu/DH/digitalhumanities (see also Table 14).
Examples of freeware programs and programs with user-friendly GUIs that can help engage students in analyses
Select freeware programs enabling image analyses, statistical analyses, concept mapping, and visualization of data
Program name URL Type of analyses
ImageJ https://imagej.net/software/fiji/ Image analyses
PAST https://www.nhm.uio.no/english/research/infrastructure/past/ Statistical, time-series, and spatial analyses
jamovi https://www.jamovi.org/ Statistical analyses
VUE https://vue.tufts.edu/ Concept mapping
Google Jamboard https://jamboard.google.com/ Concept mapping / Interactive whiteboard
SwissADME http://www.swissadme.ch/ Drug discovery
UCSF Chimera https://www.cgl.ucsf.edu/chimera/ Visualization and analysis of molecular structures
Table 14. Examples of freeware programs and programs with user-friendly GUIs
that can help engage students in analyses.
• It is important to reflect on the role of the CURE in the development of the research program of the researcher(s). CUREs can help enable the research program of a research group (see figure 2 in Sun et al., 2020), but careful planning and circumscribing of the role(s) of the CURE(s) are critical.
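As mentioned above, iterative CUREs can pool student-collected data across years and groups. Below is a minimal sketch of such pooling, assuming Python with pandas and a directory of per-semester CSV files that share the same column layout; the directory name and file pattern are hypothetical.

```python
# Minimal sketch: pooling student-collected CSV files from previous CURE
# iterations into one dataset for analysis. File names and columns are
# hypothetical; adapt them to your own data-management conventions.
from pathlib import Path
import pandas as pd

def pool_iterations(data_dir: str = "cure_data") -> pd.DataFrame:
    """Concatenate every per-semester CSV in data_dir, tagging each row with its source file."""
    frames = []
    for csv_path in sorted(Path(data_dir).glob("*.csv")):
        frame = pd.read_csv(csv_path)
        frame["source_file"] = csv_path.name  # keep provenance for later quality checks
        frames.append(frame)
    return pd.concat(frames, ignore_index=True)

if __name__ == "__main__":
    pooled = pool_iterations()
    print(f"{len(pooled)} records pooled from {pooled['source_file'].nunique()} files")
```

Keeping a provenance column makes it straightforward to apply the checks and balances on student-collected data discussed above before the pooled dataset is analyzed.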
Box 3: Important points
• CUREs should integrate responsible and ethical conduct of research into the student training.
• Consider the use of published data, museum and research databases, data collected by collaborators, and data collected by previous iterations of CUREs.
• Set-up checks and balances for student-collected data.
• Formally training students in the reading and analysis of the primary literature is critical to their engagement with scholarship.
• Build room in the course structure for failure and iteration; they are important elements of the learning process.
4. How will research progress be balanced with student learning and development?
Overview: The strength of CUREs is their combination of research and teaching in a unique pedagogical experience for both students and researcher-instructors; it is also an important source of challenges. The ideal CURE would involve steady progress of the research accompanied by student gains and learning. These pedagogical goals can sometimes conflict with the need for the research to advance at a given pace. Similarly, the validation and repetition steps required by many research protocols may not be needed to fulfill many learning goals. The structure of a course itself, with its associated time constraints, can place limits on repeating experiments, increasing sample size, and even fully exploring a particular dataset or question.
This section enables you to determine:
1. The pace of the research process
2. The appropriateness of lessons on replication and statistical power to address the limitations of time in a CURE
3. The possibility of assigning some analyses and research tasks to outside researchers
4. The role of peer-review as a combination of the scholarly review process and pedagogical feedback
5. Inclusivity issues to consider alongside research progress and student development
Education starts here:
• How will you make explicit to students the pace of the CURE?
• Will there be class meetings dedicated to specific data collection efforts, analyses, group brainstorming, peer-discussions, instructor conferences, etc.?
• What activities will be assigned to students as homework?
• What is the schedule of writing of the different sections of the deliverable?
• Can you identify weekly goals for the project?
• When will students be prompted to reflect upon their work? When will they be asked to discuss their work with peers or mentors? When will they be asked to formally review and critique their work or the work of others?
• What are the elements of the research process that will not be authentically included in the CURE?
• Can any of those elements be modeled on smaller datasets by the students?
• Can any of those elements be modeled on a different dataset of the same nature (published or not) as part of a class activity?
• Can some complex analyses demanding high computational skills be demonstrated by the instructor or guests (e.g., a graduate student working on this type of analysis)?
• Are there elements of the research that can be outsourced to the instructor or other researchers and merely explained or shown to students?
• What activities and requirements could represent obstacles to the engagement of all?
• Are some specific lab tasks, field work, data collection protocols, etc. in conflict with student accommodations?
• Are there activities that take place outside of the normal class time but need to happen on a specific day or time (e.g., attending to something in the lab, recording field observations at specific events)?
• Is the literature that students need to consult to complete the work accessible and affordable to them?
• Are there graphical representations of data or media used as part of the research that are not accessible to all?
• Are the tools and support necessary for the success of all available on campus, including IT resources?
• How can you develop a supporting environment in which students explore research, manage their work, and fail safely?
Scholarship starts here:
• Are there specific deadlines associated with the research project including proposal deadlines, abstract deadlines, theses deadlines, manuscript deadlines, and commitments to collaborators or research students (graduate or undergraduate)?
• Is the CURE work necessary for another step of the research process outside of the classroom?
• Is the CURE work integrated with the research of a graduate student or mentored undergraduate student?
• Does the work that students will undertake in the CURE represent the work necessary to complete the research deliverable chosen? Alternatively, is the CURE research only one component of the deliverable?
• What analyses, if any, of the kind typically reported in supplementary information are necessary for validation but cannot be fully incorporated in the CURE?
• Can the research questions and/or tasks be easily divided among groups or students?
• Can you collect the literature relevant to the project (or a representative subset of it) ahead of the CURE to share with students?
• Can you share with students during the CURE examples of papers being published that are relevant to the research?
• What are important research concepts and analyses for the CURE that may require some explanation? Have you completed [Activity 1] yourself?
Research and Education together:
• The milestones of the CURE should mimic the different elements of the process of scholarly work from inception to presentation/publication. Thus, students should produce deliverables that are equivalent (at least in format) to the deliverables of professional scholars (e.g., Bakshi et al., 2016; Gastreich, 2020; Ramírez-Lugo et al., 2021).
• Peer-review and instructor feedback are models for the scholarly review process. Consider using the review framework of a prominent journal in your field or alternative rubrics (see [Activities 12A and 12B]) to guide students through this process and make peer-reviews valuable educational experiences.
• A consistent structure helps the instructor and students keep track of the research and learning processes. One such structure, organized as eight pedagogical steps that lead to a research deliverable integrating the two aspects of the CURE, is presented in [Appendix 1]. Other examples of course structures are presented in the published literature (Bakshi et al., 2016; Bell, 2011; Bowling et al., 2016; Carr et al., 2018; Chen, 2018; Colabroy, 2011; DeHaven et al., 2022; Hekmat-Scafe et al., 2017; Mills et al., 2021; Murren et al., 2019; Ramírez-Lugo et al., 2021; Satusky et al., 2022; Sewall et al., 2020; Sweat et al., 2018; Thompson et al., 2016; Waddell et al., 2021; Wilczek et al., 2022; Zelaya et al., 2020).
• Dedicating class time to tutorials associated with formative assessments and/or discussions enables instructor(s) to verify that students are ready to undertake the homework and activities of the CURE.
• Dedicating class time to group meetings enables the instructor(s) to check in on all students/groups of students and address concerns and difficulties. Such meetings should be structured and scaffolded to promote constructive conversations. An example of the possible structure for one of these meetings is presented in [Activity 13].
• Field work can sometimes be implemented through asynchronous self-led field trips to overcome logistical obstacles (Washko, 2021).
• Lessons and interventions on specific topics, including replication and statistical power, may help students understand research issues and concepts that cannot be explored within the CURE itself and overcome misconceptions (Schwartz et al., 2004). A minimal simulation sketch illustrating statistical power is provided at the end of this list.
• Conversations with staff members in the Office of Academic Enrichment, Disability Services, the University Libraries, and the Teaching and Learning Center (Drake Institute at the Ohio State University) can help instructors find compromise and solutions to the challenges of implementing research in the classroom.
• The TILT framework (Winkelmes et al., 2016) enables instructors to increase transparency in assignments. Including explicit links between assignments and tasks as part of the purpose of the assignment helps make explicit the progress through the scholarly process. An introduction to the TILT framework is available here.
• Transparency in the assignments can also be improved by working with students through “understanding assignments” activities. Such activities can be done individually or as a group and enable students to make sure they are meeting the expectations of the instructor without being impeded by jargon. It encourages students to think about what it is they are expected to do (examples of assignments available at UNC Chapel Hill).
• Universal Design for Learning (UDL) is a framework that helps design teaching products and structures that can be used by all without the need for accommodations. UDL is a proactive approach to adaptation in the classroom and benefits everyone regardless of their need for specialized designs, enabling instructors to meet the diversity of their classroom. Its key components are reviewed in Burgstahler (2013). UDL includes the implementation of different media, additional scaffolds, technology, etc. (Burgstahler, 2009; Griful-Freixenet et al., 2017), but also extends to other practices:
• Consider how you can equitably support students through flexibility in deadlines, meetings, and schedules. This can include letting students set their own work hours or providing students the opportunity to attend certain meetings remotely.
• Learn what users need by conducting check-ins for group access needs during lab meetings and surveying users about the overall accessibility of the course experience.
• Consult with accommodation services staff to determine how you can optimize research and teaching practices and spaces and make them more inclusive for individuals with disabilities and specialized needs.
• Advocate on behalf of students with disabilities by communicating with accommodation services staff, and encourage students to reach out to those services to get the support they need and deserve.
• Mentorship structure can involve peer-researchers and graduate students.
• Any mentorship structure can benefit from direct feedback from students. Communication is key to the success of the CURE.
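As a minimal sketch of the kind of statistical power lesson mentioned above, the simulation below (assuming Python with NumPy and SciPy) estimates how often a two-sample t-test detects a true difference at various sample sizes; the effect size, sample sizes, and number of simulations are illustrative only.

```python
# Minimal sketch: demonstrating statistical power by simulation. How often does
# a two-sample t-test detect a true difference at a given sample size?
import numpy as np
from scipy import stats

def simulated_power(n_per_group: int, effect_size: float = 0.5,
                    alpha: float = 0.05, n_simulations: int = 2000,
                    seed: int = 42) -> float:
    """Fraction of simulated experiments in which the t-test rejects the null hypothesis."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_simulations):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(effect_size, 1.0, n_per_group)
        _, p_value = stats.ttest_ind(control, treatment)
        if p_value < alpha:
            rejections += 1
    return rejections / n_simulations

if __name__ == "__main__":
    for n in (5, 10, 20, 40, 80):
        print(f"n = {n:3d} per group -> estimated power = {simulated_power(n):.2f}")
```

Students can vary the effect size or the alpha level to see directly why small CURE datasets may fail to reach significance even when a real effect is present.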
Box 4: Important points
• Mini-workshops and group or class-wide activities on subsets of the dataset analyzed or published data comparable to those studied by the students can help students understand critical concepts that cannot be authentically explored in the CURE because of time.
• CURE students are integral members of the research team and can benefit from intellectual or data exchanges with other members of the research team, including graduate students, postdocs, and undergraduates in mentored research experiences.
• Consider presenting to students the trajectory of a research project from inception to publication, including the role of formal and informal peer-reviews of ideas, presentation conferences, and manuscripts.
• Consider how universal design for learning can be implemented in the CURE to facilitate the engagement of all students in the experience.
• Implement a mentorship model of the CURE students that fosters a safe environment for self-exploration and self-management (Palmer et al., 2015).
5. How will the research learning tasks be structured to foster students’ development as scholars?
Overview: The design of a CURE requires the incorporation of a carefully thought-through scaffold that enables students to engage with an intellectual challenge that is often beyond any they have encountered before. Well-constructed scaffolds are critical to the success of CUREs because they enable students to tackle appropriately challenging tasks (Lopatto et al., 2020) and support students through failures, leading to greater perseverance (Corwin et al., 2022). Scaffolds are relevant to the design of the entire CURE, which should include a series of activities and assignments that guide students through the entire project (e.g., Bakshi et al., 2016; Delventhal & Steinhauer, 2020; Hills et al., 2020; Makarevitch et al., 2015; Peyton & Skorupa, 2021; Rennhack et al., 2020). They are also relevant to the design of individual assignments because they help guide students through the tasks of research. This section enables you to determine:
1. The appropriate framework for the CURE assignments
2. The structure of CURE tasks as individual or group assignments
3. The activities that require detailed tutorials or mini workshops
4. The teaching interventions and writing assignments necessary to guide student learning
5. The role of reflective activities in the development of student deliverables
6. How the learning activities will foster an inclusive experience
Education starts here:
• What are the knowledge/skill bottlenecks students will encounter during the CURE? What tasks or expectations might represent obstacles?
• What does group work contribute to this activity? Which activities require group work?
• Is the amount of work simply too much for a single person?
• Is the interaction among students necessary to generate answers?
• How does students’ ability to work well with others factor into the goals of the CURE?
• What does an authentic research team structure resemble in the field of research associated with the CURE? How can the CURE mimic this?
• How will you design groups whose structure promotes inclusivity and constructive intellectual exchanges?
• Are there activities that require tutorial or workshops to be explained?
• Can students perform all necessary analyses for the CURE without particular training?
• Will students need to be introduced to a particular software program to collect or analyze data? To construct figures or tables?
• Can you feasibly guide individual students/groups of students through the experimental components of the CURE without prior training on a smaller/published/example dataset?
• How are you introducing the expectations for graded assignments and deliverables to students? How do they know what is expected of them?
• What are the necessary or helpful intermediate steps to completing all deliverables and graded assignments? What steps do you go through to complete these activities yourself? What about a graduate student? How would you guide a mentored undergraduate researcher outside of the course through this activity? Based on all of this information, ask yourself: "What would a novice do?"
• How can reflection help students develop better deliverables? What is the role of reflection in the development of student deliverables?
• Can a think-pair-share structure facilitate progress or foster the development of deliverables?
• How can you design learning activities that foster an inclusive experience?
Scholarship starts here:
• What are the different steps of the research process that students will be going through? What numbered elements of Figure 2 are necessary or, instead, need to be edited?
• What foundational skills and knowledge of the field of research should be incorporated in the CURE? Does that include skills or knowledge that may not in fact be strictly necessary for the specific research question/hypotheses tested in the CURE?
• Which of the CURE tasks is typically achieved by individuals in a research setting? By groups of people?
• Are certain elements of reflection or of the scaffolding of the research project typically shared through publication (for example as appendices) in professional deliverables?
Research and Education together:
• Instructors often do not consciously realize that the process of research relies upon underlying concepts and practices. Researchers familiar with the research process skip over some of these foundations in their daily engagement in scholarship. Instructors can help students become researchers by explaining the critical concepts underlying the practice of research ([Activity 3]).
• Engaging in research can be daunting for students. Reflection workbooks and journaling can help students overcome the challenge of the research process, reduce their anxiety, increase their appreciation of research, and facilitate exchanges between students and instructors (Apgar, 2022). Research shows that training significantly improves students’ reflection skills (Devi et al., 2017).
• The entire course would benefit from a scaffold that guides students (and instructors) through the learning process. Sabel (2020) provides an example of a framework to develop the scaffold of a course. Another example is provided by the Decoding the Disciplines framework (https://decodingthedisciplines.org/) introduced here. [Appendix 1] shows yet another example in which students, for each element of the CURE (e.g., searching the literature, data collection, data analysis, writing the materials and methods), engage in a series of activities that lead them to produce a research deliverable (Fig. 2).
• Group meeting agendas can help students prepare for the meetings they will have with their teammates during class time. Examples of questions and guidelines for preparing various group meetings are provided in [Activity 14].
• There is an extensive literature on the design of groups emphasizing the need to carefully consider the group composition as well as the framing of the group work by the instructor(s). Best practices for group work are summarized in Wilson et al. (2018) and at https://lse.ascb.org/evidence-based-teaching-guides/group-work/
• Consider carefully the size of the groups (Heller & Hollabaugh, 1992).
• Consider carefully the gender and ethnic minority status of students in the composition of the groups (Adams et al., 2002; Micari & Drane, 2011).
• Assigning specific roles to students is helpful for encouraging and structuring the conversation (Heller & Hollabaugh, 1992). Enforcing a rotation among the roles, if possible, can also be helpful. Several strategies for distributing roles are presented in the literature (e.g., Olimpo et al., 2016; Winkelmann et al., 2015).
• Including discussions of the nature of intelligence, academic failure, systemic biases, as well as fostering a growth-mindset in students can greatly help students overcome social-comparison concerns (Micari & Pazos, 2014).
• Setting group goals can also be helpful (Werth et al., 2022), and strategies for effective group work are important (Washington University in St. Louis). In addition to assigning specific roles, instructors should consider using group contracts [Activity 18] and peer evaluations (see [Activity 19]), which have been shown to encourage student participation (Chang & Brickman, 2018), can facilitate the assessment of contract compliance, and make it easier to identify conflicts and problems, leading to more rapid resolution. Group contracts and peer evaluations can be used as part of the grading scheme to determine individual contributions.
• The Peer Assessment Factor is a quantitative assessment of students by themselves and their peers that can be used as a formative assessment and a guide for interventions. A simple computation sketch is provided at the end of this list.
• Examples of other activities, and interventions are provided in the PETS Process Manual.
• Frameworks for mini-workshops are presented in the primary literature for many tools and protocols. Some are specifically aimed at CUREs. A statistical workshop is presented by Olimpo et al. (2018). Sewall et al. (2020) presents several mini-workshops on computational tools (R and QIIME2). Alternatively, published protocols might be sufficient to guide students through an experiment or research task without the need for a prior tutorial or workshop (Acuna et al., 2020; Buser et al., 2020; Chen, 2018; Craig, 2017; Goeltz & Cuevas, 2021). Several published CUREs include within the supplementary information files useful protocols and tutorials (e.g., Bucklin & Mauger, 2022; Jurgensen et al., 2021; Poole et al., 2022; Roberts et al., 2019; Zelaya et al., 2022).
• There are numerous published strategies to help scaffold inquiry activities such as authentic research. Several are summarized in a TLRC teaching topic. Additional information on scaffolding is provided in Killpack et al. (2020).
• Writing-to-Learn (WTL) activities can help students build their understanding of the existing literature on the CURE topic, work through their project, and complete their deliverables (Balgopal et al., 2018; Bangert-Drowns et al., 2004; Fry & Villagomez, 2012). Many examples of WTL activities are available online (see in particular the Center for the Study and Teaching of Writing: https://cstw.osu.edu/writing-learn-critical-thinking-activities-any-classroom). A selection of scaffolding activities derived from WTL concepts that can be implemented in a CURE is presented here, including an activity guiding students through the process of deciding on their data analysis protocol [Activity 20], an activity helping students summarize the existing literature [Activity 21], two sister activities helping students compare their findings to data from the primary literature [Activities 22-23], and an activity meant to help students prepare the discussion section of their paper [Activity 24].
• Tutorials for writing can be developed for each section of the deliverable, including the elements of an IMRaD paper (Introduction, Methods, Results, and Discussion). An example of the structure of such tutorials is shown in [Activity 16]. An example activity to develop students' graphical skills is shown in [Activity 25].
• Other activities can be designed to scaffold the students’ work including activities encouraging students to predict their results and represent them graphically or activities comparing the introduction and discussion from a single publication to emphasize their roles as bookends of the deliverable.
• Figure 2 shows a possible framework by which reflection can be incorporated in the scaffolding of the course to enable students to build their metacognition while they engage in scholarship (see Denke et al., 2020). Student reflection can be guided by a set of questions and an information literacy framework. An example of such a guide is provided in [Activity 26].
• Best practices to build student metacognition are reviewed in Stanton et al. (2021) and at https://lse.ascb.org/evidence-based-teaching-guides/student-metacognition/.
• Think-Pair-Share activities, whereby students first reflect upon their work and then discuss their thoughts with group members before engaging with the rest of the class and the instructor(s), may be helpful in promoting reflection as well as conversations among students; they can also be used as formative assessments by the instructor(s) to improve student performance (Akhtar & Saeed, 2020). Best practices for this active-learning activity have recently been reviewed in Cooper et al. (2021) and Prahl (2017).
• The TILT framework (Winkelmes et al., 2016) enables instructors to increase transparency in the assignments by providing clear descriptions of the purpose, tasks, and evaluation criteria for each activity the students engage in. Examples of applications of this framework are provided at: https://tilthighered.com/tiltexamplesandresources.
• There are numerous sets of best practices and recommendations (e.g., Cooper et al., 2020b; Faulkner et al., 2021; Linder et al., 2015; see question 10 on p. 104) that have been developed to promote an inclusive classroom environment (whether in a CURE or not). Miller et al. (2022) presented a framework to incorporate “antiracist, just, equitable, diverse, and inclusive principles” in the design of a CURE. Best practices for inclusive teaching are presented in Dewsbury and Brame (2019) and at https://lse.ascb.org/evidence-based-teaching-guides/inclusive-teaching/.
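As a minimal sketch of the Peer Assessment Factor mentioned above, the code below assumes one common formulation in which each student's factor is their mean peer rating divided by the group's overall mean rating, and an individual grade is the group grade scaled by that factor; the formula, rating scale, and names are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch of one possible peer assessment factor (PAF) computation:
# PAF = (a student's mean peer rating) / (mean rating across the whole group).
# The exact formula used in a given CURE is an instructor decision.
from statistics import mean

def peer_assessment_factors(ratings: dict) -> dict:
    """Map each student to their mean peer rating divided by the group mean rating."""
    student_means = {name: mean(scores) for name, scores in ratings.items()}
    group_mean = mean(student_means.values())
    return {name: m / group_mean for name, m in student_means.items()}

if __name__ == "__main__":
    # Hypothetical ratings (1-5 scale) received by each group member from peers.
    ratings = {"Ada": [5, 4, 5], "Ben": [3, 3, 4], "Cam": [4, 4, 4]}
    group_grade = 88.0
    for name, paf in peer_assessment_factors(ratings).items():
        print(f"{name}: PAF = {paf:.2f}, adjusted grade = {group_grade * paf:.1f}")
```

Whatever formula is adopted, sharing it with students in advance keeps the adjustment transparent and supports its use as a formative rather than punitive tool.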
Box 5: Important points
• The prior knowledge of the students informs the first rungs of the learning scaffold.
• The ELOs for the course provide the framework for the endpoint of the scaffold.
• Group work should be intentionally designed, supervised, and assessed to lead to accountability and personal learning milestones.
• The scaffold adopted should provide students all required background information, guide them through challenging tasks in an accessible context, provide opportunities for reflection, and enable students to repeat or expand on their work to advance the research process.
• Writing-to-learn activities can facilitate student learning and development as writers.
• Information literacy is critical to student success and can be fostered through reflective work as well as activities promoting metacognition.
• The Transparency framework of TILT enables instructors to articulate the purpose, tasks, and criteria for evaluations of each activity to students, thereby promoting metacognition and reflection.
6. How will students communicate the results of their research?
Overview: The communication of research findings is critical to the process of scholarship, including undergraduate-led research (Spronken-Smith et al., 2013). It is also an important element of CUREs. Many traditional outlets of professional scholars are also appropriate for student research derived from a CURE. Thus, presenting research findings at conferences has been shown to greatly benefit undergraduate students (Little, 2020). Publishing the findings of a CURE on a database used by researchers has been demonstrated to lead to increased student motivation (Wiley & Stover, 2014). Finally, publications resulting from CUREs have been associated with both personal and professional benefits for students (Turner et al., 2021). In all forms of deliverables, it is important to consider how the contributions of all members of the CURE will be recognized. Research has shown that there exists a gap between what an instructor or professional researcher might consider necessary for authorship versus what students consider necessary (Turner et al., 2021). This section enables you to determine:
1. The appropriate mode(s) of dissemination of the CURE’s findings
2. The division of labor among students in preparing the deliverable
3. Authorship rules
Education starts here:
• What is an appropriate expectation in terms of deliverable given the time constraints of the CURE?
• Would a specific deliverable be more motivating for students? Have you asked students?
• Is one format of deliverable going to benefit students professionally more than another?
• Which mode of dissemination would be appropriate for a professional scholar at this point in the research? Would that be any different for a student participating in a mentored research experience?
• Will the deliverable be evaluated for a grade?
• Will each student be expected to produce a final document (be it a dataset, poster, manuscript, or proposal) or will the deliverable be prepared by a group of students?
• Will each student prepare one section of the deliverable? Alternatively, will all students be contributing to all sections of the document?
• Will all students in the group be evaluated as a group with a single grade for the document applying to all students? Alternatively, will the students receive an individualized grade?
• If students will be individually graded, what criteria will be retained to differentiate between group members? Are those going to impact authorship order?
• How will the intellectual and practical contributions of the students participating in the CURE be recognized in products of the project?
Scholarship starts here:
• What deliverable are you expecting/hoping for at this stage of the research?
• Are the students completing a project leading to a publication manuscript? Will the students produce a version of the manuscript near-ready for submission or will there be large amounts of work undertaken after the conclusion of the course to bring the manuscript to a submittable form? Who will undertake this work?
• Are the students helping start a new project, developing preliminary results for a proposal, or contributing to a database?
• Will the findings of the CURE be appropriate for a conference presentation?
• What are the standards for manuscript authorship in your field? At your institution? In your research group? Alternatively, when are contributors merely acknowledged?
• How will the contributions of instructors and outside researchers be considered for authorship?
Research and Education together:
• Different deliverable formats are not mutually exclusive.
• The format of the CURE deliverable should reflect the authentic mode of delivery of research findings in the field (Kloser et al., 2011).
• Students enjoy the opportunity to present the results of their research at conferences and/or publish their work (Amir et al. 2022).
• Participation in research conferences has been shown to benefit students both by helping them develop their communication and research skills and by positively impacting their careers and engagement in other activities (Little, 2020).
• The opportunity to publish the findings of the CURE enhances student motivation (Wiley & Stover, 2014).
• Publications can be powerful deliverables for a CURE because:
• They can act as a scaffold, with the different sections of a manuscript corresponding to elements of the research process.
• They represent the scholarly endeavor and can help students gain an appreciation for the process of research.
• They can help build the students’ identity as researchers through co-authorship of a published document.
• They positively impact the careers of students (Turner et al., 2021).
• They can be motivating and thus increase student engagement in the CURE.
• They provide a tool for accountability among students and with the instructor.
• Consider how the participation in a national or university-wide CURE program may dictate the deliverable if the CURE is not researcher-driven.
• Undergraduate research symposia can be great venues for the students to present the results of the CURE. Many colleges and universities as well as some departments at larger institutions organize symposia at least once a year (Presentation Opportunities).
• There are also numerous national conferences dedicated to student research (e.g., SigmaXi; Council on Undergraduate Research).
• In addition to discipline-specific publication venues and journals aimed at professional scholars, there are also open-access online undergraduate research journals (Sun et al., 2020) that may be appropriate outlets for the CURE’s findings.
• The Knowledge Bank of Ohio State (https://library.osu.edu/kb) enables the archiving of intellectual outputs of the university community in an accessible digital format. Other digital repositories, some discipline-specific, may also be appropriate (e.g., the Environmental Data Initiative).
• There are online repositories of research products that are citable and can be appropriate for the results of CUREs (e.g., https://figshare.com/).
• Digital and artefactual exhibits can also be appropriate venues for the presentation of archival and object- or specimen-based research (e.g., Donegan et al., 2022; ENG5612 at Ohio State).
• Multiple online scholarly projects and databases (Table 13) may be appropriate venues for student contributions (e.g., Map of Early Modern London).
• Establishing clear authorship guidelines is critical to a smooth and rewarding experience:
• Students should buy into the rules. It is important to explain the rationale for authorship rules to students.
• Students may also have valuable input on the rules. Consider developing authorship rules specific to the CURE, with the students, that incorporate guidelines and recommendations from relevant institutions as well as best practices in the CURE’s field of research.
• Many universities and colleges have developed authorship guidelines including The Ohio State University (Authorship Guidelines) and other institutions (e.g., Harvard University, University of Michigan, Yale University).
• Many professional societies have also developed best practices documents relating to authorship (e.g., American Physical Society, American Psychological Association, British Sociological Association).
• Some journals and publishers also have rules for authorship (e.g., Nature, British Medical Journal, Taylor & Francis).
• If other instructors or researchers (e.g., graduate students, peer teaching assistants, research collaborators at other universities) will be involved in the project, they should be part of the conversation and agreement on the authorship rules. Consider the authorship rules you employ in your research group.
• There are several published sets of guidelines for authorship that can also be used. The CRediT taxonomy (Honoré et al., 2020), which has been adopted by PLOS journals (PLOS Authorship Guidelines), can provide a starting point to develop guidelines specific to your CURE.
• The copyright services department of the Libraries (https://library.osu.edu/copyright) can help with issues of students’ intellectual property rights.
• In the case of CURE-specific requirements for authorship, consider whether or not it is appropriate to tie authorship to course requirements and achievements (e.g., fulfilling the contract if contract grading, obtaining a certain minimum grade on the deliverable).
• Students should be offered the opportunity to approve the final version of the manuscript they co-author, even after completion of the course.
Box 6: Important points
• Different deliverable formats are not necessarily mutually exclusive.
• Establishing clear guidelines and requirements for authorship is critical.
• Communicating the results of the CURE should reflect authentic scholarly communication in the field.
• Publication manuscripts can serve as a scaffold, provide motivation, ensure accountability, and shape student identity.
7. How will the progress and experience of students be assessed?
Overview: The assessment of the students’ work in a CURE enables the instructor and the entire class to stay up to date on the research progress, including the findings of the project and the setbacks. Well-designed assessments also enable students to reflect on their learning, help them identify their difficulties, and provide tools for overcoming obstacles. Because CUREs are elements of formal courses, it is also often necessary to evaluate the students’ work qualitatively and quantitatively for the purpose of a grade. This section enables you to determine:
1. The goals of assessing the CURE work
2. The mode of grading to adopt in the CURE
3. The proportion of the evaluation represented by group work and individual work
4. The proportions of formative assessments and summative assessments
5. The roles of instructors, peers, and self in evaluation
6. The nature of the assignments graded
7. The need for specific rubrics and grading criteria
8. The importance of an inclusive mode of grading
Education starts here:
• Does all evaluation work have to translate to a grade?
• Can some assignments be evaluated simply for completion/genuine participation?
• Can some scaffolding assignments be reviewed to provide feedback without associating the work with a grade?
• What are the goals of the grading in the CURE?
• Will it be used to determine authorship eligibility?
• How will the grading be aligned with the expected learning outcomes (ELOs) for the CURE?
• Which assignments will be graded?
• When you assess student performance, what are you rewarding? Is the grading based on the mastery of the ELOs or on other criteria? Are those criteria made explicit to students?
• Are students given a chance to revise their work and correct their mistakes?
• How do you give students practice with and feedback on performance items that you are rewarding on the final assignment?
• Do you give students the opportunity to reflect on what they are learning or how they are growing?
• What percentage of the class grade will be represented by the CURE grade?
• How does this compare to the time investment made by students?
• Will group assignments be graded or will only individual assignments be graded?
• When group assignments are graded, do all members of the group get the same grade?
• Does the mode of grading lead to increased intra-group conflicts?
• What mode of grading will be adopted in the CURE?
• What proportion of the final grade for the CURE is represented by formative assessments versus summative assessments?
• Are summative assessments based on the formative assessments on which the students got feedback?
• Is there room for self-evaluation in the CURE? What about peer-evaluation beyond peer-reviews of deliverables?
• Can students grade some of their own assignments with the help of a rubric as a metacognition tool? Can this exercise be combined with instructor feedback and a redo enabling students to gain missed points?
• How often will students be completing work reviewed by the instructor(s)?
• How often do students get instructor feedback?
• How many iterations of a given document, section of a deliverable, or research product will students be producing?
• Will students get rubrics for each assignment?
• How will explicit grading criteria be provided?
• Is it possible to hand out examples of successful products to students as models and examples of works that do not meet expectations to guide their own writing?
• What approaches can you adopt to ensure an equitable and inclusive mode of grading? How can assessments be used to reduce the equity gap?
Scholarship starts here:
• How are scholars evaluated in your field? Can this mode of evaluation be mimicked in the CURE?
• What determines the quality of a deliverable in your field and what aspects of that document quality do students have control over?
• What is a reasonable quality standard to expect for the deliverable of choice? Is it realistic to expect the students to produce a document nearly ready for publication submission or presentation?
• How would you assess the work of a mentored research student engaged in a similar project? What expectation would you have of this student’s deliverable?
• How will the success of the research impact assessment and grading?
Research and Education together:
• The nature of CUREs as authentic research may lead to failures and mistakes; in fact, the novel aspects of the CURE will almost certainly lead down unpredictable paths and to unexpected outcomes. Designing the assessment of the CURE to mitigate the effects of technical problems, negative results, and the steep learning curves of scholarly endeavors is critical to student engagement. The goal of assessments should be to inform students and instructors alike of the progress of students as researchers, not of the success of the observations or experiments. The ability to troubleshoot, the resilience of students in the face of failures, the understanding and explanation of errors, and the repetition of tasks should be integrated into the grading scheme. Finding the solution to a problem and understanding why an experiment did not work are progress. A discussion of failure, of how to approach failure with students, and of the instructor's point of view in the context of CUREs is provided in Townsend (2022).
• Backward-design requires the alignment of the assessments with the ELOs for the CURE. Instructors should rigorously verify that all ELOs for the CURE are assessed by the end of the course, but also that assessments do not evaluate expectations that are not made explicit to students. Different assessment modes are suitable for diverse learning goals (Verb wheel).
• Transparency in assessment is critical to student success and inclusive teaching. The TILT framework (Winkelmes et al., 2016) presented earlier in this document is particularly helpful: https://tilthighered.com/tiltexamplesandresources.
• Beyond traditional point-based grading, there are other modes of grading that an instructor can consider.
• Criterion-referenced grading provides students with flexibility in the weights of the various components of the course. It enables students to mitigate test anxiety and emphasize formative assessments; it can also be used by students to lessen the consequences of out-of-school responsibilities on their academic endeavors. In practice, you should determine bounds for the weights of the different categories of course assessments and enable students to develop their own formula for the CURE grade. Every student will then be graded according to their unique formula, which can easily be set up in a spreadsheet program (a minimal sketch of the calculation appears at the end of this list).
• Contract grading (Inoue, 2019) offers the opportunity to combine flexibility for students and for instructors while maintaining rigor in expectations and avoiding conflicts over grades. Contract grading is a format of grading that does not involve points or letter grades, apart from the final course grade. At the start of the semester, students choose/agree to/sign a contract that sets their path for the semester. Each contract lists the work required of the students for a particular outcome (e.g., an A, a B). If the student satisfactorily completes the work associated with their contract, they will earn that grade. Contracts are often associated with a mid-semester conference to check on progress towards the completion of the contract and reevaluate commitments if necessary. This conference is repeated at the end of the semester to assign the students’ grades. The onus of tracking contract compliance may be placed on the students, who are responsible for writing self-evaluations ahead of each conference in which they address their work, its quality, their engagement with the class, etc. You should respond to these self-assessments during the conference. You can always disagree with a student's final assessment of their performance. Students can also be asked to keep track of their time using a labor log. An example of contract grading for writing courses is provided by Inoue (2019). Another example has been published online by Cathy N. Davidson. Contracts can be more or less complex, incorporating aspects of criterion-referenced grading (Hiller & Hietapelto, 2001) or including specifications grading elements (Lindemann & Harbke, 2011; see Appendix 2 for a model applied to CUREs).
• Specifications grading can help uphold academic standards and motivate students while reducing grade anxiety and cheating. In specifications grading, the assignments are graded on a pass/fail basis, a check/check minus/check plus/unsatisfactory basis, or an excellent/meets expectations/needs revisions/fragmentary basis. Instructors should provide clear specifications of what constitutes acceptable work, what qualifies for a check (or earns a check plus), what leads one to meet (or exceed) expectations, etc. Students may also be given the opportunity to revise their work. Assignments may be assessed individually or grouped into modules that are assessed holistically. Modules can be weighted to reflect the complexity of the tasks, their relevance to learning outcomes, and/or their status as formative/summative assessments. Several models of specifications grading for different fields have been published (e.g., Blackstone & Oldmixon, 2019; Carlisle, 2020; Elkins, 2016; Howitz et al., 2021).
• Group reviews can be used to assess the contributions of all members of a group to the outputs of the CURE, including specific documents. A group review template is provided in [Activity 19]; others exist in the literature (e.g., Bucklin & Mauger, 2022; Waddell et al., 2021).
• Self-evaluation, also called self-assessment or self-grading, can be a powerful tool to promote student pacing and success, primarily as part of formative assessment (Andrade, 2019).
• Many rubrics have been published online (University of Minnesota, Chicago State University) and in the peer-reviewed literature for both writing assignments and CUREs (Bakshi et al., 2016; Kishbaugh et al., 2012; Lee & Le, 2018; Murren et al., 2019; Ramírez-Lugo et al., 2021; Sewall et al., 2020; Waddell et al., 2021).
• An example of a rubric for discussion boards is provided in Appendix 3.
• An example rubric for a manuscript-type deliverable is provided as part of the peer-review presented in [Activity 12].
• Bucklin & Mauger (2022) as well as Merrell et al. (2022) both include rubrics for a poster presentation stemming from CUREs.
• An example rubric for a grant proposal-type deliverable is provided in Rennhack et al. (2020).
• When designing a rubric, consider the following best practices (from Bean, 2011):
• Numbers on the rubric do not add up to 100 and do not directly represent course points.
• The grade associated with the rubric is always presented as a letter or a check/check plus/check minus, not a number.
• How the rubric is used in grading is explained to students.
• A good rubric should include a detailed explanation of the task students are expected to perform, the characteristics of the work that will be evaluated, the levels of mastery that will be considered, and descriptions of the characteristics for each level of expertise. Additional guidelines for the development of rubrics can be found in a number of different publications (Boston University, Brown University, and Arizona State University). You can also consider co-creating the rubric you will use with the students (University of Colorado Boulder).
• Highly structured courses with frequent low stakes assignments increase student engagement (Cavinato et al., 2021), lead to higher performance (Freeman et al., 2011), and help reduce the equity gap (Eddy & Hogan, 2014; Haak et al., 2011).
• Consider using concept inventories to test for knowledge acquisition in the field of research (Table 6).
• (More) authentic formative assessments can be undertaken by reviewing and grading lab notebooks, periodic research updates akin to those a research student would provide in a lab/research group meeting, conference-style abstracts, elevator speeches, or chalk board presentations.
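For instructors who would rather script the criterion-referenced grading calculation mentioned above than build it in a spreadsheet, the following minimal sketch illustrates the mechanics: instructor-set bounds on category weights, a student-chosen formula, and the resulting course grade. The category names, bounds, and scores are hypothetical examples rather than recommendations, and the code is an illustration of the calculation, not a ready-made grading system.

```python
# Minimal sketch (hypothetical categories, bounds, and scores) of the
# criterion-referenced grading calculation: each student chooses category
# weights within instructor-set bounds, and their course grade follows
# their own formula.

# Instructor-defined (minimum, maximum) weight bounds for each category.
WEIGHT_BOUNDS = {
    "formative_assignments": (0.30, 0.50),
    "research_deliverable": (0.30, 0.50),
    "participation": (0.10, 0.30),
}

def validate_weights(weights):
    """Check that each weight falls within its bounds and that all weights sum to 1."""
    for category, (low, high) in WEIGHT_BOUNDS.items():
        weight = weights[category]
        if not low <= weight <= high:
            raise ValueError(f"{category} weight {weight} is outside the range {low}-{high}")
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")

def course_grade(weights, scores):
    """Weighted average of category scores (each on a 0-100 scale)."""
    validate_weights(weights)
    return sum(weights[category] * scores[category] for category in WEIGHT_BOUNDS)

# One student's chosen formula and their end-of-term category scores.
student_weights = {"formative_assignments": 0.45, "research_deliverable": 0.35, "participation": 0.20}
student_scores = {"formative_assignments": 92, "research_deliverable": 84, "participation": 100}
print(f"Course grade: {course_grade(student_weights, student_scores):.1f}")  # 90.8
```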
Box 7: Important points
• Assessments should be aligned with ELOs for the CURE.
• Rubrics and clear criteria for success help communicate the expected level of mastery to students.
• Student practice and instructor feedback on formative assessments should be aligned with summative assessments.
• Contract grading, specifications grading, peer-grading, and self-assessment are valid alternatives to traditional numerical grading by instructor.
• Frequent low-stakes assignments can help reduce the equity gap.
• Accountability for personal tasks and peer/self-review facilitate grading of group work.
8. How will research learning tasks change as discoveries are made and initial research questions are answered?
Overview: The process of research is inherently iterative and involves sequential hypothesis testing. As results emerge from observations, experiments, and analyses, new hypotheses are developed and require testing. This is often the case in CUREs. Unlike a scholar’s research program, however, CUREs have strict curricular goals and involve time limits, including a comparatively small number of weekly hours of research engagement and an end to the research process imposed by the end of the instructional period, be it a quarter, semester, or other term. This section enables you to determine:
1. Whether to include such a sequential aspect of research within a single implementation of a CURE or across repetitions of a CURE
2. How to bring about the transition from one hypothesis to another
3. The need to revise the scaffold and learning tasks of the CURE to fit new hypotheses and research paradigms
4. The importance of planning the research course of the CURE to offer an original experience to successive cohorts of students
Education starts here:
• Will students have time to explore only one hypothesis/question, or will they be able to at least partially engage in a second one?
• For students exploring researcher-chosen questions/hypotheses:
• Will they be able to suggest their own follow up hypothesis/question?
• How will you explain to them how the question/hypothesis they are investigating was developed?
• When revising or repeating a CURE:
• What elements of the CURE are outdated? Any recommended paper or analysis no longer reflecting the knowledge or best practices in the field?
• How will you ensure that the upcoming CURE is not merely a variation on a previous iteration, in which all novelty has been lost?
• What should you change to make sure that the CURE does not become a “cookbook CURE”, a research experience for which the approach and protocols are fully developed and the result guaranteed, leading students to engage in research novel only to them that does not significantly contribute to the field?
• Is the new iteration of the CURE following-up on questions or hypotheses developed in previous versions of the course?
• What advances in the field have been made since the last time the CURE was run? Should planned questions/hypotheses be revised as a consequence?
• Have new best practices in pedagogy and CURE implementation been published since the last iteration of the CURE? Do they mandate revisions to the structure, scaffold, or assessment of the CURE itself?
Scholarship starts here:
• Does the scope of the CURE represent a publishable manuscript in the field of research of the CURE? Will more than one iteration of the CURE be necessary to answer all necessary questions/test all necessary hypotheses?
• Are adjacent or related questions/hypotheses being tested by colleagues, collaborators, research students, thus limiting the scope of the work the students can follow-up on?
• When developing a new researcher-driven CURE:
• What is the place of this CURE within the research program of the researcher(s) involved?
• What are the natural follow-up hypotheses or questions already known?
Research and Education together:
• It is critical to introduce students to the nature of the scholarly endeavor including the fact that research generates more questions than answers and the need for researchers to critically select the questions that they will in fact explore. This discussion can take place as part of the process to select the question investigated in the CURE or not (see [Activity 2]). The roles of funding agencies, other scholars, institutions, and systemic biases in this selection should also be presented. Some aspects of these issues are discussed in https://opentextbc.ca/researchmethods.
• Retrospectives and walkthroughs of the history of the current paradigms and questions students will explore can demonstrate the nature of research and the succession of investigations through which scholars build, piece by piece, the puzzle of a particular topic, without requiring students to walk these steps themselves.
• CUREs can build on each other over time to enable the investigation of a particular topic through a series of studies (Satusky et al., 2022; Sun et al., 2020: figure 2). It is important to incorporate this history when presenting the CURE to students.
• Extensive notetaking during the design and implementation of the CURE enables an instructor to revise the CURE. There are published guides to revising courses and reflecting on pedagogy (see for example McGahan, 2018).
• Instructors teaching researcher-independent CUREs should consider following the research advances in the field of the CURE as they do their scholarship area to maintain up-to-date knowledge of the field, active areas of research, literature reviews, and methodological developments.
Box 8: Important points
• Just like a research program, a CURE changes over time following research inquiry, progress, and setbacks.
• Showing/explaining the development of the CURE over time to students is integral to showing/explaining the research process.
• Just like the work of students during the CURE is iterative, so is the work of (re)designing the CURE itself.
• Beware of creeping away from a CURE towards a “cookbook lab” with successive iterations.
9. What are the logistical obstacles and solutions for the different steps of the CURE?
Overview: The implementation of a CURE requires students to access the tools of the research trade. These are discipline-dependent, but may include lab space, consumables, specific technologies or equipment, library and documentary resources, research specimens, and computing facilities. Certain CUREs may also involve field experiences, which introduce their own set of logistical challenges. Planning the needs of the CURE is critical to its success and may influence the nature of the research questions investigated with students. This section enables you to determine:
1. The resources necessary for data collection
2. Alternative sources of data
3. The needs of the data analysis
4. The potential for crowd-sourcing the CURE’s support
5. Possible sources of funding to support CUREs
Education starts here:
• What is the budget of the course?
• Is the CURE part of the broader impacts of a grant?
• Is the CURE part of a creative teaching or curriculum redesign effort that can receive funding or logistical support from the institution, professional societies, or funding agencies?
• What help can support staff, including lab technicians, research and teaching assistants, lab coordinators, librarians, IT staff, and museum staff, provide with data collection and analysis? What do they need to be able to help?
• Who can help you think about your CURE development process and the pedagogy of this model of research?
Scholarship starts here:
• What are the consumable needs of the CURE?
• Do loans of specimens need to be secured prior to the start of the CURE? What are the restrictions on the handling of research specimens and materials?
• What equipment, including laboratory equipment and computational resources inclusive of hardware and software programs, is necessary to not only collect and analyze the data, but also prepare deliverables? Is this equipment accessible in teaching or research facilities on campus?
• What permits, certifications, and approvals are necessary for students and instructor(s) to undertake the research?
• What existing databases and online datasets of images, observations, measurements, etc. can be leveraged to facilitate the data collection for the CURE?
• For researcher-driven CUREs:
• Can the CURE be integrated with other data collection efforts of the research group? What are the benefits for the research students involved in contributing their data to the CURE?
• What is the role of research collaborators in the project?
• Are there sources of funding associated with the research ongoing in the laboratory/research group that can legitimately support the research undertaken in the CURE?
Research and Education together:
• Although the cost of a CURE has been reported to be higher than that of a traditional introductory science laboratory (Rodenbusch et al., 2016; Spell et al., 2014), it is much lower than the cost of mentored research experiences or summer research experiences (Rodenbusch et al., 2016; Smith et al., 2021). Some estimates are around $400 to $500 per student per course, and many CUREs are cheaper (Poole et al., 2022) or even approach zero financial cost.
• Online databases (Table 13) and freeware programs (Table 14) enable data collection and analyses at low costs.
• Collaborators and colleagues may be able to share equipment and supplies at low costs. CUREs can be departmental/institutional resources that spur enrollments and raise the profile of the teaching/research unit. Discuss with stakeholders (e.g., department chair, associate dean, and colleagues) the benefits of supporting the CURE.
• Many institutions have some equipment and resources (e.g., supercomputer, computer labs, greenhouse space, imaging facility) whose costs and access are mutualized or free for in-house projects.
• The integration of research and teaching missions of the CURE may enable different sources of funding to support the work (e.g., CUREnet). Those may include grants from professional societies, government agencies, internal competitions at the host institution, etc. (e.g., Council on Undergraduate Research). There are also calls for proposals appropriate for CUREs (NSF Division of Undergraduate Education). Some network CUREs may provide seed funding to implement the course (DeChenne-Peters & Scheuermann, 2022).
• Trainings, permits, and approvals should be secured ahead of the CURE as much as possible. Student-specific trainings should be integrated in the course. Logistical obstacles of research are an important part of the curriculum: students learn about the realities of the process of research and the important legal and ethical regulations associated with it. Additionally, students who navigate these obstacles may gain important skills in project management.
• The involvement of research group members in the CURE should be designed to benefit them in the form of co-authorship, mentoring experience, opportunities for career advancement, funding, etc.
• Teaching assistants, including graduate and undergraduate students, have been shown to be very helpful in supporting undergraduate students enrolled in CUREs (Olson et al., 2022). They should receive the training necessary to learn how to teach a CURE (Kern & Olimpo, 2022).
• There are multiple offices and resources at the Ohio State University to support the development of CUREs that are summarized in Table 15.
Box 9: Important points
• Obstacles to data collection can be overcome with research databases, citizen science data, museum databases, and online sets of images and observations or measurements.
• Campus resources including IT, the Libraries system, as well as research lab members and collaborators can help overcome data collection and analysis obstacles.
• CUREs can be integrated with mentored student research (of graduate or undergraduate students) to facilitate funding, mentoring, data collection, data analysis, and the professional development of research students.
10. What are the roles of instructional and support staff?
Overview: Although it may be tempting to think of the instructor (or instructors) as the linchpin of both the research and pedagogical processes of the CURE, the structure of CUREs is inherently student focused. As such, CUREs provide the opportunity for instructors to rethink their identity in the classroom to include a critical mentorship component built around providing a supportive environment in which students are assisted in acquiring knowledge and skills, but also learn the significance of their work (Figure 3).
A successful CURE is easier to implement with the support of other members of the research team, educational support staff, or campus community. A researcher-independent CURE may rely on the resources and structure of a national program, but the success of the CURE will also likely involve campus members who can complement and enhance the work of the instructor. This section enables you to determine:
1. The role(s) of the instructor(s) in mentoring the CURE
2. The fostering of a supportive, motivating environment
3. The role of the instructor and other experienced scholars in the research
4. The expectations and needs of graduate or peer teaching assistants
5. The support provided by lab managers, technicians, and other staff members
Education starts here:
• How can the experience of CURE students be improved through interactions with students who have previously completed the course, mentored undergraduate researchers, graduate students, or postdoctoral researchers?
• What challenges are students likely to encounter during the CURE?
• What are the obstacles to the mentor-mentee relationship you foresee for the CURE? Consider expert-novice gap, communication skills, and cultural differences as well as “social-distance” (Shanahan, 2018).
• What is the ability and willingness of instructors to commit time to mentoring?
• What is the prior experience of the instructor(s) in mentoring research?
• What training and support is necessary to foster the success of postdoctoral researchers, graduate student instructors, or peer undergraduate assistants as educators?
• How do the ELOs of the CURE impact the model of mentoring that instructors should adopt?
• What expectations does the instructor-of-record have for postdoctoral researchers, graduate student instructors, or peer undergraduate assistants? What about the department, college, graduate school, HR, etc.?
• What are the ELOs as well as professional and personal goals of postdoctoral researchers, graduate student instructors, or peer undergraduate assistants?
• How can the mentoring of postdoctoral researchers, graduate student instructors, or peer undergraduate assistants by experienced researchers and teachers help them reach their goals?
• What are the roles of the support staff and outside researchers in the CURE? Can these professionals free up instructor time for mentoring by assuming some technical responsibilities?
Scholarship starts here:
• What is the commitment (time and effort) that the instructor will dedicate to the research, particularly in class, that cannot be directed to mentoring?
• What is the responsibility of postdoctoral researchers, graduate student instructors, or peer undergraduate assistants in the research tasks of the CURE?
• What training and support is necessary to foster the success of postdoctoral researchers, graduate student instructors, or peer undergraduate assistants as researchers?
• What mentorship model(s) do you adopt in mentored student research experiences in your own research group and how can it/they be transposed to the classroom?
Research and Education together:
• CUREs require a wider vision of the role of the instructor than traditional classrooms because of the need for emotional and research support of students (Cooper et al., 2022; Goodwin et al., 2021; Linn et al., 2015; Shortlidge et al., 2016).
• Different models of the role(s) of instructors in research experiences have been developed, but the consensus is that mentorship covers five critical categories of skills: research, diversity/culture, interpersonal, psychosocial, and sponsorship (Gentile et al., 2017). These five branches of mentorship lead instructors to promote the development of adaptive attitudes in students.
• Instructors can encourage constructive attitudes through (1) framing and scaffolding (designing transparent tasks associated with an explicit awareness of both the pedagogical and research significance of the work), (2) interventions and pedagogical activities (promoting high student efficacy), and (3) explicit integrations of inclusive teaching practices (creating a supportive learning environment).
• Just like explicit expectations and relevance of CURE tasks need to be communicated to students, clear expectations and ELOs should be developed for the work of postdoctoral researchers, graduate student instructors, or peer undergraduate assistants who are engaged in the CURE. These goals should incorporate the personal and professional goals of the postdoctoral researchers, graduate student instructors, or peer undergraduate assistants (see Mabrouk, 2003).
• The TILT framework (see above) can be useful in framing both student activities and postdoctoral researchers, graduate student instructors, or peer undergraduate assistants’ tasks.
• Instructors should strike a balance between “being overly prescriptive, which inhibits creativity and agency, and being insufficiently supportive, which leads to uncertainty, frustration, and a sense of failure” (Hanauer et al., 2012:384).
• Many activities and learning exercises are helpful in engaging students in experiences that promote student efficacy, including:
• Sharing their personal goals for the CURE and expressing their expectations for the CURE.
• Participating in the social network of their research team through meetings, discussion boards, etc.
• Explicitly associating elements of the research process with valuable skills and attitudes for their career aspirations.
• Articulating hypotheses and questions in the context of the CURE’s research goals.
• Conducting experiments, collecting and organizing data.
• Analyzing and interpreting data. Evaluating evidence. Critiquing conclusions.
• Becoming aware of the necessity of experimental failure.
• Understanding that sometimes discoveries emerge from iterative processes.
• Considering the quality of evidence and their relevance to the argument.
• Synthesizing results and drawing conclusions; planning next steps.
• Reading the primary literature, attending relevant seminars, and discussing their work with others.
• Presenting progress reports and comparing ideas in group setting.
• Reflecting on how the process of critique contributes to research progress.
• Consider guiding the cognitive exploration of students by implementing these recommendations modified from Cooper et al. (2022):
• Challenging students to check if their hypothesis is explanatory, clearly stated, and distinguishes between multiple ideas.
• Pushing students to fully explain ideas by asking follow-up questions and asking for clarification.
• Highlighting potential pitfalls of hypotheses and protocols, encouraging students to identify the problem and its solution.
• Asking students to explicitly articulate links between research goals, hypothesis, and experiment.
• Encouraging students to consider alternative experimental outcomes or explanations for their predictions or results.
• Redirecting student ideas that are unproductive by bringing student attention back to their original hypothesis/goal.
• Reminding students to think about controls they need to consider in their experiments/analyses.
• Providing assistance with analytical tools or protocols that are merely means to an end and not critical elements of the learning goals, to enable students to focus on course objectives.
• Assessing the time management of the students and proposing adjustments to the protocol or timeline accordingly.
• Instructors can foster a supportive environment by integrating best practices developed from student focus groups (Faulkner et al., 2021) including the following:
• Contacting students: consider emailing or connecting with students before the semester begins. This initial contact is not only meant to offer practical information, such as class time and location, it also opens the door for a connection between students and instructor. Activate your class website before the class begins. This allows you to post messages for students and gives them the opportunity to reach out with questions.
• Learning about students: you can promote inclusivity by learning about the identities, circumstances, and concerns of your students. You can do so through get-to-know-you surveys as well as start-of-the-semester conferences. This is a good time and place to ask students their pronouns, names, and the pronunciation of the latter.
• Setting the right tone for the class: the first day of class sets the tone for the rest of the semester. As such, it is a critical time to establish an inclusive environment. Creating an inclusive classroom necessitates that each student feels like you see them as a unique individual. It also requires the classroom environment to reflect respect and care for all students. You should set expectations for class discussions and respect of everyone’s background. You also need to encourage everyone to express their ideas and learn from each other. Communicate that you will not tolerate any discriminatory attitudes or behaviors in the classroom.
• Encouraging introductions and setting group atmosphere: creating a group atmosphere that cultivates student collaboration and support throughout the semester is essential for inclusive pedagogy. It helps students create relationships with other students in the classroom and encourages collaboration and support. You can facilitate this collaboration by having students introduce themselves and exchange contact information with the people they will work with.
• Explaining syllabus and expectations: transparency is critical to a supportive class environment. Going over the CURE in detail on the first day of class is a useful way to make students feel welcome and helps you establish course expectations.
• Self-disclosure: the classroom can feel more inclusive when you share aspects from your own life. In particular, it may help students feel less vulnerable. Students see self-disclosure both as a way to get to know you and a sign of mutual respect.
• Being approachable: see your students as the people they are by using their name and correct pronouns, treat them as capable learners, respect them, encourage them, check in on them, make them feel welcome, and do not reinforce rigid power hierarchies. Explicitly tell students how you prioritize their learning and needs as well as your desire to help and support them.
• Staying engaged: pay attention to both the verbal and nonverbal responses of students. Notice the silences and apathy as much as the participation. Hold office hours and consider requiring students to set up a five-minute meeting several times throughout the semester. Holding hybrid office hours both in-person and through online tools like Zoom or Teams can facilitate attendance.
• Providing resources: give students information about campus and community resources, not only on the first day of class, but also throughout the semester. Provide specific directions for resources including academic services, clubs, tutoring, special interest groups, as well as mental health and wellness support.
• Encouraging reflection: reflect on your experience in the course, how you engage students in dialogue, how you keep communications with students open, and how you include everyone in the class discussions. Examine your own positions of power, privilege, and vulnerabilities. Encourage your students to do the same and reflect upon their roles in the classroom environment and their interactions with others.
• Consider also the guidelines from Cramer and Prentice-Dunn (2007) for an alternative framing of the elements above into a mentorship model “caring for the whole person”.
• There is an important role for peer-mentoring among students engaged in the CURE. Such mentoring mimics the interactions between researchers and, along with mentorship from more experienced researchers (instructor of record, postdoctoral researchers, graduate teaching assistant, etc.), can help mitigate the frustration inherent to the failures and setbacks of authentic research experiences (Hanauer et al., 2012).
• The expert-novice divide is well recognized across disciplines (Inglis & Alcock, 2012; National Research Council, 2000; Newman et al., 2021; Stofer, 2016) and can sometimes represent an obstacle to mentorship; it may be overcome by involving peer teaching assistants. Undergraduates with prior experience of the CURE or mentored research experience outside of the classroom may be perceived as more approachable than more senior instructor(s); they also have recent experience with the struggles and challenges associated with the knowledge and skills taught in the CURE. They can provide very valuable feedback, including on writing assignments (Cho et al., 2006). Research shows that undergraduate peer assistants are valued by students enrolled in CUREs and facilitate their success (Olson et al., 2022).
• Although peer assistants can sometimes struggle with their identity and role in the classroom (Terrion & Leonard, 2007), training and mentorship of peer assistants can help overcome these difficulties (Handelsman et al., 2005).
• Requiring members of the instructional team to schedule dedicated time to mentoring in their week leads to more successful experiences (Shanahan et al., 2015; Terrion & Leonard, 2007). This is because time scarcity is a major barrier to effective research mentorship (Gentile et al., 2017).
• Communication with postdoctoral researchers, graduate student instructors, or peer undergraduate assistants around the support they need and the resources that would enhance their experience (in both content and format) is important in creating a successful teaching experience for them (BrckaLorenz et al., 2020).
• Goodwin et al. (2021) identified different mentorship roles for CURE graduate student instructors and provided evidence that instructors who embrace their functions as “student supporters” and “research mentors” are more likely to see value for themselves in the CURE and engage in teaching CUREs again. Goodwin et al. (2021) defined “student supporters” as mentors who “[provide] emotional support to students” and “research mentors” as instructors who “[develop] student[s’] autonomy and competence as researcher[s]” (Goodwin et al., 2021: Fig. 3).
• Good mentorship should incorporate socioemotional support, culturally relevant mentoring, and appropriate personal interest in students (Haeger & Fresquez, 2016; Robnett et al., 2018; Shanahan et al., 2015). Consider also sharing your own stories of struggles and failures with research (Jayabalan et al., 2021).
• The affective-motivational research competence model (Wessels et al., 2018) has identified six situations that mentees need support with to engage in research as well as dispositions that can be fostered to help overcome the challenges of these situations (Table 16). These dispositions can be brought about through interventions and scaffolding activities of the CURE aimed at fostering specific experiences and tasks that promote knowledge integration (Linn et al., 2015).
Box 10: Important points
• A CURE is an opportunity to move away from a model of teacher-identity centered around the delivery and assessment of knowledge to one of a mentor.
• Successful CUREs empower students to think for themselves and enable them to make mistakes in a safe and supportive environment.
• A CURE is an authentic research experience and as such can introduce students to collaboration by including outside researchers, laboratory personnel, and staff members.
• Every member of the CURE has goals, needs, and expectations. This is true of the students and instructor(s) of course, but also applies to any other member of the team. Consider carefully the help and means everyone requires to achieve their mission.
11. How will the success of the CURE be assessed?
Overview: Just like the assessment of the students’ work in the CURE enables them to reflect upon their progress and make corrections, the assessment of the CURE itself provides opportunities for redesigns, corrections, and reflection by the instructor(s). Assessing students’ work provides a basis for an often-necessary grade. Similarly, the evaluation of the CURE is integral to the instructor’s annual evaluations and their eligibility for promotions, tenure, and awards. Because CUREs often require financial and/or logistical support, the assessment of the experience enables the demonstration of its efficacy and can facilitate the renewal or expansion of the program. When a CURE is an integral component of the curriculum, particularly as an element of introductory courses, general education requirements, or program prerequisites, its evaluation is an important part of the overall educational experience of students that may become part of program reviews, external evaluations, and validation of the course as satisfying specific certifications or endorsements. There are three overlapping components to the assessment of CUREs: course, instructor, and student outcomes (Brownell & Kloser, 2015), summarized in Figure 4. Section F presents a discussion of the existing approaches to evaluating all three of these elements of CUREs from a programmatic and scholarly point of view; many of these approaches are aimed at incorporating data from a large number of students; several may require the involvement of education research collaborators or institutional representatives. Here, the focus is on data collection and analyses that individual instructors can engage in to reflect upon the research progress and the pedagogical framework.
This section enables you to determine:
1. The value of the individual assignments and activities implemented
2. The usefulness of the scaffold
3. The fit of the assessment and learning goals
4. The research outcomes of the CURE and their significance to stakeholders
5. The success of the CURE in promoting the success of all students
Education starts here:
• Are the assessments aligned with the learning goals of the course?
• Are the formative and summative assessments of the students’ learning also assessed for the success of the course activities and scaffold?
• Does the scaffold for the CURE need to be revised?
• Are there bottlenecks to learning that remain to be addressed by the CURE scaffold?
• Are some parts of the course “over-scaffolded”? Is the preparation of the students underestimated for specific activities?
• Are there concept inventories or professional society standards that can be used to assess students’ proficiency on certain topics of the CURE?
• Did students encounter problems with specific activities?
• Are there assignments that were unclear?
• Were there homework assignments that multiple students asked for clarifications about?
• What are the perceived gains from the CURE that students express?
• Can specific assignments’ rubrics be edited to include elements useful to assessing the efficacy of the activity?
• Does your institution have survey tools (including some incorporated into student evaluations) that can be leveraged to assess your pedagogy of the course?
• Can you engage peer instructors in reviewing your course?
• Can you present the CURE you developed at an education conference or in the education session of the annual meeting of your professional society to get feedback and thoughts from colleagues?
• Can you gather data on the success of the CURE in closing the equity gap and promoting the learning and success of all?
Scholarship starts here:
• Can the research findings of the CURE be presented to the research community for feedback?
• Is the work ready to be submitted in the form of a proposal or manuscript?
• Is it appropriate to present the work at a conference?
• Are there other stakeholders of the research who could provide critiques of the work and assess its significance?
Research and Education together:
• The efficacy of the scaffold and activities can be in part assessed through the performance and success of students. Each assessment of the students’ work is also an assessment of the work of the instructor. In many ways, some of the issues associated with CURE assessment can be answered through student assessment, an issue discussed earlier in the document.
• A key element of the assessment of the activities and assignments of the CURE is to define clear goals for each of them. Doing so increases transparency for students and enables the determination of their success, and therefore of yours.
• Assessments of students’ work and survey tools can be used to identify knowledge/skill bottlenecks to address in iterations of the CURE.
• There are numerous forms of formative assessment that can be implemented throughout the CURE to help determine the efficacy of the CURE activities and interventions:
• Minute papers or muddy point papers (Anderson & Burns, 2013; Stead, 2005) can help assess whether or not a particular activity fulfilled its learning goals. Examples of prompts are available from multiple sources (e.g., Tufts University; University of Wisconsin).
• Concept maps and Venn diagrams can help students synthesize information and be assessed by instructor(s) for learning and efficacy of scaffolding activities (Bauman, 2018; McConnell et al., 2003).
• Group and individual presentations of research updates enable the instructor to assess the efficacy of the scaffold in supporting the research progress.
• Group reviews and self-reviews ([Activity 19]) can help assess group dynamics and the effect of team building efforts.
• Questions can be included in the reflection workbook ([Activity 26]) to enable the assessment of the students’ understanding of concepts presented in the CURE activities, thus determining their efficacy. Student reflections can be evaluated with students to get additional insights into the student experience (McLean et al., 2022).
• Many institutions allow the design of a few custom questions to add to student evaluations of instruction. These custom questions can be used to assess the pedagogy of the course. Examples of questions can be found in the project ownership survey (Hanauer & Dolan, 2014), the classroom and school community inventory (Rovai et al., 2004), the laboratory course assessment survey (Corwin et al., 2015), the classroom undergraduate research experience survey (Denofrio et al., 2007), the science process skills inventory (Arnold et al., 2013), or a combination of these tools (Lo & Le, 2021).
• Some surveys have been developed to specifically assess the efficacy of CUREs, or even specific activities of the course (e.g., Satusky et al., 2022).
• Online survey tools (e.g., Qualtrics and SurveyMonkey) or paper survey tools (including some standard student evaluation forms) administered as part of midterm evaluations and end-of-term evaluations can enable instructors to poll students on affect towards specific activities, perceptions of learning gains, and self-efficacy (a minimal sketch of tallying such responses appears at the end of this list).
• Concept inventories (Table 6) as well as professional standards (e.g., American Psychological Association) can be used to devise summative assessments.
• Many indicators of learning outcome satisfaction can be assessed by integrating appropriate criteria into grading rubrics (Chamely-Wiik et al., 2014).
• Extensive notetaking of class observations, student behaviors, reflections on pedagogy, problems encountered in the classroom, failures, and successes can be used to revise individual activities as well as their organization.
• Interviews and focus groups led by the instructor or by a neutral third party can enable students to share their thoughts and feelings about the CURE and its outcomes (e.g., Brownell & Kloser, 2015; Turner et al., 2021; Wooten et al., 2018). This approach can also be used to assess the experience of graduate student instructors and peer assistants to ensure a positive and improved experience for all members of the CURE team (see Heim & Holt, 2019). Essays can also be used to gather the thoughts of CURE participants (Wooten et al., 2018).
• Partnering with colleagues and education researchers can be helpful in determining the impact of the CURE on all students and the success of the CURE in mitigating systemic biases and enabling the learning of a diverse student body.
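As a complement to the survey tools mentioned above, the short sketch below shows one way to tally Likert-scale poll responses per CURE activity and flag activities whose mean rating suggests a redesign. The activity names, responses, and threshold are hypothetical examples, and the sketch is an illustration of a simple summary, not a validated analysis protocol.

```python
# Minimal sketch (hypothetical activities, responses, and threshold) for
# summarizing Likert-scale poll responses about CURE activities
# (1 = strongly disagree ... 5 = strongly agree).
from statistics import mean, median

responses = {
    "Literature search workshop": [5, 4, 4, 3, 5, 4],
    "Data collection protocol": [2, 3, 2, 3, 2, 4],
    "Peer review of drafts": [4, 5, 5, 4, 4, 5],
}

# Report the central tendency of the ratings for each activity.
for activity, scores in responses.items():
    print(f"{activity}: mean={mean(scores):.2f}, median={median(scores)}, n={len(scores)}")

# Flag activities whose mean rating falls below a (hypothetical) revision threshold.
THRESHOLD = 3.5
to_revisit = [activity for activity, scores in responses.items() if mean(scores) < THRESHOLD]
print("Activities to consider revising:", to_revisit)
```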
Box 11: Important points
• Take abundant notes during the implementation of the CURE to guide redesigns, expansions, and modifications to the course.
• Ask yourself numerous questions when grading and/or assessing student work, for example:
• Did this assignment fulfill its goal of assessing the associated ELOs?
• Did this activity support the associated learning goal(s)?
• Does your institution enable custom questions on student evaluations?
• Can you use surveys or include questions within assignments to gather data from your students on the activities and assignments of the CURE?
• Reviews by peers of manuscripts, conference presentations, and grant proposals are a measure of the success and progress of the research of the CURE.
Jonathan J.-M. Calède
Several tools have been designed explicitly for the assessment of CUREs or can be utilized to assess the outcomes of CUREs. Shortlidge and Brownell (2016) present over 30 different assessment tools that can be used to investigate the efficacy of CUREs. Detailed publications are associated with each of those, including for tools specifically designed for CUREs (Corwin et al., 2015; Lopatto et al., 2008; Lopatto, 2004). More recent tools have also been published (Angra & Gardner, 2018; Clemmons et al., 2020; Killpack & Fulmer, 2018; Wang et al., 2018) and there are several databases of assessment tools (e.g., Q4B). New tools are also being developed as the popularity of CUREs increases (e.g., E-CURE). Many of these tools have been validated in multiple CUREs (e.g., Jordan et al., 2014; Shaffer et al., 2014), offering opportunities for comparison with the course being assessed. Additional tools exist for specific research experiences; for example, there are guidelines for the assessment of field-based courses (Pyle, 2009; Shortlidge et al., 2021).
One of the obstacles to the rigorous analysis of a CURE through qualitative and quantitative analyses of student surveys is the need for a comparison group (Shortlidge & Brownell, 2016). However, it is possible for instructors to explore the efficacy of their CURE without a formal, rigorous experimental setup. Such analyses should focus on the expected learning outcomes (ELOs) of the CURE. Existing analytical tools can be matched to the ELOs of the CURE to enable data collection. Such work should consider the following issues: (1) the sample of students on which the survey tool was validated and (2) the time necessary to administer, score, and analyze the results of the survey tools (Shortlidge & Brownell, 2016). Table 17 presents a selection of assessment tools that are readily accessible in the literature or online, have been validated, and can be implemented with little to moderate effort by instructors.
The table below lists assessment tools to explore attitudes about science, cognitive skills, critical thinking, experimental design, communication, and motivation, among others, organized by topic.
ATTITUDES ABOUT SCIENCE
• Colorado Learning Attitudes about Science Survey (Semsar et al., 2011)
• Classroom Undergraduate Research Experience (https://www.grinnell.edu/academics/resources/ctla/assessment/cure-survey)
• Research on the Integrated Science Curriculum (https://www.grinnell.edu/academics/centers-programs/ctla/assessment/risc)
COGNITIVE SKILLS
• Blooming Biology Tool (Crowe et al., 2008)
• California Critical Thinking Skills Test (http://www.insightassessment.com/Products/Products-Summary/Critical-Thinking-Skills-Tests/California-Critical-Thinking-Skills-Test-CCTST)
• Study Process Questionnaire (Biggs et al., 2001)
COLLABORATION, DISCOVERY AND RELEVANCE, ITERATION
• Laboratory Course Assessment Survey (Corwin et al., 2015)
• Perceived Cohesion scale (Bollen and Hoyle, 1990)
CRITICAL THINKING
• Blooming Biology Tool (Crowe et al., 2008)
• California Critical Thinking Skills Test (http://www.insightassessment.com/Products/Products-Summary/Critical-Thinking-Skills-Tests/California-Critical-Thinking-Skills-Test-CCTST)
DEEP AND SURFACE LEARNING
• Study Process Questionnaire (Biggs et al., 2001)
ENVIRONMENTAL AWARENESS AND ATTITUDES
• Environmental Attitudes Inventory (Milfont & Duckitt, 2010)
• New Ecological Paradigm Scale (Dunlap et al., 2000)
EXPERIMENTAL DESIGN
• Biological Experimental Design Concept Inventory (Deane et al., 2014)
• Expanded Experimental Design Ability Test (Brownell et al., 2014)
• Experimental Design – First Year Undergraduate (https://q4b.biology.ubc.ca/concept-inventories/experimental-design-first-year-undergraduate-level/)
• Experimental Design – Third/Fourth Year Undergraduate Level (https://q4b.biology.ubc.ca/concept-inventories/experimental-design-thirdfourth-year-undergraduate-level/)
• Experimental Design Ability Test (Sirum & Humburg, 2011)
• Rubric for Experimental Design (Dasgupta et al., 2014)
• Tool to assess interrelated experimental design (Killpack & Fulmer, 2018)
COMMUNICATING RESULTS
• Graph Rubric (Angra & Gardner, 2018)
• The Rubric for Science Writing (Timmerman et al., 2011)
MOTIVATION AND RESILIENCE
• Grit Scale (Duckworth & Quinn, 2009)
• National Survey of Student Engagement (Kuh, 2009)
• Science Motivation Questionnaire II (Glynn et al., 2011)
OWNERSHIP AND BELONGING
• Project Ownership Survey (Hanauer & Dolan, 2014)
• Career Decision Making Survey – Self Authorship (Creamer et al., 2010)
• Perceived Cohesion scale (Bollen and Hoyle, 1990)
• Transparency in Learning and Teaching in Higher Education Survey (https://tilthighered.com/abouttilt)
PERSONAL GAINS
• Classroom Undergraduate Research Experience (https://www.grinnell.edu/academics/resources/ctla/assessment/cure-survey)
• Colorado Learning Attitudes about Science Survey (Semsar et al., 2011)
• Research on the Integrated Science Curriculum (https://www.grinnell.edu/academics/centers-programs/ctla/assessment/risc)
• Science Motivation Questionnaire II (Glynn et al., 2011)
• Survey of Undergraduate Research Experiences (Lopatto, 2004)
• Undergraduate Student Self-Assessment Instrument (Weston & Laursen, 2015)
• Transparency in Learning and Teaching in Higher Education Survey (https://tilthighered.com/abouttilt)
NATURE AND PROCESS OF SCIENCE
• Biological Experimental Design Concept Inventory (Deane et al., 2014)
• BioSkills Guide (Clemmons et al., 2020)
• Classroom Test of Scientific Reasoning (Lawson et al., 2000)
• Laboratory Course Assessment Survey (Corwin et al., 2015)
• Views About Sciences Survey (Halloun & Hestenes, 1998)
• Expanded Experimental Design Ability Test (Brownell et al., 2014)
• Experimental Design – First Year Undergraduate (https://q4b.biology.ubc.ca/concept-inventories/experimental-design-first-year-undergraduate-level/)
• Experimental Design – Third/Fourth Year Undergraduate Level (https://q4b.biology.ubc.ca/concept-inventories/experimental-design-thirdfourth-year-undergraduate-level/)
• Experimental Design Ability Test (Sirum & Humburg, 2011)
• Molecular Biology Data Analysis Test (Rybarczyk et al., 2014)
• Rubric for Experimental Design (Dasgupta et al., 2014)
• Test of Scientific Literacy Skills (Gormally et al., 2012)
• The Rubric for Science Writing (Timmerman et al., 2011)
DATA ANALYSIS AND QUANTITATIVE REASONING
• Statistical Reasoning in Biology Concept Inventory (Deane et al., 2016)
• BioSkills Guide (Clemmons et al., 2020)
• Psychological Research Inventory of Concepts (Veilleux & Chapman, 2017)
• Molecular Biology Data Analysis Test (Rybarczyk et al., 2014)
Table 17. Selected assessment tools for the classroom organized by topic (modified from Shortlidge and Brownell, 2016).
Jonathan J.-M. Calède
Numerous studies have focused on the barriers affecting instructors in the design and implementation of CUREs. Open-answer surveys and Likert-scale questions have been particularly helpful in identifying the main obstacles standing in the way of CURE development (Table 18). Many of these challenges are problems for the implementation of undergraduate research in general, including mentored undergraduate research experiences (Doyle, 2002).
Other obstacles have also been mentioned in the literature. Some are specific to CUREs; others apply to undergraduate research or creative teaching in general. These obstacles include the time to publication of research projects (Turner et al., 2021), issues of identity as a teacher in environments that emphasize research productivity (Brownell & Tanner, 2012), the time and emotional investment required of graduate teaching assistants (Goodwin et al., 2021), the fear that CURE time will take away from content delivery and have a detrimental effect on understanding of course concepts (Lopatto et al., 2014), the loss of favorite labs and activities not compatible with the CURE (DeChenne-Peters & Scheuermann, 2022), and the lack of expertise of support staff (Wolkow et al., 2014).
A survey of instructors at The Ohio State University mirrors the data from the literature presented above. The open answers of (a) instructors who have developed and implemented CUREs (N=16), (b) instructors who have developed but not yet implemented a CURE (N=2), and (c) instructors who have not yet developed a CURE (N=10) were assigned to the 18 items presented in Table 18, or to new categories, to identify the challenges experienced by Ohio State instructors specifically. Instructors were able to identify more than one obstacle to their design and implementation efforts, resulting in 47 coded difficulties. No instructor, even within the group that has not implemented a CURE, expressed a lack of interest in developing one. Because of the small number of responses from instructors who have developed, but not yet implemented, a CURE, two categories are considered: CURE developed and CURE not yet developed (Table 19).
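For readers who want to reproduce this kind of tallying of coded open-ended responses, the sketch below is a minimal, hypothetical illustration in Python; the group labels, obstacle categories, and example responses are invented stand-ins for the coded data described above, not the actual survey results.

```python
from collections import Counter

# Hypothetical coded responses: (instructor group, obstacle category).
# Labels are illustrative stand-ins for the 18 items of Table 18 plus any new categories.
coded_responses = [
    ("CURE developed", "Instructor time"),
    ("CURE developed", "Student preparedness"),
    ("CURE developed", "Student time investment"),
    ("CURE not yet developed", "Instructor time"),
    ("CURE not yet developed", "Need for guidance and model experiences"),
]

# Tally obstacles within each instructor group (the structure summarized in Table 19).
tallies = {}
for group, obstacle in coded_responses:
    tallies.setdefault(group, Counter())[obstacle] += 1

for group, counts in tallies.items():
    print(group)
    for obstacle, n in counts.most_common():
        print(f"  {obstacle}: {n}")
```

Because each respondent can name more than one obstacle, the total of the counts can exceed the number of respondents, as it does for the 47 coded difficulties reported above.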
The data from instructors at The Ohio State University are consistent with prior studies. Yet, they also provide interesting additional information. For example, the comparison between instructors who have developed (and, for all but two, implemented) a CURE and those who have yet to engage in this practice shows that student obstacles, particularly student preparedness and time investment, are important impediments to the implementation of CUREs; they also may be underestimated by instructors developing a CURE. These issues are usually not presented as major challenges to the development and implementation of CUREs in the literature (but see Spell et al., 2014). Yet, student attitudes are critical to learning gains (Lopatto et al., 2022), suggesting the importance of shaping the approach of students to the course at the start of the CURE. In fact, the only obstacle to the development and implementation of CUREs greater than student preparedness and investment is the lack of instructor time to design and manage the implementation of the experience. The fact that this latter obstacle is identified as the number one hurdle to CUREs by both categories of instructors shows that it is a barrier both to the initial development and conceptualization of a CURE and to the full design and enactment of the course experience. Interviews with experienced CURE instructors show that managing the class time for CURE students and, to a lesser extent, the time demand on instructors are indeed big challenges (DeChenne-Peters & Scheuermann, 2022). One important obstacle identified by Ohio State instructors who have not yet developed a CURE is the need for additional guidance and resources, including model experiences, to support them in developing CUREs.
Additional conversations with colleagues at Ohio State emphasized very practical hurdles to CUREs, centering on hardware, software, and network support of computing needs. The prevalence among students of tablets and smartphones, often to the detriment of hardware better adapted to professional, technical, and scholarly software programs, is a central issue for many instructors. The lack of availability or the cost of resources extends to software needs, including popular programs with a university license that are not free (e.g., NVivo), and to the hosting of project websites onto which student deliverables can be uploaded. Even freeware programs may not be easily used because of restrictions on the ability of instructors to install software on university-owned devices and the resulting complexities of coordinating with support staff. Coordination with staff and bureaucratic hurdles extend to cost-sharing between teaching and research budgets; they become overwhelming when incorporating field work into the CURE.
Some perceived obstacles and challenges to the creation of CUREs may in fact be just that: perceived. Research shows that the implementation of CUREs does not lead to a loss of content knowledge, but quite the opposite (Lopatto et al., 2014). Buy-in from colleagues and departments is also often, although not always, greater than feared (Govindan et al., 2020; DeChenne-Peters & Scheuermann, 2022).
The potential problems of undergraduate preparedness and time commitment should not be underestimated when developing a CURE. A highly structured scaffold (Figure 2) and the TILT framework (Winkelmes et al., 2016) are important tools to mitigate student difficulties, increase engagement, and guide students through the acquisition of new skills. The use of interventions targeting procrastination, time management, and organizational skills may also be helpful (Häfner et al., 2014; Stevens et al., 2019).
Several possible solutions to some of the other challenges identified above are suggested throughout this document. Some of those are repeated below in Table 20 along with additional ideas and recommendations from the literature.
Possible solutions to some of the challenges posed by the development and implementation of CUREs
Solutions and resources for financial, technical, research, pedagogical, student, and instructor obstacles
Category Possible solution Reference
Financial obstacles Professional societies (and your institution) may offer funding for research and travel to conferences by undergraduate students Matyas et al., 2017
Established databases provide sources of data that can be used in CUREs Table 13
Use collaborations to reduce costs, promote publishability of findings, and distribute costs and rewards of research Govindan et al., 2020
Technical obstacles Use a central course to support undergraduate research efforts across several laboratories Dillon, 2020
Freeware programs can be used by students to engage in data analysis Table 14
Some field work can be undertaken through asynchronous self-led field trips Washko, 2021
Support teaching assistants to help them overcome the obstacles of implementing the CURE Heim & Holt, 2019
Involve students in the logistics of implementing the CURE and teach them the full scope of research project management Govindan et al., 2020
Work with collaborators, mentored research students, graduate students, and postdocs to distribute the emotional labor and time commitment
Research obstacles Publish the CURE itself in education literature, even if research findings themselves cannot be published
National programs provide research questions and contexts that are relevant to the community and adapted to the classroom setting of the CURE Lopatto et al., 2014
Use collaborations to reduce costs, promote publishability of findings, and distribute costs and rewards of research Govindan et al., 2020
Professional societies offer peer-reviewed curricula and pedagogical resources Matyas et al., 2017
Use CUREs to explore new areas of research and risky research projects to limit pressure for success
Pedagogical obstacles Align core concepts and competencies of the CURE and the class ELOs Petersen et al., 2020
Teach students how to learn in your disciplines by making learning how to learn part of your curriculum Petersen et al., 2020
Have students focus on the problem-solving process, rather than just the correct answer Petersen et al., 2020
Professional societies provide online learning and networking communities for both instructor and students Matyas et al., 2017
Adjust the content to free the time necessary to enable students’ mastery of technical skills Wolkow et al., 2014
Professional societies offer peer-reviewed curricula and pedagogical resources Matyas et al., 2017
Involve students in the planning and development of the project through the writing of proposals Govindan et al., 2020
Involve students in the logistics of implementing the CURE and teach them the full scope of research project management Govindan et al., 2020
Student obstacles Increased student ownership and involvement in experimental design helps mitigate frustrations Govindan et al., 2020
Use affirming language to frame discussions and activities Govindan et al., 2020
Transparent expectations and grades decoupled from experimental success or publishability of findings can help overcome student fears of the unknown
Emphasize to students the significance of the research to the broader community of scholars Cooper et al., 2019; Hanauer et al., 2012
Involve students in the logistics of implementing the CURE Govindan et al., 2020
Involve students in the planning and development of the project through the writing of proposals Govindan et al., 2020
Use group contracts and best practices to design successful group experiences
Instructor obstacles Invite faculty members and institutional stakeholders to students’ CURE colloquium Govindan et al., 2020
Promote the success and outcomes of the CURE to the university community
Select professional societies and published examples of CUREs can provide guidance for the development of CUREs Govindan et al., 2020
National programs provide workshops and curricular materials Lopatto et al., 2014
National programs may have staff to help troubleshoot Lopatto et al., 2014
Help preparation of instructors and support staff with training videos, more thorough instructor lab manuals, reference sheets, and training workshops Wolkow et al., 2014
Work with collaborators, mentored research students, graduate students, and postdocs to distribute the emotional labor and time commitment
Table 20. Possible solutions to some of the challenges posed by the development and implementation of CUREs. | textbooks/socialsci/Education_and_Professional_Development/A_CURE_for_everyone%3A_A_guide_to_implementing_Course-based_Undergraduate_Research_Experiences_(Calede)/2.05%3A_Overcoming_the_challeng.txt |
In short, we are not here to serve, supplement, back up, complement, reinforce, or otherwise be defined by any external curriculum. – Stephen North
Our field can no longer afford, if it ever could, to have forged a separate peace between classroom and nonclassroom teaching. There is no separate but equal. – Elizabeth H. Boquet and Neal Lerner
The intersecting contexts of on-location tutoring not only serve ... – Holly Bruland
Increasingly, the literature on writing centers and peer tutoring programs reports on what we’ve learned about teaching one-to-one and peer-to-peer from historical, theoretical, and empirical points of view. We’ve re-defined and re-interpreted just how far back the “desire for intimacy” in writing instruction really goes (Lerner “Teacher-Student,” The Idea). We’ve questioned what counts as credible and useful research methods and methodologies (Babcock and Thonus; Liggett, Jordan, and Price; Corbett “Using,” “Negotiating”) and meaningful assessment (Schendel and Macauley). We’ve explored the implications of peer tutoring not just for tutees but also for tutors themselves (Hughes, Gillespie, and Kail). And we’ve made connections to broader implications for the teaching and learning of writing (for example see Harris “Assignments,” and Soliday Everyday Genres on assignment design and implementation; Greenfield and Rowan, Corbett, Lewis, and Clifford, and Denny on race and identity; Mann, and Corbett “Disability” on learning-disabled students; Lerner The Idea and Corbett, LaFrance, and Decker on the connections between writing center theory and practice and peer-to-peer learning in the writing classroom). Since the first publication of North’s often-cited essay “The Idea of a Writing Center,” quoted above, writing center practitioners and scholars have continued to ask a pivotal question: How closely can or should writing centers, writing classrooms—and the people involved in either or both—collaborate (North “Revisiting”; Smith; Hemmeter; Healy; Raines; Soliday “Shifting Roles”; Decker; Sherwood; Boquet and Lerner)?
Yet with all our good intentions, unresolved tensions and dichotomies pervade all our actions as teachers or tutors of writing. At the heart of everything we do reside choices. Foremost among these choices is just how directive (or interventionist or controlling) versus how nondirective (or noninterventionist or facilitative) we wish to be in the learning of any given student or group of students at any given time. The intricate balancing act between giving a student a fish and teaching him or her how to fish can be a very slippery art to grasp. But it is one we need to think about carefully, and often. It affects how we design and enact writing assignments, how much cognitive scaffolding we build into every lesson plan, or how much we tell students what to do with their papers versus letting them do some of the crucial cognitive heavy-lifting. The nuances of this pedagogical balancing act are brought especially to light when students and teachers in writing classrooms and tutors from the writing center or other tutoring programs are brought together under what Neal Lerner characterizes as the “big cross-disciplinary tent” of peer-to-peer teaching and learning (qtd. in Fitzgerald 73). Like many teachers of writing, I started my career under this expansive tent learning to negotiate directive and nondirective instruction with students from across cultures and across the disciplines.
I started out as a tutor at Edmonds Community College (near Seattle, Washington) in 1997. When I made my way as a GTA teaching my own section of first-year composition at the University of Washington, in 2002, I took my writing-centered attitudes and methods right along with me. My initial problem was how to make the classroom more like the center I felt so strongly served students in more individualized and interpersonal ways. I began to ask the question: Can I make every writing classroom (as much as possible) a “writing center”? Luckily, I soon found out I was not alone in this quest for pedagogical synergy. Curriculum- and classroom-based tutoring offer exciting, dramatic instructional arenas from which to continue asking questions and provoking conversations involving closer classroom and writing center/tutoring connections (Spigelman and Grobman; Moss, Highberg, and Nicolas; Soven; Lutes; Zawacki; Hall and Hughes; Cairns and Anderson; Corbett “Bringing,” “Using,” “Negotiating”). In the Introduction to On Location: Theory and Practice in Classroom-Based Writing Tutoring Candace Spigelman and Laurie Grobman differentiate between the more familiar curriculum-based tutoring, usually associated with writing fellows programs, and classroom-based tutoring, where tutorial support is offered during class (often in developmental writing courses). But just as all writing centers are not alike, both curriculum- and classroom-based tutoring programs differ from institution to institution. There is much variation involved in curriculum- and classroom-based tutoring due to the context-specific needs and desires of students, tutors, instructors, and program administrators: Some programs ask tutors to comment on student papers; some programs make visits to tutors optional, while others make them mandatory; some have tutors attend class as often as possible, while others do not; and some programs offer various hybrid approaches. Due to the considerable overlap in theory and practice between curriculum- and classroom-based tutoring, I have opted for the term course-based tutoring (still CBT) when referring to pedagogical elements shared by both.
The following quotes, from three of the case-study participants this book reports on, begin to suggest the types of teaching and learning choices afforded by CBT, especially for developmental teachers and learners:
I feel like when I’m in the writing center just doing individual sign up appointments it’s much more transient. People come and you don’t see them and you don’t hear from them until they show up and they have their paper with them and it’s the first time you see them, the first time you see their work, and you go through and you help them and then they leave. And whether they come back or not it’s up to them but you’re not really as tied to them. And I felt more tied to the success of the students in this class. I really wanted them to do better.
– Sam, course-based tutor
One of the best features of my introductory English course was the built-in support system that was available to me. It was a small class, and my professor was able to give all of us individual assistance. In addition, the class had a peer tutor who was always available to help me. My tutor helped alleviate my anxiety over the understanding of assignments as she would go over the specifics with me before I started it ... When I did not understand something, my professor and tutor would patiently explain the material to me. My fears lessened as my confidence grew and I took more chances with my writing, which was a big step for me.
– Max, first-year developmental writer
I’d be interested in seeing how having a tutor in my class all the time would work, but at the same time one of the things I’m afraid of is that the tutor would know all the readings that we’re doing and would know the kinds of arguments I’m looking for and they might steer the students in that direction instead of giving that other point of view that I’m hoping they get from the tutor.
– Sarah, graduate writing instructor
We hear the voice of a course-based tutor at the University of Washington (UW), Sam, reflecting on her experiences working more closely with developmental writers in one course. We feel her heightened sense of commitment to these students, her desire to help them succeed in that particular course. We will hear much more about Sam’s experiences in Chapter Three. We also hear the voice of a developmental writer from Southern Connecticut State University (SCSU), Max, a student with autism who worked closely with a course-based tutor. Max intimates how his peer tutor acted much like an assistant or associate teacher for the course. He suggests how this tutor earned his trust and boosted his confidence, helping to provide a warm and supportive learning environment conducive to preparing him for the rigors of academic writing and communication. And, in the third quote, we hear from a graduate student and course instructor at the University of Washington, Sarah, who expresses her concern for having a tutor too “in the know” and how that more intimate knowledge of her expectations might affect the student writer/tutor interaction. We will hear much more from student teachers like Sarah (as well as more experienced classroom instructors) especially in Chapters Three and Four. Experiences like the ones hinted at by these three diverse students (at very different levels) deserve closer listening for what they have to teach us all, whether we feel more at home in the writing center or writing classroom.
Answering Exigencies from the Field(s)
While enough has been written on this topic to establish some theoretical and practical starting points for research, currently there are two major avenues that warrant generative investigation. First, although many CBT programs include one-to-one and group tutorials, there are few studies on the effects of participant interactions on these tutorials (Bruland; Corbett “Using”; and Mackiewicz and Thompson being notable exceptions). And only two (Corbett, “Using”; Mackiewicz and Thompson, Chapter 8) provide transcript reporting and analyses of the tutorials that frequently occur outside of the classroom. Valuable linguistic and rhetorical evidence that brings us closer to an understanding and appreciation of the dynamics of course-based tutoring—and peer-to-peer teaching and learning—can be gained from systematically analyzing what tutorial transcripts have to offer. Second is the need for research on the effects of CBT with multicultural and nonmainstream students (see Spigelman and Grobman, 227-30). CBT provides the potential means for extending the type of dialogic, multiple-perspectival interaction in the developmental classroom that scholars in collections like Academic Literacy in the English Classroom, Writing in Multicultural Settings, Bakhtinian Perspectives on Language, Literacy, and Learning, and Diversity in the Composition Classroom encourage—though not without practical and theoretical drama and complications.
Beyond Dichotomy begins to answer both these needs with multi-method qualitative case studies of CBT and one-to-one conferences in multiple sections of developmental first-year composition at two universities—a large, west coast R1 (the University of Washington, Seattle) and a medium, east coast master’s (Southern Connecticut State University, New Haven). These studies use a combination of rhetorical and discourse analyses and ethnographic and case-study methods to investigate both the scenes of teaching and learning in CBT, as well as the points of view and interpretations of all the participating actors in these scenes—instructors, peer tutors, students, and researcher/program administrator.
This book extends the research on CBT—and the important implications for peer-to-peer learning and one-to-one tutoring and conferencing—by examining the much-needed rhetorical and linguistic connections between what goes on in classroom interactions, planning, and one-to-one tutorials from multiple methodological and analytical angles and interpretive points of view. If we are to continue historicizing, theorizing, and building synergistic partnerships between writing classrooms and the peer tutoring programs that support them, we should have a deeper understanding of the wide array of choices—both methodological and interpersonal—that practitioners have, as well as more nuanced methods for analyzing the rhetorical and linguistic forces and features that can enable or deter closer instructional partnerships. This study ultimately presents pedagogical and methodological conclusions and implications usable for educators looking to build and sustain stronger pedagogical bridges between peer tutoring programs and writing classrooms: from classroom instructors and program administrators in Composition and Rhetoric, to writing center, writing fellows, supplemental instruction, and WAC/WID theorists and practitioners.
The lessons whispered by the participants in this book’s studies echo with pedagogical implications. For teaching one-to-one, what might Sam’s thoughts quoted above about being “more tied to the success of the students” or Sarah’s intimations regarding a tutor being more directly attached to her course add to conversations involving directive/nondirective instruction and teacher/tutor role negotiation? What might Max’s sentiments regarding writing anxiety—and how the pedagogical teamwork of his instructor and tutor in his developmental writing course helped him cope—contribute to our understanding of what pedagogical strategies tutors and teachers might deploy with struggling first-year students? In short, what are teachers, tutors, and student writers getting out of these experiences, and what effects do these interactions have on tutor and teacher instructional choices and identity formations? An important and related question for the arguments in this book, then, becomes how soon can developing/developmental student writers, potential writing tutors, and classroom instructors or teaching assistants be involved in the authoritative, socially, and personally complicated acts of collaborative peer-to-peer teaching and learning? When are they ready to model those coveted Framework for Success in Postsecondary Writing “habits of mind essential for success in college writing?” When are they ready to balance between strategically directing thought and action and holding back when coaching peers to become more habitually curious, open, engaged, creative, persistent, responsible, flexible, and metacognitive? There are important pedagogical connections between how and with whom these habits of mind are fostered and how students develop as college writers (see, for example, Thaiss and Zawacki; Beaufort; Carroll) that studies in CBT can bring into high relief. In sum, this book will explore, elaborate on, and provide some answers to the following central question: How can what we know about peer tutoring one-to-one and in small groups—especially the implications of directive and nondirective tutoring strategies and methods brought to light in this book—inform our work with students in writing centers and other tutoring programs, as well as in writing classrooms? I’ll start this investigation by looking at why we should continue to build bridges that synergistically bring writing classrooms and tutoring programs closer together.
Reclaiming the Writing Classroom into “The Idea of a Writing Center”
Above we discussed the exigencies for this book’s case studies. But bridging and synergizing the best of writing center and writing classroom pedagogies could be considered the uber-exigency that gave birth to CBT programs in the first place. In his pivotal 1984 College English essay, Stephen North passionately let loose the frustrations many writing center practitioners felt about centers being seen as proofreading, or grammar fix-it shops, or as otherwise subservient to the writing classroom. In this polemical “declaration of independence,” North spelled out an idea, much repeated thereafter, that writing tutors are concerned with producing better writers, not necessarily better writing. North’s emphasis on writers’ processes over products, his insistence that the interpersonal talk that foregrounds and surrounds the one-to-one tutorial is what makes writing centers uniquely positioned to offer something lacking in typical classroom instruction (including the notion that tutors are not saddled with the responsibility of institutional judger-grader), touched on foundational writing center ideology. But North’s vehemence would also draw a theoretical and practical dividing line between “we” in the center and “them” in the classroom as well as a host of critiques and counterstatements (North “Revisiting”; Smith; Hemmeter; Smulyan and Bolton; Healy; Raines; Soliday “Shifting”; Boquet and Lerner). Further, this divisive attitude may have also contributed to the self-imposed marginalization of the writing center in relation to the rest of the academy, as Jane Nelson and Margaret Garner—in their analyses of the University of Wyoming Writing Center’s history under John and Tilly Warnock—claim occurred in the 1970s and 1980s. The trend for arguing from a perspective of what we can’t or won’t do was stubbornly set.
Though encouraging more of a two-way street between classroom and center, Dave Healy, Mary Soliday (“Shifting”), Teagan Decker, and Margot Soven have all drawn on Harvey Kail and John Trimbur’s 1987 essay “The Politics of Peer Tutoring” to remind us that the center is often that place just removed enough from the power structures of the classroom to enable students to engage in critical questioning of the “seemingly untouchable expectations, goals and motivations of the power structures” that undergraduates must learn within (Decker, “Diplomatic” 22). In another 1987 essay, Trimbur, drawing on Kenneth Bruffee’s notion of “little teachers,” warned practitioners against treating peer tutors as para- or pre-professionals and urged them to recognize “that their community is not necessarily ours” (294). Bruffee and Trimbur worry that the collaborative effect of peership, or the positive effects of working closer perhaps to the student’s Vygotskian zone of proximal development, will be lost if tutors are trained to be too teacherly. Muriel Harris intimates, in her 2001 “Centering in on Professional Choices,” her own personal and professional reasons why she prefers writing center tutoring and administration over classroom instruction. Commenting on her experience as an instructor teaching writing in the classroom, she opines: “Several semesters passed as I became ever more uneasy with grading disembodied, faceless papers, standing in front of large classes trying to engage everyone in meaningful group discussions, and realizing that I wasn’t making contact in truly useful ways with each student as a writer composing text” (431). She views her experiences in writing centers, in contrast, as enabling her to focus on “the copious differences and endless varieties among writers and ways to uncover those individualities and use that knowledge when interacting with each writer” (433). And there it is again, the scapegoat doing its potentially divisive work via one of the most influential voices in teaching one-to-one and peer-to-peer. Those of us theorizing, practicing, and advocating CBT, then, must stay wary of the sorts of power, authority, and methodological issues that might potentially undermine important pedagogical aspects of the traditional one-to-one tutorial. These same issues of authority—which touch importantly on concepts like trust-building and directive/nondirective tutoring—come into play as we look to the various “parent genres” that inform the theory and practice of the instructional hybrid that is CBT: writing center tutoring, WAC writing fellows programs, peer writing groups, and supplemental instruction (Figure 1).
Figure 1: The parent genres that inform CBT.
The Protean State of the Field in Course-Based Writing Tutoring
As Spigelman and Grobman describe in their Introduction to On Location, the strength—and concurrent complexity—of CBT lies in large part in the variety of instructional support systems that can constitute its theory and practice, the way these instructional genres mix and begin to blur as they are called upon in different settings and by different participants to form the instructional hybrid that is CBT. The authors draw on Charles Bazerman and Anis Bawarshi to expand the notion of genre from purely a means of textual categorization to a metaphorical conceptualization of genre as location. In Bazerman’s terms genres are “environments for learning. They are locations within which meaning is constructed” (qtd. in Spigelman and Grobman 2). For Bawarshi, “genres do not just help us define and organize texts; they also help us define and organize kinds of situations and social actions, situations and actions that the genres, through their use, rhetorically make possible” (qtd. in Spigelman and Grobman 2). Rather than practice in the center, or in the classroom, rather than seeing teacher here and tutor there and student over there, CBT asks all participants in the dynamic drama of teaching and learning to realize as fully as possible the myriad possible means of connecting. For CBT, genre as location opens to the imagination visions of communicative roads interconnecting locations, communication roads that can be free-flowing or grindingly congested, locations where people inhabit spaces and make rhetorical and discursive moves in sometimes smooth, sometimes frictional ways. For Spigelman and Grobman, this leads to two significant features: a new generic form emerges from this generic blending, “but it also enacts the play of differences among those parent features” (4; emphasis added). This generic play of differences—between parent forms, between participants acting within and upon this ever-blurring, context-based instructional practice—makes CBT such a compelling location for continued rhetorical and pedagogical investigation.
Pragmatics begin to blend with possibilities as we begin to ask what might be. What can we learn from CBT theory and practice that can help us build more synergistic pedagogies in our programs, for our colleagues, with our students? Furthering Spigelman and Grobman’s idea of the play of differences, I critique the smaller instructional genres (themselves already complex) so that readers can gain an intimate sense of the choices involved in the design of protean, hybrid CBT programs and initiatives. This breakdown of the parent instructional genres will also provide further background on the many ways practitioners have strived to forge connections between the writing classrooms and writing support systems discussed above, and will begin to suggest pedagogical complications like directive/nondirective instruction in the theory and practice of CBT.
Writing Center Tutoring
Writing center tutoring is the most obvious, influential parent genre to start with. Harris, Bruffee, and North have pointed to perhaps the key ingredients that make writing center tutorials an important part of a writing curriculum. Harris has helped many compositionists see that the professional choice of doing or supporting writing center work can add much to both students’ and teachers’ understanding of how writers think and learn. Harris claims, “When meeting with tutors, writers gain the kinds of knowledge about their writing and about themselves that are not possible in other institutional settings” (“Talking” 27). Bruffee similarly makes grand assertions for the role of peer tutoring in institutional change. Bruffee contends peer tutors have the ability, through conversation, to translate at the boundaries between the knowledge communities students belong to and the knowledge communities they aspire to join. Students will internalize this conversation of the community they want to join so they can call on it on their own. This mediating role, he believes, can bring about “changes in the prevailing understanding of the nature and authority of knowledge and the authority of teachers” (Collaborative Learning 110). But this theoretical idea of the ground-shaking institutional change that can be brought about by peer tutoring runs into some practical problems when we consider such dimensions as subject matter expertise, personality, attitude, and just how deeply entrenched the power and authority of the classroom instructor really is. A tutor snug, even smug and secure in his or her belief that they are challenging “the prevailing understanding” and authority of the teacher or institution in one-to-ones may be naively misconstruing the complex nature of what it means to teach a number of individuals, with a number of individual learning styles and competencies, in the writing classroom. Often the voices of hierarchical authority ring loud in tutors’ and students’ ears, understandably transcending all other motives during instructional and learning acts.
Tutors and instructors involved in CBT instructional situations bring their own internalized versions of the “conversations of the communities” they belong to or aspire to join. Some tutors, for example, bring what they have come to understand or believe as the role of a tutor—often imagined as a nondirective, non-authoritarian peer—into classroom situations where students may have internalized a different set of assumptions or beliefs of how instruction should function in order for them to join the sorts of communities they aspire to join. Instructors, in turn, may look to tutors to be more hands-on and directive or more minimalist and traditionally peer-like, often causing authority and role confusion between everyone involved. Bruffee compounds this dilemma of tutor authority with his view of the mediating role of peer tutors. In support of his antifoundational argument for education, in the second edition of Collaborative Learning, Bruffee distinguishes between two forms of peer tutoring programs: monitoring and collaborative. In the monitoring model, tutors “are select, superior students who for all intents and purposes serve as faculty surrogates under faculty supervision. Their peer status is so thoroughly compromised that they are educationally effective only in strictly traditional academic terms” (97). In contrast, Bruffee argues that collaborative tutors: “do not mediate directly between tutees and their teachers” (97); they do not explicitly instruct as teachers do, but rather “guide and support” tutees to help them “translate at the boundaries between the knowledge communities they already belong to and the knowledge communities they aspire to join” (98). Bruffee, however, does acknowledge the fact that no collaborative tutoring program is completely uncompromised by issues of trust and authority, just as no monitoring program consists only of “little teacher” clones.
As we will see in the following sections—and throughout this book—the issues raised by Harris and Bruffee become increasingly multifaceted as social actors play on their notions of what it means to tutor, teach, and learn writing in and outside of the classroom. In CBT situations, the task of assignment translation can take a different turn when tutors have insider knowledge of teacher expectations. The affective or motivational dimension, often so important in tutoring or in the classroom (especially in nonmainstream settings), can either be strengthened or diminished in CBT. And the question of tutor authority, whether more “tutorly” or “teacherly” approaches make for better one-to-one or small-group interactions, begins to branch into ever-winding streams of qualification.
WAC Writing Fellows
This idea of just how and to what degree peer tutoring might affect the power dynamics of the classroom leads us straight into considerations of writing fellows programs. The fact that writing fellows usually comment on student drafts of papers and then meet one-to-one with students, sometimes without attending class or even doing the same readings as the students (as with Team Four detailed in this book), points immediately to issues of power, authority, and tutor-tutee-teacher trust-building relationships relevant for CBT. The role of the writing fellow also raises the closely related issue of directive/nondirective approaches to peer tutoring. These theoretical and practical challenges hold special relevance for writing fellows (Haring-Smith). While Margot Soven commented in 1993 on such logistical issues as students committing the necessary time, carelessly written student drafts, and the time and place of meetings, the issue most practitioners currently fret over falls along the lines of instructional identity, of pedagogical authority and directiveness. Who and what is a writing fellow supposed to be?
Several writing fellows practitioners report on compelling conflicts arising from the vagaries of authority and method negotiation (Lutes; Zawacki; Severino and Trachsel; Corroy; Babcock and Thonus 75-77; Corbett “Using,” “Negotiating”). Jean Marie Lutes examines a reflective essay written by a University of Wisconsin, Madison fellow in which the fellow, Jill, describes an instance of being accosted by another fellow for “helping an oppressive academy to stifle a student’s creative voice” (243). Jill defends her role as peer tutor just trying to pass on a repertoire of strategies and skills that would foster her peer’s creativity. Lutes goes on to argue that in their role as writing fellows, tutors are more concerned with living up to the role of “ideal tutor” than with whether or not they have become complicit in an institutional system of rigid conventional indoctrination. In an instance of the controlling force of better knowing the professor’s goals in one-to-one interactions, another fellow, Helen, reports how she resorted to a more directive style of tutoring when she noticed students getting closer to the professor’s expectations. Helen concluded that this more intimate knowledge of the professor’s expectations, once she “knew the answer” (250 n.18), made her job harder rather than easier to negotiate. The sorts of give and take surrounding CBT negotiations, and the intellectual and social pressures they exert on tutors, lead Lutes to ultimately argue that “the [writing fellows] program complicates the peer relationship between fellows and students; when fellows comment on drafts, they inevitably write not only for their immediate audience (the student writers), but also for their future audience (the professor)” (239).
Clearly, as these cases report, the issue of changing classroom teaching practices and philosophies (to say nothing of institutional change) is difficult to qualify. It places tutors in a double-bind: The closer understanding of teacher expectations, as Bruffee warned, can cause tutors to feel obligated to share what they know, moving them further away from “peer” status. If they don’t, they may feel as if they are withholding valuable information from tutees, and the tutees may feel the same way, again moving tutors further away from peer status. Yet Mary Soliday illustrates ways this tension can be put to productive use. In Everyday Genres she describes the writing fellows program at the City College of New York in terms of how the collaborations she studied led professors to design and implement improved assignments in their courses. One of the keys to the success of the program, Soliday claims, involves the apprenticeship model, wherein new fellows are paired with veteran fellows for their first semester. Only after spending a substantial amount of time watching their mentors interact with professors—witnessing their mentors trying to grasp the purposes and motives of their professorial partners—were these WAC apprentices ready to face the complexities of negotiating pedagogical authority themselves (also see Robinson and Hall). Cautionary tales (like the ones presented in Chapters Three and Four of this book) have also led writing fellow practitioners to attempt to devise some rules of thumb for best practices. Emily Hall and Bradley Hughes, in “Preparing Faculty, Professionalizing Fellows,” report on the same sorts of conflict in authority and trust discussed above with Lutes. They go on to detail the why’s and how’s of training and preparing both faculty and fellows for closer instructional partnerships, including one fellow’s comment that he or she was trained in “a non-directive conferencing style” (32).
But what, exactly, are the features of a “nondirective” conferencing style? Is it something that can be pinpointed and mapped? Is it something that can be learned and taught? And, importantly for this study, what useful connections might be drawn between directive/nondirective one-to-one tutoring and small-group peer response and other classroom-based activities?
Peer Writing Groups
And the interrelated pedagogical issues don’t get any less complicated as we turn now to writing groups—what I view as the crucial intersection between writing center, peer tutoring, and classroom pedagogies central to CBT. Influenced by the work of Bruffee, Donald Murray, Peter Elbow, Linda Flower and John Hayes, Anne Ruggles Gere, and Ann Berthoff, in Small Groups in Writing Workshops Robert Brooke, Ruth Mirtz, and Rick Evans attempted to illustrate how students learn the rules of written language in similar ways to how growing children learn oral language—through intensive interaction with both oral and written conversations with their peers and teachers. Marie Nelson’s work, soon after to be deemed the “studio” approach in the work of Rhonda Grego and Nancy Thompson, provides case studies that support Brooke, Mirtz, and Evans’s claims with compelling empirical evidence. For example, and especially pertinent to the case studies reported on in this book in Chapter Four, Nelson’s study of some 90 developmental and multicultural response groups identified consistent patterns of salutary development in students learning to write and instructors learning to teach. Student writers usually moved in an overwhelmingly predictable pattern from dependence on instructor authority, to interdependence on their fellow group members, ultimately to an internalized independence, confidence and trust in their own abilities (that they could then re-externalize for the benefit of their group mates). Nelson noted that this pattern emerged, and was substantially expedited, when the pedagogical attitudes and actions of the TA group facilitators started off more directive in their instruction and gradually relinquished instructional control (for a smaller, 2008, case study that supports Nelson’s findings see Launspach).
But as fast as scholars could publish their arguments urging the use of peer response groups, others began to question this somewhat pretty picture of collaboration. Donald Stewart, drawing on Isabel Briggs Myers, argued that people with different personality types will have more trouble collaborating well with each other. Brooke, Mirtz, and Evans, while ultimately arguing for the benefits of writing groups, also described potential drawbacks like students negotiating sensitive private/public writing issues with others, reconciling interdependent writing situations with other writing teachers and classes they’ve experienced that did not value peer-to-peer collaborative learning, or working with diverse peers or peers unlike themselves. In her 1992 “Collaboration Is Not Collaboration Is Not Collaboration” Harris, focusing on issues like experience and confidence, compares peer response groups and peer tutoring. She explains how tutoring offers the kind of individualized, nonjudgmental focus lacking in the classroom, while peer response is done in closer proximity to course guidelines and with practice in working with a variety of reviewers. She also raises some concerns. One problem involves how students might evaluate each other’s writing with a different set of standards than their teachers: “Students may likely be reinforcing each other’s abilities to write discourse for their peers, not for the academy—a sticky problem indeed, especially when teachers suggest that an appropriate audience for a particular paper might be the class itself” (379). Fifteen years later, Eric Paulson, Jonathan Alexander, and Sonya Armstrong report on a peer response study of fifteen first-year students. The researchers used eye-tracking software to study what students spend time on while reading and responding. The authors found that students spend much more time focused on later-order concerns (LOCs) like grammar and spelling than higher-order concerns (HOCs) like claim and organization, and were hesitant to provide detailed critique. While their study can be criticized due to the fact that the students in the study were responding to an outside text rather than a peer group member’s text, and none of the students had any training or experience in peer response, the findings echo Harris’s concerns regarding students’ abilities to provide useful response. Obviously, the issue here is student authority and confidence. If students have not been trained in the arts of peer response, how can they be expected to give adequate response when put into groups, especially if the student is a first-year or an otherwise inexperienced academic reader and writer? How can we help “our students experience and reap the benefits of both forms of collaboration?” Harris is curious to know (381).
Writing center and peer tutoring programs from Penn State at Berks, UW at Seattle, University of Connecticut at Storrs, and Southern Illinois University at Carbondale, among many others, have answered Wendy Bishop’s call from 1988 to be “willing to experiment” (124) with peer response group work. Tutors have been sent into classrooms to help move students toward meta-awareness of how to tutor each other. In effect, they become tutor trainers, coaching fellow students on strategies to employ while responding to a peer’s paper. But student anxiety around issues of plagiarism and autonomous originality are hard to dispel. Spigelman suggests that students need to know how the collaborative generation of ideas differs from plagiarism. If students can understand how and why authors appropriate ideas, they may be more willing to experiment with collaborative writing (“Ethics”). It follows, then, that tutors, who are adept at these collaborative writing negotiations, can direct fellow students toward understanding the difference. But as with all the issues we’ve been exploring so far, the issue of the appropriation of ideas is as Harris suggests a sticky one indeed. In another essay Spigelman, drawing on Nancy Grimm and Andrea Lunsford, comments on the desires of basic writers interacting with peer group leaders who look to the tutor as surrogate teacher (“Reconstructing”). She relates that no matter how hard the tutors tried to displace their roles as authority figures, the basic writers inevitably complained about not getting enough grammar instruction, or lack of explicit directions. While on the other hand, when a tutor tried to be more directive and teacherly, students resisted her efforts at control as well. Spigelman also relates how she experiences similar reactions from students. Her accounts, as with Lutes above, suggest that it is no easy task experimenting with and working toward restructuring authority in the writing classroom.
In the 2014 collection Peer Pressure, Peer Power: Theory and Practice in Peer Review and Response for the Writing Classroom (Corbett, LaFrance, and Decker) several essays attempt to provide answers to the authority and methods questions Harris and Spigelman raise. One of the recurring themes in the collection is the reevaluated role of the instructor in coaching peer review and response groups. Contributors like Kory Ching and Chris Gerben illustrate how instructors can take an active (directive) role in coaching students how to coach each other in small-group response sessions by actively modeling useful response strategies (also see Hoover). Ellen Carillo uses blogs and online discussions to encourage student conversation and collaborative critical thinking as an inventive, generative form of peer response. Carillo encourages students to question the nature of collaboration and to become more aware of the ways authors ethically participate in conversation as a form of inquiry. And Harris herself, in her afterword to the collection, offers in essence a revisit to her “Collaboration” essay. Like several other authors in the collection, Harris draws on writing center theory and practice, combined with classroom peer response practice, to speculate on how we just might be making some strides in working toward viable writing-center-inspired strategies for successful peer-to-peer reciprocal teaching and learning in writing classrooms. Ultimately, Harris’s summation of the collection, and her thoughtful extensions and suggestions, argue for a huge amount of preparation, practice, and follow-up when trying to make peer response groups work well, suggesting as E. Shelley Reid does, that perhaps peer review and response is the most promising collaborative practice we can deploy in the writing classroom. Harris realizes there are multiple ways of reaching this goal: “Whatever the path to getting students to recognize on their own that that they are going to have the opportunity to become more skilled writers, the goal—to help students see the value of peer review before they begin and then to actively engage in it—is the same” (281). Harris makes it clear that she believes a true team effort is involved in this process of getting students to collaboratively internalize (and externalize) the value of peer response, an effort that must actively involve student writers, instructors, and—as often as possible—peer tutors.
It is important that those practicing peer review and response come to understand just how useful the intellectual and social skills exercised and developed—through the reciprocity between reader/writer, tutor/student writer, tutor/instructor—really can be. Isabel Thompson et al. agree with Harris’s sentiments in their call for studies that compare and contrast the language of writing groups with the language of one-to-one tutorials. This line of inquiry would be especially useful for CBT, since tutors are often involved in working with student writers in peer response groups, usually in the classroom. I attempt exactly this sort of comparative analysis in Chapters Two, Three, and Four.
Supplemental Instruction
The final branch of peer education we will look at, supplemental instruction (SI), is given the least amount of coverage in peer education literature, though it purports to serve a quarter million students across the country each academic term (Arendale). SI draws theoretically from learning theory in cognitive and developmental educational psychology. There are four key participants in the SI program: the SI leader, the SI supervisor, the students, and the faculty instructor. The SI leader attends training before classes start, attends the targeted classes, takes notes, does homework, and reads all assigned materials. Leaders conduct at least three to five SI sessions each week, choose and employ appropriate session strategies, support faculty, meet with their SI supervisor regularly, and assist their SI supervisor in training other SI leaders (Hurley, Jacobs, and Gilbert). SI leaders work to help students break down complex information into smaller parts; they try to help students see the cause/effect relationship between study habits and strategies and resulting performances; and because they are often in the same class each day, and doing the same work as the student, they need to be good performance models. SI leaders try to help students use prior knowledge to learn new knowledge, and encourage cognitive conflict by pointing out problems in their understandings of information (Hurley, Jacobs, and Gilbert; Ender and Newton). In this sense, supplemental instruction also demands that SI leaders, much like tutors, negotiate when to be more directive or nondirective in their pedagogical support.
Spigelman and Grobman report on the links between supplemental instruction and composition courses. Drawing on the work of Gary Hafer, they write: “Hafer argues that it is a common misperception that one-to-one tutoring works better than SI in composition courses, which are not identified as high-risk courses and which are thought by those outside the discipline to be void of ‘content’” (236). In Hafer’s view, the goals of SI have more in common with collaborative composition pedagogy than do one-to-one tutorials in the writing center. Weighing what one-to-ones offer against the potential benefits of other peer tutoring models makes for interesting comparative considerations and instructional choices. Several of the case studies I’ve been involved in over the years, including ones reported on in this book, incorporate several prominent features of the SI model, including tutors attending class on a daily basis, doing the course readings, and meeting with student writers outside of class. (For more on SI, visit the website for the International Center for SI housed at the University of Missouri at Kansas City.)
The rest of this book sets up and presents case studies of my experimentation over the years with hybridizing these parent genres that make up CBT. I illustrate the many ups and downs of diverse people with different personalities and views of “best practices” in teaching and learning to write trying to get along, trying to understand how they might best contribute to a synergistic instructional partnership while attempting to realize the best ways to impart the most useful knowledge to developing student writers. Synergy (from the ancient Greek synergia or syn- “together” and ergon “work”) involves identifying the best of what each contributing collaborator has to offer. As we’ve been touching on, one of the most crucial considerations tutors—indeed any teacher—must face in any instructional situation is the issue of how directive versus how nondirective they can, should or choose to be and, importantly, how this intertwines with the issue of authority and trust negotiation. Kenneth Burke writes, “we might well keep in mind that a speaker persuades an audience by the use of stylistic identifications ... So, there is no chance of our keeping apart the meanings of persuasion, identification (‘consubstantiality’) and communication (the nature of rhetoric as ‘addressed’)” (Rhetoric 46). This book aims to focus our attention on the importance of these interpersonal “stylistic identifications,” urging teachers and tutors to consider the true balancing act demanded by the directive/nondirective pedagogical continuum.
Chapter Summaries
Chapter One takes a careful look at the ongoing rhetoric of directive and nondirective tutoring strategies. This issue has a long history in writing center literature, and it brings us to the heart of some of one-to-one teachers’ most closely-held beliefs and practices. I examine the conflict inherent when tutors are brought into the tighter instructional orbit that is CBT and how practitioners have dealt with thorny issues of instructional authority and role negotiation when moving between center and classroom. Carefully analyzing the literature on peer tutoring, I argue that CBT contexts demand a close reconsideration of our typically nondirective, hands-off approach to tutoring, that tutors involved in CBT, especially with developmental students, can better serve (and be better served) if they are encouraged to broaden their instructional repertoires, if directors and coordinators cultivate a more flexible notion of what it means to tutor in the writing center, in the classroom, and in between. I begin exploring, however, the complications involved in this idealistic notion of instructional flexibility.
Chapter Two offers the multi-method, RAD-research case study methods and methodology employed in Chapters Three and Four. I begin to offer some of the back-story on the dramatic effects the widely varying level of interaction in and out of the classroom—as well as variables like tutor experience, training, identity, and personality—ended up having on participants’ actions in and perceptions of their CBT experiences. I detail methods of analysis for the one-to-one tutorials in Chapter Three and the peer response groups in Chapter Four.
Chapter Three presents and analyzes the one-to-one tutorials that occurred with four teams from the UW. Audio-recorded one-to-one transcripts are the central focus of analysis used to explore the question: What rhetorical and linguistic patterns surface during one-to-one tutorials, and what relationship (if any) do participant interactions and various CBT contexts have on these one-to-ones? I carefully analyze how the discourse features of tutorial transcripts such as number of words spoken, references to instructors and assignment prompts, overlaps, discourse markers, pauses and silences, and qualifiers hint at larger rhetorical issues involved in the drama of closer collaboration. I attempt to triangulate and enrich these linguistic analyses comparatively with the points of view of participants.
Chapter Four provides the findings and analysis of CBT partnerships from the UW and SCSU engaged in small-group peer review and response facilitation and other classroom interactions. While field notes from in-class observations offer my views, I also present interviews and journal excerpts from the participants and report on feedback from students to provide more perspectives on these interactions. This chapter points to some illuminating findings that, when compared to the studies of one-to-one tutorials from the UW, offer readers an intimate look at the myriad choices practitioners have with CBT—and the teaching and learning implications involved for all participants.
In the Conclusion I discuss implications of this study’s findings in relation to my primary research question: How can what we know about peer tutoring one-to-one and in small groups—especially the implications of directive and nondirective tutoring strategies and methods—inform our work with students in writing centers and other tutoring programs, as well as classrooms? I begin with the implications of how this question played out in all aspects of the case studies, from the participants’ points of view, to the one-to-one tutorial transcript analyses and interpretations, and finally to the peer response sessions and other classroom activities I observed and followed up on. Finally, I open the conclusion to implications for tutor education and development, program building, and I suggest choices for teaching, learning and researching writing including interconnections between one-to-one and small-group teaching and learning.
I don’t want students to perceive me as having all the answers, yet very often I do have the answers they are looking for, and the students themselves know it ... What sort of message are we sending to the students we tutor if they perceive us as withholding information vital to their academic success?
– Elizabeth Boquet, “Intellectual Tug-of-War”
Familiar memes—don’t write on the paper, don’t speak more than the student-writer, ask non-directive questions—get passed among cohorts of writing tutors as gospel before they even interact with writers in an everyday setting.
– Anne Ellen Geller, Michele Eodice, Frankie Condon, Meg Carroll, and Elizabeth Boquet
Arguably, no single issue in writing center and peer tutoring theory and practice gets at the heart of one-to-one, small-group, or classroom instruction as much as the question of directive/nondirective teaching methods. The question of how and when tutors (or instructors) should use techniques like open-ended (“Socratic”) questioning versus just telling students what they think they should do, or what the tutor might do themselves if they were in the tutee’s position, raises issues involving tutor authority, tutor-tutee (and even instructor) trust, tutor training (or “tutor education” or “apprenticing”), and writing process versus product—all relevant concerns in any writing instruction situation. However, when the rhetorical situation of typical one-to-one tutoring changes—when tutors, students, and instructors are brought into tighter instructional orbits—so too must typical instructional methods and styles be reconsidered. Further, add into the equation the fact that student writers, tutors, and instructors might have various levels of experience and preparation, and different personalities, and things get even more dramatically complicated. This is the case in situations involving the closer collaboration of CBT programs. How can tutors and tutor coaches (directors, coordinators) adjust their typical tutoring and tutor training styles and methods to accommodate these sorts of multifaceted rhetorical situations?
In their 2008 College English essay, Elizabeth Boquet and Neal Lerner draw on critiques of Stephen North to argue that we need to be more open to experiencing two-way streets in theory, research, and practice—in short, instructional learning—between writing classrooms and writing centers. Lerner argues further in his 2009 The Idea of a Writing Laboratory that writing centers can be much more than physical places or removed sites for tutoring. Writing center theory and practice can branch out into many methods and forms for pedagogical experimentation. He writes, “Rather than a classroom teacher acting as expert witness, jury, and judge in evaluation of students’ writing, writing centers have long offered themselves as nonevaluative, relatively safe places, as experiments in the teaching of writing” (15). But what happens when a tutor travels from that relatively “safe” center to the forbidding land of the “expert” classroom teacher? My experimental research and practice on CBT since 2000 have led me to important questions this chapter addresses: How and in what ways can what we know about the rhetoric of peer tutoring styles and methods from writing fellows, supplemental instruction, writing groups, and teaching one-to-one be applied and studied? Then how and why might we share these findings with all teachers of writing? The rhetoric of the directive/nondirective instructional continuum—so often debated, refined, and even resisted in writing center and other peer tutoring circles—offers much in terms of teaching philosophy, holds great practical and critical promise, and needs to be shared with all teachers of writing. In many ways, the focus on how participants negotiate the directive/nondirective continuum offers immense teaching, learning, and communicative implications. Like Harry Denny, I am interested not only in the pragmatics of peer-to-peer teaching and learning, but also in what these pragmatics might reveal in terms of the bodies (minds) and politics of the various social actors in these collaborative learning ecologies. How and why can purposefully withholding knowledge from a student—in order to activate their own critical and creative powers—affect the teaching-learning dynamic? When and in what ways can simply telling students or tutors what they should or must do be more or less beneficial?
Much has been written on the nondirective or minimalist tutoring approach (see, for example, Ashton-Jones; Brooks; Harris, Teaching One-to-One) and subsequent critiques of this approach (see Clark “Collaboration,” “Perspectives”; Clark and Healy; Shamoon and Burns; Grimm; Boquet “Intellectual,” Noise; Carino; Geller et al.; Corbett, “Tutoring,” “Negotiating”; compare to Gillespie and Lerner’s notion of control/flexibility). I will begin by analyzing several key texts that comment on and critique general assumptions and influential arguments surrounding this debate, including Irene Clark and Dave Healy’s 1996 “Are Writing Centers Ethical?” and Peter Carino’s 2003 “Power and Authority in Peer Tutoring.” I will move on to review texts that use empirical case-study research in their arguments that CBT contexts demand a close reconsideration of the typically nondirective, hands-off approach to tutoring. Finally, foregrounding the case studies in Chapters Two-Four, I will begin to illustrate in this chapter why—precisely because the idealistic notion of “instructional flexibility” is easier said than done—arguments involving tutoring style, via the directive/nondirective continuum, offer important analytical lenses with which to scrutinize the “play of differences” that occur in various CBT situations.
“Really Useful Knowledge”: The Directive/Nondirective Instructional Continuum and Power and Authority
When diving deeply into a discussion of directive/nondirective tutoring, we soon begin to realize that—as in any educational situation—we are dealing not just with methodological-instructional, but also political and personal, issues. Clark and Healy track the history of the nondirective (or noninterventionist) approach in the “orthodox writing center.” They describe how in the 1970s and early 1980s, in response to open admissions, writing centers began to replace grammar drills and skills with what would become the HOCs/LOCs approach to tutoring. Along with this new instructional focus, however, came a concurrent concern—fear of plagiarism. The fear of plagiarism goes hand-in-hand with the issue of intellectual property rights—or students’ rights and ownership of their own ideas and writing—a political and personal issue pertinent to tutors, students, instructors, and program directors. As we mentioned in the Introduction, this “concern with avoiding plagiarism, coupled with the second-class and frequently precarious status of writing centers within the university hierarchy, generated a set of defensive strategies aimed at warding off the suspicions of those in traditional humanities departments” like English (Clark and Healy 245; also see Nelson and Garner). For Clark and Healy, the resulting restraint on tutor method soon took on the practical and theoretical power of a moral imperative. They describe how influential essays from Evelyn Ashton-Jones, Jeff Brooks, and Thomas Thompson cemented the hands-off approach to one-to-one instruction.
Ashton-Jones juxtaposed the “Socratic dialogue” to the “directive” mode of tutoring. Drawing on Tom Hawkins, she characterized the directive tutor as “shaman, guru, or mentor,” while Socratic tutors are given the more co-inquisitive label “architects and partners.” Practitioners were left to wonder if it could be a good or bad thing if a tutor-tutee relationship develops to the point that the tutee looks to the tutor as somewhat of a “mentor.” (And in CBT situations, especially, as we will discuss below, programs are designed with this question in mind since peer mentorship occurs on a regular basis.) Brooks, in arguing that students must take ownership of their texts, associated directive tutors with editors, good editors perhaps sometimes, but editors nonetheless. Brooks goes so far as to advise that, if a tutee seems unwilling to take an active role in the tutorial, tutors simply mimic the tutee’s unengaged attitude and action. And Thompson urged tutors to avoid having a pen in hand during tutorials. In the name of the Socratic method, he also urged tutors “not to tell students what a passage means or give students a particular word to complete a thought” (Clark and Healy 246).
In an ironic twist, Clark and Healy note that “by being so careful not to infringe on other’s turf—the writer’s, the teacher’s, the department’s, the institution’s—the writing center has been party to its own marginality and silencing” (254). In answer to this perceived marginality and silencing, they offer essays by Marilyn Cooper, Shamoon and Burns, and Muriel Harris, as well as the work of Lev Vygotsky, that value the pedagogical feasibility of modeling and imitation and an epistemological continuum that moves writers outside their texts to some degree. Cooper, for example, in her close reading of Brooks, argues that tutors who focus too intently on students’ papers may be missing out on important chances to help students with broader writing issues, like how the course is going overall or how to approach assignments in creative ways. For Cooper, and others, a strict minimalist approach forecloses the act of negotiation—the “really useful knowledge”—that could take place in a one-to-one, negotiation that takes both the tutor’s and the tutee’s goals into consideration.
Peter Carino urges writing center personnel to reconsider the importance of the too-often vilified directive tutor. Like Clark and Healy, he sets up for critique the idea of interventionist tutoring as anathema to the strict open-ended questioning style advocated by Brooks. Carino then discusses Shamoon and Burns’s “A Critique of Pure Tutoring” in which the authors explain how master-apprentice relationships function in fruitful and directive ways for art and music students. In the master-apprentice relationship, the master models and the apprentice learns by imitation, from the authority of the master artist, the tricks of the trade. In that essay, Shamoon and Burns also suggest the importance of imitation to classical-rhetorical education. Reflecting on Clark and Healy’s essay, Carino concurs that nondirective approaches are defense mechanisms resulting from the marginalized history of writing centers within the university and their subsequent paranoia over plagiarism. Further, Carino applauds how Nancy Grimm advocates the directive approach so that traditionally marginalized or under-prepared students are not barred from access to mainstream academic culture. (I will continue this discussion below.)
Ultimately, Carino suggests a dialectical approach to the directive/nondirective dilemma, implying that directive tutoring and hierarchical tutoring are not synonymous:
In short, a nonhierarchical environment does not depend on blind commitment to nondirective tutoring methods. Instead, tutors should be taught to recognize where the power and authority lie in any given tutorial, when and to what degree they have them, when and to what degree the student has them, and when and to what degree they are absent in any given tutorial. (109)
He offers a seemingly simple equation for when to be direct and when to be nondirect: the more knowledge the student holds, the more nondirective we should be; the less knowledge the student holds, the more directive we should be. (Suggesting the roles specialist and generalist tutors might also play.) He wisely, affectively qualifies this suggestion, however, by stating that shyer but more knowledgeable students might need a combination of directive prodding to urge them to take responsibility for their work and nondirective questioning to encourage them to share their knowledge, while chattier but less knowledgeable students could benefit from nondirective questions to help curb hasty, misdirected enthusiasm, and directive warnings when they are making obviously disastrous moves. Unfortunately, Carino does not also characterize what to do when the tutor holds more or less subject matter or rhetorical knowledge, or when the tutor is shyer or chattier. And this is where current research in CBT can help explore this question. And this is also where the terms directive/nondirective can be compared to other closely related pedagogical concepts like control/flexibility (Gillespie and Lerner). Interestingly, Carino points to the dichotomy of power and authority that has historically existed between the classroom and the center, complementing and amplifying Clark and Healy’s notion of fear of plagiarism. Because centers have a “safe house” image compared to the hierarchical, grade-crazed image of the classroom, writing center practitioners feel the need to promote a nondirective approach, which they view as sharply contrasting the directive, dominating, imposing nature of the classroom. This attitude has led to some pretty confining dictums—like tutors not holding a pen or pencil in their hand—that can unintentionally hinder helpful teaching and learning.
A minimalist philosophy may sometimes actually cause tutors to (un)intentionally withhold valuable knowledge from students. Muriel Harris recounted in 1992 how a student rated her as “not very effective” on a tutor evaluation because she was trying to be a good minimalist tutor; the student viewed her as ineffective, explaining, “she just sat there while I had to find my own answers” (379). Although we could certainly question the student’s perceptions, the fact that one of writing centers’ most valuable players, admittedly, might sometimes drop the ball prompts us to continue questioning the writing center’s dualized directive/nondirective philosophies. Yet if we do a double-take on Harris’s views on this issue, we see that she has always seen both approaches as important. Clark and Healy point to an earlier work of Harris’s from College English in 1983, “Modeling: A Process Method of Teaching,” in which Harris advances a much more directive approach. In describing the benefits of intervening substantially in students’ writing processes, Harris asks “what better way is there to convince students that writing is a process that requires effort, thought, time, and persistence than to go through all that writing, scratching out, rewriting, and revising with and for our students?” (qtd. in Clark and Healy 251; emphasis added). Harris, early on, like Shamoon and Burns, understood the value and importance of the ancient rhetorical tradition of modeling and imitation in the service of invention and style. In order to perform such moves as “scratching out” and “rewriting,” tutors must have some confidence in their ability (the theoretical and practical feasibility and kairotic timeliness involved) to offer more directive and traditionally “risky” and potentially intrusive suggestions on issues of substance and style.
“What Sort of Message Are We Sending?” Toward a Humble/Smart Balance
The issues presented above—questions of tutor authority, role negotiation, and instructional method and style—while immediately relevant for CBT, also parallel important, somewhat more general, scholarship in writing center theory and practice and student-teacher writing conferences, scholarship with methodological strengths and weaknesses that reflect our field’s developing understanding over time. Laurel Black’s Between Talk and Teaching offers a rigorous examination of the assumptions teachers bring to one-to-one conferences with their students, assumptions applicable for all teachers of writing. Black opens her book with the concept of conferences as one-to-one conversations, which may or may not use the student’s text as the prime mover of conversation. Black points to Lad Tobin’s view of the genealogy of conferencing from “first generation” teacher-focused to “second generation” student-focused conferences in which both leave all agency in the hands of the teacher. What Tobin, and in turn Black, look to is a “third generation” of conferencing “that takes into account the dynamic relationship aspects of each writing conference: the student’s relationship to the text, the teacher’s relationship to the text, and the student’s and teacher’s relationship to each other” through conversation (Tobin qtd. in Black 16). But Black goes on to suggest the complexity of this ideal notion of conferencing when she writes: “Warning bells should go off as we read about conference ‘conversation’” (21). Black’s work on writing conferences offers a rich spectrum of both the larger rhetorical issues of power and authority in conferencing and attention to micro linguistic features and cues. The strength of Black’s work lies in the acknowledgment and exploration of the complexity of conferences as a speech genre in which, as in one-to-one tutorials, a delicate balance is sought between conversational talk and teaching talk. Black sees the complex interplay between the cognitive, social, and linguistic as contributing forces—to varying degrees, at different locations, in specific moments—to the unstable speech genre that is one-to-one conferencing (echoing to some degree our discussion of the generic “play of differences” in CBT from the Introduction). Yet in Black’s analysis of conference transcripts we do not hear the students’ point of view, nor the instructors’, nor do we get any real sense of what the pre-conference relationship between the students and the instructors is like.
The work of Nancy Grimm, which also displays a concern for the cognitive, social, and linguistic forces in one-to-one teaching, has made a major impact on the ways writing center professionals (re)view their theory and practice. Yet, like Black’s, her research falls short of providing the surrounding contextual information necessary to make full use of her findings. Her conceptualization of directive/nondirective tutoring can also be held up to scrutiny. In her concise yet theoretically sophisticated 1999 Good Intentions, Grimm juxtaposes the implications of Brian Street’s autonomous and ideological models of literacy to the work we do. Arguing that our traditional hands-off approach to one-to-one instruction is often misguided, she writes:
Writing center tutors are supposed to use a nondirective pedagogy to help students “discover” what they want to say. These approaches protect the status quo and withhold insider knowledge, inadvertently keeping students from nonmainstream cultures on the sidelines, making them guess about what the mainstream culture expects or frustrating them into less productive attitudes. These approaches enact the belief that what is expected is natural behavior rather than culturally specific performance. (31)
Like Cooper five years earlier, Grimm calls for writing center practitioners to move away from a focus on the paper to the cultural and ideological work of literacy: negotiating assignment sheets to see if there might be any room for student creativity or even resistance; making students aware of multiple ways of approaching writing tasks and situations; making tacit academic understandings explicit; rethinking tired admonishments regarding what we cannot do when tutoring one-to-one. Grimm illustrates what a tough job this really is, though, in her analysis of Anne DiPardo’s “‘Whispers of Coming and Going’: Lessons from Fannie.”
While Grimm, drawing on Street and Delpit, forcefully argues for the importance of moving past our infatuation with nondirective tutoring, she may be inadvertently pointing to why it is also perhaps just as important for us to continue to value some of our nondirective strategies—suggesting the truly subtle nature of this issue. DiPardo’s essay describes and analyzes the tutorial relationship between Morgan, an African-American tutor, and Fannie, a Navajo student who just passed her basic writing course and is attempting the required composition course. Both DiPardo and Grimm speculate that Morgan’s repeated attempts to prod and push Fannie toward what Morgan believed was realization or progress only pushed Fannie away from any productive insights. The tutorial transcript presented by DiPardo illustrates how Morgan dominated the conversation, often interrupting Fannie (though unfortunately we do not get micro-level analysis like how long pauses were after questions, etc.), and how Morgan appropriated the conversation, attempting to move Fannie toward her idea of a normal academic essay. While this approach may ostensibly resemble the directive approach advocated by Grimm, Lisa Delpit, and others, what it leads Grimm and DiPardo to conclude is that tutors must be encouraged to practice “authentic listening”: “As DiPardo’s study illustrates, without authentic listening, the very programs designed to address social inequality inadvertently reproduce it, ‘unresolved tensions tugged continually at the fabric of institutional good intentions’ (DiPardo 1992, 126)” (Grimm 69; also see Clark “Perspectives,” 46). Ironically, listening, or allowing the student to talk a little more during one-to-ones to enable them to supposedly be more in control of the tutorial discourse, is one of—perhaps the most fundamental of—nondirective strategies.
Carol Severino, drawing on Ede and Lunsford for her 1992 essay “Rhetorically Analyzing Collaborations,” associates directive tutoring with hierarchical collaboration and nondirective tutoring with dialogic collaboration (recall Carino’s words above). But her analysis of two conferences from two different tutors with the same student points perhaps more emphatically toward our assumptions of what the ideal tutoring session is supposed to sound like. The student is Joe, an older African American returning student taking a class entitled “Race and Ethnicity in Our Families and Lives.” Severino analyzes the transcripts of sessions between Joe and Henry, a high school teacher in his thirties working on his MA in English, and Joe and Eddy, a younger freshman with less teaching experience. As in the sessions that DiPardo and Grimm analyze above, Henry uses his teacherly authority, from the very start of the conference, by asking closed or leading questions that control the flow of the rest of the tutorial. In contrast, during the session between Joe and Eddy, Eddy starts off right away asking Joe open-ended questions like how he feels about the paper, and where he wants to go from there. For Severino, this sets a more conversational, peer-like tone that carries through the rest of the tutorial. Although obviously privileging the nondirective/dialogic approach, Severino concludes by asserting that it is difficult to say which of the above sessions was necessarily “better.” The problem with Severino’s analysis, however, is that we do not get a clear enough picture of exactly what was going on during the tutorial. As with Fannie above, we do not know how Joe felt about the interaction. Perhaps he found greater value in Henry’s more directive approach. Further, we do not know what stage of the drafting process Joe is in during either tutorial (information that might have contributed to the level of directive or nondirective instruction). Nonetheless, the value in Severino’s overall argument involves her urging those who prepare tutors to avoid prescriptive tutoring dictums that do not take into consideration varying assignment tasks, rhetorical situations, and student personalities and goals—the “always” and “don’t” that can close off avenues for authentic listening and conversation.
Four more recent case studies, while also having their limits, inch us closer toward building feasible theoretical frames and methods for analyzing the deployment of—and pedagogical implications of—directive/nondirective instructional strategies. Susan Murphy’s 2006 study of tutorials uses Goffman’s theory of self-presentation and Brown and Levinson’s theory of politeness to frame her argument that analyzing discourse strategies of self-presentation can provide clues to how tutors enact nondirective strategies. Her discourse analysis of four tutorials illustrates various graduate student tutors alternately imposing and displacing authority. One graduate tutor, working with a student on a novel the tutor is unfamiliar with, attempts to perhaps “save face” by aligning himself with the field of English, in the process using jargon like “flashback,” “rhetoric,” and “foreshadowing,” and even going so far as to urge the student to “Go read some criticism. Develop some ideas about the book” (75, 77). On the other hand, another graduate tutor, while also displaying an alignment with the field through the use of the pronoun “we,” alternately distances herself from literary critic experts and aligns herself more closely with the student writer with the pronoun “they.” Murphy argues this sort of desire to save both her own face and the face of the student writer “seems to be a result of a desire to both claim and reject the authority that comes with her role as graduate student, teacher, and consultant,” requiring being smart and humble simultaneously (78). In their 2012 study of tutorials, Jamie White-Farnham, Jeremiah Dyehouse, and Bryna Finer report similar issues with authority and trust in their attempts to map “facilitative” and “directive” tutoring strategies. The authors note the directive strategy of using tag questions like “right?” at the end of sentences to keep students “on board” as well as, as in Murphy’s study, alignment with the authority of the instructor and the field with a phrase like “often, when teachers say that, they do mean ... ” (5). Yet the authors also report having trouble definitively mapping what they call facilitative tutoring.
Two 2009 articles by Isabel Thompson and colleagues provide both breadth and depth of analyses that might help further differentiate and qualify between more directive and nondirective tutoring strategies. Thompson et al.’s “Examining Our Lore” offers a study of 4,078 conference surveys from Auburn University’s English Center to ascertain how “various conference attributes related to writing center mandates affected tutors’ and students’ conference satisfaction” (87-88). Twenty-six of the tutors were graduate students and 16 were undergraduates; 3,330 conferences were conducted with students enrolled in freshman composition courses. The researchers’ cogent findings—based on compelling statistical data—support Carino’s and others’ assertions from above regarding the complex nature of traversing the directive/nondirective continuum. Students reported high satisfaction with tutorials when they felt the tutors were answering their questions; students also reported satisfaction when they felt comfortable during the conference. Despite the fact that tutors were trained in nondirective approaches, tutors reported that the more directive they were, the more satisfied they were with the conference. How much tutors talked (or conversationally “dominated” the session) or how closely tutors acted like “peers” had little statistical effect on student satisfaction. Thompson et al. ultimately support arguments from Clark (“Perspectives”) that, in practice, tutors are unable to avoid being directive, and students, in fact, appreciate this directiveness. Yet, the authors are careful to qualify this claim when they assert:
Neither our survey nor other empirical research about writing center conferences suggests totally discarding nondirective tutoring strategies. Students’ efforts, feelings of being challenged, willingness to take risks, and independence are vital for their engagement ... tutoring strategies have been found most satisfactory when they are flexibly used—when they vary between assuring students’ comfort and ownership of their writing and answering students’ questions to improve writing quality. (96)
This concern with balancing tutorial methods, attending both to coaching students toward strategies for improving their papers (or their writing in general) and to the pedagogically affective, is given a more focused look by Thompson in another 2009 article.
Thompson’s highly detailed microanalysis of one successful tutorial session, “Scaffolding in the Writing Center,” uses the frame of scaffolding to investigate how analysis of both verbal and nonverbal cues might help further contextualize directive and nondirective (or facilitative) tutoring strategies. Thompson’s analysis complements and enriches Severino’s, discussed above, by illustrating how a peer undergraduate tutor starts off a session using more typically recognized nondirective strategies, like Eddy, to get the student writer involved and taking ownership of the paper. (Thompson characterizes the tutor and student writer as follows: “The tutor is an experienced and well-respected undergraduate male, a senior majoring in psychology, the student is a female freshman” [425].) But she also details how, as the session progresses, the tutor feels freer to deploy, like Henry, more directive strategies. What results is a more balanced humble/smart session, like the one reported by Murphy above, that both the tutor and tutee rated “highly successful.” Especially promising in regard to mapping/categorizing directive and nondirective strategies is Thompson’s frame of scaffolding. She divides this frame into three categories: one, direct instruction, and two that—for the sake of analysis—we might consider more facilitative or nondirective: cognitive scaffolding and motivational scaffolding. Thompson details why developing trust and comfort requires an active session where verbal cues like backchannels, pauses, and overlaps hint at the “subtle persuasion” involved in moving closer to the fruitful intersubjectivity of the coveted successful tutorial. While the directive instruction category is obviously more in line with directive strategies—giving explanations, answers or examples, or posing leading questions—and cognitive scaffolding sounds very much like nondirective strategies—demonstrating, giving part of an answer or asking an open-ended question then “fading out”—I would argue that the third category, motivational scaffolding—using humor, providing positive or negative feedback, evincing sympathy and empathy—could be considered a nuanced form of nondirective tutoring, perhaps one requiring the sort of facilitative “authentic listening” called for by DiPardo and Grimm. Visually, we might imagine directive/nondirective strategies overlapping at any given moment during tutorials, as in Figure 2.
Figure 2: Overlapping reality of directive/nondirective strategies.
Applying these methodological insights to CBT settings, I want to pose the same “higher risk/higher yield” question that Boquet asks in Noise from the Writing Center of any tutor: “How might I encourage this tutor to operate on the edge of his or her expertise?” (81). Then I want to analyze what happens when tutors must negotiate this challenging new role. What happens when a less-experienced or less-“trained” or perhaps even over-trained tutor attempts to work with a student writer? What happens when tutors—with varying levels of experience or training, with different personalities, with different notions of how they are “supposed” to act—are connected much more closely with the students and instructor of the course?
“They Like to Be Told What to Do”: Negotiating Directive/Nondirective Tutoring Assumptions When Moving between the Writing Center and the Developmental Writing Classroom
Above we discussed how tricky it can be to balance directive/nondirective instructional methods when teaching one-to-one. Others who have reported on their experiences as small-group peer response facilitators (often done in writing classrooms rather than at the center) have echoed these and other concerns—while also expounding on the benefits of small-group tutoring, including opening avenues for closer writing classroom/center connections and teaching students how to better tutor (peer review) each other’s work (Spilman; Lawfer; Shaperenko; Corbett “Bringing,” “Role”; Decker “Diplomatic”). In my earlier work on CBT, I reflect on my experiences visiting classrooms in the late 1990s and early 2000s. In the brief 2002 “The Role of the Emissary” I narrate two visits to classrooms, one where I simply discuss the services of the writing center, and the other where I actually sit in on a peer review and response session. My argument in that early essay calls for writing center tutors to boldly travel into classrooms with full confidence in their abilities to share what they’ve learned about learning to write. But the thinly-veiled attitude I dance in that essay was motivated by a belief touched on in the Introduction of this book: the scapegoating attitude that writing center and one-to-one tutoring is a better teaching-learning paradigm than classroom instruction. In the On Location chapter “Bringing the Noise,” I narrate idealistic scenes involving students, tutors, and instructors getting along famously in the classroom—while illustrating how tutors can embrace more directive instructional roles that can complement more nondirective strategies during peer response facilitation (also see Decker “Diplomatic”; Anderson and Murphy; Gilewicz). I also describe how something as simple as having a tutor visit to talk about her personal experiences with academic writing can offer interpersonal points of identification and connection between tutors and students, students and the academy, and the writing center and the classroom. These sorts of experiences in traversing into classrooms, into the turf of a classroom instructor to listen to fellow students and to talk with them about whatever concerned them most at that time, would provide the impetus for further practice and future experiences. But others in the same collection offer a more conflicting view of what can occur when making the leap between center and classroom—especially when tutors trained in nondirective instructional approaches bring this more hands-off philosophy to the developmental writing classroom.
Barbara Liu and Holly Mandes, though also celebrating overall success in CBT initiatives, describe how certain adjustments had to be made to the typical nondirective approach when tutors were moved into the classroom. The authors explain the transition of moving tutors from the writing center into the classroom for their developmental writing course, English 100Plus at Eastern Connecticut State University in terms of three problematic assumptions: writers usually come to the center of their own accord; the typical one-to-one tutorial is supposed to focus on the writer not the paper; and the writing tutor’s role is of learner, listener, and questioning conversation partner, not expert teacher. Liu and Mandes would soon come to realize that “the nonintrusive, writing center(ed) model in which Eastern’s tutors had been trained did not always meet the needs of the students with whom they were working in the classrooms” (88). Yet the authors maintain that less-prepared writers are often more apprehensive than mainstream student writers because they are aware of being, or have at least been identified by others as, somehow remedial. When tutors are circulating in the classroom, in their zeal to help, they can all too easily “invade the writer’s comfort zone” treading “a thin line between help and invasion” (91). In building a relationship based on trust, tutors come to learn that the demands of on-location tutoring and mentoring may cause them to have to reevaluate and redeploy some of the most cherished pedagogical strategies learned during their tutor training.
Like Liu and Mandes, Melissa Nicolas also points to the fact that this arrangement requires students to meet with tutors, rather than the typically optional writing center meeting. In her “Cautionary Tale” we see the difficulty in tutors moving from a more writing center-like setting to an instructional setting that demands that they move beyond the role of the emissary to closer communicative contact and negotiation with teachers and students in the classroom. This new arrangement puts tutors in a high-risk situation where they may be struggling to apply what they have been taught from orthodox writing center theory and practice to this new and different instructional context. Nicolas reports how this caused authority and role confusion in the tutors. One tutor explained how, even though she tried to downplay her authority while working with students, still “they just always seem to look at me or toward me ... They like to be told what to do ... It’s kind of confusing. It’s sort of like a balancing act where you try not to be in it too much but try to be there, but it’s like you’re not there. It’s hard” (120). The hard fact is that when tutors are in the classroom in the capacity of a helper or assistant of some sort it will look to students as if they must be there for a reason—the reason, of course, being to share some knowledge or skill that the students may not necessarily possess. And just as classroom teachers learn to balance levels of control and directiveness, questioning and listening, or just letting students run with ideas, tutors and students develop a heightened sense of these instructional moves. Here, again, students’ desire for what they see as what they need, and tutors’ willingness either to oblige them or not—or tutors’ desire to live up to the theoretical ideal tutor—do not present peer tutors with an easy choice. It is the double-bind that underscores each move the tutor makes whether tutoring one-to-one or collaborating in the classroom.
Finally, we must also factor into the equation that so many developmental classrooms are filled with diverse students, and diverse tutors. In relation to my treatment of Grimm, DiPardo, and Severino above, Lisa Delpit insists that “there are codes or rules for participating in power; that is, there is a ‘culture of power’” (“Silenced” 568) that students and teachers must negotiate. Delpit believes that those who hold power are often least aware of it, while those without it are fully aware of their marginal subject positions. Delpit further claims that explicit, direct teaching of these codes or rules enables those outside the margins of power to gain access to the resources needed for positions of power (569). Drawing on a study of cross-cultural interactions by John Gumpertz, Delpit suggests that efforts toward nondirective, power-displacing instruction may actually be less helpful for some students than more direct, power-acknowledging methods. Others (Mann; Neff; Corbett “Learning”) claim that students with various learning disabilities (LDs) require tutors who are willing to take a more active, interventionist role in these students’ learning to write and writing-to-learn performances. These questions of the connections between instructional method and tutor, student, and even instructor identity will resurface repeatedly in the following chapters.
Renegotiating Our Best Intentions
This review of the directive/nondirective literature begins to illustrate why scholars in writing center and peer tutoring theory and practice urge practitioners to keep our pedagogy flexible and attuned to the protean nature of peer collaborative interaction. In short, tutors need to be aware of the rhetorical complexity that any given tutorial or any given visit to a classroom can entail. This complexity means that tutor coaches should stay wary of the all-too-tempting rules of thumb and “familiar memes” Geller et al. caution against in the opening quotes that can lead to Black’s “reductive binaries,” unintentionally cementing strained social relationships between tutors, tutees, and instructors. Writing center and peer tutoring people are proud of our history of caring and focusing attention on the individual learner. But in our quest to always be the good guys, the guide on the side rather than the sage on the page, have we alienated some outside our centered family circles? Harking back to the parent genres in the Introduction, in dramatistic terms, Burke writes that the scapegoat is “in effect a kind of ‘bad parent,’” and that “the alienating of inequities from the self to the scapegoat amounts to a rebirth of the self. In brief, it would promise a conversion to a new principle of motivation—and when such a transformation is conceived in terms of the familial and the substantial, it amounts to a change of parentage” (Grammar 407). Writing center practitioners—like many writing teachers—have perhaps played the blame game too often and for too long, resulting in lopsided theory and practice. Whether blaming the classroom/center discursive goat—plagiarism, teacher assignments, grades—or the directive/nondirective instructional goat, writing center scholarship grapples with ways practitioners might continue to reevaluate and revise our best intentions. CBT theory and practice seeks to reclaim the consubstantiality of the writing center and the writing classroom: moving the idea of a writing center dramatically from physical place to theoretical and practical space, enlarging and enriching the scope of teaching one-to-one and in small groups, and creating a larger arena for rhetorical investigation, reconsideration, and reevaluation.
We can reevaluate the importance of the classical-rhetorical idea of modeling and imitation in the service of invention, arrangement, style, and delivery—in short, in learning how to learn and teach writing. Adding the idea of modeling, a willingness to sometimes take a more hands-on approach to tutoring, can complement a tutor’s instructional repertoire. Tutor coaches (be they directors, or more experienced co-workers) can offer suggestions—or models, or examples—of when it might be more or less appropriate to be more or less directive or nondirective. Something as fundamental as asking a student at the beginning of a tutorial what phase their draft is in, a question that neither Healy and Clark nor Carino address, could go a long way toward setting up just how hands on or off a tutor can be (or how much researchers can surmise from tutorial transcripts). We can (and often do) realize that sometimes it’s all right to give a pointed suggestion, to offer an idea for a subtopic, to give explicit direction on how to cite MLA or APA sources, (in later drafts) to offer examples of alternate wording and sentence constructions, in short, to practice along a continuum of instructional choices both collaborative and empowering, allowing for alternate moments of interpersonal and methodological collegiality and agency-building. Once we feel that our best intentions more closely match our potential for best practices, we can find ways to further question and more rigorously examine these reconsidered notions.
But how well will all my effusive rhetoric above regarding directive and nondirective tutoring—“tutoring on the edge of expertise,” cultivating instructional “flexibility” or a “smart/humble” balance—hold up under both macro-contextual and micro-analytical scrutiny? In the remaining chapters I will undertake one of the most rigorous examinations of in-the-field practices of tutors, instructors, students, and coordinators engaging in the close collaboration of CBT ever attempted. The same questions concerning directive/nondirective tutoring philosophy and strategy and CBT we’ve been touching on in this chapter will resurface, but in much greater depth and detail: How do tutors in various CBT scenarios deal with walking the fine line between collaboration and plagiarism, between intervention and invasion? How does more intimate knowledge of course content, teacher expectations, and/or closer interpersonal connections between teachers and students affect the ways tutors deploy directive and nondirective strategies? How does tutor training in directive/nondirective strategies and philosophies hinder or enhance tutors’ interactions with student writers? And returning to that central question from the introduction: How can what we know about peer tutoring one-to-one and in small groups—especially the implications of directive and nondirective tutoring strategies and methods brought to light in my and others’ case studies—inform our work with students in writing centers and other tutoring programs?
The above scenarios reported in the literature begin to clearly illustrate just how complicated things can get when you combine various instructional aspects of the parent genres, as well as different participant personalities, goals, and instructional experiences and backgrounds. These scenarios take us closer to an understanding of how authority, trust and directive/nondirective method negotiation intertwine to either deter or promote successful CBT partnerships. But in the next chapter I will begin to offer readers a set of methods and methodological tools that will enable a much deeper multi-perspectival, triangulated view of how these pedagogical issues played out in my case-study research. While scholars caution practitioners and experimenters that tutors may need to be more or less directive when interacting more closely with instructors and courses, my study suggests just how tricky this notion really is. I’ll report on tutors whose performances shattered my expectations: tutors with much experience who talked too much and listened too little; conversely, tutors who held back so much that students felt like these tutors weren’t doing all they could to help, or tutors with very little experience identifying—and making meaningful connections—with teachers and fellow students.
If talk, conversation, and teaching are at the center of a writing center’s praxis and pedagogy, then it only makes sense that we should continue using every technique in our methodological tool kit to study and understand them.
– Michael Pemberton
For a classroom-based tutoring program to succeed in providing a multivoiced forum for discussion of student writing, the assessment of that program itself needs to be multivoiced.
– Jane Cogie, Dawn Janke, Teresa Joy Kramer, and Chad Simpson
My current work in CBT follows Burke’s methodological imperative in an attempt to “use all that there is to use” (Philosophy 23) in case study research of CBT. The research methods employed are designed to be multi-method (Liggett, Jordan, and Price; Corbett “Using”) and RAD, or replicable, aggregate, and data-supported (Haswell; Driscoll and Perdue). Thompson et al. arguably hint at a difference between the typical writing center tutorial and the types of teaching and learning that can occur in CBT when they claim, “It is likely that students come to writing centers to improve the grades on their essays and that they expect to feel comfortable during conferences. However, they do not come to writing centers to form peer relationships with tutors” (96). As we’ve touched upon in this book, one of the more potentially positive occurrences afforded by the closer classroom/center interaction is the tighter interpersonal relationships that can form among the participants, including student writers and tutors. Yet this closer connection is precisely why our methods and methodology must be more nuanced. As the rhetorical situations for participants become more seemingly over-determined, our tools of analysis must become even more fine-grained and triangulated to pinpoint and make transparent any possibly determinable variables.
An important action this multi-method triangulation allows is the ability to identify rhetorical and linguistic patterns between one-to-one tutorials and peer response group facilitation. As mentioned in relation to peer response groups in the Introduction, Thompson et al. posit that, in order to get a closer understanding of the way dialogic collaboration is reciprocally realized across tutorial practices, it would be edifying to compare the discourse features of one-to-one tutoring with peer response sessions. This is an especially important consideration for CBT and the complicating play of differences that occur as peer tutors attempt to facilitate peer response groups in the classroom. In the following sections (and again in Chapter Four), I begin my attempt to address what Thompson et al. call for in terms of the comparative analyses of the discourse of one-to-one tutorials and peer response facilitation we started reviewing in the previous chapters.
Data Collection Instruments
In order to get multiple points of view from the case study participants, I employed several data collection instruments; Table 2-1 explains these instruments as well as why these particular tools were used.
In the following sections, I describe the settings the participants were recruited from and operated in, and introduce the participants for each respective team. I also spend some time explaining in greater depth my methods and methodologies for analyzing tutorial transcripts and peer response groups for the sessions detailed in Chapters Three and Four. In this extended methodological frame, I outline some of the strengths and weaknesses of other studies of tutorial transcripts and explain steps I’ve taken to account for these strengths and weaknesses in my own methods and methodologies.
Table 2-1. Data collection instruments

Instrument: End-of-term interviews with all writing instructors (graduate TAs) and tutors
Purpose and function: Intended to ascertain the background experiences of tutors and TAs, to get an overall sense of their perceptions of how their interactions went, to get an idea of what they perceived as their roles, and to see what suggestions or recommendations they might have for better practice. Designed also to get a sense from TAs and tutors of how they felt the other participants in their groups, including students, reacted, and how this interaction compared to their previous experiences with tutors or tutoring (see Appendix A for interview questions).

Instrument: Hand-written field notes of in-class peer response sessions
Purpose and function: To collect and identify data for both micro-level linguistic analyses and analyses of broader rhetorical frameworks in small-group peer response sessions, and to allow for comparative analyses with one-to-one tutorials (see Categories and Codes for Analyzing Tutorial Transcripts and Small-Group Peer Response Sessions, and Figure 4, below).

Instrument: End-of-term student questionnaires (see Appendix B)
Purpose and function: Designed to get an overall idea of how students felt about their in-class and one-to-one interactions with their tutors, and to gather students’ comparative impressions of this experience in relation to other tutoring experiences they’ve had.

Instrument: End-of-term student course evaluations
Purpose and function: Intended to gather a sense of what students thought about the course and instructor (and tutor) as a whole.

Instrument: Tutor notes and journals
Purpose and function: Intended to supplement and enrich interview and field note data, and to ascertain more personalistic observations and reflections.

Instrument: Course materials, including assignments and syllabi
Purpose and function: Intended to provide context for analyses of one-to-one audio recordings, field observations, interviews, and tutor notes/journals.

Instrument: Audio-recordings of 36 one-to-one tutoring sessions (from the UW teams)
Purpose and function: Intended to gather data to micro-analyze linguistic features and cues of one-to-one tutorials, in relation to broader rhetorical frameworks. Also intended to collect contextual and linguistic data that can be used to comparatively analyze small-group peer response sessions (see Categories and Codes for Analyzing Tutorial Transcripts and Small-Group Peer Response Sessions, and Figure 4, below).
Settings
In order to start building a clear-as-possible picture of the context surrounding the four UW and two SCSU teams involved at the time these case studies were conducted, I will explain the two UW writing center settings that the tutors hailed from and worked at, as well as the context of how the SCSU tutors were recruited.
The first, the English Department Writing Center (EWC), I am quite familiar with, having worked there as an assistant director from 2000-2008. During the time these case studies were conducted, the EWC offered a tutor training course in writing center theory and practice, English 474, unique on the UW campus. In this five-credit course tutors are introduced to the fundamentals of one-to-one instruction. They read from a course packet that includes over twenty-two influential essays and book excerpts; they write argumentative essays on related topics; and they interact in a collaborative classroom environment that revolves around class discussion of readings and peer response workshops of each other's writing. Tutors are required to observe two one-to-one sessions conducted by experienced fellow tutors before they begin tutoring themselves. Sessions are allotted up to fifty minutes. Once they arrive in the Center to begin practicing what they've been studying, tutors find themselves surrounded by, and easily within listening distance of, other new and experienced tutors conducting tutorials. Often tutors begin to talk informally about everything under the sun between sessions (see Decker, "Academic (Un)Seriousness"). While tutors read essays that describe both directive and nondirective approaches (for example Brooks; Clark, "Collaboration"), the "Mission Statement" for the Center, at the time of this study posted conspicuously on the wall for all to read, leaned much more toward the minimalist approach. Figure 3, a chart excerpted directly from the end of the statement, details what tutors "will and will not" do. From my experience, the EWC served primarily mainstream students, many from the UW's mainstream FYC course English 131. All of the tutors I had worked with in CBT initiatives in the past had come from the EWC, including three of the tutors in this study: Megan, Sam, and Julian. Though I had experimented widely with having tutors attached directly to my composition classrooms on a regular basis, the majority of our CBT efforts involved sending tutors into classrooms for briefer peer review and response facilitations (Corbett, "Bringing," "The Role"; Corbett and Guerra; Corbett and LaFrance; Decker "Diplomatic"; Cogie et al.).
Tutors will collaborate in ...
Brainstorming, outlining, and discovering pre-writing strategies
Developing and clarifying thesis statements
Developing organizational strategies
Recognizing where elaboration or clearer transitions are needed
Determining how and when to document outside sources
Recognizing when more research is needed to support claims
Tutors will not ...
Generate ideas
Suggest or reword thesis
Suggest an organization
Provide vocabulary
Analyze reading materials
Supply content
Figure 3: English Department Writing Center Mission Statement excerpt
The second UW setting, the Instructional Center (IC), a division of the Office of Minority Affairs, provides tutorial services for a variety of courses and subjects (including a writing center) designed for "at risk" students at the UW. I first came into contact with the IC writing center while teaching for the Educational Opportunity Program (EOP), a program that coordinates classes like the two-quarter stretch FYC course, English 104/105, jointly with the Expository Writing Program (EWP). During a visit to the IC in 2003 I spoke with representatives there about the CBT initiatives we had been working on at the EWC. This piqued their interest and began a relationship that included IC tutors visiting my EOP classrooms to help with peer response. I approached IC administrators again for this study, and they found a tutor, Madeleine, willing to participate. I also volunteered as a peer tutor for the IC writing center during Spring quarter 2007. During this experience I saw the professional tutors who work for the IC working side-by-side with undergraduate tutors, a couple of graduate tutors, and a couple of volunteer tutors. Interestingly, at the time of this study the IC did not provide new tutors formal training in writing center theory and practice. New tutors were offered the option of observing sessions with more experienced tutors, if they so desired; instead of receiving structured and systematic training, they learned on the job, through trial and error, and by listening to, observing, and talking with experienced tutors. In contrast to the EWC, there is no real time limit to sessions, so one-to-ones can easily go over an hour; students can work on their writing and work with tutors intermittently. Like the EWC, however, the IC space is rather small; tutorial sessions are conducted well within hearing distance of each other. Finally, in contrast to the conspicuously posted "Mission Statement" of the EWC, the IC has no such mission statement for its writing center; rather than following methodological mandates, IC writing tutors learn very much by trial and error.
The participants from the third setting at SCSU, in contrast to the UW tutors, did not originate from a writing center. When I took the job as co-coordinator of the Composition Program at SCSU, New Haven, in the fall of 2008, I was immediately confronted with more of the same sort of developmental learners I had worked with at the UW: students with lower SAT scores, first-generation and working-class students, more students with learning disabilities; in short, students who needed and could benefit from more focused individualized instructional support. Fresh from my CBT experiences and studies at the UW, I wanted to follow up on what I believed were some of the more successful components of those studies. I felt that something unique and full of potential had taken place, especially with Madeleine's Team Three, detailed below. So I asked Mya, one of our top instructors of our basic writing course English 110, if she would be interested in participating in this study, and if she had a student in her current course she might recommend as a course-based tutor for her subsequent course. She asked the student she had in mind, Gina, and Gina agreed. What followed were two back-to-back terms that illustrate what can happen when continuity between participants in CBT occurs. None of the SCSU teams received any special training to prepare them for their roles as course-based tutors. Rather, they all came out of Mya's 110 courses, which emphasized writing process pedagogies like multiple drafts and peer review and response sessions.
I have lingered on this discussion of settings in order to emphasize the importance of the preparatory environment (preexisting context) that underlies the one-to-one and classroom-based tutoring that occurred in the UW and SCSU case studies. I will touch on possible implications of the differences in these settings' instructional practices and (where applicable) philosophies in later sections.
Participants
In this section I will introduce the six teams involved in the case studies, the first four from the UW, and the fifth and sixth from SCSU. Readers will begin to get to know the participants and the respective CBT models they worked together in. Later, in Chapters Three and Four where applicable, participants will detail their impressions of how their interactions with students and with each other played out in one-to-one tutorials and classroom peer response sessions and other in-class collaborations. The two models employed were the in-class model and the writing advisor model. Essentially, the in-class model had tutors embedded in the classroom on a day-to-day basis, while the writing advisor model involved tutors much less in the classroom. Details for each TA/tutor team, respectively, are provided below.
Team One: Julian and Anne
Julian, from Team One, is a white, senior English/Comparative Literature major who had worked in the EWC for two years, including a quarter as an in-class tutor with me. Julian commented minimally on papers and met one-to-one with students at the EWC. He also attended two in-class peer reviews. Of all the tutors, he had the most experience tutoring one-to-one and in the classroom. Having worked with Julian very closely for two years prior to this study, I found him outspoken and highly intelligent.
Anne is a white, third year TA in English Language and Rhetoric. She had one year of teaching experience with first-years prior to this pairing. She had extensive training and experience, about five years, teaching one-to-one for the EWC and for CLUE (the Center for Learning and Undergraduate Enrichment, another campus student-support service that houses an evening writing center). She had also presented at several national and regional writing center and Composition and Rhetoric conferences.
Table 2-2. Team One descriptions
The Model: Writing Advisor Tutor. Tutor commented on papers and met one-to-one with students at the English Department Writing Center (EWC). He attended two in-class peer response sessions.
The Tutor: Julian is a white, senior English/Comparative Literature major who had worked in the EWC for two years, including a prior quarter as an in-class tutor. He had the most experience tutoring one-to-one and in the classroom of all the tutors.
The Instructor: Anne is a white, third year TA in Language and Rhetoric. She had taught two years of traditional FYC prior to this pairing. She had extensive training and experience in tutoring one-to-one for the EWC.
Team Two: Megan and Laura
Team Two includes Megan and Laura. Megan is a white, senior Communications/English major who had been tutoring at the EWC for two years; she attended class every day and worked one-to-one with students at the EWC. She was planning to pursue K-12 teaching. Like all the EWC tutors (except Sam), she took a five-credit course in writing center theory and practice. Megan did not consider herself the strongest writer. During her interview she described how struggling with an English class, from which she eventually earned a 4.0, persuaded her to apply to the EWC. Having worked with her an entire summer, I always found Megan very nice (often "bubbly") and approachable.
Laura is a second year TA and Chinese International student, focusing on postcolonial studies and Asian-American literature. She had one year of teaching experience with first-years prior to this pairing.
Table 2-3: Team Two descriptions
The Model: In-Class Tutor. Tutor attended class every day and worked one-to-one with students at the English Department Writing Center (EWC).
The Tutor: Megan is a white, senior Communications/English major with two years tutoring in the EWC. She planned to pursue K-12 teaching. Like all the EWC tutors (except Sam) she took a 5-credit course in writing center theory and practice.
The Instructor: Laura is a second year, Chinese international grad student and TA in English Literature. She had one year of teaching experience in a traditional first-year composition (FYC) classroom prior to this pairing.
Team Three: Madeleine and Sydney
Due to her schedule, Madeleine, from Team Three, attended class every other day and worked one-to-one with students at the IC. Madeleine is an African-American sophomore English (creative writing) major who had worked for the IC only one quarter prior to this pairing. She enjoys performing spoken-word poetry. She did not receive any formal training in one-to-one teaching prior to this pairing. She attended a college prep high school and participated in Running Start. Prior to this study, I was not familiar with Madeleine's personality or tutoring patterns.
Sydney, a woman of color (African-American) herself, is a second year TA studying nineteenth- and twentieth-century African-American literature. She had about five years of teaching and tutoring experience with high school students and one year of teaching with first-years prior to this pairing. On her wish-list, Sydney had written me a note asking, if at all possible, for a tutor of color. Serendipity worked in her favor in the form of Madeleine, who, I would later learn, was the only IC tutor willing to participate in this study.
Table 2-4: Team Three descriptions
The Model: In-Class Tutor. Tutor attended class every other day and worked one-to-one with students at her Center.
The Tutor: Madeleine is an African-American, sophomore creative writing major who had tutored one quarter for her Center prior to this pairing. She did not receive any formal training in teaching one-to-one.
The Instructor: Sydney is a second year, African-American TA in English Literature. She had several years of teaching experience with high school students and one year teaching traditional FYC prior to this pairing.
Team Four: Sam and Sarah
Team Four includes Samantha (Sam) and Sarah. Sam commented on student papers and met one-to-one with students at the EWC. She attended class only once to introduce herself. Sam is a white, senior double English/Biology major who had worked as a tutor for the EWC and for the Dance Program for a total of two years. Although she is the only EWC tutor who did not take the five-credit training course, she had read several articles on writing center theory and practice and co-authored an article on group tutoring and personal statements. When I originally interviewed Sam, she seemed very shy and reserved. I was actually slightly concerned that she might be too reserved for peer tutoring (more on this later).
Sarah is a Latina, second year TA, focusing on nineteenth-century American literature. She had one year of teaching experience with first-years prior to this pairing. She also had two years’ experience tutoring ESL students.
Table 2-5: Team Four descriptions
The Model: Writing Advisor Tutor. Tutor commented on student papers and met one-to-one with students at her Center. She visited class only once to introduce herself.
The Tutor: Samantha (Sam) is a white, senior English/Biology major who had worked in her Center for a total of two years. She had read several articles on writing center theory and practice prior to tutoring.
The Instructor: Sarah is a second year, Latina TA in English Literature. She had one year of teaching experience in a traditional FYC classroom prior to this pairing, and two years of experience teaching ESL.
Team Five: Gina and Mya
Gina, from Team Five, is a white sophomore who plans on majoring in nursing. She attended class every day, did all of the course readings, and gave comments to some student papers outside of class. She said she felt her experiences as a student in English 110 with Mya, the term just prior to this one, prepared her well for her role as a course-based tutor because Mya worked with students just as much on general skills for succeeding in college as on their writing skills. She admitted that, while previous peer response experience helped prepare her for her tutoring role, she tried harder when helping students with peer response for this course than she did as a “student” in the previous course. As readers will hear more about in Chapter Four, Gina worked closely in the class with an autistic student, Max. Having a learning disability (LD) herself, dyslexia, she understood that Max might need a little more help and attention.
The instructor, Mya, is a white adjunct instructor with about ten years teaching college first-year composition, two years teaching high school, and fifteen years as a home educator prior to this case study. She said she already had a "bond" with Gina, since they had been together in English 110. Mya then let me know about Gina's LD: she was aware that Gina had trouble understanding and comprehending what she read.
Table 2-6: Team Five descriptions
The Model: In-Class Tutor. Tutor attended class every day, did all of the course readings, and gave comments to some student papers outside of class.
The Tutor: Gina is a white sophomore who plans on majoring in Nursing. She had taken English 110 with the instructor, Mya, the previous Fall term. She had no previous experience tutoring or teaching.
The Instructor: Mya is a white, adjunct instructor with about ten years teaching college first-year composition, two years teaching high school, and fifteen years as a home educator prior to this case study.
Team Six: Kim, Penny, and Jake
Team Six enjoyed a unique partnership wherein one instructor, Jake, was assigned an in-class tutor for each of his two sections, Kim and Penny. As mentioned above, like Gina, both tutors had been students in Mya's 110 course the previous term. Kim is a Latina freshman who planned on majoring in nursing. She had no previous experience tutoring or teaching. Interestingly, Kim had been in the same peer response group as Max, the autistic student whom Gina from Team Five above worked closely with. Penny is a white, freshman Education major. She also had no previous experience tutoring or teaching.
Jake is a white, adjunct instructor with about five years teaching college first-year composition prior to this case study, including several developmental writing courses. Jake talked about how Kim and Penny had different personalities and approaches, Kim more outgoing and vociferous and Penny more reserved. He said that he actually encouraged this diversity, “letting students [tutors] find their own way.”
Table 2-7: Team Six descriptions
The Model: In-Class Tutors. Tutors attended class every day, and gave comments to several student papers outside of class.
The Tutors: Kim is a Latina freshman who plans on majoring in Nursing. She had taken English 110 with the instructor, Mya, the previous Fall term. She had no previous experience tutoring or teaching. Penny is a white, freshman Education major. She had taken English 110 with the instructor, Mya, the previous Fall term. She had no previous experience tutoring or teaching.
The Instructor: Jake is a white, adjunct instructor with about five years teaching college first-year composition prior to this case study, including several developmental writing courses.
Categories and Codes for Analyzing Tutorial Transcripts and Small-Group Peer Response Sessions
As described above, the one-to-one tutorials presented in Chapter Three were audio-recorded. The data for the small-group sessions reported in Chapter Four are from my field notes. Tutors, instructors, and students were solicited for their impressions of both. And all course materials, including assignments, were collected for this study. Drawing largely on Black, Harris, Gillespie and Lerner, and Gilewicz and Thonus, I use rhetorical and conversational discourse analyses as the primary methods for coding and analyzing one-to-one tutorial transcripts. These analyses offer broader rhetorical frameworks as well as ways to analyze linguistic features and cues that can also be used to analyze small-group peer response sessions. Attention to how the linguistic features of tutorial transcripts hint at larger rhetorical issues complicates and enriches Grice's "tacit assumption of cooperation," outlined in his conversational maxims of quality, quantity, manner, and relevance (see Blum-Kulka 39-40), in relation to CBT. As Carolyn Walker and David Elias's frequently cited analysis of teacher-student conference transcripts argued (and, in relation to tutor-tutee conferences, as Thompson et al.'s study supports), the quantity or ratio of student to teacher talk did not affect either participant's perceptions of the conference's effectiveness. What this suggests is that even though writing center practitioners talk much about the value of getting students to do most of the talking, students themselves often tacitly assume that teachers or tutors will do most or much of the talking, and if they do not, the students' expectations might be disrupted.
Harris’s “Why Writers Need Writing Tutors” provides an overarching rhetorical framework for how tutors can help writers. Tutors can: (1) encourage student independence in collaborative talk; (2) assist students with metacognitive acquisition of strategic knowledge; (3) assist with knowledge of how to interpret, translate, and apply assignments and teacher comments; and (4) assist with affective concerns. In Teaching One-to-One Harris offers seminal analyses of tutorials from Roger Garrison and Donald Murray, as well as tutors (though these tutors are not categorized as peer or professional or graduate students). These transcript analyses offer a useful overview of directive and nondirective methods, ways tutors help students acquire writing strategies, techniques for active listening (including listening for student affective concerns), and how questions can be used in various ways with different effects.
Gillespie and Lerner supply further analysis from tutorials, though most of the tutorial transcripts they analyze are between undergraduate writers and graduate tutors. They extend many of Harris's findings, especially in regard to the complex way various questioning techniques and strategies affect the control and flexibility of any given tutorial. In asserting that "questions aren't necessarily a nondirective form of tutoring" (112), their analyses of tutorial transcripts reveal content-clarifying questions, three types of open-ended questions (follow-up, descriptive meta-analysis, and speculative), as well as directive questions that lead tutors away from the conversation advocated by most writing center scholars and toward appropriation of one-to-one tutorials. (Thompson and Mackiewicz, though, offer an important caveat. In their study of questions used by experienced tutors in 11 one-to-one conferences, the authors found that "it is not possible to describe a 'good' question outside of the context in which it occurs, and even in context, the effects of questions are difficult to determine" [61].) One of the most useful suggestions the authors make involves note-taking as an important aspect of tutorials. They advise tutors to read the entire paper before offering any suggestions, taking careful notes so that students can walk away with a transcript of what happened. Otherwise, the authors explain, much of what went on during the conversation will be lost, tutors may make unnecessary comments, and tutors may be too controlling or directive during the session (also see Harris, Teaching 108).
But both Harris and Gillespie and Lerner, because their goal is to train tutors who are often beginners, fall short of pushing the analysis of transcripts to the micro-linguistic level. Black and Gilewicz and Thonus offer discourse analyses of conference and tutorial transcripts that can help link the macro-rhetorical issues to the micro-linguistic features and cues of one-to-ones. Like Harris, and Gillespie and Lerner, Black pays careful attention to the issue of directive and nondirective conferencing strategies (also drawing on Garrison and Murray). Black takes the idea of typical classroom discourse, characterized by initiation-response-evaluation, an arguably directive form of instruction (see Cazden 30-59), and shows how it makes its way, often unintentionally, into conference talk. Importantly, Black applies both conversation and critical discourse analysis to the examination of one-to-one conferences. Black also explores how interruptions, backchanneling, fillers, and words like "you know" can control and coerce students, "subtly forcing another speaker into a cognitive relationship that becomes a linguistic relationship that marks and cements the social relationship" (47). Like Black, Gilewicz and Thonus pay attention to pauses, backchannels, and fillers. And like Harris and Gillespie and Lerner, they are sensitive to the way questions can be used to encourage or discourage conversation. The authors take us a step further, however, in their breakdown of fillers into backchannels, minimal responses, and tag questions; their attention to pauses; and, especially relevant to this study, their subdividing of overlaps into interruptions, joint productions, and main channel overlaps. (Joint productions occur when one speaker finishes another speaker's words or phrases. Main channel overlaps happen when speakers utter words or phrases simultaneously.) For example, the authors claim that "joint productions, more than interruptions or main channel overlaps, represent movement toward greater solidarity and collaboration" (36), rather than leaving all control in the hands of the tutor.
Yet, while offering important micro-level sociolinguistic analyses, both Black and Gilewicz and Thonus also fall short by not providing enough contextual information that could help readers make better sense of, or provide more of their own interpretations of, the authors' research findings, including why tutors or teachers may be more or less directive in a given tutorial or conference. My attempts to triangulate data, to account for Erving Goffman's "wider world of structures and positions" (193) via interviews and follow-ups, transcriptions, and student questionnaires, are efforts to attend to larger CBT contextual factors. These factors become especially important when attempting analyses of small-group tutorials.
Several elements of the analytical frame for one-to-ones discussed above also apply to small-group peer response sessions (Figure 4). All four of Harris's categories for how tutors can help writers can be highly useful as an overarching macro-frame. The use of various sorts of questions, overlaps, fillers, and the frequency and length of pauses can help in the comparative micro-analyses of one-to-ones and small-group tutoring. Especially promising, as well as slightly problematic, is Teagan Decker's idea of the "meta-tutor," a concept that provides a conceptual and analytical bridge between one-to-one and small-group tutoring and peer response. She claims that tutors leading small-group response sessions should "become meta-tutors, encouraging students to tutor each other. In this capacity, tutors are not doing what they would be doing in a one-on-one conference in the writing center, but rather they are showing students how to do it. Their role, then, does change, but at the same time remains consistent" ("Diplomatic" 27). As Decker explains, this role is different from the ones tutors typically engage in at the center. In a one-to-one setting tutors need only share what they can about the writing process, while meta-tutoring requires a level of metacognition that enables a tutor to teach students how to do what they do, but without seeming as if the tutor is withholding important information. Coaching students how to coach each other requires tutors to agilely balance directive and nondirective strategies. We will see in Chapter Four how this notion of the meta-tutor played out with the teams. But, first, I will turn our focus toward the balancing acts involved in the one-to-one tutorials from the UW teams.
Figure 4: Macro- and micro-heuristic for coding, analyzing, and comparing one-to-one transcripts and in-class peer response field notes
If writers are learning how to think about their writing based upon the conversations we have with them in writing center sessions, then our examination of those conversations can reveal the issues and challenges of learning to write in college and how writers learn to overcome them.
– Paula Gillespie and Neal Lerner
It’s easy enough to think that once the door to that tutoring room is closed, it’s only you and the writer, but the many forces swirling outside that room have not gone away.
– Paula Gillespie and Neal Lerner
What I learned from analyzing transcripts of my conferences is how great a distance lay between my image and my words, my goals and my practice.
– Laurel Johnson Black
By the time I was ready to design the case studies presented in this chapter and in Chapter Four, I had already conducted several preliminary studies at the UW. For example, at the 2005 International Writing Research Conference in Santa Barbara, I presented the findings of a comparative study of tutors in Dance. I analyzed the tutorial transcripts of sessions between students in Dance and me (then a graduate student and assistant writing center director), a freshman undeclared major tutor, and a senior Dance/Russian double major tutor. The term prior to this study, the freshman tutor had apprenticed with me. I modeled for her and encouraged her to practice a more nondirective approach, centered on open-ended questions. While I likewise encouraged the Dance major tutor to use a similar approach, she did not have the benefit of a quarter's-worth of practice before the study. My findings echo Severino's from Chapter One, and Thompson and Mackiewicz's study, regarding the use of open-ended questions to help students mentally work through their ideas and establish a more conversational tone in the tutorials. As with Severino's study, the freshman tutor and I had great success with Dance majors in our frequent use of nondirective, open-ended questions, while the senior Dance major was either at a loss for what to do or resorted to simply telling her peers what she thought they should do, which resulted in the tutor doing almost all of the talking during her session. This study, among others, made me very curious about the notion of "peer." It made me question just how important tutorial method really is when tutoring one-to-one. Would any tutor attempting to use a nondirective approach conduct successful tutorials? It also made me consider a related question: when, and under what circumstances, is a student ready to become a peer tutor?
In Chapter One I discussed how and why course-based tutors need, to some extent, to let go of some of the DOs and DON'Ts that can blind them to the needs of the individual student in a specific situation. But I also discussed how difficult this can be when participants are immersed in the swirl of pedagogical and interpersonal social drama involving the negotiation of the hybrid "play of differences" among and between the four parent genres. This chapter offers readers comparative micro-analyses of the 36 one-to-one tutorials conducted by the tutors from Teams One through Four. I will also compare the different accounts and points of view of participant experiences, gathered from interviews, with each other. Questions concerning directive/nondirective tutoring philosophy and strategy and CBT that we discussed in the previous chapters will resurface, but in much greater depth and detail in relation to one-to-one tutorials: How does more intimate knowledge of course content, teacher expectations, and/or closer interpersonal connections between teachers and students affect the ways tutors negotiate and deploy directive and nondirective strategies? How does tutor training in directive/nondirective strategies and philosophies hold up or play out during practice? How does negotiating the directive/nondirective continuum affect the quest for tutorly identity or reciprocal trust between participants? And what does all this add to our understanding of the rapport- and relationship-building that can occur in CBT, interpersonal relationships that can add value to our developing understanding of peer-to-peer teaching and learning? As I suggested in Chapter Two, it is relatively easy for researchers to pull tutorial transcripts, or field notes, or even memories out of context and interpret them in ways that best serve their rhetorical purposes. But it is another thing altogether to attempt to provide enough of the preexisting contexts, as well as micro-analyses, that might allow readers to adhere more closely to my interpretations. Or better yet, to encourage readers to perhaps more readily and freely draw some of their own interpretations and conclusions as well.
Transcription notations were developed ad hoc as I coded audio-recordings. They were used for ease of voice-recognition transcription and will hopefully allow for easy reading:
( ) indicates interlocutor’s fillers including minimal responses, backchannels, and tag questions.
CAPITALIZED WORDS indicate commentary by transcriber: For example, SEVEN SECOND PAUSE indicates length of pause; INTER indicates interruption; JOINTPROD indicates joint production (joint productions occur when one speaker finishes another speaker’s words or phrases); MAINCHANOVER indicates main channel overlap (main channel overlaps happen when speakers utter words or phrases simultaneously).
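As a supplementary illustration, the following is a minimal sketch of how counts like those reported in Tables 3-1 through 3-3 below could be tallied automatically from a transcript marked up with these notations. This is not the procedure behind the tables themselves; the file name, speaker labels, and marker list in the sketch are hypothetical rather than drawn from my actual files.

```python
# A minimal, illustrative sketch (hypothetical file name and speaker labels)
# of tallying transcription markers such as INTER, JOINTPROD, and
# MAINCHANOVER, plus per-speaker word counts, from a plain-text transcript
# coded with the notations described above.
import re
from collections import Counter

MARKERS = ["INTER", "JOINTPROD", "MAINCHANOVER"]  # assumed marker set

def tally_transcript(path):
    marker_counts = Counter()  # (speaker, marker) -> count
    word_counts = Counter()    # speaker -> approximate words spoken
    current_speaker = None
    with open(path, encoding="utf-8") as f:
        for raw in f:
            line = raw.strip()
            # Turns are assumed to open with an all-caps speaker label, e.g. "STUDENT:".
            match = re.match(r"^([A-Z]+):\s*(.*)$", line)
            if match:
                current_speaker, text = match.groups()
            else:
                text = line
            if current_speaker is None or not text:
                continue
            # Count coded markers within the current speaker's turn.
            for marker in MARKERS:
                marker_counts[(current_speaker, marker)] += text.count(marker)
            # Drop parenthetical fillers, marker codes, and all-caps transcriber
            # commentary (e.g. SEVEN SECOND PAUSE) before counting words.
            cleaned = re.sub(r"\([^)]*\)", " ", text)
            for marker in MARKERS:
                cleaned = cleaned.replace(marker, " ")
            tokens = [w for w in cleaned.split() if not (w.isupper() and len(w) > 1)]
            word_counts[current_speaker] += len(tokens)
    return marker_counts, word_counts

# Hypothetical usage:
# markers, words = tally_transcript("team_one_session_4.txt")
# print(words["JULIAN"], words["STUDENT"], markers[("JULIAN", "INTER")])
```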
As If She Hadn’t Said a Word: Julian’s Tutorials
Julian from Team One had relatively little in-class interaction with the students in the course. His six tutorials all took place in the eighth week (Table 3-1). They all revolved around a major paper in which students were asked to analyze and make an argument about the rhetoric, ideology, usefulness, and feasibility of one of the topics from George W. Bush's 2006 State of the Union Address, topics including the No Child Left Behind Act; the war in Iraq; and immigration, especially the US/Mexican border. His six sessions averaged 36 minutes, with the longest lasting 53 minutes and the shortest 22 minutes. Careful analysis helps illustrate Julian's most salient tutorial pattern: he talks too much while allowing relatively little student talk-time (or, concurrently, tutor listening-time). Couple this with the fact that he often talks a lot before he has heard the student's entire paper, and we are often left wondering why he is talking so much, often in the abstract, about the student's ideas and writing.
Table 3-1: Linguistic features and cues from Julian’s (Team One) one-to-one tutorials
Linguistic Features and Cues (Julian | Students)
# of Sessions: 6
Average Length (minutes): 36
Total Words Spoken: 15,049 | 5,835
Average # of Words Spoken per Minute: 70 | 27
Content-clarifying Questions: 20
Open-ended Questions: 93
Directive Questions: 8
References to TA: 14 | 13
References to Assignment Prompt: 12 | 1
Interruptions: 28 | 13
Main Channel Overlaps: 1 | 4
Joint Productions: 4 | 9
In session four, Julian works with a highly reticent student who is having obvious trouble negotiating the assignment. I quote this excerpt at length because it illustrates the extreme that Julian can go to in his verbosity, in his domination of the session:
STUDENT: So right here I’m giving stats on like the casualties and stuff like that UNDECIPHERABLE
JULIAN: Okay maybe try playing around actually using those somehow in the opening paragraph. I’m making this up but due to to the casualties increasing the true number is blah blah blah the increased cost the cost of filling out the increased security that’s where we should just maybe a framework early over to talk about what you’re talking about later so they’re sort of expecting it. Does that make sense (yeah) or am I just rambling? (No that makes)INTERso you guys talked about stakes a little bit right? (yeah) okay so READING STUDENT’S PAPER “although both the opposing and supporting sides make good points I would agree that we ultimately need to follow President Bush’s plan and increase our troops in Iraq war.” So what? I don’t think you quite got the stakes there. Like literally think about it as like a bet you’re making to read or write what is at stake like what are the stakes? Like in a poker game if you’re writing what we did if you’re wrong or like if President Bush is right and what if these things don’t happen? When we lose why is this so important? I may just off-the-cuff I’m not expecting why is it important? (um)INTER I’m not expecting you to write this sentence I’m just asking you why you picked this because it’s like you said it’s slightly more interesting sorta grabs your attention why like what’s important about what’s going on here?
STUDENT: SEVERAL UNDECIPHERABLE WORDS
JULIAN: Yeah okay just get specific with it. Do you think we need to follow President Bush’s plan because it affects everybody? How does it affect everybody? Like what’s at stake? Like security? Like what else? What are the issues at play?
STUDENT: I don’t know.
JULIAN: That’s cool. Just make a note for yourself or something. I just think about it because that’s the kind of stuff I read. That idea makes sense right? Just kick it around. One thing to do is if you’re totally like it’s not coming to you forget about it for a while because it looks like you’ve got a good structure of your body paragraphs right? And this last sentence suggested like talking a little about there are many clear facts like what are you talking about? See where you can end up in your conclusion like ultimately we’ll only need to listen to Bush ready to do this because these things are like like why do we need to? What is President Bush saying that we need to do these things for right? So he says that we need to do this because ABC right? Do we need to do for AB and C if he’s right if he’s correct right? Where Bush says what we need is for AB and C and you look at that and he is right we do need to do it for these reasons one of those can be your stakes because that’s what you’re talking about right? You just need to introduce them in a general way. I know I’m rambling but I’m trying to say that the topics are the central ideas of your body paragraphs. You can sort of like generalize about them; just sort of go back and connect them to claim. (yeah) FIVE SECOND PAUSE That’s got to actually do a lot. When I get stuck on opening paragraphs like I’ll just because I don’t know I don’t know how the writing process goes for you but you my intro paragraph takes me and my claim takes me about as much time as writing half of my body paragraphs, so sometimes I’ll write by pulling my quotes and I’ll write the central paragraphs and then in writing them I’ll be like oh I do have something to say in like my conclusion. I’ll I’ll go back and generalize to make a claim. (all right) I’m talking a lot like let me ask you a question. You guys have talked about rhetorical analysis right? So what do you think about the rhetorical analysis you have so far on Bush in this first and second paragraph?
STUDENT: I don’t know what rhetorical means.
JULIAN: Okay cool. Rhetoric right the word “rhetoric” is always a like it can mean writing or speech or presentational language. I don’t know who coined the term but the big famous historical thing that it comes from is like a Roman senator who taught about it TURNING TO ANOTHER TUTOR hey Kate who was the famous Roman guy who like is the famous rhetorician? Yeah yeah thank you this famous Roman guy named Cicero who was like a major slick politician. I forget what he did, but he basically swayed the populace just by like the power of his speech. So the idea is he is like not just what he says but like why do you think he said this exactly or what’s he trying to accomplish with it right? So rhetoric is like using language in specific ways to accomplish specific goals. (ok)
STUDENT: The way he’s saying it then he’s trying to keep you’re going into details and kinda like so that everybody can understand what he’s talking about and because he’s emotional in the words that he’s, I don’t know, try to explain like why Bush is basically explaining like why we need to think about sending more troops.
JULIAN: Totally, no, I think you’re right on the money; like I heard you saying like he’s avoiding numbers and statistics and he’s using emotional language. That’s awesome; that’s the kind of stuff you want to get explicit and say right? But this will do more to it, so much easier to figure out you like okay I know you totally got that in, their fears. He’s avoiding numbers and statistics but who is he using emotional language? Was he maybe using images that have a high impact? He talks about flying a plane at you but I heard that and I’m like I had mental images of 9/11 right? Of airplanes into the building. So you figure out what you think he’s doing right? And then you’ve got to posit some sort of argument about why you think he’s doing it. The first tactic I would try, because it might not be obvious at first, take a look at the issues you are talking about so if these are the issues you’ve identified that are applied to the Iraq war against the people are for right? Where are the issues involved with it? Monetary cost, other political things right? So how does what he says and the way he says it relate to these issues right? So like how is he positioning himself with his language upon the key issues of the debate that you’ve identified? That’s kind of what you are being asked to do for rhetorical analysis. Does that make sense? (yes) And you’ve got the hard part down; you figured out the issues that you are talking about and you figured out where your key passages are. Now you got to like sort of connect them and just sort of like a sentence or two about how and why these different sentences are helping him or not helping him. Maybe you think he messed up or maybe should have said this. Bush maybe the speechwriters and you find something in stuff like that (ok). TEN SECOND PAUSE Did you talk with Anne about the feasibility, usefulness and ideological implications? (Yeah) Did that make sense? (yeah) Cool, so could you take me to your like what your thoughts are so far on this?
STUDENT: Like put both the supporting and the refusal of the arguments for and some of what the opposing sides are saying some of the different ways we can go about it and how some of his things are feasible.
In this striking example, Julian, granted, is faced with an incommunicative student whose inability to grasp the assignment makes Julian's job tough. But notice how in that second interruption Julian asks a question and, just as the student begins to enunciate a reply, "um," Julian jumps in with more questions. Julian's next question meets with "I don't know," which spins him into more rambling. And he knows he is rambling, which causes him to actually slow down and ask a question that leads him to figure out the student does not understand the idea of rhetorical analysis. This seems promising. Yet rather than ask some questions that might get the student thinking, allow time for a response, and maybe even write some notes, notice how Julian will ask a question, then answer it himself (ironically, almost like a "rhetorical" question). Repeatedly, as evidenced in the above passage, and continuing throughout this session, Julian asks "does that make sense?" The student invariably responds curtly with "yes," "yeah," and "I think so." Julian also uses the tag question "right?" ubiquitously. When Julian finally asks what the student's overall thoughts are, the student replies with a scanty summary of what Julian had been proselytizing about. Obviously, it's not making as much sense as the student ostensibly lets on. Examples like this appear repeatedly in Julian's tutorial transcripts. We hear repeated instances of Julian asking a question, not waiting or allowing enough pause for a student response, then moving on to extended stretches of talk in which he tries hard to offer useful suggestions.
In his sixth tutorial, Julian’s actions suggest that though he is metacognitively aware of his rather “inauthentic” listening habit, the problem is indeed a deep one. At the very beginning of the session, the student says “she [Anne] gave us this peer review thingy.” As if she hadn’t said a word, Julian responds: “How is your week going?” They never get back to the student’s initial utterance.
Of the eight student questionnaires I received back, seven were primarily negative, and one positive. Several students commented that Julian did not seem to know what was going on in the course: “I thought it was going to help out but it didn’t ... Didn’t seem as Julian was up to date with our class assignment.” Another, “he was never here in class to know what was going on.” Another, “he didn’t know what our class was doing (never updated).” Another, “Meeting with Julian seemed like a waste of time because he didn’t really help me out or give me ideas for my papers and didn’t right [sic] anything down ... Get a better in-class tutor that will actually be updated with the way our class is going and has input on our papers.” Finally, evidence from the questionnaires shows that Julian was at least somewhat helpful to two students. One said that he “had good feedback on my paper.” And the first student above who said “I thought it was going to help out ...” hints at what might have been if Julian had been in class more often: “He helped when he was in class but other than that, I still have to agree with it not helping at all.”
Praise and Teacher’s Presence: Megan’s Tutorials
Megan, from Team Two, ended up having 15 sessions, the most of all the tutors, including four return visits (Table 3-2). Megan was the only tutor whom students visited more than once. Megan's sessions came in two waves: the first round included eight tutorials in the seventh week of the course, and the second included seven tutorials in the tenth or final week of the quarter (before final exams week).
Table 3-2: Linguistic features and cues from Megan’s (Team Two) one-to-one tutorials
Linguistic Features and Cues (Megan | Students; values given as first round/second round)
# of Sessions: 8/7
Average Length (minutes): 11/18
Total Words Spoken: 8,986/11,675 | 2,150/2,444
Average # of Words Spoken per Minute: 102/93 | 24/19
Content-clarifying Questions: 15/18
Open-ended Questions: 12/8
Directive Questions: 5/12
References to TA: 7/17 | 2/6
References to Assignment Prompt: 1/1 | 0/0
Interruptions: 8/17 | 26/20
Main Channel Overlaps: 1/8 | 5/22
Joint Productions: 3/8 | 17/23
The first eight tutorials dealt with short, two-page response papers on the texts from class: the movie Wag the Dog; the documentaries The Living Room Wars and From News to Entertainment; and written texts from their course reader, including excerpts from Sandra Silberstein and Michel Foucault. The sessions averaged only 11 minutes, with the shortest session lasting only six minutes and the longest lasting 31 minutes. Megan did not read the students' papers aloud, nor have them read the papers aloud as she normally might. She said that the sessions were so short because the papers were so short and she wanted to try to see as many students as possible. Certain patterns that pertain to subsequent sessions quickly began to surface. After clearing the way with initial questions, Megan began to fall into a clearer pattern. It seems she would begin with praise, and then lead into a critique followed quickly by a suggestion, a move I associate with Harris's metacognitive acquisition of strategic knowledge:
MEGAN: Yeah (yeah), ok, cool I think you obviously have a good grasp on the readings and you could probably bring a few quotes from the reading The Living Room Wars in toINTER
STUDENT: Oh yeah don’t worry about that I’ve got it.
MEGAN: Yeah and the movie is tricky like I said that is something that’s pretty apparent to me too so I think that will be pretty easy to do. Do you have any questions or?
STUDENT: Not really
MEGAN: I know it’s kinda brainstorming and you’ve already been thinking about it so once you kind of combine everything and start having a rough draft we can work off of that; you can come back and whatnot. It sounds like you’ve already thought about it and can already see the parallels and you have some good ideas. And don’t be afraid, you’re right it could be easier to have those two-paragraph structure, but I think that you could find a lot just using those two parts of the movie then using Bush and Clinton like that could be easily be two pages in itself. So if its two paragraphs I wouldn’t worry too much about it (ok). So awesome, thanks for coming in.
Readers will immediately recognize this as the same pattern that constitutes most end-comments given on student essays. Megan starts by praising the student's "grasp on the readings" but quickly moves on to imply evaluation and provide a suggestion. I say imply because even though Megan does not directly evaluate, she does imply evaluation by stating what is missing: direct quotes from the text. Megan follows a similar pattern in the rest of the first round of tutorials. She frequently tends to apprehensions that students voice and praises their good ideas. Yet at the same time she frequently, explicitly directs students to do what she would do, as in the case above when she advises "just using those two parts of the movie."
The seven sessions of the second round of conferences in week ten follow very similar patterns, characterized mostly by the role that the TA Laura plays as students negotiate the final portfolio assignment. Overlaps abound as students fully understand that their grades for the course are at stake, and that Megan may be able to help them do better on their portfolios. Students continue to voice sentence-level concerns and Megan continues to aid them with these, often linking them to larger structural and conceptual considerations. Students in this second round came to Megan hoping to hear that they were not too far off the assignment, to get suggestions for improving their papers, and to make the most of the chance offered by the cover letter. In the final session in particular a student voices his concern with his grade for the course. He had visited to talk about the cover letter, and ended up easing his worry perhaps a bit through his interaction with Megan. This final session, more than any other, showed Megan's peer-like willingness to help strategize given the student's strong desire for a good grade:
MEGAN: You could kind of do it two ways. (mmhm) You could either because I don’t know her as a TA like her grading at all (mm) and I don’t know her from last year either (mmhm) so I have nothing MUTUAL LAUGHTER to judge her on so I would try to figure out yourself will it be better to argue can I get a 4.0 or you could also argue get an A which would be like what a 3.8 to 4.0 on it?
STUDENT: Oh okay so I should say A instead ofINTER
MEGAN: You could either way. I mean do which you think would be bestMAINCHANOVER(I feel like)do what you really want.
STUDENT: If I said I deserve a 4.0 she’s going to be like ahhhh you don’t really deserve a 4.0 soINTER
MEGAN: Yeah maybe like an A or something MUTUAL LAUGHTER and maybe too or you could say something like I know last time my portfolio was a 3.6 (mmhm) and I’m trying to improve on that so then at least she might be like “oh he invested himself and is trying to improve” and you have like a 3.7 to 4.0 (oh ok) which is still good. So that’s something else you can say something like I’m really hoping to this revision process that by taking the class again to improve on my writing through going through the revision process again but really I’m hoping to get a better grade than I did last time on my portfolio because I got a 3.6 and I really want to improve. (ok) That would be a better way to do it. I might if it were me and you definitely (mmhm) don’t have to do it like I say but this is a suggestion but I might go with (yeah) becauseMAINCHANOVER(that way I don’t have to say) then she’ll know that your like constantly trying to improve not only making revisions to your paper but you’re also trying to improve from last timeJOINTPROD
STUDENT: Yeah and not only like I’m not asking for a grade (yeah) I’m asking for whatever she wants (yeah) to say. Okay.
MEGAN: Yeah that might be a good angle so either wayMAINCHANOVER(that might be a good angle I like that) whatever one you think is that yeah so either way whatever you think would be best but that might be a good way because then she’ll really know like you’re constantly (yeah) like even from last year you’re trying to improve your grades (ok) and your revision process. (ok) Yeah I think that sounds good.
This 14-minute session was characterized by five instances of mutual laughter, 12 overlaps, and numerous fillers. Clearly this student saw Megan's potential value and took an active conversational role with her, both in negotiating the portfolio and in making rhetorical choices for presenting his case to Laura for an A in the course.
Of the nine completed student questionnaires I received, five were clearly positive in terms of the one-to-one tutorials: one student said the tutorial was “helpful.” Another said, “Seeing her one-to-one was a lot better. I felt more comfortable.” Another, “helpful because the teacher may have problems; [the tutor] acts as a mediator.” Another, “It was nice to have someone to talk with about your paper one-to-one.” And another that it was “more helpful” than her in-class interaction.
Directing Talk and Texts: Madeleine’s Sessions
Madeleine, from Team Three, ended up conducting only four tutorials. All of Madeleine's tutorials occurred within three days of each other, in the sixth week of the quarter (Table 3-3; note that the third of Madeleine's four sessions, detailed below, was singled out for analysis from the rest due to its atypical features). All four of Madeleine's recorded sessions dealt with four- to six-page major papers in which students were to make an argument involving two articles presenting different views of multicultural education, Ronald Takaki's "A Different Mirror" and Arthur Slesinger's "The Return of the Melting Pot," and the English 105 class they were taking (the second part of the stretch course). The sessions averaged 50 minutes, with the shortest lasting 31 minutes and the longest 71 minutes. Madeleine read the students' papers aloud in the first two sessions and read them silently in the last two. I could not detect any noticeable effect this had on the content and flow of any of the sessions.
Table 3-3: Linguistic features and cues from Madeleine’s (Team Three) one-to-one tutorials
Linguistic Features and Cues (Madeleine | Students; values given as the three other sessions/the singled-out third session)
# of Sessions: 3/1
Average Length (minutes): 50/59
Total Words Spoken: 12,115/7,614 | 1,919/2,997
Average # of Words Spoken per Minute: 81/129 | 13/51
Content-clarifying Questions: 5/4
Open-ended Questions: 23/2
Directive Questions: 23/5
References to TA: 7/4 | 0/2
References to Assignment Prompt: 1/0 | 0/1
Interruptions: 21/44 | 10/50
Main Channel Overlaps: 3/6 | 7/25
Joint Productions: 3/5 | 24/6
Madeleine evinced certain patterns in her tutoring practice that shaped the content and flow of the tutorials. Madeleine usually took control of the session early and held firm control of the conversational floor. Her sessions are characterized by little to no praise; plenty of criticism and directive suggestions, usually with no qualifications; and large chunks of time spent on talking, near-lecturing really, about the readings. The teacher, Sydney, plays an integral role in Madeleine’s sessions. But Madeleine, rather than the students, brings the presence of Sydney into the session early on. This excerpt, from the beginning of the first tutorial, is typical of how Madeleine starts her sessions:
MADELEINE: Okay looking at your introduction?
STUDENT: Yeah introduction and claim.
MADELEINE: And your claim. Is it okay if I read aloud?
STUDENT: No go for it. MADELEINE READS STUDENT’S PAPER ALOUD FOR ABOUT TWO MINUTES
MADELEINE: Okay I kind of see what you’re trying to say. You’re trying to say you’re trying to set up the stakes like in the second paragraph? (yeah) You’re trying to say that racism exists and the reason that racism exists is because people don’t know about themselves (mmhhm). What I would say first of all about the beginning of your paper or the beginning paragraph is that it doesn’t really have a claim that directly references both accounts (mmhmm) and maybe that’s because you didn’t have a copy of UNDECIPHERABLE
STUDENT: Oh you mean the article?
MADELEINE: Well first of all we’re supposed to be talking about is multicultural education important? And you didn’t really say anything about multicultural education in the beginning (oh) and so you just want to like mention that (okay). And also you’re supposed to be stating whether or not you agree with the class that you just took. Like on race citizenship and the nation (ok). Like what she wants you to do is look at the class and think okay what have I gained from this class; like is it necessary for us to be studying these concepts or because the two different arguments are Takaki had his arguments well let’s take the other guy first Sl- (Slesinger)JOINTPROD something hard to say. He basically says that multicultural education, it kind of like boosts people’s self-esteem right?
Notice how, after reading for a bit, Madeleine starts telling the student directly what the student is trying to say rather than asking her. Then Madeleine jumps straight into criticism of this student's introduction and claim without praising any aspect of the student's writing. She shows her close understanding of the assignment and implies an alignment with Sydney's expectations by telling the student, with the modal auxiliary, what she is "supposed" to be doing. Madeleine amplifies her alignment with Sydney and the prompt by bringing in the pronoun and presence of Sydney: "what she wants you to do." Madeleine typically uses the tag question "right?", as in the example here, not necessarily to elicit a student response as with an open-ended question, but (much like Julian) rather just to make sure that the student is following her suggestions. Madeleine goes on from the excerpt above to bring in Sydney via "she" twice more before she stops referring to her.
The above directive suggestions also in many ways parallel the third session, characterized by what I came to see as a struggle or fight for the conversational floor. This hour-long session involved so many overlaps by both interlocutors (92 interruptions, 16 joint productions, and 32 main channel overlaps) that it was quite painful to transcribe, even with voice-recognition software. The session features a student who fights for the conversational floor, especially in regard to the main concept she wants to cover in her essay, politics. The student brings up this issue as a possible focus for her claim early in the session and several times thereafter. But Madeleine ignores the idea repeatedly:
STUDENT: I want to get out the thing is I have like three different things I’m trying to talk about (mm) and I don’t know how to go at it; like I’m talking about how politically there are going to be more students educated and having a background of different peopleINTER
MADELEINE: Yeah but I mean it’s not just about it’s not just about knowledge it’s about knowledge of not only yourself like and how you fit into American history but how other groups not just black and white right? (yeah) fit into American history because Takaki one of his main arguments is also that American history has been really black-and-white like it’s either white or it’s the other (yeah) and the other is usually black. But that’s not true because there’s been like Latinos and there’s been Asians and there’s been Native Americans that have all helped to shape what America isINTER
STUDENT: Yeah but what about because what I’m talking about here are the political process as a whole; like I actually take okay one of my positions is in a medical profession and the other one is a political position you know like what I’m saying? Okay I get the point that I’m not supposed to talk specifically about people going into the university and taking these courses and coming out a certain way, but that’s kind of what I did. I’m talking about if you have a better understanding of each other there is going to be more laws formulated their going toINTER
MADELEINE: But don’t you think it’s a little bit deeper than just having a better understanding likeINTER
STUDENT: Well but that was that was deepINTER
MADELEINE: Yeah but you’re talking about he doesn’t just say we need to like have a better understanding like try to use some of the terminology that he uses; one of the most important things that he says “we are influenced by which mirror we choose to see ourselves as” ...
STUDENT: So the political one though I thought that would be okay; maybe I should just focus in on the student actually going into the schools oINTER
MADELEINE: Well what you need to do is have an argument. So you agree with Takaki. Do you know what Takaki’s claim is? (he) TEN SECOND PAUSE
This sort of conflict in goals continues until the student emotionally expresses her frustration at not being able to meet Madeleine’s insistence that she understand the texts (or Madeleine’s interpretations of the texts):
MADELEINE: I mean if you have to read it a couple more times INTER
STUDENT: Well I’m trying to read a lot but it’s just like I don’t get what I’m doing though Madeleine ...
This is the first time a student has used Madeleine’s name in any of the tutorial transcripts, an indication perhaps of the frustration that has been bottling up. Yet this is also the only time in all the tutorial transcripts I analyzed that a student called their tutor by name, suggesting a slightly more positive interpretation, perhaps, of the dramatic give and take of this interaction. Marie Nelson argued that the type of resistance this student evinces might actually suggest this student’s potential to make dramatic progress because the resistance “showed how much students cared” (qtd. in Babcock and Thonus 91). This echoes Madeleine’s own words regarding her motivation for this project: “I hoped that they would view my enthusiasm for the content as an example of it actually being cool to care.”
Tellingly, not one comment regarding one-to-one tutorials came back from student questionnaires. Yet students had much to say about their in-class interactions with Madeleine, as readers will hear in the next chapter.
Surrendering Control through the Act of Writing: Sam’s Sessions
Sam from Team Four was the tutor least involved in any classroom activity. She was also expected to play the role of outside reader, or in her terms “independent consultant,” in one-to-ones. Because she had less insider knowledge of the course content, and given her typically nondirective approach, it would be reasonable to assume that Sam practiced a highly nondirective tutorial method with these students. Sam ended up conducting 11 tutorials in total: eight sessions in the seventh week of the quarter, and three more in the tenth or final week (Table 3-4). All of Sam’s sessions involved five- to six-page major papers. The first eight, including the tutorials detailed below, dealt with James Loewen’s article on heroes and heroification, “Handicapped by History: The Process of Hero-Making.” Since Sam had read most of the papers and supplied written comments beforehand, her sessions were designed to fit within a 30-minute time frame: the average session lasted 25 minutes, with the longest lasting 36 minutes and the shortest 16 minutes. Sam neither had students read papers aloud nor read them aloud for them.
Table 3-4: Linguistic features and cues from Sam’s (Team Four) one-to-one tutorials
# of Sessions: 11
Average Length (minutes): 25
Total Words Spoken: Sam 18,181; Students 11,292
Average # of Words Spoken per Minute: Sam 66; Students 41
Content-clarifying Questions: 20
Open-ended Questions: 137
Directive Questions: 21
References to TA: Sam 1; Students 3
References to Assignment Prompt: Sam 1; Students 0
Interruptions: Sam 12; Students 37
Main Channel Overlaps: Sam 7; Students 12
Joint Productions: Sam 9; Students 49
As with the other tutors, Sam’s tutorials began to show patterns early on that continued throughout her sessions. In contrast to Madeleine, Sam would usually start off by asking the students what they wanted to work on. This open-ended start would help set up Sam’s habitual use of open-ended questions (OEQs) followed by follow-up questions and occasional directive or leading questions. Sam often used a praise-critique-suggestion sequence in her replies. Sam would qualify her suggestions much more often with phrases like “I would” or “I might” when nudging students toward acquisition of strategic knowledge. After the first few sessions, when offering direct suggestions she often prefaced them with phrases like “I see a lot of students/people doing this.” Perhaps due to her more “outside reader” status, Sam referred back to the TA Sarah much less frequently than Megan, Julian, or Madeleine, instead using the phrase “the reader” to denote audience. In most of the papers, Sam talked about structure: the link between topic sentences and claim, and between conclusion and claim. This often caused her to deal with sentence-level issues in relation to larger structural/rhetorical concerns. Finally, Sam’s most salient and compelling patterns involved her use of note-taking and pauses and their overall effect on the content and flow of the tutorials. Sam’s sophisticated use of note-taking and pauses caused students to talk much more than in Megan, Julian, or Madeleine’s sessions, and led to what I would describe as collaborative speaking and writing through the shared act of note-taking.
Sam began nine of her eleven sessions by asking OEQs about what the students wanted to work on: “Do you have any questions that you want to talk about?” is the typical way she opens up the tutorial. In the two atypical openings, Sam instead started off by asking about the students’ claims. In the following excerpt, from the first round of tutorials, Sam evinces her typical praise-critique-suggestion pattern at the beginning of the tutorial:
SAM: ... So it might be that you partly started reading the comments here but one of the things that I noticed about your paper is that you do a really good job of demonstrating your familiarity with all the material (mmhm). Like I can tell that you’ve done all the reading and paid close attention. What I think that you’re missing though is a claim (mmhm) which is kind of a big part of writing an argumentative paper. So there’s some scratch paper over there that you can take notes on if you want. But how I’d like to start is what what was your claim that you had in mind when you were working on the paper?
Even though Sam does not start off with her typical opener in this excerpt, she still begins with the broad OEQ regarding claim. More pointedly, in this session Sam begins to show her awareness of the importance of note-taking. In other sessions, she will ask students to take notes, while she takes notes as well. Sam’s use of note-taking and pauses plays a pivotal role in the content and flow of her tutorials, affecting not only how much students talk but, perhaps more importantly, to what degree they take agency in the tutorial—the number one factor that distinguishes her tutorials from all the ones conducted with the other teams.
In the following excerpt from a tutorial that lasted about 22 minutes, the student overlaps Sam’s speech 12 times, while Sam does not overlap the student’s once. The student is arguing that heroification is a bad influence on kids. Notice how pauses, questions, and overlaps function in the following extended excerpt:
SAM: Okay so you think SIX SECOND PAUSE so what’s your take on heroification and how it affects little kids?
STUDENT: It helps to bias them, makes them makes them feel like you have to do the impossible SIX SECOND PAUSE the impossible by being perfect, having no flaws.
SAM: Okay so heroification is bad for kids to 16 SECOND PAUSE image, expectations.
STUDENT: Yeah it’s just the image, what’s right.
SAM: Okay and you said something about it rocks their mind with what do you mean by that?
STUDENT: The wrong example of what to do. 12 SECOND PAUSE
SAM: Okay, so why do you think that people do this? What did you understand from Loewen? Why do people try to hide the bad things? SEVEN SECOND PAUSE Why do you think people persist in presenting these unreal representations?
STUDENT: Just to do what they do now. They’re trying to help. I have no idea why.
SAM: They’re trying to helpJOINTPROD
STUDENT: Like kids try to be better. I mean that’s FOUR SECOND PAUSE
SAM: Okay so heroification is meant to make kids be better. That you argue thatJOINTPROD
STUDENT: It doesn’t do that.
SAM: Okay good. 22 SECOND PAUSE so if you were to sum that up into one statement because what we have here is it’s not really something specific or arguable yet which is what a claim has to be. So if you were to sum up your ideas here in a statement one declarative statementINTER
STUDENT: So like tell why heroification is bad? (mmhm) Just because it gives the wrong ideas to kids on how to grow up.
SAM: Okay so would you write that in your paper and state it like that?
STUDENT: A lot like that. I don’t know how I’d state it; it’s easy to write down. I wouldn’t say it’s bad though. Heroification has a negative influence on kids because it gives them the wrong reasons to 43 SECOND PAUSE growing up it’s negative for some kids TEN SECOND PAUSE the wrong reasons.
SAM: Well reasons for doing this. (yeah) Okay so that’s good so I would just add that because this is yourINTER
STUDENT: In this sentence just get rid of this?
SAM: No leave this. This is a good transition here especially since you say that he focuses on high school. It still relates to kids; it just brings up the talk about kids which you do. Heroification has a negative influence on, etc.
STUDENT: So I can put this before the sentence?
SAM: I would put it hereJOINTPROD
STUDENT: After the sentence? Ok.
SAM: Because this is like your transitionJOINTPROD
STUDENT: So that’d move into myINTER(so it’s) into that and I just want to this.
SAM: Yeah I would just kind of insert this here but then you have to talk about why you believe this is true.
STUDENT: So I would do that in the next paragraph? (so)INTERor would I do that like in the same paragraph?
SAM: Well that’s what the rest of your paragraph is about. Basically you have to argue your point, make me believe you. TEN SECOND PAUSE.
Sam begins with her typical OEQ. What the transcript does not reveal is that in the first long 16-second pause, Sam is writing notes. Sam has written something down, and then refers back to that in her follow-up question. Then Sam allows a 12-second pause after the student responds “the wrong example of what to do.” After this pause, Sam asks more follow-up questions. When the student initially replies that she has “no idea why” and Sam begins to rephrase the student’s beginning comments “they’re trying to help,” the student overlaps with a joint production, “like kids try to be better.” In two more lines the student overlaps with another joint production. The next 22-second pause allows time for both participants to collect thoughts and get to the big picture, the claim. In the next few lines the student expresses the difficulty in trying to verbalize something as complex as wording the claim on the spot. But a few lines later a long 43-second pause, followed closely by a ten-second pause, allows the student to think more. The student interrupts Sam two lines later, expressing her concern at the sentence level. Sam then explains through praise why the student could keep that sentence and how it relates to the higher-order concerns involving structure and thesis: “This is a good transition here especially since you say that he focuses on high school. It still relates to kids; it just brings up the talk about kids which you do. Heroification has a negative influence on, etc.” The remainder of the excerpt above involves the student illustrating her agency by overlapping Sam’s speech three more times, in two of which she actually interrupts Sam’s attempts to respond. Notice how the line between interruption and joint production begins to slightly blur when the conversation is really flowing, when the student is realizing some agency and urgency, and when the tutor (Sam) allows for this sort of conversational play. (During initial transcriptions, I had some difficulty in distinguishing between interruptions and joint productions in some spots.)
Sam’s longest session evinces many of the same patterns described above, further illustrating the collaborative effects of Sam’s particular style. During analysis, I was struck by how similar this student was to the one that Madeleine from Team Three had such conversational struggle with in her session above. In this 36-minute session, the student overlaps Sam’s speech 20 times, while Sam only overlaps the student’s speech five times—including three instances where the student does not allow Sam to take control of the conversational floor. In this session Sam shows one of her patterns early in the tutorial when she says “So one problem that a lot of people have tends to be coming up with the claim in the beginning.” Sam often refers to what she has noticed others doing, perhaps deflecting any sort of individualized, evaluative finger-pointing. The student starts off describing his claim as involving his belief that heroification is okay for young kids, but that when they start to mature they need to be able to think critically about this issue. Sam proceeds to ask questions and provide suggestions on how the student can rethink his topic sentences in relation to his claim. In typical fashion, she qualifies most of her suggestions, “When you’re revising I’d probably, what I recommend ...” Discussion of the essay’s structure leads to a discussion of the student’s prewriting strategies. Later the conversation turns back to more specific instances of getting the student’s purposes across clearly to the reader. Here Sam shows her typical reference to the reader: “So all that’s really needed is that you want to make sure that you specifically say this at the beginning of this paragraph (oh ok) so that we know that that’s what you’re saying. (oh ok) So that we know that as we read the scene we go ‘okay so this is where he’s going with this.’”
A few turns later, the student second-guesses himself when he feels that Sam has disagreed with one of his points:
STUDENT: ... That was just like me presenting both sides of the argument; but clearly, like I’m thinking maybe it doesn’t belong because you’re telling me like okay this UNDECIPHERABLE.
SAM: Okay so do you feel like this fits in with any of your major points so far? Sorry I didn’t have a good look at the first paragraph should beJOINTPROD
STUDENT: More of a benefit really.
SAM: Or yeah what was the first body paragraph?
STUDENT: It was more like morale of like heroification can be used to build up morale. To want to be great you don’t need to hear the negative sides to put a high standard upon yourself; I guess that was kind of it. We could just move that chunk overINTER(well ok)
SAM: So let’s think about this, you’ve got heroification can build up morale, but then if it gets too blown up out of proportion then there’s a danger that it will break down and fail because it’s a lie. (mmhm) And then the third danger is that those that are deceived won’t be able to UNDECIPHERABLE what they’re thinking. So of those three which do you think it fits better with?
STUDENT: Definitely more on the benefit. Well I’m not really sure because that part of my argument was more like I realize I was more focused on possibilities and I kinda wanted to end on a little bit of both because it shows that kinda gave two sides but mainly push towards one thing whether something good can come out of it if you’re going to set yourself for the challenge.
In contrast to the fight-for-the-floor pace and tone of Madeleine’s third tutorial, in this excerpt and throughout all of her sessions Sam takes a much less argumentative (doubting, dissenting) and much more cooperative (believing, assenting) stance in relation to the student’s ideas. Notice how precisely Sam refers back to the student’s ideas and words:
So let’s think about this, you’ve got heroification can build up morale, but then if it gets too blown up out of proportion then there’s a danger that it will break down and fail because it’s a lie. (mmhm) And then the third danger is that those that are deceived won’t be able to UNDECIPHERABLE what they’re thinking. So of those three which do you think it fits better with?
Because Sam has been writing notes, co-constructing an outline with the student, she can repeat back, with great detail and clarity, the student’s own ideas and how they relate to the overall essay. The student can then help add to this co-constructed oral/literate text. This exemplifies what I would describe as collaborative speaking and writing through the act of synergistic writing or note-taking.
Rather than dismiss any of the student’s ideas, or try to force ideas on the student (as Madeleine, Julian, and Megan were all prone to do sometimes), Sam uses questions to try to get at how this student’s idea might be worked into the essay’s structure. This reliance on traditionally nondirective questions is due in some degree to the fact that Sam has not done the course readings. But it is also due, I believe, to Sam’s methodology. Her tenacious ability to stick to questions, to allow students time to process and respond, and then to write down notes as the conversation moves forward serves as her basic “nondirective” modus operandi, one that enables her to turn the conversation over to the hands and minds of the students. In one session Sam waited for 89 seconds after asking a student “So where’s your topic sentence on this paragraph?” That same student, after thinking through things for 89 seconds, responded in some detail. While tutors are typically advised to wait fifteen seconds for a reply before reframing the question, some questions may require longer cognitive processing. Courtney Cazden, drawing on Hugh Mehan, claims these “metaprocess questions ask for different kinds of knowledge and prompt longer and more complex responses” (46). While “what is your topic sentence?” may seem simple enough on the surface, imagine all the cognitive steps the student must go through to give a cogent reply: processing the question, working out how the topic for one area of the paper relates to the larger structure of the rest of the paper, and finally trying to find the words to express those connections. This moves the student simultaneously from the larger rhetorical-structural issues of the paper to the micro-linguistic syntactical and lexical level of the topic sentence. Each student that Sam worked with walked away with jointly-constructed notes that they could use while revising their essays.
Of the 12 student questionnaires I received, ten were overwhelmingly positive and only two were either critical or ambivalent. (The ambivalent one was from a student who did not visit Sam in the first place.) Most students commented on the convenience of the partnership and the availability of Sam. Students described specifically how helpful Sam was during one-to-ones. For example, one student wrote: “It helped me strengthen my paper and understand what the readings were trying to get across to its audience.” Another, “She gave me ideas and hints to making my paper be voiced more by me rather than the quotes I used.” Another, “She helped me gather my thoughts clearly, gave me advice to make my paper stronger.” Two students commented favorably on Sam’s practice of responding to their papers before they met: “we would go to our appointment and she would have our paper already read so we didn’t have to wait. She would just tell us what we had to work on.” Another, “It was a lot better because at least the tutor would read it beforehand and it would not take as long as opposed to making an appointment or a drop-in where they would read it on the spot and it would take a while.” Finally, one student commented on what she saw as a problem, suggesting what some students must think of writing centers in general: “The tutor was not familiar with the subject taught in class; therefore she wasn’t able to help on specific questions or be any more helpful than the tutors at the writing center.”
Discussion: Tutoring on the Edge of Expertise
Granted, the case studies represent extremes in tutorial instruction and tutor preparation and should only be taken for what they truly are: qualitative case studies conducted in local contexts. Yet, analyzed side-by-side (Appendix C)—and from so many methodological angles—they suggest multiple points for more general comparative consideration, especially in regard to tutoring method. While CBT scholars caution practitioners and experimenters that tutors may need to be more or less directive when interacting more closely with instructors and courses, my studies suggest just how tricky this notion really is.
Julian’s (Team One) basic modus operandi of having the student read the paper aloud, while stopping intermittently to talk about things as they went, seemed to cause Julian to talk unnecessarily, and in ways that only occasionally invited students to take agency in the sessions. Julian made infrequent use of the valuable tool of note-taking, a technique that might have substantially altered the content and flow of his one-to-ones. If he had asked his questions, then waited for a response, then taken notes for the students (especially those who were not as engaged or not taking any notes themselves), his sessions might have sounded more like Sam’s, and students and Anne might have felt that these one-to-ones were adding something of value to this partnership. Instead, Julian—despite his meta-awareness that he tends to talk too much (listen too little) in a tutorial—repeatedly dominated the conversational floor, often interrupting students’ train of thought, answering the very questions he should have been waiting for students to answer, and even out-and-out ignoring (or more often, overlooking) student concerns and questions. Stunningly, during our interview, Julian even told me that he felt the one-to-one tutorials were “successful.”
When compared to Megan and Kim, however, Julian’s style does not seem that drastically different or reprehensible. In fact, all three of these tutors exhibited similar tendencies to dominate much of the conversational floor in their own ways. (Students actually talked more, proportionately, with Julian than with Megan or Kim.) Julian, unfortunately, did not have the same opportunity as the in-class tutors to redeem himself in any way via consistent and productive interactions in the classroom. (I’ll return to this comparative discussion in the next chapter.)
Megan’s (Team Two) tutorials took two different routes: shorter sessions in which she did almost all of the talking, asked few questions, and followed her usual pattern of praise-critique-suggestion; and longer sessions in which students, concerned with negotiating their portfolios or Laura’s comments, showed more engagement and concern, but still talked much less. The shorter sessions resemble the kind of conferences advocated by Garrison, in which the tutor/teacher acts more like an editor, directly intervening and offering suggestions. In a sample session with a student, Garrison often uses phrases like “this is what I would do” or more emphatically “do this” or “I want you to.” He will even ask the student a question, then, rather than wait for a response, move on to the next question or suggestion or critique (see Harris, Teaching One-to-One 143-45). In the excerpt Harris cites, Garrison does not praise, but moves quickly from critique to critique. In contrast, Megan follows a pattern of praise-critique-suggestion that students must certainly be familiar with from teacher end-comments on their papers, and perhaps even from peer review. Drawing on Aristotle’s idea of praise as action, Spigelman argues that students in classroom writing groups need to be taught the value of both epideictic and deliberative rhetorical responses. “In contrast to epideictic,” she writes,
an exclusionary deliberative approach may ... contribute to wholesale reader appropriation with little concern for writer’s intentions or motives ... When groups believe that their primary function is to change the existing text, they may fail to notice and therefore positively reinforce successful literary or rhetorical elements in their peer’s essays ... A combined epideictic and deliberative process enables readers to provide productive, action-oriented comments, and at the same time, allows writers to resist appropriation by their peers (“‘Species’” 147-8).
While it is important to praise for several reasons (Daiker; Harris, Teaching One-to-One 71-73), some maintain that too much lavish praise may have little positive, and perhaps even a slightly negative, effect on student learning (see Schunk 475-6). As I repeatedly listened to Megan’s use of praise in both peer reviews and one-to-ones, I began to wonder if it was having the effect on students she intended. Megan’s praise, however, did sound more authentic when she aligned it with Laura’s. This associative “team praising” allowed Megan to amplify her praise considerably, affectively easing the worries of students who perhaps felt there was little worth celebrating in Laura’s comments and evaluations. Megan also evinced a willingness to help students with sentence- and word-level issues. Megan’s transcripts show how a tutor willing to work through sentence- and word-level concerns can immediately link these issues back to HOCs like claim, especially the important role of word choice and carefully defining terms so that a writer can get their intended point across to their reader more clearly. Finally, we saw how Megan’s sessions took a different turn when it came time for students to negotiate their portfolio assignment at the end of the quarter. When students perceive the stakes as high, and, I would argue, when they are dealing with the unfamiliar genre of the portfolio cover letter, they take a much more active role in the tutorial. Megan’s sessions began to involve exploratory talk much more, it seems, when the students felt the real urgency involved in arguing the strengths and weaknesses of their performance for the course.
We also saw in Megan’s first round of sessions that she started off with a typically nondirective approach, but soon, as she progressively worked with more and more students, she became increasingly directive, more Garrison-like. Most likely, seeing students with the same assignment repeatedly caused Megan to start blurring the sessions together, almost into one huge tutorial. This is much less likely to happen in a typical one-to-one tutorial outside of CBT.
Madeleine (from Team Three) proved a highly directive tutor. As we discussed at length in Chapter Two, directive tutoring does not necessarily imply hierarchical, authoritarian tutoring. For my analyses here (and also in relation to Madeleine’s in-class involvement discussed in the next chapter), it is worth noting that Madeleine evinces conversational and instructional communication patterns associated with African Americans, patterns that may account in part for her instructional directiveness (see Delpit Other; Smitherman; Lee; Denny 42-43). Carol Lee, drawing on Bakhtin, Goffman, and Geneva Smitherman, points especially to AAVE as a personal discourse that brings special ways of speaking and knowing into the classroom (and, for our purposes, into one-to-one and small-group tutorials): “Within AAVE (which may be defined as a dialect of English), there are many speech genres. These genres include, but are not limited to, signifying, loud talking, marking, and testifying” (131). She draws on Smitherman’s Talk that Talk to explain how the African-American communicative-rhetorical tradition evinces some unique patterns:
1. Rhythmic, dramatic, evocative language
2. Reference to color-race-ethnicity
3. Use of proverbs, aphorisms, Biblical verses
4. Sermonic tone reminiscent of traditional Black church
5. Use of cultural referents and ethnolinguistic idioms
6. Verbal inventiveness, unique nomenclature
7. Cultural values—community consciousness
8. Field dependency (involvement with and immersion in events and situations; personalizing phenomena; lack of distance from topics and subjects) (Smitherman, 2000, p. 186)
Madeleine evinced especially 1, 2, 5, 7, and 8 in her tutorials, partly (perhaps largely) because the topic of the course—race and citizenship in the nation—brought out her passion and fluency (also see Corbett, Lewis, and Clifford). In “Community, Collaboration, and Conflict” Evelyn Westbrook reports on an ethnography of a community writing group where conflict and difference are foregrounded. One group member, an African-American woman (echoing Lisa Delpit’s direct-instruction sentiments), sees more value in the group challenging its members than in supporting them through lavish epideictic praise:
when people [in the group] say “Wow! This is good,” well, that doesn’t help me very much. But when they say, “I would use this or I would use that” or when they challenge the way I thought about [something], that’s good feedback ... When someone questions something you do as a writer ... they are really saying, “Make me understand this.” (238)
While readers might understandably question Madeleine’s performance during one-to-one tutorials, in the next chapter I’ll report on the degree to which that same authoritative style was evinced and speculate on how effective and valuable it proved to students in the classroom.
Nondirective methods and moves were showcased by Sam (from Team Four) in all of her one-to-one tutorials. But I might critique Sam’s performances in two ways. First, almost every move Sam made during her one-to-ones placed agency on the tutee. She asked many open-ended and follow-up questions. She took careful and detailed notes, to which she and the students added and referred back during the course of the tutorials (see Harris, Teaching One-to-One 108; Gillespie and Lerner 74). She allowed for long, extended, patient pauses that aided tremendously in both the students’ and her own abilities to process information and formulate responses and questions. She also—like Megan from Team Two, who evinced this throughout both her one-to-one tutorials and her peer response facilitation, and so unlike Madeleine—used praise strategically. Yet I might also say that the model Sam employed (at the specific request of Sarah) necessarily caused her to deploy the methods she did. Because she was less in-the-know, because she did not know as much of the content and flow of the day-to-day course happenings, and because she was trained to approach tutorials primarily from a nondirective methodology (and, recall, actually worried about being too directive), Sam was much better situated to practice a nondirective method. This approach led her to deploy such strategically valuable moves as almost always starting sessions by asking what the student wanted to work on, using a praise-critique-suggestion conversational sequence, referring to general “readers” rather than to the instructor Sarah or to the assignment, and thoughtfully and patiently crafting notes. We might, then, say that—like the successful tutorials we surveyed in Chapter One from White-Farnham, Dyehouse, and Finer, and Thompson—Sam realized the coveted humble/smart balance.
Since Sam and Sarah from Team Four had the least amount of in-class interaction with each other of all the teams, I will provide some details of their own reflections on their partnership here. The data point to an overall highly successful partnership. Because Sam did not attend any classes in an instructional role, she primarily voices how she and Sarah coordinated their activities out of class, and the effects these communications had on Sam’s involvement with students:
My involvement with the TA was pretty minimal. We mostly contacted each other via email. I saw her a couple of times, but not really during the quarter. She mostly sent me the prompts and we emailed each other. I’d give her my availability and she would send that to the class. They’d sign up for appointments and then she would send their sign-ups to me.
Sam said that at first she was a little worried that she wasn’t involved enough with the students, but based on what she was hearing from Sarah she noted, “I think it turned out pretty well.” Sam and Sarah even agreed from the start that it would be better if Sam did not do any of the course readings. Sam suggests a fear of being too directive: “I thought it would be more helpful to go with the prompt with their papers ... because I might have my own ideas on where they should be taking their papers and I wanted to avoid that. I just wanted to help them bring out their own claims and arguments.” And although Sam did not have any in-class interaction with students, she did feel a closer connection and responsibility to these students:
I felt more tied to the success of the students in this class. I really wanted them to do better. I wanted Sarah to see the improvements in their papers. I wanted to help them get more out of the class as a whole. And I think that comes with being connected to a particular class. It makes you more invested.
Sam pointed to this as a reason why she would have liked closer interaction with Sarah and the class. She spoke of establishing a more definite sense of her role in the course. She talked about coming in earlier in the quarter to explain her role to the students. And she said it would have been better if she had spoken with Sarah more about how the class was going, or even visited the classroom once or twice, “just maybe coming in and sitting in the back a couple of times, letting them know you’re there and you’re tied to the class rather than some loner from the outside.”
Sarah provides further insight into Team Four’s unique partnership, including Sam’s minimal involvement with certain aspects of the course. Overall, Sarah really enjoyed the partnership. Like most participants, she said she greatly appreciated the convenience of having a specific tutor readily available for students to make appointments with. Sarah said she wanted Sam to play the role of peer tutor and outside reader for the class, rather than co-teacher: “I might’ve been uncomfortable having another person in the classroom, but that might just be my own ego [laughter]. Seriously though, one of my other concerns too is that students might be confused having too many authority figures.” Sarah decided not to have Sam do the readings because she was afraid that “the tutor would know all the readings that we’re doing and would know the kinds of arguments I’m looking for, and they might steer the students in that direction.” Of course, this is exactly what we saw Madeleine attempting to do in her tutorials.
I maintain, however, that even if Madeleine had been exposed to the literature on nondirective tutoring—like Julian and Megan, who had more experience and training—she still would have experienced the same type of conflicts in agency and authority she faced in attempting to help students negotiate the course. (These conflicts may have even clashed more strikingly with her perhaps more directive African-American instructional style, as we discussed above.) Although Madeleine’s four tutorials constitute quite a small data set, my experiences and case-study research over the years, as well as the literature on CBT, strongly suggest that tutors faced with a tutorial situation in which they have a better understanding of the course content and teacher expectations, and perhaps even closer interpersonal relationships with the students, will face a tougher challenge negotiating between directive and nondirective tutorial methods. But I do not believe this is necessarily a bad thing, nor should it deter us from continuing to practice CBT. I would rather continue to encourage tutors (and instructors) to practice at the edge of their pedagogical expertise and interpersonal facility. More specifically, in considering CBT and tutors who have more or less training or experience, how might we, and why should we, encourage tutors to reap the benefits of both directive and nondirective tutoring strategies?
If a tutor has the confidence and motivation to connect more closely with a writing classroom and help provide a strong model of academic communication and conversation—regardless of how much formal training they’ve received—should we be open to such teaching and learning partnerships? In the next chapter, I’ll present what can happen when tutors make these expeditions, interacting with instructors and students in the classroom. Sam (and Sarah’s) narrative of success has all but been concluded. But will vociferous Julian and Madeleine (and to a degree Megan) prove relatively more effective in the classroom than they did in their one-to-ones? And what about those tutors from SCSU who played all of their tutorial roles strictly in the classroom? In many ways, their dramas have yet to unfold ...
On occasion, a person with a marginalized identity gains confidence to persist in the face of prevailing winds that trumpet convention.
– Harry Denny
Now is the time for peripheral visions.
– Jackie Grutsch McKinney
In this chapter I extend the work of fellow course-based tutoring researchers by offering detailed comparisons, drawn from my field notes and interviews, as we inch increasingly closer to an understanding of the many factors that provoke directive and nondirective tutoring strategies and that can encourage or deter successful CBT classroom interactions. Rebecca Babcock and Terese Thonus draw on research from the California State University Fresno Writing Center to argue for, in contrast to one-to-one tutorials, “the validity of tutoring groups as an effective, and even superior, means of supporting basic writers” (92). I agree with this claim, but I also believe it warrants continuing scrutiny. What factors might make for successful classroom interactions? How can tutors best facilitate and support small-group peer response sessions in the developmental writing classroom? And what useful connections can be drawn between one-to-one and small-group tutoring in CBT situations?
I start my reporting and analyses with case studies of tutors involved in peer response facilitation in the classrooms they were connected to. I’ll begin with the three teams from the UW that were actively involved in the classroom. In the first subsection, I offer detailed micro-analyses of four tutor-led peer response sessions. These sessions are unique and worth micro-analyses due to the fact that both tutors, Julian and Megan, were trained to adhere to a more nondirective tutoring method and methodology. Their performances, then, when compared to their one-to-ones from Chapter Three, aid in my efforts to draw connections between the discourses of one-to-one and small-group tutoring. In the second subsection, I turn my reporting and analyses toward the teams with tutors who received no explicit training in directive/nondirective strategies. Rather than focusing on the micro-level language of the interactions, I focus my analyses more peripherally, more on the broader rhetorical actions and attitudes of the participants. All in all, readers will hear detailed, multivocal and multi-perspectival analyses of tutors—some of whom we’ve already seen deep in action—with varying levels of experience and training and widely different personalities and preconceived notions as they attempt to aid fellow students with their writing performances on location in the classroom.
Connecting the Micro and Macro in Peer Response Facilitation: Teams One and Two
Redemption Song or Cautionary Tale? Julian on Location
I thought for sure—had complete trust—that Julian and Anne of Team One would realize a fruitful partnership. Just glancing at the highly positive student course evaluations for the course overall, one would never get a sense that things were not all they could have been with that partnership. Yet, as we clearly saw from our analyses of one-to-one tutorial transcripts, Julian confounded my (and students’) expectations. Surely, he fared better in the classroom. The following scenarios take readers closer to an understanding of how authority, trust and directive/nondirective method negotiation can intertwine to either deter or promote successful peer response facilitation.
What follows are reports and analyses of peer response sessions facilitated by Julian and Anne on two different days drawn from my field notes. Due to the dynamic nature of multiple speakers in small groups, I have opted for a horizontal transcription style:
In the first peer review session, in week five, eight students are in attendance, arranged in two groups of four. I move to Julian’s group. He asks if people brought extra copies. A student replies: “Only one.” The first student starts to read his paper. Other students are listening, but not writing, commenting or taking notes yet. Julian jots notes as the student continues reading. Student One says: “Didn’t catch your claim.” Student Two says: “Should be in your introduction.” The writer points it out and rereads it. Julian asks: “What do you all think of that as a claim?” Student Two says: “Sounds more like your opinion.” Julian says: “Consider bringing in extra copies [of their essays].” Student One says: “None of us knew there was peer editing today.” Julian says (commenting on the writer’s paper): “Notes on logos, pathos, ethos; good intertextuality; citings of Takaki; with the claim feels like something’s missing, stakes; could you read it again?” The writer re-reads the claim. Julian asks: “Why is it important?” Writer repeats the second part of the claim with some extra commentary. Julian says: “That sounds good, that would give the stakes.” Student One says: “Could state whether or not you agree with Takaki.” The writer asks: “What about opinions?” Julian answers: “The idea is the whole paper is your opinion; stating opinions as if they are a fact, sorta like tricking your reader that your opinion is fact.” Julian asks the student reviewers: “Patterns in a section that you did?” Student Two says: “Logos, cause he keeps giving facts, then the stakes, then facts.” Julian says: “It seems it might not be too much more work to find the pattern. If there isn’t a pattern, that might be worth commenting on.” Julian asks the writer: “Any particular questions?” This group continues in similar fashion. (At this point I notice Anne has stayed primarily out of the groups. She spent about ten minutes with the other group, then she went to her desk for about ten minutes, then came back to the group.)
Julian moves on to the next group. The group he leaves continues talking on-task. In the next group a writer is in the middle of reading his paper aloud. Julian listens quietly. The writer is catching and commenting on many of his own mistakes as he reads aloud. Julian says: “That’s one of the advantages of reading aloud; can catch your own mistakes.” The group members agree verbally and with head shakes. Student One says: “Sounds like you’re making a list; really choppy.” Group members again verbally and nonverbally assent. Julian says: “I missed the beginning; what is the claim?” The writer says: “What is a real American? She claims only white people are true Americans.” Julian asks the reviewers: “What stands out as the stakes for his claim?” Student One says: “Word choice, tone.” Student Two, overlapping his response with Student One’s, says: “Go into more depth about Asian Americans.” The bell rings and the session ends.
What I find most interesting about both these peer review groups is how Julian actually does seem to be fulfilling his role as peer review facilitator when he prompts (in italics) students to comment on each other’s papers (much as Megan from Team Two does below), and the degree to which Julian tries to stick as closely as possible to the assignment prompt in his suggestions. Notice in both groups how Julian emphasizes claim, stakes, patterns, and the rhetorical appeals—all things detailed in the assignment prompt. On their perception of how the session went, both Anne and Julian agreed that students should have been told to bring in extra copies. (Each student brought only one paper copy.) Although Anne felt this was “probably not my best peer review session ever,” she liked how the opening discussion of the ground rules and strategies for peer review got the students involved early in the shaping of the session and gave them an understanding of why they were doing things the way they were. “In the name of metacognition, you know,” she said. Julian felt that while there were some good things that occurred, “overall I don’t think it went very well.” Julian blamed it primarily on not having extra essay copies and on a lack of time, but he also pointed to what he felt was a problem with the assignment prompt: “There were so many bold words/ideas on their essay prompt, they didn’t seem to really know what to be talking about. None of the students seemed comfortable or fully in control of all the discipline-specific language, i.e., there was no common parlance amongst the peer group for all the essay’s aspects in discussion.”
In the second small-group peer response session I observed a few weeks later, Julian only worked with one group of two students the entire time. The main thing I noticed about this session was that, besides reading aloud, the students barely spoke at all during the entire roughly forty minutes. The frustrating effect this had on Julian was palpable to me and must have seemed so to the students as well. Near the beginning of the session Julian asks Student One if he is comfortable reading aloud. Student One says “not really.” Julian describes why it is a good idea to read aloud: “It helps everyone stay in the same place, and you might hear and catch many of your own mistakes.” Without answering, Student One proceeds to read his paper aloud. Julian cuts in quickly and asks if he can slow down, saying that he “can’t process what he’s saying.” Student One slows down considerably. As the student continues to read, Anne writes more instructions on the board: “Is there a controlling thread of argument about what the readers need to take away from these texts? Is it persuasive? What would you say is at stake? Are all sources appropriately cited? Do the content, structure, evidence, appeals, tone all keep the reader in mind?”
What follows, to me, is Julian’s attempt to help students juggle all the well-intentioned prompting Anne has provided. I would call the resulting session an example of “resource overload,” or more is less. As in the peer review session above, Julian spent the entire session mostly trying to get these two students to talk about how they could get one of Anne’s prompt items, “Content,” working in their texts. There was no mention of, nor any attempt to work, another prompt item, “Creative” textual potentialities, into the conversation. Reflecting on this session, Julian felt it went poorly. He felt the groups were too small to encourage much comparative discussion. He also felt that perhaps students did not fully understand the assignment, which, along with their unwillingness to talk about “their own writing process (or lack thereof),” left him with little to discuss. He said that although the two students hadn’t really done the assignment they “professed to understand what the assignment was.” Anne did not offer any reflections on this particular session.
Although Julian felt it was overall a good thing that he did attend the two peer review sessions, his explanations of the role he sees himself playing during peer review point to possible reasons why he experienced such lackluster results. Ironically, it just might be Julian’s sophisticated sense of what he should be doing during peer response that contributed to the problem. Keenly aware of authority issues, Julian feels that his role in peer review is one of “reserved adviser.” He elaborates:
My understanding is that my presence during the peer critique sessions, it’s not a tutoring session, it’s not me working one-on-one trying to work with their particular writing issues. It’s me trying to model for them skills and ways of being effective in future peer groups throughout their writing classes and college careers, so that they can be useful to other people when I’m not around.
He spoke of previous peer review experiences, among the many he had participated in, where he had taken a more directive approach and felt that this caused students to “clam up because it stops becoming a peer critique session because I’m not their peer anymore and the whole process breaks down and becomes something other than what it’s intended to be.” Julian felt that the biggest roadblock to success, however, involved a lack of regular communication between him and Anne.
Rather than share the blame, as Julian did above, Anne, more than once, intimated how she should have done a better job scheduling conferences, getting Julian course materials, and most of all “including Julian in sort of the day-to-day workings of the class and making sure that he had sort of a well-defined role.” She goes on to explain how she feels this communicative oversight later caused students to have expectations of the sort of help they would receive from Julian that were never met. One of these expectations may have involved how much direct instruction they thought they might receive from Julian. Anne talked at length about how Julian’s nondirective approach made her reconsider nondirectiveness in relation to this group of developmental students. She said that she had hoped that Julian might help disrupt her teacherly authority somewhat. She felt that because Julian was trying so hard to stay within what he thought were her expectations, he forfeited any opportunity for students to really stake their own claims, something she would have valued highly: “They’re the quickest to bow to authority. They’re the quickest to say ‘well am I doing it right?’ And the least likely in some ways to sort of say ‘I don’t think that’s a useful way of approaching this question’ or ‘what can we do with this assignment to make it something real for me and not just some imagined scenario or something.’”
Three students mentioned peer review in their course evaluations, one praising, the others critical. The first student pointed to both her admiration of Anne and the value she saw in peer review: “Anne is an amazing professor and it seems like she absolutely loves what she does and it makes me want to learn more from her. Peer review also played a big role when writing difficult papers. It’s always nice to bounce ideas off your peers and contribute in making their papers better.” The second student, however, felt “we spent too much time reading each other’s paper[s] during peer review, leaving no time for comments ... Taking each other’s papers home to read before the actual day of peer review [would have improved the peer review process].” The third student, commenting on what aspects of the class detracted from his/her learning, wrote two words: “peer reviews.”
Peer Review People: Megan on Location
Megan, from Team Two, felt she was acting much more like a peer in the classroom than a teacher, and she saw this as a good thing. She confided to me that she had worried she would become “more of the TA or assistant TA and not the tutor.” She goes on to explain that she was relieved when other, less authoritative roles were agreed upon between her and Laura. But Megan elaborated further about role negotiation, especially what exactly her role was supposed to be in the classroom:
We didn’t have too many class discussions so I wasn’t really a discussion leader. I tried to ask questions that would really help them understand the readings. But I guess I was kind of peer review [laughter] person. I would lead peer review at times and kind of help them with a new way to do that ... and not much else actually I guess.
Both of the peer review sessions I observed for Team Two seemed to involve both Megan and Laura in dynamic peer review and response facilitation and instruction with the entire class:
For the first peer review, in the third week, 12 students are in attendance. Laura assigns the students to three groups of four, writing the group assignments on the board. She reminds them that they are supposed to have two copies. Laura has Megan come to the front and explain how the peer review session will work: decide who goes first, read your essay aloud, go through the worksheet, note things that don’t make sense. Megan says that Laura and she will go from group to group. Laura passes around the peer review sheet, and explains that these sheets should be attached to their essays when turned in. Next, Megan and Laura each attach themselves to a separate group. In Megan’s group students begin to fill out the review sheets as the first writer reads his paper. Megan takes notes. When he finishes reading, Megan says that he did really well and asks for observations from fellow group members. Student One says “Nice examples. It would be nice if you could include some quotes to bring out details.” Megan replies “Good suggestion.” Student Three says “Good flow.” Student Four follows with “flow nice.” Megan says “Introduction describing surveillance knew what you would be talking about ... the other responders gave good advice ... In academics I was hoping to hear a little more about surveillance in your social life and during the games ... Who is surveilling you? Be careful about ‘being watched,’ word choice, maybe ‘surveilled’ ... details help make your points really clear; excellent, good job!” Megan certainly seems to be the authority figure here. She talks much more than the other students, who seem reluctant to offer any suggestions.
Writer Two reads his paper aloud. Fellow group members jot down notes. (Laura is still attached to the first group, listening, giving feedback apparently as a group member. Then she moves to the other group, listening, answering a few questions about the peer review sheet.) Megan says “Why don’t we do the same thing ... you want to go first?” The level of student involvement picks up a little in this next round of responses. Student One offers some advice to the writer regarding his paper: “another suggestion, lack of style ... maybe make it more interesting ... I might make it like a story, rather than explaining steps.” The writer seems unsure, slightly resistant (non-verbally mostly), to this suggestion. After group members discuss what they believe is the writer’s claim and offer a few more suggestions, Megan says “It was a good job. I’m going to the next group; just continue as you’re doing.”
At this group (the first Laura was at; now she is at the third group) a student is reading aloud; other students are taking notes; one asks for clarification during reading. Megan takes notes on what she hears. Rather than Megan, Student One starts the response: “Good, explain what panopticon is.” Student Two says: “Good structure, describing and comparing to panopticon; but would be nice to describe activity.” Student One agrees. Megan says: “That’s a good observation.” One more student comments, then Megan takes over the conversation for the rest of the class period, ending with: “You guys did a great job; sometimes students don’t. I’ve been in classrooms where it’s like pulling teeth to get them to.” A student asks if Megan is still taking classes. They start to chat about her classes, future plans in teaching, etc. He asks her questions; other members in the group join in; conversation is casual and friendly.
Within days of this session, I asked Megan and Laura for their impressions of how things went. Laura said that she had asked students how it went and she got back mixed reviews. Some students said the oral peer review style made it difficult to correct grammatical errors. Some said that reading the papers aloud helped them to recognize the structure of their papers. Laura wished she had had one tutor for each group. Megan said that by reading aloud, she felt students caught a lot of their own mistakes. She also commented on how many of these students had taken the first part of this stretch-course together, and she believed this allowed them to feel comfortable and to be open and honest with each other. She had interesting things to say about the peer review sheets:
I think that the peer review sheets were helpful, but sometimes unnecessary. In group two I think they were doing what I would hope would happen in a peer review session, but they were not filling in their sheets as much. Whereas in group one they all filled in the sheets while the person was reading the paper aloud and then talked about the suggestions they had for the paper. I think that both were effective, but I think that sometimes students can get distracted with filling in the sheet and not giving the best feedback. For this reason, I would not have a peer review sheet. However, I can see how it might have been effective. Who knows, if there was not a worksheet in group one, they might have not paid as close attention and thus not had as many insightful comments. However, I think that group two did a wonderful job and may not get that acknowledgment because they did not fill in their worksheets as completely as group one. It is a tough balance.
It is clear that students were caught between what Laura called the “oral peer review” and the peer review that relied on filling out the sheet she had used with most of these students. While Megan was encouraging verbal conversation in her groups, Laura was emphasizing filling out the worksheet: “I instructed the first group I worked with step by step. We answered most of the questions on the peer review sheet. I gave less instruction to the second group, because when I got there they already figured out a different way of doing oral peer review.”
Four weeks later I was invited to their second peer review session. It seems as if they had made a couple of adjustments from the first one:
Fewer students end up showing up for this session, nine overall, divided into two groups. Laura chooses a group leader for group one and has that leader choose who she wants for the group. Laura then passes around the peer review sheet (a different one) and says, however, that she wants them to talk about their papers first. Megan goes to group one, Laura to group two. Having a group leader somewhat changes the dynamics of group one’s session. The group leader initiates questions and prompts speakers. But Megan soon resumes her role as authority figure by offering suggestions liberally; she ends up doing over half of the talking. However, having a group leader other than the implicitly authoritative Megan did seem to involve students more in the flow of the conversation (suggestions offered and questions asked) than in the first group Megan worked with above.
The second group stands out in my memory and field notes for the way students seemed to control much more of the conversational floor. At right about the half-way point Megan and Laura switch groups. The flow of conversation seems strong and students readily offer answers to Megan’s prompting questions. But the conversation becomes really dynamic as Megan asks the writer about her paper. The writer talks about her paper on Britney Spears. Megan asks about sources. The writer says none. Megan asks if anyone can suggest texts/sources for her. Student Two suggests Foucault and why. Megan summarizes her words. Student Four chimes in. Student Two offers another suggestion. Student One offers how Silberstein could be used. Megan agrees. Student Five offers more on how Silberstein could be used. Student Two questions/asks for clarification and offers how she sees Britney Spears in the media all the time. This causes the writer to explain more. Megan joins the conversation on Anna Nicole and Britney in the media all the time dealing with substance abuse. Student Five joins in. The writer describes an article she found on Anna and Britney. Megan says they have “very insightful comments on each other’s papers” and suggests they incorporate texts from class, “awesome.” Notice how the conversation involved much more dynamic uptake with more students after Megan openly asked for suggestions from all. I spoke with Megan and Laura afterwards and they both felt that this peer review was a great success.
The student questionnaires offered feedback that seems to support Megan’s over Laura’s view of in-class interactions. Like Laura, one student commented positively on Megan’s personality: “I liked the attitude she had. She was always willing to help us. Very dedicated to her job.” However, five students commented on what they viewed as Megan’s lack of overall participation in the classroom: “She needs to be more obvious in class. Then maybe students will want to go get help. Because it seemed like she wasn’t involved.” Another said, “As far as having her in the classroom, I did not think it was helpful. I rarely even noticed she was in our classroom. I don’t think they need to come to class.” Another, “Maybe the tutor could plan some activities and get involved more.” Another, “Didn’t find it too effective.” And the fifth, “She didn’t help out that much.” Perhaps Megan’s initial worry over becoming too much of a TA, and subsequent hesitancy to take on any authoritative instructional role in the classroom (besides peer review leader), actually hindered her from realizing her full potential in the classroom, though it might have helped her during one-to-one tutorials.
Reciprocal Care in Peer Response Writing Groups, and Beyond: Teams Three, Five, and Six
Finding Her “Cool to Care” Niche: Madeleine on Location
The peer review session I was invited to for Team Three had a very different feel from the ones I report on with Teams One and Two above. Students in Sydney’s class were revising their annotated bibliographies for their final portfolios:
Ten students are in attendance. Sydney and Madeleine enter the classroom together. Rather than have a peer-review guideline sheet, Sydney simply passes around a handout on annotated bibliographies. Madeleine is sitting in the front row among the students. Sydney gives instructions on where to go from their previous personal responses. They are to partner up and, one, write in pen or highlighter what they can keep for the annotated bibliography and, two, write in what is missing. Information from the longer responses is to be brought down to two- to three-sentence summaries. Sydney writes these instructions on the board, and says that she and Madeleine will move among the groups. As students begin, Madeleine goes up to Sydney at the front with the assignment sheet and asks for some clarification. Then Madeleine begins to talk with the student next to her about the task. Madeleine uses the instruction sheet to help this student ask questions of her partner’s text. Sydney moves quickly from group to group. (Sydney commented during her interview that she felt that Madeleine often lingered too long one-to-one with students during such class activities, rather than “roaming the room.”) Madeleine refers to an article they read and continues to talk about how that relates to the task. Madeleine then moves to the student’s partner, doing the same thing, explaining the task in more detail. Madeleine moves to another student; asks if he’s doing ok; repeats the same further explanation. Sydney gathers the class’s attention and talks about evaluating the source. As she describes evaluation, she looks over, gesturing to Madeleine. Madeleine adds to what Sydney is saying about evaluation, describing the idea of the credibility of the source and where it came from, or if it might be biased. Then Madeleine continues to move among students. She approaches two young women sitting in the back, and there appears to be some pre-established rapport as they begin to chat and laugh. The students ask about her being sick; they ask about what she studies. (They are off-task, but only for about a minute, and these students are already garrulous before and after Madeleine moves on.) Madeleine leaves the room for two minutes, comes back and sits in her seat. Sydney tells them to pull out another article and do the same process on their own bibliographies. Madeleine chats with me a bit. Sydney begins meeting one-to-one with a student up front with his paper. A student close to Madeleine asks what he’s supposed to be doing right now. She explains. Then he asks her about a paper. A student behind Madeleine drops a bunch of Altoid mints. She helps him pick them up, and she throws them away. Madeleine spends the rest of the class (about five minutes) writing in her day-planner and reading a paper (maybe hers, maybe a student’s?). With five minutes of class-time left, Sydney says they can leave if they’re done. Most students begin to leave. Madeleine packs up and leaves as well.
Due to the design of this class activity, I notice that Madeleine seems much more casual and hands-off compared to both Megan (who attended class every day) and Julian. Madeleine also approached students differently. She would somewhat tentatively approach them and ask if they needed any help, rather than just assume they did. In fact, Madeleine’s attitude and actions in this peer response facilitation resembled more closely what I saw taking place with the SCSU Team Five below.
Of the ten student questionnaires I received back, all ten were overwhelmingly positive. Strikingly, while no students made direct reference to the one-to-one tutorials, nine students commented in detail on the benefits of having Madeleine in the classroom regularly. Students also wrote much more, and more complexly, than any of the other Teams’ student questionnaires. Students talked about the convenience of having a tutor in the know, a tutor closer to the expectations of the class, a tutor they trusted. One student wrote: “In English 104 [the first part of the stretch-course] I did struggle in class because I had many questions that I needed to be answered but was scared to ask, but when having a tutor you know that you can ask questions.” Another, “The in-class tutor always raised questions in class. She always let us know when we weren’t meeting the expectations of the course. For example many of the students were only focusing on content and our tutor told us that we had to focus on meaning.” Another, “In-class tutors give the professors a break and also are very helpful to the students when the professor is occupied ... When needing help in class and the teacher was helping another student having her there to answer questions.” Another, “We got a lot of attention during class. It was like being one on one.” Another, “Not having a tutor [in 104] was somewhat more difficult to receive help because there was only one instructor. Having two has made questions and help a lot faster.” Another, “It was weird at first, but later on having the tutor really helped. The in-class tutor was like a TA for the class who goes around and helps a student in need. It really helped me, because the tutor gave me ideas and thoughts to think about what I was writing about.” Another, “I had a better understanding because the tutor was willing to be a part of the class.” Another, “They help give ideas to the class, as well as brainstorming situations with us.” And finally, “She branches out a lot of good ideas during discussion ... I like how she joined class conversations. She always gave her feelings on what an article meant to her. Hearing her thoughts gave me ideas ... Many of my questions were answered because if Sydney was busy the tutor would help me.”
Learning Disability and Response-Ability: Gina on Location
Fresh from having taken the same developmental writing course the previous year, Gina from Team Five capitalized on the bond she already enjoyed with instructor Mya. During a peer review and response session in week four, I witnessed an amazing moment—something I had never quite seen before—that immediately piqued my interest.
I noticed one student in particular, Max, having a visibly tough time understanding what he was supposed to be doing, while his two peer group partners seemed to be experiencing no trouble at all. Gina, who was circulating around the room, reported later in our interview that she, too, saw that Max was having trouble. “I noticed Max looking nervous over in his seat so I went over to see what I could help him with. His partners Kim and Adrianne already had their computers set up and were starting the assignment. Max wasn’t as far along. He hadn’t even logged into the computer,” she said. Gina spent much of the remaining class session helping him get on track with the multiple organizational and communicative tasks students needed to negotiate during this peer review and response session: working with online files, following the response guidelines and instructions, and reading and offering feedback to his group members. Gina told Max not to worry too much about the comments his partners were giving him, but rather to focus on the comments he was writing for them.
As Mya circulated around the room, she went over to Max’s group. Max groggily said “I’m tired today, the weather.” Gina continued to good-naturedly and patiently help him navigate the review process. She turned to his two group members at one point for help. Kim came over to help out, succeeded, and then moved back to her computer. At one point, Max deeply sighed and Kim chipped in a tip on commenting. Max said “yeah, yeah, yeah” in relief. A few minutes later, Max said to Gina that he was “falling apart” and “can’t concentrate.” She continued trying to coach him on how to handle things.
After class, Max came up to me, we said hi, and then he just stood there for a second. I asked how he was doing. He told me that he was not feeling all that well and that he was having a hard time with this peer review. We chatted a little more before he left for his next class.
Later Gina would tell me, “I felt bad for Max because he was very overwhelmed and also not feeling well. I tried to make him feel like he will do much better with his group-mates’ advising in a less stressful environment so it’s fine that he is not really doing anything during class.” Part of the problem, and one that distinguished this class and partnership from others I have studied, was the amount of technology Mya uses in her courses. Mya always teaches in wired, computer-equipped classrooms. So, unlike the peer response sessions I reported on above, the participants in this study not only had to process the typical logistics of peer response, they also had to negotiate the nuances of the technology involved. (Another thing that may have contributed to Max’s discomfort, suggested by his approaching me at the end, was my very presence in the classroom to begin with. Perhaps Max’s knowing that I was there to observe and potentially report on those observations contributed to the sensory-overload and anxiety he experienced.) The entire visit, I noticed how patient and caring Gina was with Max. And I started to think that there was something very important taking place here.
During a visit one month later, I noticed both Max and his peer response partners taking on much more interactive collaborative roles:
Max, today, seemed in much better shape—no visible worries, etc. I noticed that rather than frequently asking Gina for help he seemed to be much more involved with his two partners. In contrast to what I witnessed during my earlier visit, Max seemed to have a good grasp of what he was supposed to be doing. He asked his partners a question and they helped him; they asked him questions and he helped them. I was impressed with how these students, especially Kim, were collaborating with Max. In contrast to my last visit, Gina only came over to the group a couple of times. At one point, the group talked about works cited pages and the fact that neither of Max’s partners did one, but that he did. Gina ended up spending much more focused time with other students, including a male student who was having difficulty with citations and formatting. Following Nelson’s progression, Max seemed to be moving smoothly from dependence to interdependence and independence with his peer response group.
Gina gives her impressions of her involvement with Max and his group members in this second peer review session:
Like always Max was right on track with what he was supposed to do. He was just double-checking that he was up to speed. I looked at Max’s work and realized he was very ahead of the game. He had his e-portfolio set up very nicely. He already had one paper posted and was almost ready to post another. He then asked me to look over the second paper he was going to post before he posted it. I looked at what he changed and what Mya asked him to look over. He took everything Mya said and changed it. His paper looked very nice. I told him it looked great and it should be ready to post. He wanted a second opinion so Mya was called over. I was very happy that Max feels so comfortable to ask my opinion. I have noticed that every class he calls me over at least once. I am happy to talk with him and assure him he is on track with everything.
In their end-of-term questionnaires, ten out of eleven students felt Gina’s presence in class was beneficial, and only one was ambivalent. (The ambivalent student only wrote a couple of yes’s or no’s indeterminately.) Several students commented on their overall impressions of having an in-class tutor: “makes help only a nod away. It was great.” Another, “She was very helpful with papers and assignments. I think it was a good idea to have one in every class.” And, harking back to the comments by students involved with Julian and Anne’s overall unsuccessful partnership in Team One who felt that their course-based tutor did not know what was going on in the course, one student wrote: “I liked it because it gave you someone to help you with your work that actually sits in the class and knows what’s going on ... so maybe people feel more comfortable that way.” I also took the chance to interview the student Gina and Kim worked so closely with, Max. He told me that he really appreciated the attention he received from Gina, his group members, and Mya. He said he especially appreciated Kim’s help (for more on this case study, especially Gina’s and Max’s personal stories, see Corbett “Learning”).
Paying Care and Trust Forward: Kim and Penny on Location
As mentioned above, two of the students from Team Five’s class, Kim and Penny, were recruited to become course-based tutors for the following semester with an experienced adjunct instructor, Jake. Team Six illustrates what can happen if continuity is carried forward (genealogically, if you will) from student-to-tutor, from tutor-to-tutor, from instructor-to-instructor, from tutor-to-instructor. One of the several threads that linked the participants from the two courses was the interaction between Kim and Max. Recall that Max and Kim were peer response group partners and, like Sara, Kim found the experience of working with Max highly rewarding. While it would be easy to overestimate the effect Max had on Kim’s performance as either a student writer with Mya or as a course-based tutor the subsequent term with Jake, one cannot help but believe there was indeed some inspirational paying forward. As with the other case-study teams, I sat in on and took field notes of in-class peer response sessions with this team. The sessions I witnessed fell very much along the continuum of directive/controlling and nondirective/facilitative interaction seen especially with Madeleine from Team Three and Gina from Team Five. In short, in the sessions I witnessed, Kim acted much the way Sydney from Team Three reported Madeleine acting during class discussions—more outgoing and authoritative—and Penny acted a bit more like Megan from Team Two—more reserved during class discussions, but more hands-on during peer response sessions. When Jake started addressing the entire class for the session with Kim, Kim joined in with Jake very much as a co-teacher, even finishing his sentences a couple of times. In contrast, when Jake spoke to the entire class that Penny was attached to, she did not join in like Kim had. However, once students became engaged in responding to each other’s essays, both Kim and Penny became very involved in the groups. Jake had encouraged both classes to write on each other’s essays as well as talk about them. Neither Kim nor Penny hesitated to write comments on student papers as they discussed their suggestions. But these tutors went even further in embracing authoritative roles in their respective courses, and together.
During my interviews with all participants in Team Six, and from the journals both Kim and Penny were keeping on a class-by-class basis, I came upon some compelling findings. Because Kim and Penny were both working with the same instructor, Jake, albeit in two different courses, this team had the opportunity to collaborate much more interactively than any other CBT partnership I’ve studied. And they took full advantage of that opportunity. Allow me to end the reporting on the case studies and stories in this book by quoting at length from Penny and Kim’s journals. (One of the strengths of both Nelson’s and Brooke, Mirtz, and Evans’s studies is the extensive reporting and analysis the authors provide from participant journals.) We will begin with excerpts from Penny’s journal:
Tuesday, April 13th
Yesterday, Jake handed out the assignment that Kim and I came up with. The assignment is much more specific so the students are able to understand and follow it. The assignment is called “The American Dream Museum Exhibit” [See Appendix D]. The students are to get in groups and bring any kind of artifact that they think represents the American Dream. On Thursday, the students will bring at least ten artifacts to class and explain to their group why they chose that. On Thursday, students will also narrow down their items to five each. These five will be the items they include in their exhibit and presentation. In a few weeks, the groups will present their museum of what they think represents the American Dream.
Thursday, April 15th
Today, the groups met with artifacts they brought in or images they printed and cut out. I sat with group one for a while, just observing and listening to what they had to say about what they brought in. It was interesting to see the different perspectives they had of the American Dream. Each member brought something different, but in the same way, that one artifact connected with a group member’s different artifact. One group member printed out a picture of a white picket fence, and another member brought a picture from the newspaper of a perfect-looking house. I suggested that the members can use both of those images in the presentation to add to the exhibit.
One member brought all portable items of technology (cell phone, iPod, etc.) and another member brought a McDonald’s to-go bag. Both members had the different explanations for the artifacts, but I pointed out one way in which they tied together. I mentioned that both could represent mobility and how valuable time is to Americans. As I came back to this group later in the class, they had built off that idea even more.
As I moved onto other groups, and listened to what they had to say about their items, I was impressed with how different the results were. After listening to each member give me an explanation of all the artifacts they brought, I told them about things that I had not heard from the other groups, but I heard from them. Since they are covering the same topic, it’s important that they all have different artifacts so things don’t get repetitive when they present their exhibits.
Wednesday, April 28th
Jake sent the following email:
Hi Kim and Penny,
I wanted to give you both a heads up that I will not be in class on Thursday. However I do not want to cancel class since each group needs to work on their exhibit design and layout.
While I do not expect them to stay for the entire class, I would hope that they take the opportunity to organize their exhibit in detail and have each member give a tour of their artifacts and introductions. Each of you can provide your feedback and insights to the groups.
If either of you have any questions, please feel free to email me or call or text my cell.
Thanks and I’ll see you on Tuesday.
Jake
Thursday, April 29th
As the students walked into class, I explained to them what the agenda would be. They knew Jake wasn’t going to be in class, so I told them once I met with their groups, they were free to leave class. Each group had to explain what their title of their exhibit was, read their introductions, and give me a tour of what each artifact in their exhibit was. After each group was done presenting to me, I asked questions to keep them thinking. If they had artifacts they didn’t explain well, I asked them what the artifact’s symbolism or representation was of the American Dream. Each group was well-organized and knew how they were going to present the exhibit to the class. I made sure to ask the students how and where in the classroom were they going to set up all of their artifacts. If I was unsure of a question, I sent Jake a text. I did not want to tell the students the wrong answer, because it was Jake who was grading the presentations, not me. Before the groups left, I told them to come to class on time the following Tuesday and to be ready to present. Next Tuesday and Thursday the groups will be presenting their exhibits.
The following excerpts are taken from Kim’s journal entries. The first one offers her take on the day Jake was not in class. The latter two provide reporting on the days students delivered their “American Dream Museum” exhibits near the end of the term.
4/29/2010
Today was very cool. Prof J. was unable to attend class. So I got to act as the prof for the day :-) The students went over their exhibits and what they have so far. They both seemed to be very good and well thought out so far. However, only two people from the second group were in class today so I didn’t really get a great sense of how their exhibit will go. Both groups read their introductions as if they were presenting it to the class ... What I enjoyed most was that even though these introductions were not being peer reviewed the students gave each other criticism and helped them reword things as well as encouraged them when they enjoyed what their peer wrote!
5/4/2010
TODAY’S THE BIG DAY!! :-) The students have been working on their exhibits and they will finally be able to present them. Unfortunately, one student was absent. Thus, one of the groups was short. Also two students came late so they were unable to present their projects today with their groups. The projects included a movie (The Pursuit of Happiness), a baseball card, lots of pictures, poems and songs, a water bottle, and more. I loved the explanations and after their intros last class and then again today, I could see a huge difference. The students who actually came today had made the changes to their intros for the objects and they came out very well ... The students really went in depth and took the explanation to another level. Also, the students didn’t really seem nervous. They knew why they chose their five objects and discussed them well. One of the poems that was shared also made me think a lot. It was titled “The American Dream” and the student used the poem to stress how the American dream is represented in a negative way. The poem basically goes into how once people are living the so called “American Dream,” making money and doing well for themselves, they forget about the individuals who do not have wealth or even places to rest their head at night :-(
5/6/2010
This semester is coming to an end :-( We started the class off by having the last three students present their projects. These three students actually presented their projects separately because their groups went on Tuesday. It really made me remember back to the second and third class hearing the students read their essays and being embarrassed and rushing through them, whereas today they mostly spoke clearly and with confidence. I could definitely see the growth in such a short amount of time.
In their questionnaires, students from both of Team Six’s courses reported very much the same sort of high satisfaction with the courses as with Team Five above.
Discussion: Direction, Nondirection, and Misdirection in the CBT Classroom
The above scenarios begin to clearly illustrate just how complicated—or complementary—things can get when you combine various instructional aspects of the parent genres, as well as different participant personalities, goals, and instructional experiences and backgrounds. Of all the teams, I initially thought Team One would be the most successful. Julian, with all his experience, seemed like the ideal “writing advisor” tutor for this project. Anne likewise had the experience, was studying in the field of Composition and Rhetoric, and showed early enthusiasm toward the project in general. Yet Julian summarized the overall experience as going “sort of poorly, less than mediocre.” Julian pointed to two primary reasons he felt the partnership did not work well: lack of communication with Anne, and confusion as to what his specific role was in the class. Julian felt that his minimal presence in the classroom affected his relationship with the class, creating an awkward, “ambivalent space” between himself and the students. He felt that the students and he never got to know each other. So, he said, students were “like ‘Julian’s going to be our writing consultant, is going to be part of the class,’ and then I show up twice and nobody ever hears from me.” Anne voiced two main reasons why she felt the partnership floundered: her lack of collaboration with Julian, and Julian’s nondirective instructional approach. On her initial high hopes that quickly began to fall, she said: “When I met Julian at the beginning I thought this would be great; this has such great potential because we both have such similar philosophies, basically teaching philosophy ... But [laughter] in practice it wasn’t quite as good.”
Despite the relatively greater amount of tutoring experience both Julian and Anne possessed, they were ironically unable to perform with the sort of flexibility and adaptability that the other teams displayed. While we might point to instances where Julian did get directive, as when he more or less “forced” the student to read his paper aloud during the second peer review, I would argue that Julian did not really do that bad a job during the peer reviews, evidenced by his trying to play what he felt was his role of question-posing facilitator, or “reserved advisor”—in short, to play the role of Decker’s “meta-tutor, encouraging students to tutor each other” (“Diplomatic” 27). The greatest tension seemed to be in Julian’s debatably inflexible minimalist/nondirective approach. Repeatedly, as illustrated especially in the peer reviews above, the data point to instances where Julian was trying perhaps too hard to play it safe, to attempt at all costs to meet what he felt were Anne’s expectations, to stick to the prompts closely and carefully during interactions with students. Perhaps, as with Megan, Julian worried too much about taking on a teacherly role. His feeling that Anne was the teacher, and he was there to be a “reserved advisor,” may have actually confounded the students’ expectations that he should offer whatever direct suggestions he could. His attempt to be as peer-like as possible may have had the opposite effect. Clark’s study of directive/nondirective tutoring with students who labeled themselves “poor” writers found that these students perceived their tutors as more successful when the tutors were directive, contributing “many ideas” (“Perspectives” 41). In contrast to the case studies of Teams Three, Five, and Six, where tutors embraced their roles as authority figures, Julian’s attempt to stick to what he felt were Anne’s expectations, coupled with his limited presence in the classroom, only bewildered students who, it seems, wanted to know more than anything what he thought. Julian’s repeated efforts to stay within Anne’s expectations came across to students as an unwillingness to model a sense of “what would you do?” Further, while I’ve been tempted to make tentative claims about Julian’s actions during tutorials in terms of gender roles, like Black, Denny, and Judith Butler, I believe gender is performative based on context. Black argues that though feminist theorists have frequently claimed that talk between women is “cooperative, supportive, non-competitive, nurturing, and recursive,” her extensive study of teacher-to-student conferences revealed that
female teachers dominate female students just as male teachers do ... they are less likely to cooperatively overlap their speech ... female students initiate fewer revision strategies to female teachers and hear less praise from female teachers ... All this together does not add up to the picture of cooperation, support, and shared control that is often presented as characteristic of female-to-female speech. (68; also see Denny 101-02)
While we saw the same sorts of instructional and conversational “domination” during the one-to-one tutorials from Madeleine and Megan as well as Julian, a host of other contextual forces worked to undermine the success of Team One.
For Team Two, overall, Laura and Megan reported enjoying working together very much. Megan talked about initial role negotiations with Laura:
It was kind of hard because I’m not a student and I’m not her teaching assistant. But I’m not involved in the grading, but I’m supposed to help them ... it was an opportunity for the students to get some of the most personal and helpful advice in their writing, because they have someone who’s there who’s not intimidating, because it’s not their professor ... I think at the beginning she was thinking that one day I could lead the class. And so I wasn’t sure [laughter] what to do.
Megan explained that she worried she would become “more of the TA or assistant TA and not the tutor.” As we noted earlier, she was relieved when other, less authoritative roles were agreed upon.
Laura commented on how pleasant Megan’s personality was, how she was always smiling and cheerful, how she always had a positive attitude, and how she was easy to talk to and work with. In contrast to Megan’s sentiments above, Laura described her working relationship with Megan in terms of wanting to keep their interactions as peer-like as possible: “I kind of see her as my peer. Instead of asking her to do this and that I wanted to get her feedback. We kind of designed the class together.” Laura described how early in the quarter she and Megan would meet once a week to discuss weekly schedules, class plans, and upcoming assignments. They would also have “meta-teaching” conversations after class. Laura described Megan’s role as “conversation partner” who would “have a lot of things to say about texts” during conversations in class (though she did not distinguish between whole-class conversations and conversations involving peer response).
However, we saw that Megan and Laura had different perceptions of Megan’s usefulness in the classroom. While Laura felt that Megan was an important day-to-day in-class player, Megan and the students felt that she wasn’t quite living up to her participative potential. And while I think she did a great job as “peer review person,” especially in the second session, students didn’t seem to get the same sense of the importance of her presence. Perhaps the introduction of the “oral” peer review confused the students at first, and it took a little bit of getting used to before they could feel the full benefits of that method. In the fourth edition of A Short Course in Writing, Bruffee distinguishes between two forms of peer review. According to Bruffee, corresponding is a more exacting and rigorous form where students write to each other about their papers, and conferring is an immediately responsive, conversational form more attuned to the writer’s needs. Bruffee argues that “the most helpful kind of constructive conversation combines the two ... So in peer review you write to each other about your essays first, and then you talk about them” (170; also see Gere and Stevens). It seems that, by the end, Team Two was certainly moving in this two-fold feedback direction, exercising and flexing students’ abilities to negotiate directive and nondirective strategies, and Megan’s ability to coach these peer-to-peer pedagogical skills. It also seems that from the first to the second peer response session, students were moving from dependence to interdependence. Concurrently, it appears that Megan and Laura were moving away from directiveness and toward a more minimalist, facilitative role. This supports Nelson’s claims regarding the inverse relationship between students taking and tutors/instructors relinquishing control when working toward successful peer response. Simply put, by that second peer response session I witnessed, the attitude, action, and language of control and directiveness had shifted from Megan to the students. This also coincides with Harris’s four reasons why writers need writing tutors, that valuable analytic link between tutoring one-to-one and in small groups. While control seemed to flow from Megan to the students, explicitly realizing Harris’s first reason—encouraging student independence in collaborative talk—it might have more implicitly helped students realize the other three reasons: assisting students with metacognitive acquisition of strategic knowledge; assisting with knowledge of how to interpret, translate, and apply assignments and teacher comments; and assisting with affective concerns. Yet, overall, students still wanted more from Megan in the classroom.
Madeleine and Sydney from Team Three expressed mixed reviews of their partnership. The tutor, Madeleine, narrated her satisfaction with the experience from start to finish. She enjoyed all aspects of her involvement: working with Sydney; working with students; and working with the subject of the course, race and citizenship in the nation. On her initial interactions with students, Madeleine said:
I think at first they were like, “What the heck, who is this person?” They weren’t mad or anything [laughter]. They were just kind of like “ok.” They didn’t know why I was there, but it was cool. After a while they just thought of me as kind of like another student ... They really seemed to appreciate the things that I said in class and after a while I think it was really comfortable ... And they didn’t feel, at least as far as I know, they didn’t feel like I was trying to be authoritative.
And on her initial role negotiations with Sydney, Madeleine reported: “At first I didn’t know what my job would be in the class. And we were just like trying to work it out the first couple of weeks of the quarter.” Madeleine goes on to describe how she soon found her niche in the classroom as “discussion participant.” During an early class discussion of readings, Madeleine joined in. Afterwards, Sydney praised Madeleine, telling her that she felt the students had participated in a way they “might not have been able to and she [Sydney] might not have been able to. She felt like the students listen to me. Not really more than they listen to her, but they tend to agree with her. So whatever she’s saying, whatever she’s contributing to the discussion, they think ‘oh that’s the right way.’”
Sydney’s take on the partnership, however, portrays a much more conflicted point of view. Sydney said that she was initially worried that someone else’s presence in the classroom would make her feel like she was being watched, but that, fortunately, did not end up being the case. This may be due to her impressions that, echoing Madeleine’s own comments, Madeleine really took on more of a peer role in the classroom, seeming much like just another student. Sydney did, however, detail further initial misgivings that—in her mind—ended up affecting the rest of the quarter:
Initially there was a lot of frustration just trying to match two personalities, two kinds of teaching styles, trying to negotiate where roles were ... I remember the first couple of days I felt like there was a little bit of showing off going on on her part. Maybe she felt the need to prove herself to show [herself] as capable as the TA. Maybe she was trying to show me; I don’t know. And I felt that that kind of shut down conversations with my students a little bit because they might have felt intimidated a little bit you know.
But Sydney also talked about how she eventually came to view her interactions with Madeleine in a different light: “In the end I think it took us a while, but I feel like in the end we finally at least began to kind of click and mesh.” A big part of this eventually-realized mutual understanding may have something to do with Madeleine’s overall motives for and attitude toward this course. In her own words: “The most important thing for me to teach the students was to be active learners in the classroom. I hoped that they would view my enthusiasm for the content as an example of it actually being cool to care.” I believe it was this ultimate clicking and meshing that I observed late in the term.
While we might rightly question Madeleine’s performance during one-to-one tutorials, I certainly maintain my belief that Madeleine’s authoritative style was effective and valuable in the classroom. Delpit makes a related point that hints at a possible reason why the diverse students from Team Three identified so closely with Madeleine:
The “man (person) of words,” be he or she preacher, poet, philosopher, huckster, or rap song creator, receives the highest form of respect in the black community. The verbal adroitness, the cogent and quick wit, the brilliant use of metaphorical language, the facility in rhythm and rhyme, evident in the language of preacher Martin Luther King, Jr., boxer Muhammad Ali, comedienne Whoopi Goldberg, rapper L.L. Cool J., and singer and songwriter Billie Holiday, and many inner-city black students, may all be drawn upon to facilitate school learning. (Other 57; emphasis added)
Another way to consider the implications of Madeleine’s performance is this: when moving tutors to classrooms, we could encourage a more authoritative approach, but when they move back to the center (or wherever else one-to-one or small-group tutorials happen), tutors should resist the temptation to overuse what they know about the course and the instructor’s expectations. One of the reasons the tutorials conducted by Madeleine, and to a large extent by Megan and Julian (Appendix C), seemed so tutor-centric was that all three of these tutors tried perhaps much too hard to speculate on what the teacher wanted. Most of the linguistic feature and cue ratios—total words spoken, references to the TA or assignment prompts, and interruptions versus main channel overlaps and joint productions—detail salient imbalances, imbalances that overwhelmingly point to almost complete tutor control. While this discursive imbalance luckily did not seem to affect the overall successful partnerships of Teams Two and Three, it certainly did not help the unsuccessful collaboration in Team One. The overarching lesson? Tutors might hold on a little tighter to some nondirective methods and moves that could place agency back in the hands and minds of the students. Of course, unlike the other tutors, Madeleine had not been exposed to the literature on directive/nondirective tutoring, nor could I find any indication that she was encouraged to practice a particularly nondirective method. Perhaps, if she had received a bit of training in directive/nondirective strategies, then Madeleine’s fight-for-the-floor session might have sounded more like Sam’s parallel session, or even more like the sort of non-intrusive, flexible collaboration I witnessed during my visit to Madeleine and Sydney’s classroom during peer response. Maybe then Madeleine could have exhibited some of those nondirective methods and moves showcased by Sam from Team Four in all of her one-to-one tutorials. Yet perhaps, as Nelson discovered, Madeleine had earlier moments of directiveness, but as the course moved on, and by the time I saw her more “laid-back” attitude and action in the classroom near the end of the term, she had pulled back on her interventions as students became more self-directed, interdependent, and to varying degrees independent.
The tutor for Team Five, Gina, felt her involvement as a course-based tutor for the class went “different, but better than I thought it would be.” She thought it was wonderful that students had the option of asking either her or Mya questions during classroom activities. She also felt she was able to engage with students on a personal as well as academic level, even though she said that she usually sat at the head of the class with Mya when she was not circulating around the room. She also did all of the readings for the course, but only did one writing assignment to show students how she approached it. (Something none of the other case-study tutors undertook.) Gina said that if she could give other course-based tutors any advice, it would be not to overly worry or hesitate to approach and interact with students. She felt that in the first few weeks she did not want to bother or interfere too much, but then she started to realize that students really appreciated her interventionist attention.
The instructor, Mya, said that her familiarity with Gina allowed Gina to take a very active and highly informed role in assistant teaching for the course. She said Gina started off a little slow, but very soon she felt that students started to warm up to her and really lean on her for questions and support. Gina (echoing Madeleine from Team Three) would often help jump-start class discussions if students were initially silent. Mya felt that Gina was like a “life preserver” that she could throw out at any time in the classroom for any particular student who needed it. Although she did feel this class was stronger than usual in terms of engagement, she very much appreciated having Gina close by to help circulate and give more individual attention to others. She said that even though Gina did not say a lot in class all the time, she was very upbeat and always had wonderfully positive energy (reminiscent of Megan from Team Two). Mya said she believes Gina’s LD actually enabled her to make even stronger connections with other students, especially Max, though she said “you can’t tell Gina has a LD by just talking with her.” Mya praised Gina’s communication and organization skills. When I asked her if she would do anything differently next time, Mya said that she would have liked to plan things out a little more with Gina, perhaps with regular weekly meetings, so Gina had more say in what was going on (I have heard this advice several times before from participants in CBT). When I asked her if she’d be willing to have another tutor attached to her class, she said, laughing, “I would not want to do it without one.” She felt that having a tutor did not demand any extra time on her part and was only a benefit. She felt that working with Gina made her think about just how important it is to slow down sometimes and make sure things are clear to all students.
Much like the in-class peer response session I witnessed with Madeleine and Sydney from Team Three, I saw Gina responding at the students’ point of need. In other words, the potential for the tutor to control or over-direct in this situation was mitigated because the students themselves initiated, and to a large degree controlled, the call for tutorial assistance. Yet scholars disagree on what might be the best setting for fostering such student-centered control, including for minority students and students with LDs. In “Cultural Diversity in the Writing Center,” Judith Kilborn describes these contrasting philosophies in terms of those who believe either: one, minority and diverse students should be mainstreamed into the general population “to prepare them to interact with the diverse population they will meet in the workplace”; or two, “minority students are best served by services designed and run by minorities for minorities; they feel that such services provide a sense of community and cultural pride” (393). In “Discourses of Disability and Basic Writing,” Amy Vidali questions a claim made by Barber-Fendley and Hamel that LD students should be separated out from the writing classroom, especially the basic writing classroom, for additional support. Vidali argues, rather, that similarities abound between LD and non-LD basic writing students: they are both talked about in terms of difficulty and overcoming deficits, they often share identities and classrooms, and both are “defined according to a dominant (white, male, abled) other” (53). Vidali urges us to do what we can to unify basic writing and LD pedagogy. She believes that LD students would then benefit from the same structural support systems afforded basic writers in all their various diversities. I find myself agreeing with Vidali. When we consider the effects of the interactions of both Gina and Kim with Max, Vidali’s assertions begin to make very good sense—for all participants. In a way, then, the arguments for more unified instructional support systems for diverse students echo the arguments for closer writing classroom and peer tutoring coordination described in the Introduction (see Corbett “Learning” for more on this particular case study).
All participants from Team Six voiced high satisfaction with their experiences together. Overall, Kim described her experiences as highly positive and rewarding: “I felt that working with the students taught me a lot. It actually helped me with my own study habits and certainly helped me become more patient.” Reflecting back on their interactions, like Sara, Kim found the experience of working with Max rewarding. She told me that she would sometimes email Max when she had questions about an assignment. She went on to say:
When working with Max I remember him being a very intelligent young man. He had wonderful thoughts and ideas and always put one hundred percent into all of his work. Even when doing public speaking projects Max gave his all. He was frightened to speak in front of the class but, as his partner, I saw him practice over and over until he was confident. Sometimes Max just needed someone there to repeat or explain the assignments as well as a partner who was willing to practice with him over and over until he felt comfortable.
Likewise, the other tutor, Penny, reported an overall positive experience, especially in relation to her field of study, Elementary Education:
It helped me jump into being a mentor or teacher of some sort. It helped encourage me to dive right in and help students, no matter what age. Working with college age students for this project was a new experience, but still had the same concept of teaching and helping students. I had to figure out the correct way to communicate with them and how to approach them. I learned a lot from the experience, mostly about myself and how capable I was to help others.
During our interview, the instructor, Jake, talked at length about the project, highlighting how much he felt all participants benefited from their close collaboration. He said, “the key to all of this, in my mind, both for the tutors and me the instructor, is flexibility and being open to different approaches and different ways of structuring the class.” He said that he thinks it is important not just to find out how the instructor wants to realize participant roles, but also to consider the peer tutors’ desires. (He did express some relief, however, when he saw just how active and involved the typically shy and quiet Penny was during small-group work, compared to her more ostensibly passive performances during whole-class discussions.) On the benefits of having developmental students who themselves had just taken the course serve as peer tutors, he said: “It let students know that here is someone who went through the same struggles as you went through and were successful in their journey through the course.” He went on to say that he felt that course-based tutors do not even need to be A students to have a positive effect. He feels there is some benefit in being able to say “Look these are real people who worked real hard to work through the writing process to improve their writing, and they are just here to help.” Jake said that he also gained quite a bit from this experience. He felt that the American Dream project, especially, made him consider the possibilities for students designing their own group projects. He felt that the creativity and care Kim and Penny demonstrated throughout that project, in their negotiations of what pedagogically might work, “might encourage me to be more creative. They [the tutors] have the benefit of tapping into many different professors who are equally or more creative than I am, and I have no problem stealing from them and learning from them.”
What I believe I saw emerge with Team Six was a heightened level of collaborative trust among the participants. This heightened level of trust enabled Kim and Penny to intervene actively in all phases of the students’ writing processes—from invention, to revision, to delivery. In “A Non-Coda,” Muriel Harris revisits her 1992 essay “Collaboration Is Not Collaboration Is Not Collaboration,” where she delineated the boundaries between one-to-one tutoring and peer response. In her more recent essay, she argues that peer response groups could be utilized for pre-writing activities like brainstorming how to approach the assignment, trading ideas on how to incorporate readings, and initial thoughts on topics and the narrowing of topics—if instructors are willing and able to facilitate such activities. These are precisely the sorts of generative pre-writing activities we saw facilitated with such aplomb by Kim and Penny in the “American Dream” project. While the experienced tutors from the UW case studies were worried about trying to make sure students were meeting the expectations of the instructor’s assignments, these “novice” tutors were creating their own assignments and doing all they could to assist students in generative inquiry, and all other phases, in order to succeed and learn something. In short, and even more than Madeleine, these tutors were vividly enacting and modeling creative and critical thought and action for the benefit of their peers/mentees—something all teachers of writing hope and strive to do.
My research over the years, including these portraits of CBT teachers, students, and tutors in action, has persuaded me that the pros of encouraging tutors to practice at the edge of their expertise, by and large, outweigh the cons. Case studies like the kind presented here could help all stakeholders in peer-to-peer teaching and learning consider strategies and rationales for what methods might be characterized as directive or nondirective in various circumstances and how to try to resist moving too far along the continuum in either direction, in a variety of situations, in and out of the classroom. Perhaps with the knowledge gained regarding directive and nondirective pedagogical strategies and methods, CBT practitioners can continue encouraging colleagues (and their students and tutors) in writing classrooms and in writing centers to make and map similar explorations—to take similar complementary journeys—serving center and classroom.
Placing students and tutors at the center of classroom practice, on-location tutoring reforms classroom hierarchical relations and institutional structures; it shows students (tutors and the students with whom they work) that their work as knowledge makers matters and that they have much to contribute to one another, to faculty, and to the institution as a whole.
– Laurie Grobman and Candace Spigelman
The line it is drawn
The curse it is cast
The slow one now
Will later be fast
As the present now
Will later be past
The order is
Rapidly fadin’
And the first one now
Will later be last
For the times they are a-changin’
– Bob Dylan
In the Introduction and Chapter One I discussed several variables that come into play as a result of the melding of the various parent instructional genres that inform the work of CBT. I explored the genealogy of CBT, theoretically locating it within the context of the classroom/center collaborative debate. I moved on to describe a taxonomy of the major parent genres that intermingle and hybridize in CBT—writing center tutoring, writing fellows programs, peer writing groups, and supplemental instruction—to offer participants an array of instructional choices and considerations that can at times confuse or overwhelm, and at other times liberate and substantially supplement classroom and one-to-one teaching and tutoring. I then lingered in detail on the critical issues of authority, role and trust negotiation via the directive/nondirective tutoring continuum, placing special emphasis on reasons tutors may need to renegotiate the typical hands-off, nondirective one-to-one philosophy when negotiating the “play of differences” between one-to-one and one-to-more instructional situations.
I’d like to begin my concluding thoughts by returning to two questions—in relation to the directive/nondirective instructional continuum—I asked in the Introduction: What are teachers, tutors, and student writers getting out of these experiences, and what effects do these interactions have on participant instructional choices and identity formations as teachers and learners? And how soon should developing/developmental student writers, potential writing tutors, and classroom instructors or teaching assistants get involved in the authoritative, socially and personally complex acts of collaborative peer-to-peer teaching and learning? I’ll begin by framing my tentative answers to these questions in terms of how the interrelated pedagogical concepts of authority/trust building and directive/nondirective instructional negotiations played out in all teams. I’ll move on to offer some implications of the studies and stories presented in this book for one-to-one and small-group tutorials, peer review and response, and the various choices program leaders can consider in building, strengthening, or experimenting with CBT.
Directive/Nondirective Tutoring: Implications for Tutoring One-to-One and in the Classroom
The true value of CBT, and the lessons learned from experiments in pushing the limits of pedagogical peer authority and expertise, lies in the choices it offers teachers, tutors, student writers, and program leaders and the implications these choices have on the places we work and the people we work with. When participants were brought into the closer instructional orbits afforded by CBT, the biggest adjustments they described having to make involved negotiations of instructional authority and roles, which also brought up the gravity of mutual trust(worthiness). Megan, the tutor from Team Two, worried about being too teacherly. She expressed relief when she and Laura agreed on less-authoritative roles for her in the classroom. But, as the interview and questionnaire data illustrate, both Megan and the students ended up feeling that Megan did not meet her full potential as an in-class tutor. Bruffee’s double-bind, which we spoke of in the Introduction, was plainly elucidated in Megan’s conflicted desire to be both a peer—to appear just like one of the students and to be subsequently approachable—and to offer as much help and support to these students as possible. In a sense, the TA Laura trusted in Megan’s abilities as an experienced writing center tutor to balance directive/nondirective and teacherly/studently roles in the classroom; but Megan perhaps did not trust herself enough to lean a little more toward an authoritative role in the classroom, even when offered and encouraged to act out this role by Laura. As the literature on CBT practice repeatedly points out, tutors put in closer contact with the expectations of the writing instructors with whom they are paired will have a difficult time negotiating their tutoring approach—oftentimes swinging too far toward the extreme ends of the directive/nondirective instructional continuum. And as Laura described, even though she and Megan did a lot of planning of the course together, students did not seem to know that Megan was that involved with the design of the course. Perhaps if she had embraced her role as a co-designer of the course a bit more vocally, and taken ownership of the course like the tutors from Team Six, students would have viewed her as, in fact, much more integral to their learning for the course.
Yet I must qualify these statements regarding Megan’s engagement with in-class activities, and the course as a whole, as she did take an active role in peer review. One interesting consideration for future peer review facilitation efforts is the idea of the “meta-tutor” (Decker). Recall Julian trying to live up to what he felt was his role as “reserved advisor,” a tutor who does not necessarily try to give suggestions directly on student papers, but rather tries to provide suggestions to students on how to tutor each other. This idea becomes problematic in light of the directive/nondirective continuum. If tutors are trying to be good meta-tutors and, like Julian, end up speaking too much about revising in the abstract, then they may only confuse students. I do not think there is anything wrong—indeed it might be better in many cases—if the peer review facilitator is willing to play a role closer to that of just another student reviewer. Then students in that particular response group would gain the benefits of direct modeling of things to comment on. Encouraging the use of a mix of direct suggestions along with the sorts of open-ended questioning and prompting that lead other members of the response group to contribute might be a better way to think about preparing tutors for peer response facilitation. By the second session I believe Megan had realized a great mix—one that allowed for substantial conversational momentum between students—encouraging students to take up each other’s responses and suggestions rapidly and energetically.
Madeleine from Team Three felt she was authoritative but not authoritarian—an important distinction—in the classroom. Madeleine referred to herself as a “discussion participant” in the classroom. But she, the instructor Sydney, and the students clearly intimated that Madeleine was really much more like a discussion leader. Sydney described how her initial misgivings about Madeleine began to transform as she came to realize that what she initially perceived as Madeleine’s weakness actually ended up being her strength—Madeleine’s willingness to act as a conversation leader, even antagonist, during class discussions. Paulo Freire believed this was an important, and often overlooked, aspect of teaching. In his last book, Pedagogy of Freedom, Freire urged:
It is not only of interest to students but extremely important to students to perceive the differences that exist among teachers over the comprehension, interpretation, and appreciation, sometimes widely differing, of problems and questions that arise in the day-to-day learning situations of the classroom. (24)
I linked Madeleine’s instructional style to patterns of AAVE communication in Chapter Three. A combination of Madeleine’s more natural AAVE communicative patterns, her passion for the topic of the course, and her desire to help these students do well may all have contributed to her performances in the class. Mutual participant trust was a key factor in this partnership. Madeleine’s willingness to take an active co-teaching role in the classroom added to the trust she earned from the students she interacted with on a day-to-day basis, and to the eventual trust (albeit qualified) she earned from Sydney. Yet, for all my conflicted feelings regarding Madeleine’s highly directive style—whether or not her directives were a “good” thing—I cannot help but wish that she could have played a slightly less directive role during her one-to-ones. Especially as evidenced in that long session with the student who kept trying to voice her ideas and opinions, with all the attendant overlaps and even heightened emotion involved, I wish that Madeleine could have balanced her passion for moving students toward more feasible interpretations of the text with more traditionally nondirective approaches demanding increased listening and open-ended questioning.
Going back to Harris’s four categories—exploratory talk, acquisition of strategic knowledge, negotiation of assignment prompts and teacher comments, and affective concerns—we saw Sam helping students with aspects of all four. Harris’s categories are important and can be linked to—and offer pedagogical answers to—other categorical conceptions of educational and professional learning and development. Chris Thaiss and Terry Zawacki, for example, posit that undergraduate students’ conceptions of academic writing involve a complicated matrix of variables that include generalized standards of academic writing, disciplinary conventions, sub-disciplinary conventions, institutional and departmental cultures and policies, and personal goals and idiosyncratic likes and dislikes (from both student writers and their instructors). In their four-year study of teachers and students engaged in writing across the disciplines at George Mason University, the authors argue that as students move through their undergraduate educations, negotiating these variables, they experience roughly three developmental stages: in the first stage they use their limited academic experience to construct a general view of academic writing as “what the teachers expect;” in the second stage, after encountering a number of different teacher expectations, students develop a sense of idiosyncrasy or “they all want different things;” and in the third stage, which not all students reach, “a sense of coherence-within-diversity, understanding expectations as a rich mix of many ingredients” (139).
Sam emerged as what I have come to believe is one of the most sophisticated and methodologically sound tutors I’ve witnessed during one-to-one tutorials, moving students perhaps at least toward Thaiss and Zawacki’s second stage. But she may even be helping developmental students, well in advance of disciplinary courses, toward awareness of the third stage. The authors claim that the data from the instructors and students they studied point to the notion that third-stage students experience a mix of personal goals with disciplinary expectations. Of all the tutors, Sam encouraged the most exploratory talk with students—students generally spoke much more and were much more invested in the one-to-one tutorials. As Megan finally realized in facilitating peer response groups, Sam achieved tremendous conversational momentum with students. Sam helped nudge students toward acquisition of strategic knowledge by focusing primarily on the big picture with each student’s paper: she usually spent much time talking about—and getting students to talk about—their claim. She spent considerable time talking (and listening) about structural issues like topic sentences and how they should relate to the claim. Her ability not to get too caught up with the assignment prompts or teacher comments actually seemed to work in her favor; she appeared focused on the writing and the writer she was working with rather than worrying unnecessarily about the prompt. All of these moves took into account both the students’ purposes and Sam’s knowledge of academic discourse from the disciplines of Biology and English. And, more implicitly I would argue, Sam tended to students’ affective needs largely by just listening carefully to their concerns, allowing plenty of time for them to think through ideas. From my experience, she provides a fine model of the sorts of moves all tutors and teachers can consider: careful note-taking; careful listening; and a primary concern with HOCs, though with a concurrent sense of when to pay attention to and when to defer LOCs. Whether tutoring in typical writing center one-to-one settings, or tutoring in a writing fellows program, or even facilitating peer response in the classroom, Sam’s methods have much to offer.
The uneasy relationship among the participants from Team One provides complex, somewhat troubling, and yet equally important implications for this study. Julian’s sense of himself—even during his limited classroom presence during peer reviews—as “reserved advisor” and the gross lack of communication between him and Anne combined to co-construct this cautionary tale of CBT. Julian did not attend class, or even stay in regular communication, enough to know the nuances of Anne’s expectations very well. Yet in all his interactions with students, he still tried hard to stay within what he felt were her expectations (primarily via assignment prompts and what students were telling him they thought Anne wanted). Anne felt that the lack of communication was all her fault and repeatedly during our interview expressed regret for not interacting more closely with Julian. But she also intimated that she felt students and Julian did not get to know each other well enough on an individual basis to enable Julian to move past his nondirective approach toward a method that might take into account the more individualistic needs of each student. Still, I find great value in this cautionary tale, value that points to our growth and development as a (sub)field. Like Lauren Fitzgerald and Melissa Ianetta, I “take it as a sign of writing center studies’ increasing sense of its own identity, as well as its increasing security as a field of study, that we can admit such ‘failures’ and then move on to create productive, important knowledge from these events” (9). In their laudable work on writing center assessment, Ellen Schendel and William Macauley agree, “It is necessary that we become able to accept mistakes and doubts for ourselves ...” and add, “yet it is not sufficient. We have a responsibility to others, as well, especially those for whom we are connections to the field, representatives of how our field works, leaders in our local centers, regional writing center communities, and beyond” (173-74). Julian’s experiences also have something to contribute to discussions of writing teachers’/tutors’ education and development. His intelligence, coupled with his desire to help, cannot be denied. But some of Julian’s personality traits may make him (and tutors with similar traits) more suitable as an in-class tutor. (And I would say the same, to some degree, about Madeleine.) Julian is expressive and loves to engage in stimulating conversation. It was apparent in his one-to-one tutorials that if the students had been as verbose as he, then the dynamics of the tutorials might have been very different. Especially with this group of students, Julian might have filled a better instructional niche as an in-class tutor. There, his ability to talk with some fluency about the texts and to offer his opinions and counter-opinions could have been put to better use.
Taken together, Teams Five and Six from Chapter Four—in stark contrast to Team One—offer the true promise of CBT. The participants from Teams Five and Six represent what I would classify as organic, home-grown partnerships that took full advantage of the teaching and learning situations they were engaged in. As one of the leaders of the writing program at SCSU, I was put into a position of authority and decision-making outside of the writing center. So instead of recruiting tutors from writing centers, as I did at the UW, I recruited students directly from the same sort of developmental course they would subsequently tutor in. These tutors took the collaborative lessons they learned from having recently taken the course themselves and paid them forward to fellow students whose diversity they mirrored—allowing, importantly, for a closer zone of proximal development and a more truly peer-to-peer learning ecology. The participants in Teams Five and Six illustrate what can occur when trust and care are taken to the next level.
Returning to those Framework habits of mind mentioned in the Introduction, the results from Team Six seem highly promising: Curiosity? Check. Openness? Check. Engagement? Check. Creativity? Check. And so forth ... Two tutors and an instructor who could not have cared less about whether they were being (or allowing others to be) too directive or nondirective, too controlling or intrusive in their pedagogical interventions ended up realizing a fruitful balance. As with Gina from Team Five and Madeleine from Team Three, their only real concern seemed to be: what can I do to help these students grow and develop confidence and perhaps some competence in their writing performances for that particular course? In the process, we saw Team Six (and to some extent Teams Three and Five) also approaching and pushing the boundaries of their expertise—pushing, especially, preconceived notions of what their roles and authority can or should be. We saw what can happen when young developing writers, thinkers, and learners trust in their own authority and take some initiative. The “American Dream Museum Exhibit” assignment vividly showcases the potential of tutors leading the charge, blurring the lines between tutor, student, and teacher—pushing conventional pedagogical boundaries. In collaboratively conceiving of and designing the assignment, Kim and Penny thoughtfully and thought-provokingly scaffolded interactive, problem-posing activities that challenged all students, while at the same time providing ample instructional support—even when the structurally sanctioned authority of the course, Jake, was not physically present.
In the spirit of “where are they now?” I’d like to briefly report on what I know about the tutors. Of the UW tutors, Sam applied to and was accepted into a Ph.D. program in English with a focus on Composition and Rhetoric at a major Midwestern research university. Of the SCSU tutors, as of April 2013, Gina is a graduate student at the University of Connecticut School of Social Work, working on her master’s degree. When I asked her if she thought her experience with CBT has had any lasting effects, she wrote:
Today I have a major role in establishing better policies and procedures for an organization that works with abused children. With the confidence I gained from course-based tutoring I have done extremely well at my internship. I have supervisors and program managers asking for my feedback and opinion in changing and establishing new policies. During course-based tutoring I gained a voice that I continue to use today. I am currently at a point in my life where I would have never imagined myself being. I have always been a driven person but never a confident person until I participated in course-based tutoring.
Penny is finishing her Elementary Education requirements as a student teacher. She felt that her experiences with course-based tutoring helped prepare her for her recent successes and future goals: she was captain of the SCSU field hockey team; she studied abroad in Brisbane, Australia, and traveled through the country; and she hoped to return to SCSU in Fall 2013 to get her master’s and have her own classroom by Fall of 2014. Like Bradley Hughes, Paula Gillespie, and Harvey Kail, in “What They Take with Them,” I believe that the lessons learned, lessons in responsible leadership and mentorship, clear communication, and reflective practice will travel far beyond those courses, for all participants.
Choice Matters: Recommendations for CBT Design and Implementation
This book’s central research question asked: How can what we know about peer tutoring one-to-one and in small groups—especially the implications of directive and nondirective tutoring strategies and methods brought to light in these case studies—inform our work with students in writing centers and other tutoring programs, as well as in classrooms? In answer, this book explored a myriad of ways that tutors in a variety of situations negotiated directive and nondirective strategies while trying to build rapport and trust with fellow students and instructors. In sum, and with the caveat that context might influence the feasibility of any given choice, I offer the following suggestions involving some of the strategic choices CBT practitioners have for successful practice with one-to-one and small-group tutorials, as well as other possible classroom activities. These choices radiate from my suggestions for overall design and planning (Figure 5). Some suggestions might also be applicable to other related pedagogical practices, for example: teacher-student conferences, both one-to-one and small-group; writing center tutoring, again both one-to-one and in small groups; or writing classroom collaborative and group activities. (Note that some suggestions for one-to-one tutoring also apply to small-group peer response and vice versa.)
Figure 5: CBT choices.
Overall Design and Planning
• Instructors and tutors should be made aware of different models of CBT, both more (tutors like Megan, Madeleine, Gina, Kim, and Penny attending class every day) and less (tutors like Sam not attending class and/or not doing the readings) collaborative designs. Then they should be allowed to choose, as closely as possible, which model they feel might best work for them.
• Have an early meeting between instructor and tutor (and coordinator perhaps) during which some tentative roles and expectations are laid out in advance. Be sure to let students know what these roles and expectations are as early as possible.
• Participants should talk, plan, and reflect with each other on a regular basis, via email, phone, or face-to-face. Frequent meetings or online chat forums (Blackboard, Skype, or even Facebook, for example) could be used to help facilitate dialogue and communication.
• Directors and coordinators should consider ongoing development and education just as important as initial orientations. Tutors could be asked to read current (as in the work of Thompson and colleagues) and/or foundational (like Harris’s “Talking”) articles in writing center and composition journals during any down time.
• As with the Framework and accompanying WPA Outcomes Statement, CBT practitioners, in relation to their respective programs, could develop learning outcomes or goals. I would suggest starting with Harris’s four aspects for how tutors can assist writers, mentioned repeatedly throughout this book, that she gleaned from hundreds of student responses and years of ground-breaking research and practice. These goals could incorporate the Framework habits of mind more generally, and other teaching/learning needs of tutors, tutees, and centers/institutions more specifically. Participant attitudes and other “incipient actions” (Burke Philosophy 1, 10-11, 168-9, 379-82; Grammar 235-47, 294; Rhetoric 50, 90-5) could thereby be coordinated with desired teaching and learning outcomes. These goals can then help guide tutor education courses, and continuing director/tutor development.
One-to-One Tutoring
• Whether tutors attend class every day or sometimes or not at all—if tutors will be conducting one-to-one tutorials outside of class—have students sign up for one-to-ones early in the term so that students and tutor get to know each other as early as possible and so that dialogue about students and the curriculum can start ASAP.
• Students can be offered shorter 25-minute, or longer 50-minute appointments, or their choice of either given the situation.
• Tutors should read a student writer’s entire paper before making definitive comments. While reading (whether or not the tutor or tutee reads aloud), tutors can take detailed notes—a descriptive outline could be especially helpful—and ask students to either take notes as well or follow along and help construct notes with the tutor (and perhaps audio-record the session on their smartphone). We saw all of these moves showcased in detail by Sam during her tutorials in Chapter Three.
• Tutors should be familiar with the intricacies of the directive/nondirective continuum in relation to one-to-one tutoring—and develop strategies for negotiating when to be more directive and when to be more facilitative.
Peer Response Facilitation
• If tutors and students are unfamiliar with each other, tutors might allow for some light-hearted banter or casual conversation so participants might warm up to one another before getting to the task at hand as we saw happening especially with Teams Five and Six in Chapter Four.
• Tutors should practice a mix of directive suggestions and modeling with nondirective open-ended questions and follow-up questions (as we vividly saw with Megan in Chapter Four) so that student writers receive the benefits of specific modeling and so they can also take ownership of their own and their peer group members’ learning.
• Tutors should allow for plenty of wait-time and pauses during peer response sessions, giving students enough time to process information and formulate a response (much as Sam did during one-to-one tutorials).
• Instructors can experiment with various elements of peer response including: having students balance between how much writing versus how much conversation they engage in, and how much and in what ways instructors and tutors intervene and interact with each group in and out of the classroom (see Corbett, LaFrance, and Decker; Corbett and LaFrance Student; Corbett “Great”).
Other Classroom Activities
• Tutors do not necessarily need to be in class on a day-to-day basis. What’s more important is that when they are there, all participants have roles to play and everyone knows what those roles are.
• Tutor personalities can be utilized on their own terms, but instructors can also foster interpersonal opportunities that might expand tutor approaches to interacting with fellow students. Shyer tutors (or students holding back, like Megan), for example, could be gently encouraged to speak up in class if they feel they have something important to contribute. More talkative students (like Madeleine) could be nudged to balance their comments with questions and prompts that might encourage other students in class to participate or take intellectual risks.
• Tutors can be encouraged to take some authority and ownership in the design and orchestration of the class: they can help design and lead the implementation of lesson plans and assignments as we saw with Team Six; and they can share their own writing and learning experiences, strategies and processes liberally with their peers as we saw especially with Teams Five and Six.
Looking Back while Looking Forward: Diversity and Choice in Recruitment, Research Questions, and Assessment
This study has also made me question how, where, and why we recruit peer tutors. I believe—like Nelson—that we should seriously consider concerted efforts toward recruiting for more diversity in centers and programs that have been staffed predominantly by mainstream students. Though the data clearly show that a white, mainstream tutor can identify and assist nonmainstream and diverse students, as in the case of Megan and especially Sam and Penny, we also saw the benefits of having a tutor like Madeleine, a tutor who did indeed mirror the UW EOP students’ diversity, or a tutor like Gina, who struggles with an LD like the student Max, working closely with their peers. Students like Madeleine, Gina, Kim, and Penny—students who themselves took the developmental course, who learned lessons in how to navigate that course successfully—offer an exceptionally promising model of mirroring peer diversity that takes Vygotsky’s ZPD closely to heart. The cover image of this book—the Roman god Janus on a priceless coin—symbolizes the value of that promise. Double-faced Janus, looking simultaneously forward and backward in time, was the god of transitions, journeys, doors, gates, boundaries, endings, and beginnings. This book has offered intimate gazes into the developmental transitions of students, tutors, instructors, and researcher. Readers might look back on what this book has to offer as they look forward in their programmatic and pedagogical decision making: boundary-pushing between writing centers/peer tutoring programs and classrooms, between directive/nondirective instruction, between what it means to be a teacher/student. A student like Gina who works closely under an instructor like Mya with students/future tutors like Kim and Penny provides an example of interpersonal continuity from course to course and student/tutor to student/tutor. Further, this model can provide insights into how diverse students transition from high school to college writing and learning environments, especially if we listen closely to their stories. Yet we might also consider a more advanced student like Sam a diverse tutor herself because she was a double major. When Sam originally applied and interviewed to be a peer tutor for the English Department Writing Center, she was not hired by the director. Later, while recruiting course-based tutors, I re-interviewed Sam. Despite feeling that her personality was a bit too “low key,” I brought her aboard anyway. Perhaps her multifarious experiences in navigating writing course boundaries and intersections between the humanities and natural sciences aided in her salutary tutoring strategies (see Thaiss and Zawacki 106). Maybe her low-key demeanor contributed to her commendable listening skills. If diverse students in their many guises do not apply to be tutors, then we should search them out—actively recruiting for talent and cultural and academic diversity—for our diverse writing programs, centers, and classrooms.
Once we’ve recruited for as much diversity and talent as possible, we can then make relevant choices on where and how to focus our research and assessment. I have advocated for a multi-method approach whenever possible, one that, if you will, methodologically mirrors the diversity of the participants involved in CBT-inspired research and practices. I want to see some researchers continue to focus on the sorts of pragmatic questions of tutoring style and method that have generated RAD case-study research from Spigelman, Thompson and colleagues, White-Farnham, Dyehouse and Finer, me, and others. I also want us to continue to build usable, authentic means of assessment that can help CBT practitioners successfully close the assessment loop, uniting learning outcomes with the habits of mind that undergird and can open the doors to successful, satisfying teaching and writing performances (see Schendel and Macauley; Johnson). But I hope others will continue to stay open and curious when they begin to hear boundary-pushing stories that warrant following up on. And when the chance arises to do both, I want our field(s) to embrace the multi-perspectives that multi-method research can deliver. By staying open, curious, and persistent in our efforts toward more hybrid, multi-method research, we can provide for more of the types of authentic assessment that can link creative processes and performances, habits of mind, identity formations, and student, teacher, and instructor success and satisfaction.
We have choices in our quests for synergistic teaching, learning, and trust. And we should welcome all colleagues, at all levels—slow and fast—ready and willing to accompany us in our journeys.
Anderson, Julie Aipperspach, and Susan Wolff Murphy. “Bringing the Writing Center into the Classroom: A Case Study of Writing Groups.” Moss, Highberg, and Nicolas 47-62. Print.
Babcock, Rebecca Day, and Terese Thonus. Researching the Writing Center: Towards an Evidence-Based Practice. New York: Peter Lang, 2012. Print.
Barber-Fendley, Kimber, and Chris Hamel. “A New Visibility: An Argument for Alternative Assistance Programs for Students with Learning Disabilities.” College Composition and Communication 55.3 (2004): 504-35. Print.
Barnett, Robert W., and Jacob S. Blumner, eds. The Allyn and Bacon Guide to Writing Center Theory and Practice. Needham Heights, MA: Allyn and Bacon, 2001. Print.
Beaufort, Anne. College Writing and Beyond: A New Framework for University Writing Instruction. Logan, UT: Utah State UP, 2007. Print.
Bishop, Wendy. “Helping Peer Writing Groups Succeed.” Teaching English in the Two-Year College 15 (1988): 120-25. Print.
Black, Laurel Johnson. Between Talk and Teaching: Reconsidering the Writing Conference. Logan: Utah State UP, 1998. Print.
Blum-Kulka, Shoshana. “Discourse Pragmatics.” Discourse as Social Interaction. Ed. Teun A. Van Dijk. London: Sage, 1997. 38-63. Print.
Boquet, Elizabeth H. “Intellectual Tug-of-War: Snapshots of Life in the Center.” Stories from the Center: Connecting Narrative and Theory in the Writing Center. Ed. Lynne Craigue Briggs, and Meg Woolbright. Urbana, IL: NCTE, 2000. 17-30. Print.
---. Noise from the Writing Center. Logan: Utah State UP, 2002. Print.
Boquet, Elizabeth H. and Neal Lerner. “Reconsiderations: After ‘The Idea of a Writing Center.’” College English 71.2 (Nov. 2008): 170-89. Print.
Brooke, Robert, Ruth Mirtz, and Rick Evans. Small Groups in Writing Workshops: Invitations to a Writer’s Life. Urbana, IL: NCTE, 1994. Print.
Brooks, Jeff. “Minimalist Tutoring: Making the Students Do All the Work.” The Writing Lab Newsletter 15.6 (1991): 1-4. Print.
Bruffee, Kenneth A. Collaborative Learning: Higher Education, Interdependence, and the Authority of Knowledge. 2nd ed. Baltimore: The Johns Hopkins UP, 1999. Print.
---. A Short Course in Writing: Composition, Collaborative Learning, and Constructive Reading 4th ed. New York: Pearson, 2007. Print.
Bruland, Holly Huff. “‘Accomplishing Intellectual Work’: An Investigation of the Re-Locations Enacted through On-Location Tutoring.” Praxis: A Writing Center Journal 4.2 (Spring 2007). Web. 1 Jan. 2015.
Buranen, Lise, and Alice M. Roy, eds. Perspectives on Plagiarism and Intellectual Property in a Postmodern World. SUNY UP, 1999. Print.
Burke, Kenneth. A Grammar of Motives. New York: Prentice Hall, 1945. Print.
---. The Philosophy of Literary Form: Studies in Symbolic Action. Berkeley: U of California P, 1973 (Orig. pub. 1941). Print.
---. A Rhetoric of Motives. Berkeley: U of California P, 1969 (Orig. pub. 1950). Print.
Cairns, Rhoda, and Paul V. Anderson. “The Protean Shape of the Writing Associate’s Role: An Empirical Study and Conceptual Model.” Across the Disciplines 5 (March 29, 2008). Web. 1 Jan. 2015.
Carillo, Ellen. “Teaching Academic Integrity and Critical Thinking through Collaboration.” Corbett, LaFrance, and Decker 65-76. Print.
Carino, Peter. “Power and Authority in Peer Tutoring.” The Center Will Hold: Critical Perspectives on Writing Center Scholarship. Ed. Michael A. Pemberton, and Joyce Kinkead. Logan: Utah State UP, 2003. 96-116. Print.
Carroll, Lee Ann. Rehearsing New Roles: How College Students Develop as Writers. Carbondale: Southern Illinois University Press, 2002. Print.
Cazden, Courtney B. Classroom Discourse: The Language of Teaching and Learning 2nd ed. Portsmouth, NH: Heinemann, 2001. Print.
Ching, Cory Lawson. “The Instructor-Led Peer Conference: Teachers as Participants in Peer Response.” Corbett, LaFrance, and Decker 15-28. Print.
Clark, Irene Lurkis. “Collaboration and Ethics in Writing Center Pedagogy.” The Writing Center Journal 9.1 (1988): 3-12. Print.
---. “Perspectives on the Directive/Non-Directive Continuum in the Writing Center.” The Writing Center Journal 22.1 (2001): 33-58. Print.
---. “Writing Centers and Plagiarism.” Buranen and Roy 155-167. Print.
Clark, Irene Lurkis, and Dave Healy. “Are Writing Centers Ethical?” Barnett and Blumner 242-59. Orig. published in WPA: Writing Program Administration 20 (1996): 32-38. Print.
Cogie, Jane, Dawn Janke, Teresa Joy Kramer, and Chad Simpson. “Risks in Collaboration: Accountability as We Move beyond the Center’s Walls.” Marginal Words, Marginal Work? Tutoring the Academy in the Work of Writing Centers. Ed. William J Macauley Jr., and Nicholas Mauriello. Cresskill, NJ: Hampton, 2007. 105-34. Print.
Cooper, Marilyn M. “Really Useful Knowledge: A Cultural Studies Agenda for Writing Centers.” The Writing Center Journal 14.2 (1994): 97-111. Print.
Corbett, Steven J. “Bringing the Noise: Peer Power and Authority, On Location.” Spigelman and Grobman 101-111. Print.
---. “Great Debating: Combining Ancient and Contemporary Methods of Peer Critique.” PraxisWiki. Kairos: A Journal of Rhetoric, Technology, and Pedagogy (October 8, 2014). Web. 1 Jan. 2015.
---. “Learning Disability and Response-Ability: Reciprocal Caring in Developmental Peer Response Writing Groups and Beyond.” Pedagogy: Critical Approaches to Teaching Literature, Language, Composition, and Culture 15.3 (Spring 2015): 61-85.
---. “Negotiating Pedagogical Authority: The Rhetoric of Writing Center Tutoring Styles and Methods.” Rhetoric Review 32.1 (2013): 81-98. Print.
---. “The Role of the Emissary: Helping to Bridge the Communication Canyon between Instructors and Students.” The Writing Lab Newsletter 27.2 (Oct. 2002): 10-11. Print.
---. “Tutoring Style, Tutoring Ethics: The Continuing Relevance of the Directive/Nondirective Instructional Debate.” Praxis: A Writing Center Journal 5.2 (Spring 2008). Web. 1 Jan. 2015. Rpt. in The St. Martin’s Sourcebook for Writing Tutors 4th ed. Ed. Christina Murphy, and Steve Sherwood. Boston: Bedford/St Martin’s, 2011. 148-155. Print.
---. “Using Case Study Multi-Methods to Investigate Close(r) Collaboration: Course-Based Tutoring and the Directive/Nondirective Instructional Continuum.” The Writing Center Journal 31.1 (2011): 55-81. Print.
---. “Writing Center Research in the Making: Questioning Hierarchies of Authority across the Curriculum.” Paper Presented at the 2nd International Conference on Writing Research. Santa Barbara, CA. February, 2005.
Corbett, Steven J., and Juan C. Guerra. “Collaboration and Play in the Writing Classroom.” Academic Exchange Quarterly 9.4 (Winter 2005): 106-11. Print.
Corbett, Steven J., and Michelle LaFrance. “From Grammatical to Global: The WAC/Writing Center Connection.” Praxis: A Writing Center Journal 6.2 (Spring 2009). Web. 1 Jan. 2015.
---, eds. Student Peer Review and Response: A Critical Sourcebook. New York/Boston: Bedford/St. Martin’s, Forthcoming.
Corbett, Steven J., Michelle LaFrance, and Teagan Decker, eds. Peer Pressure, Peer Power: Theory and Practice in Peer Review and Response for the Writing Classroom. Southlake, TX: Fountainhead Press, 2014. Print.
Corbett, Steven J., Sydney F. Lewis, and Madeleine M. Clifford. “Diversity Matters in Individualized Instruction: The Pros and Cons of Team Teaching and Talkin’ that Talk.” Diversity in the Composition Classroom. Ed. Gwendolyn Hale, Mike Mutschelknaus, and Thomas Alan Holmes. Southlake, TX: Fountainhead Press, 2010. 85-96. Print.
Corroy, Jennifer. “Institutional Change and the University of Wisconsin—Madison Writing Fellows Program.” Spigelman and Grobman 205-18. Print.
Daiker, Donald A. “Learning to Praise.” Writing and Response: Theory, Practice, and Research. Ed. Chris M. Anson. Urbana, IL: NCTE, 1989. 103-13. Print.
Decker, Teagan. “Academic (Un)Seriousness: How Tutor Talk Plays with Academic Discourse.” The Writing Lab Newsletter 30.4 (Dec. 2005): 11-13. Print.
---. “Diplomatic Relations: Peer Tutors in the Writing Classroom.” Spigelman and Grobman 17-30. Print.
Delpit, Lisa. Other People’s Children: Cultural Conflict in the Classroom 2nd ed. New York: The New Press, 2006. Print.
---. “The Silenced Dialogue: Power and Pedagogy in Educating Other People’s Children.” Harvard Educational Review 58.3 (Aug. 1988): 280-97. Print.
Denny, Harry C. Facing the Center: Toward an Identity Politics of One-to-One Mentoring. Logan: Utah State UP, 2010. Print.
DiPardo, Anne. “‘Whispers of Coming and Going’: Lessons from Fannie.” The Writing Center Journal 12.2 (1992): 125-44. Print.
Driscoll, Dana, and Sherry Wynn Perdue. “Theory, Lore, and More: An Analysis of RAD Research in The Writing Center Journal, 1980-2009.” The Writing Center Journal 32.1 (2012): 11-39.
Dylan, Bob. “The Times They Are A-Changin’.” The Times They Are A-Changin’. Columbia, 1964. CD.
Ender, Steven C., and Fred B. Newton. Students Helping Students: A Guide for Peer Educators on College Campuses. San Francisco, CA: Jossey-Bass, 2000. Print.
Fitzgerald, Lauren. “Writing Center Scholarship: A ‘Big Cross-Disciplinary Tent.’” Exploring Composition Studies: Sites, Issues, and Perspectives. Ed. Kelly Ritter, and Paul Kei Matsuda. Logan, UT: Utah State UP, 2012. 73-88. Print.
Fitzgerald, Lauren, and Melissa Ianetta. “From the Editors.” The Writing Center Journal 32.2 (2012): 9-10. Print.
Framework for Success in Postsecondary Writing. Developed jointly by the Council of Writing Program Administrators, the National Council of Teachers of English, and the National Writing Project, 2011. Web. 1 Jan. 2015.
Freire, Paulo. Pedagogy of Freedom: Ethics, Democracy, and Civic Courage. Oxford: Rowman and Littlefield, 1998. Print.
Geller, Anne Ellen, Michele Eodice, Frankie Condon, Meg Carroll, and Elizabeth H. Boquet. The Everyday Writing Center: A Community of Practice. Logan: Utah State UP, 2007. Print.
Gerben, Chris. “Make it Work: Project Runway as Model and Metaphor of Authority and Expertise.” Corbett, LaFrance, and Decker 29-42. Print.
Gere, Anne Ruggles, and Ralph Stevens. “The Language of Writing Groups: How Oral Response Shapes Revision.” The Acquisition of Written Language: Response and Revision. Ed. Sarah Warshauer Freedman. Norwood, NJ: Ablex, 1985. 85-105.
Gilewicz, Magdalena. “Sponsoring Student Response in Writing Center Group Tutorials.” Moss, Highberg, and Nicolas 63-78. Print.
Gilewicz, Magdalena, and Terese Thonus. “Close Vertical Transcription in Writing Center Training and Research.” The Writing Center Journal 24.1 (2003): 25-49. Print.
Gillespie, Paula, and Neal Lerner. The Allyn and Bacon Guide to Peer Tutoring 2nd ed. New York: Pearson, 2004. Print.
Goffman, Erving. Forms of Talk. Philadelphia: U of Pennsylvania P, 1981. Print.
Greenfield, Laura and Karen Rowan, eds. Writing Centers and the New Racism: A Call for Sustainable Dialogue and Change. Logan: Utah State UP, 2011. Print.
Grimm, Nancy Maloney. Good Intentions: Writing Center Work for Postmodern Times. Portsmouth, NH: Boynton/Cook, 1999. Print.
Grutsch McKinney, Jackie. Peripheral Visions for Writing Centers. Logan: Utah State UP, 2013. Print.
Hafer, Gary R. “Ideas in Practice: Supplemental Instruction in Freshman Composition.” The Journal of Developmental Education 24 (2001): 30-37. Print.
Hall, Emily, and Bradley Hughes. “Preparing Faculty, Professionalizing Fellows: Keys to Success with Undergraduate Writing Fellows in WAC.” The WAC Journal 22 (2011): 21-40. Print.
Haring-Smith, Tori. “Changing Students’ Attitudes: Writing Fellows Programs.” Writing Across the Curriculum: A Guide to Developing Programs. Ed. Susan McLeod, and Margot Soven. Newbury Park, CA: Sage, 1992. 175-88. Print.
Harris, Muriel. “Centering in on Professional Choices.” College Composition and Communication 52.3 (Feb. 2001): 429-40. Print.
---. “Collaboration Is Not Collaboration Is Not Collaboration: Writing Center vs. Peer-Response Groups.” College Composition and Communication 43 (1992): 369-83. Print.
---. “A Non-Coda: Including Writing Centered Student Perspectives for Peer Review.” Corbett, LaFrance, and Decker 277-88. Print.
---. “Talking in the Middle: Why Writers Need Writing Tutors.” College English 57.1 (Jan. 1995): 27-42. Print.
---. Teaching One-to-One: The Writing Conference. Urbana, IL: NCTE, 1986. Print.
Haswell, Richard H. “NCTE/CCCC’s Recent War on Scholarship.” Written Communication 22.2 (2005): 198-223. Print.
Healy, Dave. “A Defense of Dualism: The Writing Center and the Classroom.” The Writing Center Journal 14.1 (1993): 16-29. Print.
Hemmeter, Thomas. “The ‘Smack of Difference’: The Language of Writing Center Discourse.” The Writing Center Journal 11.1 (1990): 35-48. Print.
Hughes, Bradley, Paula Gillespie, and Harvey Kail. “What They Take with Them: Findings from the Peer Tutor Alumni Research Project.” The Writing Center Journal 30.2 (2010): 12-46. Print.
Hurley, Maureen, Glen Jacobs, and Melinda Gilbert. “The Basic SI Model.” Supplemental Instruction: New Visions for Empowering Student Learning. Ed. Marion E. Stone and Glen Jacobs. San Francisco, CA: Jossey-Bass, 2006. 11-22. Print.
Johnson, Kristine. “Beyond Standards: Disciplinary and National Perspectives on Habits of Mind.” College Composition and Communication 64.3 (Feb. 2013): 517-41. Print.
Kail, Harvey, and John Trimbur. “The Politics of Peer Tutoring.” The Writing Center Journal 11.1-2 (1987): 5-12. Print.
Kilborn, Judith. “Cultural Diversity in the Writing Center: Defining Ourselves and Our Challenges.” Barnett and Blumner 391-400. Orig. published in The Writing Lab Newsletter 19.1 (1994): 7-10. Print.
Launspach, Sonja. “The Role of Talk in Small Writing Groups: Building Declarative and Procedural Knowledge for Basic Writers.” Journal of Basic Writing 27.2 (2008): 56-78. Print.
Lawfer, Laura. “Writing Fellows: An Innovative Approach to Tutoring.” The Writing Lab Newsletter 29.9 (2005): 12-13, 10. Print.
Lee, Carol D. “Double Voiced Discourse: African American Vernacular English as Resource in Cultural Modeling Classrooms.” Bakhtinian Perspectives on Language, Literacy, and Learning. Ed. Arentha F. Ball, and Sarah Warshauer Freedman. Cambridge: Cambridge UP. 129-47. Print.
Lerner, Neal. The Idea of a Writing Laboratory. Carbondale: Southern Illinois UP, 2009. Print.
---. “The Teacher-Student Writing Conference and the Desire for Intimacy.” College English 68.2 (Nov. 2005): 186-208. Print.
Liggett, Sarah, Kerri Jordan, and Steve Price. “Mapping Knowledge-Making in Writing Center Research: A Taxonomy of Methodologies.” The Writing Center Journal 31.2 (2011): 50-88. Print.
Liu, Barbara Little, and Holly Mandes. “The Idea of a Writing Center Meets the Reality of Classroom-Based Tutoring.” Spigelman and Grobman 87-100. Print.
Lutes, Jean Marie. “Why Feminists Make Better Tutors: Gender and Disciplinary Expertise in a Curriculum-Based Tutoring Program.” Writing Center Research: Extending the Conversation. Ed. Paula Gillespie, Alice Gillam, Lady Falls Brown, and Byron Stay. Mahwah, NJ: Lawrence Erlbaum Associates, 2002. 235-57. Print.
Mackiewicz, Jo, and Isabelle Kramer Thompson. Talk about Writing: The Tutoring Strategies of Experienced Writing Center Tutors. New York: Routledge, 2015. Print.
Mann, April. “Structure and Accommodation: Autism and the Writing Center.” Autism Spectrum Disorders in the College Composition Classroom: Making Writing Instruction More Accessible for All Students. Ed. Val Gerstle, and Lynda Walsh. Milwaukee, WI: Marquette UP, 2011. 43-74. Print.
Miller, Judith E., James E. Groccia, and Marilyn S. Miller, eds. Student-Assisted Teaching: A Guide to Faculty-Student Teamwork. Bolton, MA: Anker, 2001. Print.
Moss, Beverly J., Nels P. Highberg, and Melissa Nicolas, eds. Writing Groups Inside and Outside the Classroom. Mahwah, NJ: Lawrence Erlbaum Associates, 2004. Print.
Murphy, Susan Wolff. “‘Just Check It: I Mean, Don’t Get Fixed on It’: Self Presentation in Writing Center Discourse.” The Writing Center Journal 26.1 (2006): 62-82. Print.
Neff, Julie. “Learning Disabilities and the Writing Center.” Intersections: Theory-Practice in the Writing Center. Ed. Joan A. Mullin and Ray Wallace. Urbana, IL: NCTE, 1994. 81-95. Print.
Nelson, Marie Wilson. At the Point of Need: Teaching Basic and ESL Writers. Portsmouth, NH: Boynton/Cook, 1991. Print.
Nelson, Jane, and Margaret Garner. “Horizontal Structures for Learning.” Before and After the Tutorial: Writing Centers and Institutional Relationships. Ed. Nicholas Mauriello, William J. Macauley, Jr., and Robert T. Koch, Jr. New York: Hampton, 2011. 7-27. Print.
Nicolas, Melissa. “A Cautionary Tale about ‘Tutoring’ Peer Response Groups.” Spigelman and Grobman 112-25. Print.
North, Stephen M. “The Idea of a Writing Center.” College English 46.5 (1984): 433-46. Print.
---. “Revisiting ‘The Idea of a Writing Center.’” Writing Center Journal 15.1 (1994): 7-19. Print.
Paulson, Eric J., Jonathan Alexander, and Sonya Armstrong. “Peer Review Re-Reviewed: Investigating the Juxtaposition of Composition Students’ Eye Movements and Peer-Review Processes.” Research in the Teaching of English 41.3 (Feb. 2007): 304-35. Print.
Pemberton, Michael A. “Introduction to ‘The Function of Talk in the Writing Conference: A Study of Tutorial Conversation.’” The Writing Center Journal 30.1 (2010): 23-26. Print.
Raines, Helon Howell. “Tutoring and Teaching: Continuum, Dichotomy, or Dialectic?” The Writing Center Journal 14.2 (1994): 150-62. Print.
Reid, E. Shelley. “Peer Review for Peer Review’s Sake: Resituating Peer Review Pedagogy.” Corbett, LaFrance, and Decker 217-31. Print.
Robinson, Heather M., and Jonathan Hall. “Connecting WID and the Writing Center: Tools for Collaboration.” The WAC Journal 24 (2013): 29-47.
Schendel, Ellen, and William J. Macauley, Jr. Building Writing Center Assessments that Matter. Logan: Utah State UP, 2012. Print.
Schunk, Dale H. Learning Theories: An Educational Perspective 4th ed. Columbus, OH: Pearson/Merrill Prentice Hall, 2004. Print.
Severino, Carol. “Rhetorically Analyzing Collaborations.” The Writing Center Journal 13 (1992): 53-64. Print.
Severino, Carol, and Mary Trachsel. “Theories of Specialized Discourses and Writing Fellows Programs.” Across the Disciplines 5 (March 29, 2008). Web. 1 Jan. 2015.
Shamoon, Linda K., and Deborah H. Burns. “A Critique of Pure Tutoring.” The Writing Center Journal 15.2 (1995): 134-52. Print.
Shaparenko, Bithyah. “Focus on Focus: How to Facilitate Discussion in a Peer Group.” The Writing Lab Newsletter 29.8 (2005): 11-12. Print.
Smith, Louise Z. “Independence and Collaboration: Why We Should Decentralize Writing Centers.” The Writing Center Journal 23.2 (2003): 15-23. Orig. pub. in 7.1 (1986): 3-10. Print.
Smitherman, Geneva. Talkin and Testifyin: The Language of Black America. Detroit, MI: Wayne State UP, 1977. Print.
Smulyan, Lisa, and Kristen Bolton. “Classroom and Writing Center Collaborations: Peers as Authorities.” The Writing Center Journal 9.2 (1989): 43-49. Print.
Soliday, Mary. Everyday Genres: Writing Assignments across the Disciplines. Carbondale and Edwardsville: Southern Illinois UP, 2011. Print.
---. “Shifting Roles in Classroom Tutoring: Cultivating the Art of Boundary Crossing.” The Writing Center Journal 16.1 (1995): 59-73. Print.
Soven, Margot Iris. “Curriculum-Based Peer Tutoring Programs: A Survey.” Writing Program Administration 17.1-2 (1993): 58-74. Print.
Spigelman, Candace. “Reconstructing Authority: Negotiating Power in Democratic Learning Sites.” Spigelman and Grobman 185-204. Print.
---. “‘Species’ of Rhetoric: Deliberative and Epideictic Models in Writing Center Settings.” Moss, Highberg, and Nicolas 133-50. Print.
---. “The Ethics of Appropriation in Peer Writing Groups.” Buranen and Roy 231-40. Print.
Spigelman, Candace and Laurie Grobman, eds. On Location: Theory and Practice in Classroom-Based Writing Tutoring. Logan: Utah State UP, 2005. Print.
Spigelman, Candace, and Laurie Grobman. “Introduction: On Location in Classroom-Based Writing Tutoring.” Spigelman and Grobman 1-13. Print.
Spilman, Isabel B. “Tutoring Five on Five.” The Writing Lab Newsletter 13.10 (1989): 9-10. Print.
Stewart, Donald C. “Collaborative Learning: Boon or Bane?” Rhetoric Review 7 (1988): 58-83. Print.
Thaiss, Chris, and Terry Myers Zawacki. Engaged Writers and Dynamic Disciplines: Research on the Academic Writing Life. Portsmouth, NH: Boynton/Cook, 2006. Print.
Thompson, Isabelle. “Scaffolding in the Writing Center: A Microanalysis of an Experienced Tutor’s Verbal and Nonverbal Tutoring Strategies.” Written Communication 26 (2009): 417-53. Print.
Thompson, Isabelle, Alyson Whyte, David Shannon, Amanda Muse, Kristen Miller, Milla Chappell, and Abby Whigham. “Examining Our Lore: A Survey of Students’ and Tutors’ Satisfaction with Writing Center Conferences.” The Writing Center Journal 29.1 (2009): 78-105. Print.
Thompson, Isabelle, and Jo Mackiewicz. “Questioning in Writing Center Conferences.” The Writing Center Journal 33.2 (2014): 37-70. Print.
Trimbur, John. “Peer Tutoring: A Contradiction in Terms?” The Writing Center Journal 7.2 (1987): 21-28. Print.
Trimbur, John, and Harvey Kail. “Foreword.” Bruffee, A Short Course xix-xxix. Print.
Vidali, Amy. “Discourses of Disability and Basic Writing.” Disability and the Teaching of Writing: A Critical Sourcebook. Ed. Cynthia Lewiecki-Wilson, and Brenda Jo Brueggemann. Boston: Bedford/St. Martin’s, 2008. 40-55. Print.
Walker, Carolyn, and David Elias. “Writing Conference Talk: Factors Associated with High- and Low-Rated Writing Conferences.” Research in the Teaching of English 21.3 (1987): 226-85. Print.
White-Farnham, Jamie, Jeremiah Dyehouse, and Bryna Siegel Finer. “Mapping Tutorial Interactions: A Report on Results and Implications.” Praxis: A Writing Center Journal 9.2 (2012). Web. 1 Jan. 2015.
Zawacki, Terry Myers. “Writing Fellows as WAC Change Agents: Changing What? Changing Whom? Changing How?” Across the Disciplines 5 (March 29, 2008). Web. 1 Jan. 2015.
Appendix
Appendix A: Interview Questions for Instructors and Tutors
Instructors
1. Could you tell me just a little about yourself: where you’re at in the program, what your area of focus is, how long you’ve been teaching?
2. So how did it go? What are your overall impressions of your experience with course-based tutoring?
3. What worked well?
4. What were the students’ impressions? The tutors?
5. What roles(s) did the course-based tutor play: e.g., instruction partner, conversation participant, discussion leader?
6. Did you require visits to the tutor?
7. How did it compare/contrast to not having a tutor directly attached to the EOP classroom?
8. What might have worked better? What suggestions might you offer other tutors or TAs interested in participating in this project?
9. How did this experience affect your relationship to the Instructional Center or other writing centers?
10. Would you collaborate with a course-based tutor again? Would you make any changes in the way you employed the tutor, to your syllabus or assignments, or in any other way?
11. Did this experience change or add to your overall view of what it means to tutor, teach, or learn writing?
Tutors
1. Could you tell me just a little about yourself: where you’re at in your studies, grade level; what your major is; how long you’ve been tutoring?
2. How did it go? What are your overall impressions of your experience with course-based tutoring?
3. What worked well?
4. What were the students’ impressions of your involvement with the class? The TAs?
5. What role(s) did you play: e.g., instruction partner, conversation participant, discussion leader?
6. How did your in-class experience compare/contrast to your experiences as a tutor one-to-one in the Center?
7. What might have worked better? What suggestions might you offer other tutors or TAs interested in participating in this project?
8. Did your tutor training and experience as a one-to-one tutor prepare you for this role?
9. Would you be willing to be a course-based tutor again? What changes, if any would you make, or want to see made?
Appendix B: Student Questionnaires
This questionnaire asks general questions about your perspectives on interacting with an in-class tutor for this course. Participation is voluntary. You may skip any questions that you do not wish to answer. Your responses will be used to better understand the effects and potential value added by having an in-class tutor. The information you provide here is confidential. Based on your responses, we may contact you in the future to ask if you’d like to participate in a follow-up interview.
1. Before this class, how often would you say you’ve used peer writing tutors in the past? (check one):
Often______ Occasionally_____ Rarely______ Never_____
Comment:
2. What are your overall impressions of having a course-based tutor?
3. What did you like best about having a course-based tutor?
4. Were there any problems with having a course-based tutor?
5. How did this compare to not having a course-based tutor in English 104? [Only for UW case studies]
6. Do you feel that you saw or visited a tutor more or less often than in English 104? [Only for UW case studies]
7. Did you visit your tutor for a one-to-one tutorial? How did this compare to your in-class interactions?
8. Do you think that you will continue to talk to writing tutors in the future?
Appendix C: Linguistic Features and Cues of One-to-One Tutorials for Teams One-Four
Ling. Feat. and Cues | Julian (Team One) | Team One Students | Megan (Team Two) | Team Two Students
# of Sessions | 6 | - | 8/7 | -
Average Length (minutes) | 36 | - | 11/18 | -
Total Words Spoken | 15,049 | 5,835 | 8,986/11,675 | 2,150/2,444
Average # of Words Spoken per Minute | 70 | 27 | 102/93 | 24/19
Content-clarifying Questions | 20 | - | 15/18 | -
Open-ended Questions | 93 | - | 12/8 | -
Directive Questions | 8 | - | 5/12 | -
References to TA | 14 | 13 | 7/17 | 2/6
References to Assignment Prompt | 12 | 1 | 1/1 | 0/0
Interruptions | 28 | 13 | 8/17 | 26/20
Main Channel Overlaps | 1 | 4 | 1/8 | 5/22
Joint Productions | 4 | 9 | 3/8 | 17/23
Ling. Feat. and Cues | Madeleine (Team Three) | Team Three Students | Sam (Team Four) | Team Four Students
# of Sessions | 3/1 | - | 11 | -
Average Length (minutes) | 50/59 | - | 25 | -
Total Words Spoken | 12,115/7,614 | 1,919/2,997 | 18,181 | 11,292
Average # of Words Spoken per Minute | 81/129 | 13/51 | 66 | 41
Content-clarifying Questions | 5/4 | - | 20 | -
Open-ended Questions | 23/2 | - | 137 | -
Directive Questions | 23/5 | - | 21 | -
References to TA | 7/4 | 0/2 | 1 | 3
References to Assignment Prompt | 1/0 | 0/1 | 1 | 0
Interruptions | 21/44 | 10/50 | 12 | 37
Main Channel Overlaps | 3/6 | 7/25 | 7 | 12
Joint Productions | 3/5 | 24/6 | 9 | 49
Appendix D: American Dream Museum Exhibit Assignment
The American Dream Museum Exhibit
Your team has been asked to create an exhibition that communicates the essence of the American Dream.
Your job is to collect artifacts—images, music, literature, poems, or other items that represent or symbolize the idea behind the American Dream. Use your imagination and have fun!
Each team member must collect 10 items and bring them in for discussion with the other team members. You should be able to make an argument about why you believe each artifact should be in the exhibition. Then, the team should choose five objects from each person’s collection. Take notes about why those items were selected as representatives of the American Dream.
The format of your exhibit is limitless—your team (if everyone can agree) can have an overarching theme such as “Unrealistic Expectations? Women and the American Dream” or “His Way and the American Way: Music and Images of Frank Sinatra” or you can have a hodge-podge collection of items. The important thing to remember is that you must be able to make the argument that your exhibition says something about some aspect of the American Dream.
1. At some point before the exhibit, decide on a title of your exhibition—be creative!
2. Each individual team member will write a one to two page argument about each of their (5) artifacts and why they are important representations of the American Dream. For example, if you are writing about an image, you might do a textual analysis of the image—the subject, composition of the elements, colors, etc. If you selected a song or other music, you might show how the music or lyrics represent the American Dream. Try to make connections and/or cite some of the material we’ve covered in class.
3. Each team member will also write a one or two page introduction to the exhibit. Be sure to define the “American Dream.” (We created a definition in class and our reading materials also defined it.) This introduction should provide an overview of the exhibit and why the audience should be interested in it. Look over all our material from this semester—the founding documents, speeches, and essays. Again, try to connect and/or cite some of the material we’ve covered in class.
4. Finally, each team will give a tour of their exhibit and provide information about their artifacts.
Due Dates:
Thurs—04/15 Each person brings in 10 artifacts—Teams discuss and narrow down each person to 5
Thurs—04/22 Each team member brings in and reads their arguments about each of their artifacts
Thurs—04/29 Teams work on the design and order of their artifacts and presentation
Tues—05/04 Group Project Presentations
Thur—05/06 Group Project Presentations
1.01: Understanding Accessibility
Accessibility is the degree to which a product, service or environment is available to all users, including those with disabilities or special needs. It’s about our ability to participate in and belong to the world around us. The Office for Civil Rights, in a resolution agreement with South Carolina Technical College, said, “Accessible means a person with a disability is afforded the opportunity to acquire the same information, engage in the same interactions, and enjoy the same services as a person without a disability in an equally effective and equally integrated manner, with substantially equivalent ease of use.”
A similar concept is Universal Design. Universal Design is the proactive design of products and environments to be usable by all people, to the greatest extent possible, without the need for adaptation or specialized design. A curb cut is an example of something that is universally designed. It helps people using wheelchairs and walkers navigate their environment, but it also benefits people pushing strollers and carts, pulling luggage, people on bicycles, and people on roller skates.
Curb cuts are an example of universal design. They benefit everyone, not just people who use wheelchairs.
While it may not be possible to make everything accessible to all people due to variation in user needs and abilities, we can follow standards to make our online content accessible to the broadest range of people. These standards are referred to as the Web Content Accessibility Guidelines.
1.02: What is Disability
According to U.S. Code Title 42, Chapter 126, § 12102, a disability is a physical or mental impairment that substantially limits one or more major life activities of an individual. Major life activities include, but are not limited to, caring for oneself, performing manual tasks, seeing, hearing, eating, sleeping, walking, standing, lifting, bending, speaking, breathing, learning, reading, concentrating, thinking, communicating, and working.
A major life activity also includes the operation of a major bodily function, including but not limited to, functions of the immune system, normal cell growth, digestive, bowel, bladder, neurological, brain, respiratory, circulatory, endocrine, and reproductive functions.
In reality, we are all aging into disability.
Visual disabilities include blindness, low vision, tunnel vision, and central field loss in which there is loss of vision in the central portion of the eye. Auditory disabilities include deafness, hearing impairment and speech disabilities. Diseases such as Parkinson’s and arthritis can affect mobility, use of hands, eyes, coordination and fine motor control. Cognitive disabilities are highly variable and can include learning disability, dyslexia, and conditions that affect memory and attention.
Sometimes when people think of a person with a disability, they think of a condition that has existed since birth. Not all disabilities are present at birth, and not all are permanent. A person may lose the use of his or her dominant arm or hand for a few months due to a fractured or broken bone, for example. A disability can happen to any one of us at any time.
Online, we will be concerned with the accessibility of our documents, such as Word, PDF and PowerPoint files, the code behind what we present, the websites we ask our users to interact with, and the activities we ask them to perform.
1.03: Person First Language
When I was in Physical Therapy school at OSU, I had an instructor tell us that we have to be careful not to refer to the patient by their condition. That is, you wouldn’t want to say, “I have my hip replacement coming at 1 o’clock.” Instead you might tell your supervisor or fellow physical therapist, “I have Mrs. M coming at 1 o’clock for her hip replacement therapy.” Likewise, when we have a student with a disability, such as blindness, we would refer to him or her as a person who is blind. For example, if I were to approach our Office of Disability Services, I would say, “I need help with making accommodations for a student in my course who is blind,” instead of “I need help with making accommodations for a blind student in my course.”
The Centers for Disease Control and Prevention has a helpful chart of people first language and language to avoid. They’ve put this language in a poster called Communicating With and About People With Disabilities (visit: https://www.cdc.gov/ncbddd/disabilit...ter_photos.pdf)
An interactive or media element has been excluded from this version of the text. You can view it online here:
https://pressbooks.ulib.csuohio.edu/accessibility/?p=635
1.04: Why Should We Design for Accessibility
Please view the video titled “Keeping Web Accessibility In Mind.”
An interactive or media element has been excluded from this version of the text. You can view it online here: https://pressbooks.ulib.csuohio.edu/accessibility/?p=390
As teachers and members of an academic institution, don’t we want everyone to have the opportunity to learn, share, be creative and succeed in life? Aren’t we here to enrich people’s lives and provide information to everyone? Imagine if we can open doors to information, not shut them to a certain segment of the population. We want to enable independence for people with disabilities.
To quote David Berman, a web accessibility expert, “What if we had denied an education to these people?”
Ron Mace is known as the father of universal design. He became an architect and worked on the first U.S. building codes to incorporate principles of accessible design, in North Carolina in 1973. In college, he was carried into buildings that had no wheelchair ramps.
Stephen Hawking was a famous mathematician and physicist who taught at Cambridge University. He wrote “A Brief History of Time” and served as the prestigious Lucasian Professor of Mathematics at Cambridge.
Helen Keller was the first blind and deaf person to receive a Bachelor of Arts degree. She became a lecturer, writer, and political activist.
If you’ve ever experienced being excluded from an activity you wanted to participate in, you’ll understand that designing inclusive experiences is the right thing to do.
It’s smart for business, helping users find our information and broadening our customer base.
Moreover, it’s the law.
For another perspective on designing for accessibility, view David Berman’s video titled “Why Should We Care.”
An interactive or media element has been excluded from this version of the text. You can view it online here: https://pressbooks.ulib.csuohio.edu/accessibility/?p=390
1.05: The Laws Supporting Accessibility
There are four existing laws you should be aware of when designing online content at an institution of higher education.
1. Section 504 of the Rehabilitation Act of 1973, says that colleges or universities that receive federal funds need to ensure that people with disabilities can participate in programs & activities and have the same benefits that people without disabilities have. Subpart E requires an institution to be prepared to make reasonable academic adjustments and accommodations to allow students with disabilities full participation in the same programs and activities available to students without disabilities.
2. Section 508 of the Rehabilitation Act of 1973 is an amendment that deals with electronic information and technology. If you are a state that receives federal funds under the Technology Related Assistance for Individuals with Disabilities Act of 1988, you need to ensure that people with disabilities have access to and can use information and services available to non-disabled members of the public. This includes information stored on web sites and in online courses. Ohio receives funds from the Technology Related Assistance for Individuals with Disabilities Act of 1988. With a recent rewriting of Section 508, on January 18, 2017, the rule now refers to Electronic and Information Technology (EIT) as Information and Communication Technology (ICT).
3. The Americans with Disabilities Act of 1990 expands the reach of the Rehabilitation Act of 1973 to private as well as public institutions of higher education. It requires schools to provide access to the same programs that non-disabled people participate in, in an integrated setting where possible. It requires auxiliary aids and services when necessary to ensure effective communication.
4. The 21st Century Communications and Video Accessibility Act of 2010 requires modern communications to be accessible to people with disabilities. This means VOIP services, electronic messaging, video conferencing, video communications & mobile browsers.
For those who are curious, the following sites have lists of higher education accessibility lawsuits, complaints and settlements:
1.06: A Shift Toward Broader Standards and Functionality Supporting Accessible O
Legislation regarding accessibility at higher educational institutions recently changed. In the United States, we follow Section 508 guidelines for designing websites and online content. The U.S. Access Board writes Section 508 rules for organizations receiving federal funds to follow, as part of the 1998 Amendment to the Rehabilitation Act. They are enforced by the Dept. of Education’s Office for Civil Rights and the Department of Justice.
Section 508 rules were previously published on December 21, 2000, and implemented in the summer of 2001. However, Section 508 recently underwent a refresh, and the new standards were written into the Federal Register on January 18, 2017. The new standards align with the Web Content Accessibility Guidelines 2.0 (WCAG 2.0) Level AA. At the time of writing, WCAG went through an update to version 2.1 on June 5th, 2018. We’ll have to keep checking to see when Section 508 points to the newer standards.
The Web Content Accessibility Guidelines are written by the Web Accessibility Initiative (WAI) at the World Wide Web Consortium. This is an international group, and these are international standards that have already been adopted by designers in other countries, such as in Europe and Canada. There are three levels to each WCAG 2.0 guideline – Level A is what you must do, level AA is what you should do, and level AAA produces the best accessibility for the most people. The WCAG 2.0 guidelines are broader in scope than the previous Section 508 standards. The new focus will be on functionality rather than device- or technology-specific standards.
Technology has changed a great deal since Section 508 standards were last written in 2000. People are no longer relying primarily on desktops to access information. They are using smart phones and other mobile devices with voice, text and video communications. By harmonizing U.S. standards with what other countries are following, we will create a larger market for our information and communication technology. Along with the Section 508 rules rewrite, Section 255 of the Communications Act of 1934 is also being updated. Section 255 applies to the manufacturers and authors of our communication and information technology devices and software. The information and communication technology the new standards refer to are: VOIP products, computers, software, office machines, information kiosks, transaction machines, websites, and electronic documents.
The image below shows the degree of accessibility each level of WCAG 2.0 covers compared to the previous Section 508 standards, which most closely resemble WCAG 2.0 Level A.
Another change to Section 508 is the delineation of covered “content.” All public-facing content, as well as eight categories of non-public-facing content, will have to be accessible.
1.07: The Previous (year 2000) Section 508 Standards
Why look at the older, previous set of standards? Some businesses or vendors may not have paid for an updated Voluntary Product Accessibility Template yet. So, you may see some reference to the older standards until they have the accessibility of the product evaluated again. It’s also interesting to see similarities between the old Section 508 EIT standards and the WCAG 2.0 Level AA standards.
A complete list of Section 508 standards for Electronic and Information Technology can be found on the United States Access Board website. Information about accessibility features of the technology we use at Cleveland State University for our online courses can be found within our course template under the Syllabus & Course Information folder, within a subfolder called Accessibility Resources.
Below is a list of the previous 2000 Section 508 standards pertaining to web-based intranet and internet information and applications.
1. (a) A text equivalent for every non-text element shall be provided (e.g., via “alt”, “longdesc”, or in element content).
2. (b) Equivalent alternatives for any multimedia presentation shall be synchronized with the presentation.
3. (c) Web pages shall be designed so that all information conveyed with color is also available without color, for example from context or markup.
4. (d) Documents shall be organized so they are readable without requiring an associated style sheet.
5. (e) Redundant text links shall be provided for each active region of a server-side image map.
6. (f) Client-side image maps shall be provided instead of server-side image maps except where the regions cannot be defined with an available geometric shape.
7. (g) Row and column headers shall be identified for data tables.
8. (h) Markup shall be used to associate data cells and header cells for data tables that have two or more logical levels of row or column headers.
9. (i) Frames shall be titled with text that facilitates frame identification and navigation.
10. (j) Pages shall be designed to avoid causing the screen to flicker with a frequency greater than 2 Hz and lower than 55 Hz.
11. (k) A text-only page, with equivalent information or functionality, shall be provided to make a web site comply with the provisions of this part, when compliance cannot be accomplished in any other way. The content of the text-only page shall be updated whenever the primary page changes.
12. (l) When pages utilize scripting languages to display content, or to create interface elements, the information provided by the script shall be identified with functional text that can be read by assistive technology.
13. (m) When a web page requires that an applet, plug-in or other application be present on the client system to interpret page content, the page must provide a link to a plug-in or applet that complies with §1194.21(a) through (l).
14. (n) When electronic forms are designed to be completed on-line, the form shall allow people using assistive technology to access the information, field elements, and functionality required for completion and submission of the form, including all directions and cues.
15. (o) A method shall be provided that permits users to skip repetitive navigation links.
16. (p) When a timed response is required, the user shall be alerted and given sufficient time to indicate more time is required.
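To make provisions (g) and (h) above more concrete, here is a minimal sketch of a small data table marked up so that screen readers can associate each data cell with its headers. The week, date, and topic values simply echo the sample course schedule used later in this book and are only illustrative.

<table>
  <!-- Column headers identified with th and scope="col" -->
  <tr>
    <th scope="col">Week</th>
    <th scope="col">Date</th>
    <th scope="col">Topic</th>
  </tr>
  <!-- A row header identified with th and scope="row" -->
  <tr>
    <th scope="row">1</th>
    <td>January 17</td>
    <td>Orientation to Blackboard Learn</td>
  </tr>
  <tr>
    <th scope="row">2</th>
    <td>January 24</td>
    <td>Exploring Library Resources Online</td>
  </tr>
</table>

For tables with two or more logical levels of row or column headers, the headers and id attributes can be used instead of scope to spell out exactly which header cells describe each data cell.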
1.08: Web Content Accessibility Guidelines (WCAG 2.0)
The Web Content Accessibility Guidelines 2.0 are published on the World Wide Web consortium’s website. There are four principles to the guidelines. You can use the acronym POUR to remember it.
Perceivable: The first is that the content must be Perceivable to the end user. It can’t be invisible to all of their senses.
Operable: The second is that the user interface and navigation must be Operable. The interface can’t require interaction that a user can’t perform.
Understandable: The third is that the interface and information must be Understandable.
Robust: The fourth is that the content must be Robust in that it can be interpreted by assistive technology existing today as well as that coming in the future.
WCAG 2.0 includes three levels of conformance represented by “A,” “AA,” and “AAA” with “AAA” being the best accessibility. It’s made up of 12 guidelines. Each guideline has checkpoints you can use to evaluate the accessibility of your online content.
The WCAG 2.0 guidelines are organized under the four POUR principles:
Perceivable: 1.1 Text Alternatives, 1.2 Time-based Media, 1.3 Adaptable, 1.4 Distinguishable
Operable: 2.1 Keyboard Accessible, 2.2 Enough Time, 2.3 Seizures, 2.4 Navigable
Understandable: 3.1 Readable, 3.2 Predictable, 3.3 Input Assistance
Robust: 4.1 Compatible
Full details for each guideline and its success criteria are available on the World Wide Web Consortium’s website.
You’ll notice similarities between the WCAG 2.0 guidelines and the 2000 Section 508 standards. Guideline 1.1 says, “Provide text alternatives for any non-text content so that it can be changed into other forms people need, such as large print, braille, speech, symbols or simpler language” is similar to the previous Section 508 standard (a) A text equivalent for every non-text element shall be provided (e.g., via “`alt`”, “`longdesc`”, or in element content).
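As a minimal sketch of what that shared requirement looks like in practice, the HTML below gives one image a text alternative and marks a purely decorative image with an empty alt attribute so screen readers will skip it. The file names and alt wording are only illustrative.

<!-- Informative image: the alt text carries the same information as the picture -->
<img src="curb-cut.jpg" alt="A curb cut ramp connecting the sidewalk to the crosswalk">

<!-- Decorative image: an empty alt attribute tells screen readers to ignore it -->
<img src="decorative-divider.png" alt="">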
1.09: Screen Reader Software for Accessing Online Content
Screen reader software reads online content to the end user. The primary users are people who are blind and those with low vision. It’s also used by people with other disabilities and multiple disabilities to a lesser extent. Some users have reading difficulties, which may be due to being a non-native speaker or having dyslexia, for example. Screen readers are also used by people who are deaf and blind to convert text into Braille characters on refreshable Braille devices. The photograph below shows a refreshable Braille device being used.
Refreshable Braille device being used. Photo from Flickr by Visualpun.ch.
There are a number of freely available screen readers, such as VoiceOver built into Mac operating systems, Narrator built into Windows, Window-Eyes and NVDA for Windows, and a 40-minute, limited version of JAWS. JAWS has been around a long time and is still predominantly used today, but is quickly being displaced by ZoomText, Window-Eyes and NVDA (See the WebAIM survey on screen reader usage). Designers and developers use screen readers as a readily accessible way to test their content, though the best tests are with people who rely on this technology on a regular basis. You can download JAWS for Windows at http://www.freedomscientific.com/downloads/jaws. NVDA doesn’t have a 40-minute time limit per use, which is fantastic! You can download NVDA at http://www.nvaccess.org/download/.
People use screen reader software to read content in PDFs, Word files, email applications, and web pages, for example. Screen readers not only read the text on the page, they announce important elements to the user to describe what is on the page, such as the number of headings on the page, number of links, and number of form fields. A screen reader user can have the software say everything from the top left of the page, or they can use keyboard navigation to jump to certain elements. They can use the tab key to move between links and form fields on a page, and use the up and down arrow keys to move between lines of text or to the next item. There are also custom keys to jump to specific types of content the user is interested in. In JAWS, the “H” key will move between headings. The “F” key will move from form to form. “T” will move to the next table, “B” to the next button. There are also keyboard combinations that can help bring up lists of important elements on a web page, in order to help users navigate faster. In JAWS, pressing the Insert key followed by F6 will bring up a list of headings on the page with heading level, Insert + F7 will bring up a list of links on a page, Insert + F9 will bring up a list of form elements.
Screen reader users can run into barriers when content isn’t marked up properly. This markup tells the user what an object on the screen is and what it does. A big one is missing alternative text. Alt text is used to convey the purpose or information contained in or by an image. If an image is used as a link or a button, alt text will tell a blind person where the link will lead to, or in the case of a button, what will happen if it is activated. When headings are coded as headings on a web page, or styled as headings in Word, these provide landmarks to help a screen reader user navigate to what they would like to hear faster.
1.10: Voice Recognition Software for Accessing Online Content
Voice recognition software is used primarily for hands-free operation of a computer by people with mobility impairments. It’s used for dictation to write text in place of typing. It’s also used for commanding the computer to perform tasks that would be done with a mouse or keyboard. These tasks include opening and closing applications, switching from one application to another, using the menus and options available within an application, clicking on buttons, links and other interactive elements on a web page, drag and drop, as well as other tasks. Dragon Naturally Speaking is the most popular voice recognition software, but there are others. Windows operating system has a built in tool called Speech Recognition. Mac OS X has its Enhanced Dictation tool and iOS devices have Siri.
Command mode works on the premise of see it and say it. A voice recognition user gives commands to their computer based on what they see on the screen. Later, we’ll talk about alternative text that can be placed on images or buttons. Alternative text is typically used by screen readers to help people who are blind know what information is being conveyed by an image, but it’s also used by people using voice recognition software. Alternative text or “alt text” that is placed on a button, for example, should match text that is on a button, so that the command the user will give will match.
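A minimal sketch of that idea in HTML: the accessible name of each control matches its visible label, so a Dragon user can simply say what they see on screen. The control names and file name below are hypothetical.

<!-- A text button: saying "Click Search" activates it because the visible label and accessible name match -->
<button type="submit">Search</button>

<!-- An image button: the alt text repeats the word shown on the button graphic -->
<input type="image" src="search-button.png" alt="Search">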
Dragon Naturally Speaking has features that will help with command and control when elements on a web page aren’t coded as links or buttons, or elements don’t have the proper alternative text attribute set on them. One is the “mouse grid” tool which provides a series of numbered grids on the page that progressively shrink and re-center themselves over the previously spoken number’s area. Another is telling the mouse to move up, down, left and right. There’s a demonstration of mouse grid and voice commands to move the mouse at https://www.youtube.com/watch?v=iOSObinq7a4.
An interactive or media element has been excluded from this version of the text. You can view it online here: https://pressbooks.ulib.csuohio.edu/accessibility/?p=428
In the previous example, we saw how painfully slow it can be for voice recognition software users to navigate an improperly coded website. The technology, though, can allow for greater efficiency in navigating when alt text is set, for example. Compare by watching the next movie on following links in web pages with Dragon.
An interactive or media element has been excluded from this version of the text. You can view it online here: https://pressbooks.ulib.csuohio.edu/accessibility/?p=428
Today, Nuance, the maker of Dragon Naturally Speaking, refers to its software packages by other names. These include Dragon Professional Individual, Dragon Anywhere, Dragon Legal Individual, and Dragon Law Enforcement, to name a few.
1.11: Screen Magnification Software for Accessing Online Content
People with low vision include those with part of their field of view blocked, or blurred. Glaucoma causes tunnel vision, reducing the field of view around the outer edges. Macular degeneration causes blockage of vision in the central part of the eye. Cataracts will cause blurred vision. People with low vision frequently use magnification programs to view the computer screen. Operating systems have built-in magnifiers and there are third party ones that are more feature rich. Screen magnifiers built into operating systems are Windows Magnifier on the Windows platform and Zoom on MacOS and iOS. More feature rich magnification programs are ZoomText and MAGic. Screen magnifiers typically provide features like variable magnification, color contrast adjustment, text to speech, and tools for helping with cursor tracking and focus.
Some users will magnify what they see in a browser by pressing the CTRL key followed by the + key on their keyboard. They zoom back out by pressing CTRL – . Users can also increase the font size within their browser settings to be greater than the default 16 pixels.
Seeing a small portion of the screen at once can cause problems with orientation and navigation. Split screen modes are possible in which one side presents the enlarged version of the content and the other side presents the non-magnified view of the page, but this uses up screen real estate. Other screen magnifiers will use a picture in picture mode. This presents as an enlarged outlined area over the top of the page wherever the mouse is pointing on the page. It’s best to avoid using multi-column layouts in order to help users with low vision view your content.
Views Using Screen Magnifiers
Using MAGic at 2 Times Magnification:
Using MAGic at 5 Times Magnification with Inverted Contrast:
Using MAGic at 8 Times Magnification:
Difficulties with using Screen Magnifiers
When a student uses a screen magnifier they might have trouble with scanning the screen to find specific content they are looking for. This might make performing certain tasks take more time than they would for other students. If images that the student is viewing were produced with a low resolution (usually below 300dpi), the greater the magnification the more difficult it will be to read the content in the image. If color is used for conveying information, an inverted contrast or color filtering might make it impossible to identify the color that the student needs to see. Students using this technology commonly have difficulties perceiving color as well.
Font Style, Color, and Size Adjustment
Unfiltered:
Image of High Contrast White:
Image of a Color Tinted Screen:
Image of Negative Contrast:
Difficulties with Font and Color Adjustment
Using images of text instead of text will make it so that the person cannot adjust the preferences they need to be able to see the content. This happens commonly when people use PDF documents without proper accessibility elements, images of foreign language or images of mathematical text. People can have difficulty making these adjustments if documents were prepared with spacing instead of using proper table structure or multi-column layout in Word. People also have difficulty making adjustments with a screen magnification program when text with fixed positions on the page are used, such as when text is put in text boxes with fixed sizes to create a callout. When enlarged, the text can overlap or be displayed in an order that no longer makes sense. If color is used as the only method for providing information, when the color is changed it might nullify the message.
Printed Content
Students with low vision will also sometimes print online documents to make adjustments like those seen in the images above. Students who do this typically are looking to make the text larger, or make the colors less busy, so that they can print an easier to read document. Students with low vision also use handheld magnifiers or electronic magnifiers which can apply tinting or invert the colors of the printed document. If the student is not able to customize the printed document, it may make the document harder or even impossible to read with the magnifier. Elements that make this harder to accomplish are text embedded into images, customizing colors inside of individual PowerPoint slides instead of within a PowerPoint theme slide, and using floating textboxes in Word.
1.12: Hardware and Hardware-Software Assistive Devices
Hardware assistive devices include specialized keyboards and mice, as well as mouth sticks, head wands, button switches and sip and puff switches for those who cannot use their hands to operate a computer. They are numerous and have evolved over time. Most of them are used by people with mobility impairments. You can see a list of them on WebAIM’s Assistive Technologies page.
People who are deaf-blind may use a screen reader in conjunction with a refreshable braille display.
Refreshable Braille device at https://commons.wikimedia.org/wiki/F...ge-braille.jpg Photo by Sebastien Delorme, 17 November, 2008. CC-BY-SA
Assistive devices have enabled people with mobility impairments not only to use a computer and access information on it, but to create websites, videos, and documents for others, providing independence and a path to employment.
Hardware assistive devices primarily emulate keyboard and mouse input. However, there are switch devices in place today that work with Switch Control in iOS 7 and higher, that allow users to operate Apple mobile devices, such as an iPad. Watch the video below that shows how a boy with cerebral palsy uses a switch and an iPad in switch mode and point mode to navigate the web, interact with applications, focus the camera. Switch mode is used when there are recognizable buttons. But, he engages Point Mode, often, when there aren’t recognizable buttons or it allows him to navigate faster. Below is an example of a non-captioned Youtube video captioned with an Amara.org account. It contains both a “CC” icon to show closed captioning, as well as a page icon to display an interactive transcript. Check out the interactive transcript, which highlights lines of text as they are spoken by the narrators. You’ll learn more about Amara in the module on transcribing and adding closed captions to your multimedia files. The video, called Intersection: A Dream Come True can also be accessed by the URL: http://amara.org/en/videos/PllkG3AiQDvq/url/2315386/
Intersection: A Dream Come True – Version with Closed Captioning
2.01: Creating Accessible Word Documents - Setting Language and Title
Setting Document Language/s
It’s important to indicate the language or languages within your document for a screen reader to read it properly. During setup of Microsoft Office, you likely specified your default language as English. If you need to change the default document language or add additional languages within a multi-language document, you can do this under the File menu.
1. Click the File menu.
2. Select Options from the list on the left navigation. It’s at the bottom of this list within Word 2016.
3. In the Word Options dialogue box that pops up, select Language from the navigation list on the left.
4. From the drop down menu below your Editing Languages box, where it says, “Add additional editing languages,” click on the drop down arrow to expand the menu and select English (United States) or another language.
5. Then, click the Add button the the right of the drop down menu. If you need to add more languages, select them from the box also, and click on the Add button.
6. Click OK at the bottom of the dialogue when you are finished.
Checking with a Screen Reader
By default, JAWS 17 automatically detects when the language changes within a Word document and will start reading in the new language. NVDA will do this when its voice synthesizer is set to its default eSpeak NG. To make sure NVDA’s synthesizer is set to eSpeak NG, find the NVDA icon in the system tray (Windows). It looks like an N with a curved ascending arm as seen below.
1. Right click on the NVDA icon
2. Select Preferences to expand its preferences menu
3. Within the preferences menu, select Synthesizer
4. Select the first option for synthesizer to set it to eSpeak NG. The Microsoft Speech API version 5 sounds more pleasant, but won’t automatically switch the language it’s reading when the language changes.
I’ll attach a multi-language Word document below in case you would like to practice with it.
Setting Language on Small Amounts of Text in Multi-Language Document
If you find that a screen reader automatically switches to a foreign language when it should not have, or if you have small amounts of text that are a different language from the main language of the document, you can set the language on this text specifically by:
1. select the line of text that it isn’t reading properly, or the word/s that are in a different language
2. click the Review tab,
3. click on Language in the ribbon at the top
4. Select “Set Proofing Language.”
5. From the Language dialogue box that opens up, you’ll see the incorrect language highlighted. Scroll to find the language you want, such as Chinese (PRC), select it and click OK at the bottom. The screen reader should now read back the line in that language. This case really did happen with the practice syllabus you’ll work on for your assignment in this chapter. For some reason, when JAWS and NVDA reached the line starting with “Prerequisites,” they started reading in German. That could throw your audience off track!
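For comparison, web pages express the same idea with the lang attribute. The sketch below, with made-up sentences, sets the page’s main language to English and marks one paragraph as German, which is the cue screen readers such as JAWS and NVDA use to switch voices.

<html lang="en">
  <body>
    <p>The assignment is due on Friday.</p>
    <!-- The lang attribute tells a screen reader to switch to a German voice for this paragraph -->
    <p lang="de">Die Aufgabe ist am Freitag fällig.</p>
  </body>
</html>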
Setting a Word Document Title
Setting a Word document title can be useful if the document will be converted to HTML and viewed by a screen reader user in a browser. In the .htm (HTML) format, the title will show within the title element, located between the opening and closing head element. The structure would look like <HEAD><TITLE>Your unique title here</TITLE></HEAD> with some other elements listed between the head elements also. A title set in Word will go with a Word document converted to PDF also. You can set a title on a Word document by:
1. With the document open, click on the File menu
2. Select Info if it is not selected
3. Under the Properties column on the right, find Title and click the Add a Title text to the right of it
4. Type a unique, concise title that explains what the document is about
5. Save the document
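To picture where that title ends up, here is a rough sketch of the HTML a converted document might contain, using the sample nursing syllabus referenced elsewhere in this book; the exact markup Word generates varies, so treat this as an illustration rather than Word’s literal output.

<!DOCTYPE html>
<html lang="en">
  <head>
    <!-- The title element is what a browser tab shows and what a screen reader announces for the page -->
    <title>Syllabus for Nursing 440: Primary Preventive Strategies for Communities</title>
  </head>
  <body>
    <h1>Syllabus for Nursing 440: Primary Preventive Strategies for Communities</h1>
  </body>
</html>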
Create a Unique Concise File Name
In both Word and Acrobat, a screen reader first reads the file name of the document that is open. Because of this, it’s a good idea to make your file name a unique, concise description of what the file is about as well. It’s okay if it matches your title or heading level 1.
2.02: Creating Accessible Word Documents - Color Contrast for Accessibility
Contrast is the difference in luminosity or tonal values between the foreground and background. In other words, it is the difference between the value of the foreground color, which is usually your font color, and the background color of the screen. Contrast is a property that helps us see and distinguish figure from ground.
Key Takeaway
WCAG 2.0 guidelines require a contrast ratio of 4.5:1 between the foreground color (text color) and background color to meet level AA, for standard font size, e.g. 12 pt font. Large print sizes, or large scale text can have a contrast ratio of 3:1. A large print size is considered 14 pt (approximately 18.66 px) bold, or 18 pt (24 px) or larger non-bold text.
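For reference, WCAG 2.0 defines the contrast ratio between two colors as

(L1 + 0.05) / (L2 + 0.05)

where L1 is the relative luminance of the lighter color and L2 is the relative luminance of the darker color. Pure black text on a pure white background works out to (1.0 + 0.05) / (0.0 + 0.05), or 21:1, which is why 21:1 is the highest ratio the contrast checkers report.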
There are lots of color contrast analysers available within various programs and as free downloads on the web. A stand-alone application that will tell you whether your foreground and background color choices have enough contrast to meet WCAG 2.0 level AA or level AAA standards is called Colour Contrast Analyser. It is a free download from The Paciello Group’s website. This is a nifty tool and I recommend downloading and installing it! It contains an eye dropper for sampling colors on your computer screen, much like that of Adobe Photoshop’s color picker. You can keep the application on your desktop or set a shortcut for it within your Start menu in Windows to have easy access to launching it.
Colour Contrast Analyser will tell you the contrast ratio of your selected foreground and background colors, let you know if it passes WCAG 2.0 level AA and level AAA, and toggles into a mode that lets you see how people with different color blindnesses see your text against the background. If your color choices don’t pass, there are sliders to adjust the red, green, and blue values to change the color to something that will pass, and suggested variations on the color.
The hexadecimal color codes from the passing colors can be copied and used in Word or Blackboard Learn’s content editor. Hexadecimal code is typically how colors are designated in HTML. Hexadecimal code for color takes the format of an octothorpe (pound symbol) followed by six letters or numbers: two characters for red, two for green and two for blue. Hexadecimal code for white is #ffffff, and for black is #000000.
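As a small sketch of how those hexadecimal codes are used in a web page, the style rules below pair text and background colors that pass the Level AA thresholds mentioned above (#767676 on white is right at about 4.5:1, and #333333 on white is roughly 12.6:1).

<style>
  /* About 4.5:1 against white: just meets Level AA for normal-size text */
  body {
    color: #767676;
    background-color: #ffffff;
  }
  /* About 12.6:1 against white: passes AA and AAA comfortably */
  h1 {
    color: #333333;
  }
</style>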
Colour Contrast Analyser also has a built in screen, web page, and file viewer that will allow you to see what the colors look like as a whole for people with various types of color blindness. This later simulation is available in the Windows version only. These simulated web pages for protanopia, deuteranopia and tritanopia can be saved as jpg (jpeg) files if you need to reference them later or share them with other designers and decision makers.
Colour Contrast Analyser Video Introduction
An interactive or media element has been excluded from this version of the text. You can view it online here: https://pressbooks.ulib.csuohio.edu/accessibility/?p=64
Colour Contrast Analyser
Download the free Colour Contrast Analyser application from The Paciello Group’s site (visit: https://developer.paciellogroup.com/...trastanalyser/)
Word has a built in accessibility checker that will sometimes catch contrast issues and sometimes not. The use of the Colour Contrast Analyser application can help you check, when you may have doubt about your choices.
Other tools for checking to see if you have adequate color contrast are WebAIM’s contrast checker page, and JuicyStudio Luminosity Colour Contrast Ratio Analyser. Both of these require you to know the hexidecimal color values, before arriving at the page. There is no eye dropper tool to sample the colors from a page open on your screen. You enter the known color values into their form and submit it to get results on pass or fail for WCAG 2.0 Level AA or AAA. WebAIM directs designers to install Colorzilla add-on for Firefox, or extension for Chrome, to figure out hexidecimal values of colors on a web page. Another free online color picker that will allow you to get hexidecimal color code, RGB and HSL values is located at http://htmlcolorcodes.comWAVE will also analyze contrast ratios for all page elements at once. You simply enter the URL into their form to analyze a web page.
The Color Contrast Analyser developed by the North Carolina State University Office of Information Technology is available as a Chrome extension through Chrome Web Store. It will show contrast edges as a black and white screen. It’s a different way of looking at things. If you don’t see enough delineation between text, foreground symbols and their background, then you’ll need to increase the color contrast ratio. The example below shows the visual results of their processor. It is a form in which the required field labels are entered as red text. The red text shows as areas of less delineation. | textbooks/socialsci/Education_and_Professional_Development/Book%3A_Best_Practices_in_Accessible_Online_Design_(Caprette)/02%3A_Word_Accessibility/2.02%3A_Creating_Accessible_Word_Documents_-_Color_Contrast_for_Accessibility.txt |
2.03: Avoiding the Use of Color Alone to Convey Meaning and Algorithms That Help
You should avoid the use of color alone to convey meaning in order to help users with colorblindness. Color blindness comes in various states. Protanopia results from insensitivity to red light, which causes confusion in discerning greens, reds or yellows. Deuteranopia results from insensitivity to green light, which causes confusion in discerning greens, reds and yellows. It is the most common form of color-blindness, often referred to as red-green colorblindness. Tritanopia results from an insensitivity to blue light, which causes confusion in discerning greens and blues. Achromatopsia or Monochromacy is rare, but results in a person not being able to see color at all, only black, white and shades of gray. Worldwide, approximately 8% of men and 0.5% of women have a color deficiency. There is a greater number of white (Caucasian) people with color blindness, such as in Scandinavia where 10-11% of men have color blindness.
Avoid Using Color Coding of Text Alone to Indicate Importance:
Below is an example of what you should not do. You would not want to state in your course schedule that the dates listed in red are quiz dates, and then only color code the text to represent those days.
Week Date Topic
1 January 17 Orientation to Blackboard Learn
January 19 APA Style Manual
2 January 24 Exploring CSU Library Resources Online, Literature Services
January 26 Refreshing Writing Skills
3 January 31 Using Images and Media
February 2 Presenting a Professional Image
4 February 7 Public Appearances
February 9 Communication in the Health Care Setting
5 February 14 Resolving Conflicts
You could fix this problem by saying the dates listed in red with the asterisk beside them are days you will have a quiz over the previous topics. The asterisk has come to be known as a designator of required fields on internet forms, for screen reader users. Thus, it could be used here to designate important dates in a schedule.
Week Date Topic
1 January 17 Orientation to Blackboard Learn
January 19 APA Style Manual
2 January 24 * Exploring CSU Library Resources Online, Literature Services
January 26 Refreshing Writing Skills
3 January 31 * Using Images and Media
February 2 Presenting a Professional Image
4 February 7 Public Appearances
February 9 * Communication in the Health Care Setting
5 February 14 Resolving Conflicts
Inherent Color Conveying Meaning, What Do You Do?
Sometimes you need to present information that inherently has color coding that represents meaning. The following example is from Vischeck’s website. On the left, the photomicrograph shows how fluorescent staining of a cell renders a blue, green and red image where the colors represent different structures within a cell. The image to the right of it shows how someone with red/green color blindness (deuteranopia) would see the colors. The greens and reds show as yellows, and the boundary between them is lost.
Vischeck’s website used to have a way to upload an image like this to run through their Daltonizing algorithm. The algorithm increases contrast between the green and red areas, and shifts the hue of the red toward a blue or grayish color. This helps differentiate the structures of the cell. There is an example of the same image of the cell run through their Daltonizing algorithm on the right, below. Unfortunately, no one can load an image to be run through the Daltonizing algorithm at this time. There are other resources you can recommend to colorblind students that may help them see differences between colors.
Resources to Help Daltonize Images for Colorblind Audience:
Google Chrome browser has an extension which will simulate what an image or page looks like to a person with various types of colorblindness and has a Daltonization algorithm to convert a page or image for people with colorblindness to see differences between colors. Daltonize.org has information about Chrome Daltonize! extension. Within Chrome, add the extension from this link to Chrome Daltonize!
Below is a captioned movie tutorial overview of using Chrome Daltonize! extension. Unfortunately, it doesn’t seem to work on pages that are behind password protection.
An interactive or media element has been excluded from this version of the text. You can view it online here: https://pressbooks.ulib.csuohio.edu/accessibility/?p=74
Designers who use Photoshop CC 2017 also have access to colorblind simulation for protanopia and deuteranopia. You can access these tools by:
1. Clicking on the View menu
2. Selecting Proof Setup
3. Then, select Color Blindness – Protanopia-Type or Color Blindness – Deuteranopia-Type
The following movie demonstrates the use of Photoshop’s colorblind simulators when looking at a form. The form, unfortunately, was designed so that the color red alone indicated required fields! See what someone with protanopia or deuteranopia would see when viewing this form. They are left guessing which fields are required. Best practice would be to place an asterisk before each required form field label and let the user know that the asterisk designates a required field.
An interactive or media element has been excluded from this version of the text. You can view it online here: https://pressbooks.ulib.csuohio.edu/accessibility/?p=74
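Following the best practice mentioned above, a minimal sketch of a form field that does not rely on color alone might look like the following; the field name and id are made up for illustration.

<p>Fields marked with an asterisk (*) are required.</p>
<!-- The asterisk in the label, plus the required attribute, signal the requirement without relying on color -->
<label for="student-name">* Name</label>
<input type="text" id="student-name" name="student-name" required>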
2.04: Formatting Font for Readability
Your choice of font family, style, weight, size, color, and line-height can all affect the legibility and readability of text on screen. Legibility is the ability of someone to discern what is written. Captcha text is often barely legible, as an example. Avoid the use of script text, for legibility reasons. Readability is how easy or difficult it is to read the content. Readability is affected not only by your style of writing, but by your choice of and formatting of text on the page. In Word, we’ll be formatting text with Word Styles primarily. But first, we’ll talk about font properties that will help your readers, both with and without documented disabilities.
When selecting a font family for screen-viewed text, you’ll want to select families and weights that have a substantial stroke thickness and medium-sized x-heights. The x-height is the height of the lower case letter x within the font family. The lower case letters without ascenders or descenders typically share this x-height.
Sans-serif fonts, such as Arial, Verdana, Helvetica, Tahoma, Trebuchet MS, or Myriad Web Pro, are good choices for body text. Though you want to pick fonts with substantial stroke thickness, it may be a good idea to avoid the use of a font that is designed primarily as bold, such as Arial Rounded MT Bold, because you are left with few options for formatting for importance. If you apply a Strong style (bolding) to an already bold font, it becomes so thick that it is difficult to read on screen.
If you are using a serif font family, Times New Roman and Georgia are good selections. In general, serif font families work well for body type that is printed, and sans-serif families work well as body type presented on screen. When it comes to styling text, it’s a good idea to use italics (the emphasis style) minimally, because slanting the letters can affect their readability. Bolding (the strong style) should also be used sparingly and be reserved for text you want to strongly emphasize. Selecting 12 point text or higher will help readers with vision difficulties. When a browser’s default font size is set to 16 px, 12 pt is approximately equal to that height. Line-height can make large blocks of body text or long lists easier to read. These paragraphs, on this page, have a line-height of 170% set within the HTML. In Word, you can select a multiple from the paragraph formatting options, such as 1.5, that will be one and a half times the line height of your font. Keep your Word documents in Word format, rather than converting them to PDFs with security restrictions, to help users who may need to download and manipulate font sizes and colors to help them perceive and understand the information presented. From the previous section, we saw how assistive technology applications like MAGic can manipulate a document to make it more accessible to those with low vision.
There are other properties of a document’s formatting that affect readability. It helps to group related content together by using headings above associated paragraphs. Groups of related content should be separated by white space. Adding white space around images can help also. Line lengths should be reasonable. Avoiding unnecessary, distracting moving graphics or blinking images is important for those with cognitive difficulties related to focus, and for individuals with photosensitive epilepsy who may go into seizure because of a blinking object set at a frequency of 5 to 30 Hertz (flashes per second).
2.05: Using Word Styles
Everyone loves style! Industrial designers use it to attract people to their products. Graphic designers use it to organize information on a page and designate changes in topic. We’ll create organization of our content, add meaning, and make it easier to navigate by adding Word Styles. Word Styles are used to format headings, emphasized and strongly emphasized text, links, paragraphs, and lists, for example. When text is formatted with Styles in Word, it creates a navigational structure that screen reader users can use to quickly get to the information they want.
Screen reader users use shortcut combinations of keys, for example, to bring up a list of headings that can help them navigate to where they want to go in the document faster. In JAWS, pressing the Insert key followed by F6 will bring up a list of headings. The numbers behind the names of the headings, such as 1, 2, and 3 designate heading level. It is good practice to create one heading level 1 at the top of the page that describes what the page is about, followed by subheading levels 2 through 6. In JAWS, pressing of the H key will help a screen reader user jump from heading to heading on a page. See a screen shot of what JAWS’ Heading List floating window looks like on top of a Word syllabus with headings levels 1 and 2. Within this document, you can see that the heading level 1 is the title of the document, “Syllabus for NUR 440.” The syllabus is then broken up into sections, which are second level headings.
Once you add headings with Word Styles, you can see them in Word’s Navigation Pane to the left of the document. If you don’t see the navigation pane, you can bring it up by going under the View menu in Word and checking the box next to Navigation Pane. It shows you the structure of your page and can help people without disabilities also. See the screen shot of the Navigation Pane and the box to enable this, outlined in red with a red arrow pointing to it (for accessibility reasons).
How to Apply a Word Heading Style
In html, there are 6 heading levels represented by H1 through H6. H1 is typically your largest and/or most important heading of a document. H2 headings would be subheadings of the H1 heading. H3 headings would be subheadings of H2, H4 headings would be subheadings of H3, and so forth, creating a visual and semantic hierarchy. The following graphic is a way to visualize the hierarchy.
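In case the graphic is not visible in this version of the text, the same hierarchy can be sketched as an HTML outline; the subheading names below are made-up examples.

<h1>Syllabus for Nursing 440</h1>
  <h2>Course Description</h2>
  <h2>Course Objectives</h2>
    <h3>Unit 1 Objectives</h3>
    <h3>Unit 2 Objectives</h3>
  <h2>Grading</h2>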
The heading level 1 at the top of this page will be the name of the syllabus for a class, such as “Syllabus for Nursing 440.” It sums up what the page is about. Once you type this in, highlight the text and click the Heading 1 style in Word’s Styles bar at the top. You can expand the styles into a pane that floats to the right by pressing the tiny button, with the diagonal arrow that points toward the lower right, at the bottom right of the styles menu. In the graphic below, the red arrow points to the button to open the Styles pane.
The Styles pane is bordered by the red box in the image below, pointed to by a large red arrow.
A Note about the Title and Subtitle Styles:
If you use the Title or Subtitle style to designate Title text within a Word document, this will not show up in a list of headings, within a screen reader. It will also not convert to a <title> element if the document is saved as a Web Page Filtered. Title and Subtitle text gets converted to an ordinary paragraph element in HTML. If you use a Heading level 1 to style your title or main subject, this will show up in a list of headings to help the user navigate the page.
A note about Heading levels above 6:
Headings 1 through 6 should convert to their corresponding HTML heading element (<h1>, <h2>, <h3>, <h4>, <h5>, and <h6>), but heading levels greater than this, such as Heading 7 through Heading 9, will convert to paragraphs with custom classes to affect their visual appearance, when a Word document is saved as a Web Page Filtered. This isn’t semantically correct.
A note about text formatted as lists in Word:
I'm seeing each list item convert to a paragraph element in HTML instead of the semantically correct <li> (list item) element.
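For comparison, a semantically correct list in HTML wraps the whole list in <ul> (or <ol> for a numbered list) and each item in <li>, as in this generic sketch:

<ul>
<li>First list item</li>
<li>Second list item</li>
</ul>

The Web Page Filtered export I'm describing instead wraps each item in its own <p> element, so a screen reader will not announce it as a list.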
How to Modify a Heading Style
A movie tutorial on how to modify a Heading 1 Style in Word exists at http://flash.ulib.csuohio.edu/elearn...1_in_Word.html, or you can view the written instructions below.
An interactive or media element has been excluded from this version of the text. You can view it online here: https://pressbooks.ulib.csuohio.edu/accessibility/?p=102
If you don’t like a style, you can change it by right clicking on the name of the style in the Styles pane, and selecting “Modify.” In the image below, the text that has the heading level 1 applied to it is selected at the top of the screen. It is, “Syllabus for Nursing 440: Primary Preventive Strategies for Communities,” outlined in a red box, pointed out by a large red arrow. The Modify option is outlined in a thick red box also, pointed out by a large red arrow.
Within the Modify Style dialogue box, you’ll see options to change the font, font size, font color, font weight, emphasis, bolding, underlining, line spacing, space before and after a line of text, indentation and alignment at the top. There are more options available, like setting the language, by clicking the Format button in the lower left of the Modify Style window. These options are outlined in thick red boxes in the image below, and pointed to by the large red arrows.
Control of the space between lines and before and after styled text, without hitting the Return or Enter key, is important. If you format with extra line returns or by hitting the space key multiple times, this adds extra hidden characters to your layout. If a Word document is later converted to a PDF, these extra returns create extra meaningless tags that have to be deleted from a tagged PDF. You can see extra returns and spaces within documents by clicking on the icon that looks like a paragraph symbol under the Home ribbon. It looks like a backwards P and is the Show/Hide button for showing paragraph returns, spaces, tabs and other hidden characters.
You can add space above and below your selected text equally by using the fourth and fifth buttons on the second row of the formatting area in the Modify Style dialogue box. The button that increases spacing between a heading and the surrounding content has two blue arrows, pointing away from one another. The button that decreases space between a heading and surrounding content has two blue arrows pointing at one another. You can also adjust the spacing before and after the style of text by clicking on the Format button at the bottom left of the Modify Style dialogue and selecting Paragraph.
Once in the Paragraph style dialogue menu, you can see separate text boxes to enter the space before and space after. These are outlined by the red box in the image below. A big red arrow points to this box. You also see the ability to indent, align, and adjust line spacing. | textbooks/socialsci/Education_and_Professional_Development/Book%3A_Best_Practices_in_Accessible_Online_Design_(Caprette)/02%3A_Word_Accessibility/2.05%3A_Using_Word_Styles.txt |
Avoid the Use of the B or I Buttons under Word’s Home tab
You'll want to avoid the use of the B (bold) and I (italic) buttons and instead use the Strong and Emphasis styles, respectively, found in the Styles pane.
If you would like your text bolded, use the Strong style in Word’s Style pane. If you want to create italicized text, use the Emphasis style in Word’s Style pane. These will create the correct semantic structure for this type of text. Some screen readers will change their tone of voice when reading to reflect Emphasis (the style that looks like italic) and more Strongly emphasized text (the style that looks like bold text). If the document is converted to HTML, Strong and Emphasis will add the correct HTML tags around those words.
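For example, when a document using these styles is saved as a web page, the converted markup would look something like this minimal sketch (the sentence is just an illustration):

<p>Submit your draft by <strong>Friday</strong>, and bring <em>only</em> one printed copy.</p>

Screen readers that support it can change their tone of voice for the <strong> and <em> elements, while text styled with the B and I buttons alone carries no such semantic cue.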
2.07: Alternative Text for Images - Descriptions in Word
The addition of alternative text for images will help a blind user know what is the content or function of an image. You’ll want to keep your alt descriptions succinct, and equivalent to what the image is conveying. If you have a chart or a graph, it’s best to describe what is being represented in the chart or graph within the body of the document, rather than in alt text. Or, if you can convey the information in the chart or graph by way of a simple data table, this would also make the content accessible. If you have an image that is a link, your alt text can be the name of the page that the link leads to.
In HTML, we can add a null alt attribute to a decorative image that doesn't convey meaning, such as a flourish. This will tell the screen reader to skip over it and not announce it at all. In more recent versions of Word, unfortunately, there's no way to do this. JAWS may be set to announce images, even if they don't contain Description text. If you leave the Description field blank for a decorative image, this leaves the screen reader user wondering if the image contains meaning or not, when it is announced. One way to get around this would be to put a decorative image or flourish in the header or footer of a document. Or, you could leave the image out. But, in Word, if you find that you don't want to leave the decorative image out, you should write very brief Description text about what it is.
You can add alternative text by right mouse clicking on an image, and choosing "Format Picture." In the Format Picture pane that opens to the right of the document, click on the cross shape icon for "Layout & Properties" of the image. Click the arrow to the left of Alt Text and type a Description for the image. Why is there both a Title and Description field under the Layout and Properties' Alt Text options, and which serves as the alt text? The Description text serves as the alternative text and should always be filled in! Some screen readers will read the Title text before the Description, such as JAWS 15 and newer, but some older screen readers, such as NVDA 2014.2 and Window-Eyes 8.4, don't. You can leave the Title text out. If you include it, make it short, and screen reader users can decide if they want to hear the Description. But, also try to keep your Description text short and succinct (200 characters or less). It's important that you write a Description that conveys the image's meaning, purpose, or where it leads to. Don't just describe what is in the image or write "image of."
If you have an artistic image that conveys sensory information, such as a painting or photograph, it is okay to describe what is in the image, such as “photo of,” or “painting of.”
The choice of alt text depends on the context of the image. You don’t want to be redundant and repeat text that is surrounding or adjacent to the image.
How to add an image to Word and add Alt Text to it
To insert an image in Word, go under the Insert tab and select Pictures from the ribbon. Browse to your image, select it and click Insert. With the image selected, right click on the image and select Format Picture from the context menu that pops-up. From the Format Picture pane options, select the icon that looks like four arrows forming a white cross on a blue background. It is the Layout & Properties button. Expand the Alt Text option by clicking on it. This brings up the Description field into which you should type your alternative text. The image below shows the Layout & Properties button outlined in red and the Description field where you would enter your alternative text. | textbooks/socialsci/Education_and_Professional_Development/Book%3A_Best_Practices_in_Accessible_Online_Design_(Caprette)/02%3A_Word_Accessibility/2.06%3A_Styles_for_Strong_and_Emphasis_-_Avoid_B_and_I_buttons_in_Word.txt |
At some point in a document or course creation, you'll want to provide links to other websites or pages. Screen reader users can bring up a list of links on a page, much like they bring up lists of all the headings on a page, or landmarks. In JAWS, a screen reader user does this by pressing the Insert key followed by F7. It's important to use descriptive text for your links that conveys where the link is going or its purpose, so that the links in this list make sense to a user when taken out of context. You don't want to use "click here" or "read more." If there is more than one of these "click here" or "read more" links on a page, a screen reader user may not remember, provided she or he read the page, which "read more" went with which topic. It's best to write the sentence in such a way as to provide link text that describes the purpose of the link and where it leads. The link text should be meaningful and unique from the other link text on the page.
In the following example I’ll show you what you should avoid and then how to rewrite and relink it for descriptive link text.
Avoid this type of writing and linking:
"For more information about Fair Use of copyrighted materials, click here. For information about a Fair Use Evaluator that can help you decide if your specific use of copyrighted material favors Fair Use or not, click here. For a chart that shows a list of conditions that favor or oppose Fair Use, click here."
or
"More information about Fair Use of copyrighted materials can be found at http://librarycopyright.net/. If you would like to take a Fair Use Evaluator to determine if your use of copyrighted materials falls under Fair Use, please visit http://librarycopyright.net/resources/fairuse/. There is a Fair Use checklist chart at https://library.wheaton.edu/copyright/8 that shows uses that favor Fair Use vs. oppose Fair Use."
How you could reword the text to provide descriptive, accessible text links:
"More information about Fair Use of copyrighted materials can be found on the Copyright Advisory Network's website. If you have a question about whether a use falls under Fair Use or not, you can take the Fair Use Evaluator on their site. There is also a Fair Use chart at Wheaton's Buswell Library that you can use as a quick guide to see if your use of copyrighted materials is favoring or opposed to Fair Use."
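Behind the scenes, the first of those descriptive links would be marked up something like this sketch:

<a href="http://librarycopyright.net/">the Copyright Advisory Network's website</a>

The link text between the opening and closing <a> tags is what appears in a screen reader's links list, so it should describe the destination on its own.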
Key Takeaway
If you know that the Word file will be printed, you can include the URL for the link in parentheses next to the descriptive link, so that sighted users can enter it into a browser if needed. So, our first sentence above would look like:
"More information about Fair Use of copyrighted materials can be found on the Copyright Advisory Network's website (http://librarycopyright.net/)."
The following is a screen capture of JAWS links list showing the links on this page. You can see the three vague "click here" links followed by the three descriptive text links. The latter three are best practice.
Word often makes text that starts with http:// into a link automatically. You can change the link text that is displayed by highlighting the text and going under the Insert menu and selecting Links to bring up the Insert Hyperlink window. Below is an image of the URL http://librarycopyright.net/ highlighted (selected by dragging over it), the Insert tab options displayed, and Hyperlink options expanded.
After you select Hyperlink, in the window that pops-up, at the top where it says Text to display, describe where the link takes the user, or the function of it, e.g. “the Copyright Advisory Network’s website.”
In Word, the ScreenTip option is similar to adding a Title attribute to a link in HTML, or with Blackboard's content editor. The Title attribute or ScreenTip will show on screen to sighted users when the link is moused over or tabbed to, that is, when the link has focus. The text for the ScreenTip or Title attribute should be different from the link text. It is advisory in nature, so in this case, we can write a ScreenTip that says, "This link will take you to the Copyright Advisory Network's website." We'll do this by clicking on the ScreenTip button to the right of the text entry box for Text to display. The image below shows the Set Hyperlink ScreenTip window with the ScreenTip text written in it. Be sure to click OK to close both Hyperlink editing pop-up windows to save your changes. | textbooks/socialsci/Education_and_Professional_Development/Book%3A_Best_Practices_in_Accessible_Online_Design_(Caprette)/02%3A_Word_Accessibility/2.08%3A_Descriptive_Links_and_Tool_Tips_in_Word.txt
Tables can be used to layout information that has a two way relationship, or tabular data. Information that has a two-way relationship is found in grading rubrics, evaluation information and course schedules.
In HTML, header table cells are read by a screen reader before a corresponding data cell to tell the user what the data is and give it meaning. In HTML, a screen reader will read both column headers and row headers. The column headers are in the rows above the data columns. A row header would be found in a column on the far left typically. In a rubric table layout, a column header cell might read as “exemplary performance” and the data cell under it would read, “makes an original post and replies to at least two classmates in the discussion.” The row header for this may read as “participation.” An example of this is below.
Discussion Rubric
Criteria Exemplary Performance Satisfactory Performance Needs Improvement
Participation Makes an original post and replies to at least two other classmates in the discussion. Makes an original post and replies to one other classmate in the discussion. Makes an original post but doesn’t reply to others within the discussion.
Relevance The posting directly addresses key issues, questions, or problems related to the text and the discussion activity. The posting applies course concepts well. The posting addresses key issues, questions, or problems related to the text and the discussion activity, but in some cases, only indirectly. It does not always apply course concepts fully. The posting does not directly address the question or problem posed by the discussion activity.
Insight The posting offers original or thoughtful insight, analysis, or observation that demonstrates a strong grasp of concepts and ideas pertaining to the discussion topic. The posting does offer some insight, analysis, or observation to the topic but may not demonstrate a full understanding or knowledge of concepts and ideas pertaining to the discussion topic. The posting does not offer any significant insight, analysis, or observation related to the topic. No knowledge or understanding is demonstrated regarding concepts and ideas pertaining to the discussion topic.
Support The posting supports all claims and opinions with either rational argument or evidence. The posting generally supports claims and opinions with evidence or argument, but may leave some gaps where unsupported opinions still appear. The posting does not support its claims with either evidence or argument. The posting contains largely unsupported opinion.
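For reference, if a rubric like this were built as an HTML table, the header cells would use <th> elements with a scope attribute so a screen reader can announce them before each data cell. A trimmed sketch showing only the first criterion and the first performance column:

<table>
<tr>
<th scope="col">Criteria</th>
<th scope="col">Exemplary Performance</th>
</tr>
<tr>
<th scope="row">Participation</th>
<td>Makes an original post and replies to at least two other classmates in the discussion.</td>
</tr>
</table>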
Word has limited ability to designate table header cells, and unfortunately, JAWS doesn’t read the header cell text before each associated data cell text in a Word document, to establish the relationship. However, we should designate a header row in Word tables, in case they are later turned into accessible PDF files, or web pages. Tables with header rows that repeat upon page breaks also help sighted users. We can designate column headers by selecting the top row of a Word table, but we can’t create row headers out of a far left column, for example. It’s best to setup simple tables with one header row across the top. In the case of a table used to layout evaluation methods and their associated points, we’ll designate the top row as a header by selecting all cells in the first row, right clicking and selecting Table Properties. See the screen shot of the menu with Table Properties (outlined in a thick red box) that appears after you right click on the highlighted contents of the top row:
In the Row tab, check the box next to “Repeat as a header row across the top of each page.” See a screen shot of this option outlined in a thick red box below.
When setting up tables, it's best to keep them simple. Avoid blank cells and merged cells if possible. Screen readers read linearly, from left to right and top to bottom, row by row. It helps to keep this in mind when setting up a Word table.
An interactive or media element has been excluded from this version of the text. You can view it online here: https://pressbooks.ulib.csuohio.edu/accessibility/?p=171
Also, avoid combining information that should be spread across more than one table. If you merge and/or color cells to create a visual separation from different content, JAWS may not read the information in an order that makes sense to a screen reader user. The following example is from a real table setup I’ve seen in a course. It’s one table that combines what should have been an “Evaluation Methods” table and a separate “Grading Scale” table.
An interactive or media element has been excluded from this version of the text. You can view it online here: https://pressbooks.ulib.csuohio.edu/accessibility/?p=171 | textbooks/socialsci/Education_and_Professional_Development/Book%3A_Best_Practices_in_Accessible_Online_Design_(Caprette)/02%3A_Word_Accessibility/2.09%3A_Designating_a_header_row_in_Word.txt |
In Word, there is a Document Layer and a Drawing Layer to a document. Objects such as Text Boxes, Shapes, Smart Art, Charts, Word Art, and Miscellaneous inserted Objects float in the Drawing Layer. These floating objects may not be detected and read in the correct order when a screen reader reads the text of the document, though modern versions of Word, JAWS and NVDA are getting better. Both JAWS 17 and NVDA will announce that there are embedded objects on a Word document. Word 2016 has a way to add alternative text to these objects, which is explained below. JAWS will bring up a list of floating objects when the user presses Control + Shift + O on her keyboard, but these objects may not make sense when taken out of the flow of the document. NVDA doesn't read objects in the drawing layer.
If you set the objects to be In Line with Text, this can often move them from the Drawing Layer to the Document Layer and make them accessible. This can be done by selecting the object by clicking on it. Click on the Layout Options icon to the top right of the selection. It has horizontal lines and an arch, and select In Line with Text.
Diagrams and illustrations created with Shapes and Smart Art can be copied to the clip board and then reinserted as one image within the document. You can save a copy of the document with the separate Smart Art and Shapes in case you would need to go back and edit. In another copy of the document, you would delete all the separate shapes that make up your diagram, and insert the single image you’ve snipped. You can then add alternative text in the Image Description field for this one image, to make it accessible. In Windows, you can copy the image to the clipboard with the Snipping Tool. To bring up the Snipping Tool, enter “Snipping Tool” in Windows Search box. The Search box which contains the default text, “Search for programs and files,” is found by clicking Window’s Start icon in the lower left of the screen. The Snipping Tool program will appear at the top of the search results. Click it to open it. The background will white out somewhat, and the Snipping Tool will be a floating window. Under the New menu, select Rectangular Snip, and then drag a rectangle around the area of the screen you want to capture as one image.
Once the image has been copied, you can go under the File menu and select Save As to save it as a .gif, .jpg, .png, or .mht. If the image contains large areas of solid colors, GIF is a good format to choose. If the image is a photograph or has subtle gradients, choose either JPG or PNG. MHT is a proprietary Windows format which I can’t recommend. Sometimes, learning management or content management servers don’t contain the MIME types to display the format.
After saving out your Word file as another document, go under the Insert tab and select Pictures from the ribbon. Browse to your newly created image, select it and click Insert. With the image selected, right click on the image and select Format Picture from the context menu that pops-up. From the Format Picture pane options, select the icon that looks like four arrows forming a white cross on a blue background. It is the Layout & Properties button. Expand the Alt Text option by clicking on it. This brings up the Description field into which you should type your alternative text. The image below shows the Layout & Properties button outlined in a thick red box and the Description field where you would enter your alternative text.
On a Mac laptop or desktop, you can press Command-Shift-4 on the keyboard to bring up a tool similar to the Windows Snipping Tool that will let you take a screen shot of a window on your desktop. There's an explanation of how to take screen shots on your Mac at http://www.macworld.com/article/2987...-your-mac.html.
If you have an object that represents data, for example a bar chart, you can explain the equivalent of the information presented within the body of the document, or create a simple accessible data table, such as in the example below. NVDA will read the table column headers before each data cell. JAWS 17, with default settings, reads the column header row once and then simply reads the following rows linearly.
In Word 2016, there is the ability to add alternative text that describes the data within the chart. JAWS 17 will recognize this alternative text and read it. You can add alternative text to a chart by selecting the whole thing (an outline will appear around the outer perimeter), right click and select Format Chart Area. When the Format Chart Area pane opens on the right side of the screen, click on the Layout & Properties icon (crossed white arrows on blue background), and type the equivalent of the information the chart presents within the Description text entry box. You can add a title that represents what the chart is about.
2.11: An Alternative Custom Callout Style to Avoid Using Floating Text Boxes
Text Boxes are frequently used in Word to draw attention to important information. They float above the Document Layer and reside in the Drawing Layer. Therefore, you’ll want to avoid the use of Text Boxes and instead use and modify the Intense Quote style to make text stand out if it is important, or create a custom style for this. The following movie tutorial will show you how to create a custom style for callout text.
An interactive or media element has been excluded from this version of the text. You can view it online here: https://pressbooks.ulib.csuohio.edu/accessibility/?p=204
Remember that screen readers will not present visual information, such as bolding, borders, background colors and highlighting. Sometimes, screen readers can be set to describe visual information, but it is not typically done. The information can be overwhelming and irrelevant to screen reader users. To designate important information, you can start the sentence with "Attention," or "Note." "Warning," and "Important" are also usable choices to draw the attention of a screen reader user, as well as sighted users.
2.12: Word's Built-in Accessibility Checker
Word has an accessibility checker that can be run on an open document by going under the File menu, and clicking on the Check for Issues drop down menu next to Inspect Document. Select Check Accessibility, the middle option. This option is outlined in the image below.
The Accessibility Checker is a good place to start when creating an accessible document out of an existing Word document, or to check how accessible a document is that you are currently working on. The Accessibility Checker pane will open on the right side and you will see Errors, Warnings and Tips. I’ve heard some trainers say that Errors are the things you must fix, Warnings are the things that should be fixed and Tips aren’t as critical but should still be looked at. My advice is to look at it all carefully, and use what you’ve learned in this module to repair what you think are issues. Word provides information about each issue when you select it in the Accessibility Checker pane, at the bottom. It explains why you should fix it and gives an explanation of how. | textbooks/socialsci/Education_and_Professional_Development/Book%3A_Best_Practices_in_Accessible_Online_Design_(Caprette)/02%3A_Word_Accessibility/2.10%3A_Avoid_Floating_Objects_on_the_Drawing_Layer.txt |
Blackboard will allow you to create a page using its WYSIWYG (What You See Is What You Get) editing buttons within its content editor, or switch to HTML Code View to add accessibility elements and attributes. Blackboard's content editor shows up within many of its tools, such as discussion forum posting, assignment submissions, or creation of Items or Blank Pages. The content editor has three rows of buttons. If you don't see three rows, you can click the upper right most icon of the two downward pointing chevrons to expand its view.
Much of the content in this course was created with Blackboard’s Blank Page tool. The buttons in this content editor allow you to enter into pop-up windows that enable addition of images, media files, tables, hypertext links, and math equations, to name several content types. It also contains an HTML Code View button that will allow you to manipulate the code that generates the content the user will see. Sometimes, with Blackboard’s content editor, it is necessary to enter into HTML Code View to add accessibility features. It’s not hard and I’ll guide you in what should be done.
* TIP: Turn off the Content Editor’s Spell Checker Feature
You’ll want to turn off the Spell Checker within the practice course you’ve been given for assignments to produce clean HTML that you can work with. Your practice course will show up in your Blackboard course list with a name that looks like Online_Accessibility_Practice_Course_YourFirstInitialAndLastName. You’re enrolled as an instructor within your practice course and have full privileges. Unfortunately there is a bug in the current version of Blackboard that causes multiple, useless <span> elements to be placed around fragments of words, URLs, and anything the spell checker thinks is misspelled. Since you will be editing code in HTML Code View, it will help if you turn off the Spell Checker within your course. To do this, click on Control Panel at the bottom of your course menu, in the left navigation. This expands options available for course management. Click on Customization to expand its options. At the bottom of the Customization options, click on Tool Availability. On the Tool Availability screen, scroll down until you see Spell Check under the Tool column. Uncheck the box beside it that controls its Availability. Then scroll to the top of the page, or bottom, and click the Submit button. This will make the Spell Checker icon disappear from Blackboard’s Content Editor. When it is available it appears as a button with ABC over a check mark. If the tool is available and on, this button will be a darker shade of gray from surrounding buttons, as seen at the end of the second row in the image above.
Alternately, if you don’t want to turn off Spell Check completely for your course, you can remember to turn it off within the instance of the Content Editor, by clicking on the Spell Check button, every time you go to edit content within your Blackboard course. This is hard to remember to do, and inevitably results in a time or two when you’ve forgotten and submitted, and then found the multiple, useless <span> elements breaking up your code when you go back to edit. | textbooks/socialsci/Education_and_Professional_Development/Book%3A_Best_Practices_in_Accessible_Online_Design_(Caprette)/03%3A_Blackboard_Learn_Accessibility/3.01%3A_Blackboard%27s_Content_Editor.txt |
In the module on Word, we learned that font family, font size, line height, and the contrast of the font against its background all affect the accessibility and readability for the sighted, as well as those with vision difficulties. It's best to pick font families with a good mid-weight stroke thickness, good space between letters, and medium sized x-heights. An x-height is the height of the lower case letter x within the font family. Avoid the use of fancy fonts, scripts and those with thin strokes. Sans-serif fonts work better for body text on screen, rather than fonts with serifs. Examples of good sans-serif fonts to use are Arial, Verdana, Tahoma, Myriad Web Pro, Trebuchet, or Roboto. Serif font families with good stroke weight and spacing for online display are Georgia, Times New Roman, and Times. The image below shows a comparison between font families. The upper four fonts, Amienne, Blackadder ITC, Bradley Hand ITC, and Burnstown Dam, have thinness to their strokes, or a jagged appearance to the serifs, which makes them hard to read on screen. The bottom two fonts, Arial and Georgia, have more evenness to the width of their strokes, and the strokes are straighter, making them easier to read on screen.
If you write text from scratch, with no copying and pasting from Word, into Blackboard’s content editor’s WYSIWYG mode, you’ll see options to set the text type to Paragraph, Heading, Subheading 1, and Subheading 2. These are OK, but you may not like the size of the text they produce. You’ll also see options to set the font family and its size. The sizes listed, such as Arial Size 3 (12pt), are a bit misleading because the HTML generated behind the WYSIWYG interface sets font sizes in the descriptive keyword units xx-small, x-small, small, medium, large, x-large, and xx-large, not point (pt). These units are OK because, if an end user sets her/his browser’s default font size to a larger setting, these will scale up, which will help people with low vision.
All font sizes should be specified in relative units that will scale up or down when a user changes her default font size in her browser. Other relative font size units include em and percent (represented as %). The typical default browser font size is 16 pixels (px), but people with low vision will increase this default font size to help them read. One em is the height of the font you are using, usually thought of as the height of the upper case M within that font. So, if a user doesn't change the default font size and the font size isn't set in the code of an HTML document or a CSS file, 1 em would be equal to 16 px. The percent (%) unit scales relative to the font size of a parent HTML element, or the default font size set in the browser. So, if a parent HTML element contains a font size of 16 px (pixels), font set to 125% would be 20 px high. Please view the following movie demonstration that shows how fonts set in em, rem and percent (%) scale with a change of the default browser font size, while fonts specified in pixels (px) and points (pt) don't.
When you copy content from other sources, such as Word or from a website, you copy over styling code with it that may not help people with vision difficulties who choose to increase their default font size in their browser. Font sizes for content copied from Word usually end up as points (pt), and font sizes from content copied from websites can end up being in pixels (px). This can be seen after you copy the content into Blackboard’s content editor, by toggling into the HTML Code View. The font sizes in points and pixels should be converted to em or percent (%). The following chart, taken and modified from Reed Design, will show you equivalent em and percent (%) units for point (pt) and pixel (px) units that get copied into Blackboard from other sources, like Word documents. You can use this chart to convert your font sizes in Blackboard’s HTML Code View. If for example, you had a heading that was 18 pt, copied from Word, you would want to change it to 1.5 em. A 16 pt subheading, copied from Word, should be changed to 1.4 em. I’ve also added in where the fonts specified with descriptive keywords would fall. Please view the following movie tutorial demonstrating how to format text copied from Word into Blackboard Learn’s content editor for accessibility. This demonstration will also show you how to increase line-height between list items in HTML Code View, using inline styling. This increase of space between lines will help with readability also.
Font Size Conversion Chart
Points Pixels Ems Percent Descriptive Keyword
6pt 8px 0.5em 50%
7pt 9px 0.55em 55% XX-Small
7.5pt 10px 0.625em 62.5% X-Small
8pt 11px 0.7em 70%
9pt 12px 0.75em 75%
10pt 13px 0.8em 80% Small
10.5pt 14px 0.875em 87.5%
11pt 15px 0.95em 95%
12pt 16px 1em 100% Medium
13pt 17px 1.05em 105%
13.5pt 18px 1.125em 112.5% Large
14pt 19px 1.2em 120% Larger
14.5pt 20px 1.25em 125%
15pt 21px 1.3em 130%
16pt 22px 1.4em 140%
17pt 23px 1.45em 145%
18pt 24px 1.5em 150% X-Large
20pt 26px 1.6em 160%
22pt 29px 1.8em 180%
24pt 32px 2em 200% XX-Large
26pt 35px 2.2em 220%
27pt 36px 2.25em 225%
28pt 37px 2.3em 230%
29pt 38px 2.35em 235%
30pt 40px 2.45em 245%
32pt 42px 2.55em 255%
34pt 45px 2.75em 275%
36pt 48px 3em 300%
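As an example of putting the chart to use, a heading copied from Word at 18 pt might carry an inline style like the first line below; in HTML Code View you would change the point value to its relative equivalent, and you can add a line-height in the same style attribute to open up spacing between lines. This is only a sketch, since the exact markup Blackboard produces from a paste can vary, and the heading text here is a placeholder:

Before: <h2 style="font-size: 18pt;">Course Schedule</h2>
After: <h2 style="font-size: 1.5em; line-height: 1.5;">Course Schedule</h2>

The em value scales with the reader's default browser font size, while the original point value would not.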
Remember to Check Your Color Contrast
Color contrast for normal body text that is 12 pt, or 16 px, against its background should be at least 4.5:1. This will pass the WCAG 2.0 Level AA standard. If you really want to pass the AAA level, the best, you would want a contrast of 7:1 between the font color and its background. Use the free Colour Contrast Analyser from the Paciello Group's website, which you learned about within the Word module, to help you determine if your color choices meet WCAG 2.0 Level AA. If you have a larger font, which would be 14 pt bold (19 px, or 1.2 em) or 18 pt (24 px, or 1.5 em), the contrast ratio can be lower. In this case, it should be 3:1 or greater to meet Level AA of WCAG 2.0 standards for contrast. If you would like to review the movie demonstration of Colour Contrast Analyser, you can replay it below.
Colour Contrast Analyser Video Introduction
An interactive or media element has been excluded from this version of the text. You can view it online here: https://pressbooks.ulib.csuohio.edu/accessibility/?p=228
Remember Not to Use Color Alone to Convey Meaning
Colour Contrast Analyser will also help you see your design as someone with one of the color blindnesses, as we learned in the module on Word accessibility. Often, we format important text with color alone, such as red, but this text doesn't appear red to someone with protanopia or deuteranopia. When designating something as important, use an asterisk (*) or write the word "IMPORTANT" before the content that you want special emphasis put on. You can also enclose the text with the <strong> element to make it appear bold and give a strong semantic emphasis to it. The <em> element will give emphasis to text by making it appear italicized. Fortunately, Blackboard's content editor will add a <strong> tag around words that you select and format with the editor's "bold" button. It is the first button in the first row, with a bold capital letter T. Blackboard's content editor also adds an <em> tag around words that you select and format with its "italic" button. This is the second button in the first row, with an italic letter T.
Another best practice is to make sure your links are underlined and use the underlining only on links. Blue, underlined text has been used as a default style for links from the early stages of the web. It’s best not to remove the underlines, even if you choose a different color for your linked text. This could cause a user to miss important content, and in a teaching and learning setting, could lower a student’s grade. I know first hand, because I was in a course that changed its link styling from underlined to non-underlined between modules and pages. Consistency in design is important for usability and accessibility!
Avoid Busy Background Images Behind Text
Placing text directly over a background image can hurt readability also, especially when the image has great detail and many tonalities, or when the font is semi-transparent. You can solve the problem by placing the text within a solid color box whose background contrasts sufficiently with the font color. This would be best. But, some designers have been able to blur background images that are photographs with enough contrast with the font, and put a drop shadow behind the font, or outline it in black to improve readability in this situation.
Below is an example of a sign that is difficult to read for people with good vision. The photograph’s been taken from approximately the position that a passenger riding in a car on the street in front of the sign would have. The designer has made a white CSU seal semi-transparent against a red background, and superimposed solid white text on top of the semi-transparent white seal. Though the very large “Fire Lane” is readable due to its size, the fine print about having your car cited, booted, and towed if you dare park there, isn’t readable. The white text message about the fire lane competes with the “Cleveland State University 1964” text of the seal for our attention.
Here is another example of text over an image, that helps solve the readability problem. The photograph is blurred and the dark grays provide contrast with the white “AACSB Accreditation matters” text, which has a drop shadow around it. The designer has also placed the AACSB International Challenge text message over a solid blue background and put the “Challenge” in all caps. The contrast ratio between the white font and the medium blue background is 4.7:1, which passes WCAG 2.0 color contrast level AA for regular text, and level AAA for large text. When the image is filtered for the various color blindnesses and cataracts, we see that the small “INTERNATIONAL” is the hardest text to read. | textbooks/socialsci/Education_and_Professional_Development/Book%3A_Best_Practices_in_Accessible_Online_Design_(Caprette)/03%3A_Blackboard_Learn_Accessibility/3.02%3A_Formatting_Font_for_Readability_and_Accessibility_in_Blackboard_Learn.txt |
One of the Web Content Accessibility Guidelines is to provide equivalent alternatives to auditory and visual content. We’ll talk about adding alt text for images here. Though, the same principles apply to audio-visual media, applets, or non-text objects and embeds.
The addition of alternative text for images will help someone who is blind or has vision difficulties know what is the content, purpose or function of an image. It can also help sighted users with mobility impairments, who are using dictation software to navigate, when an image is also a link. Alt text helps sighted users who may be using a browser with images turned off, perhaps to load pages faster, save on bandwidth or data usage when roaming. Alt text will also help search engines index your images.
Alt text (short for alternative text) is a complex topic to discuss. You’ll get different ideas about what alt text should be, depending on what reference you read, or who you talk to. All images presented in an HTML document should have what is referred to as an alt attribute. The alt attribute will either contain text that conveys the meaning or function of the image, or it will be left empty or null. Images generally fall within three types: functional, informative and decorative. Decorative images always use a null (or empty) alt attribute and the functional and informative images sometimes use a null alt attribute or they require text that conveys the meaning or function of the image. Whether or not an image requires alternative text within the alt attribute, or if it is left empty (null) depends on its context within the page. The Web Accessibility Initiative has a good tutorial on alternative text and a decision tree for helping you decide how to write the alt attribute. WebAIM’s article on alt text is helpful also, but some of their scenarios don’t have an objective right or wrong way.
We’ll look at examples of the different types of images and discuss how to write the HTML code for accessibility. Alternative text as well as a title (screen tip) can be created with the non-code, WYSIWYG part of Blackboard’s content editor, when inserting an image. Within Blackboard’s content editor, an empty alt attribute can be added by simply putting your cursor within the Image Description field and pressing your spacebar once to create one empty space. You can then toggle over into the HTML Code View and delete this space to make the alt attribute null.
The following are general guidelines for alternative text:
• The text alternative should be equivalent to the meaning, purpose or function of the image
• Keep it short. Usually a few words will do, but sometimes a sentence or two may be needed. Images requiring a longer description can use a longdesc attribute that links to another page with a more comprehensive description, or the information can be written within the body of the document that contains the image.
• Use a null or empty alt text attribute if the same information is presented in the text adjacent to the image, or if the image is purely decorative. The former is to reduce redundancy when the information is read to the end user.
• Don’t start the alt text with “image of” or “graphic of.” A screen reader announces when it encounters an image. You may explain that the image is an illustration, painting or photograph when it conveys content, such as in an Art History course that displays many examples of paintings showing different styles or subject matter.
• For functional images, don’t start the alt text with “link to.” A screen reader announces that it is a link. | textbooks/socialsci/Education_and_Professional_Development/Book%3A_Best_Practices_in_Accessible_Online_Design_(Caprette)/03%3A_Blackboard_Learn_Accessibility/3.03%3A_Alternative_Text_for_Images.txt |
The example below gives information about flooding in an emergency preparedness course. Would you consider the photograph below the text to be informative or decorative? Does it convey important supplemental information that the surrounding text doesn’t?
Flooding:
The National Weather Service (NWS) issues flood advisories. A flash flood occurs within 6 hours of excessive rainfall and poses a threat to life and/or property. The following are advisories the NWS issues.
1. Flash Flood Watch: A flash flood watch typically occurs 6 to 24 hours in advance of expected flooding.
2. Flash Flood Warning: A flash flood warning is issued when flooding is occurring or imminent.
3. Flood Warning: A flood warning is declared when general flooding is occurring, imminent, or likely.
Let’s say that I determine that the photograph does show important information I would like to get across to the audience. It shows an important impact of a flash flood. Here’s how I could go about adding the alternative text in Blackboard’s content editor.
1. With your cursor at the point where you want to insert the image, click the Insert/Edit Image icon in the text editor. It is the second icon from the left in the third row, with a picture of a mountain. It's outlined with a red box below.
2. Browse to the location on your computer or in the course, by clicking on the Browse My Computer button or Browse Course button.
3. Select it by clicking on the name of the image file, and click Open. The Image URL will appear. Blackboard gives an image a unique URL once it has been uploaded to a course.
4. Next, in the text entry box for Image Description, type the alternative text you have in mind for this image. Try to keep it short and succinct. If it goes over two sentences, you’ll need a long description instead. For this image, I chose to write, “A flooded river crests over the top of a bridge, extending onto the banks and roadway of a nearby community, where onlookers stand in frustration.” I chose to leave the title text box empty since I don’t have advisory information that I want to show as a screen tip for this image. The point to take away is that the Image Description serves as the alt text.
5. The Appearance tab has some other options for styling the image that you might like to set before clicking the Insert button at the bottom of the Insert/Edit Image dialogue box. It has some older options for creating white space around an image so that it doesn’t butt up against surrounding content. These are called Vertical Space and Horizontal Space. I’ll set 5 for five pixels of white space both vertically, above and below the image, and horizontally, to the sides of the image. I’ll also set a 1 pixel wide border around it to help separate it from the background. As I set these styles, the HTML code is written in the Style text entry box. You can add styling here, or in the HTML Code View.
The code that Blackboard adds for the image looks like the following:
<img src="https://bb-csuohio.blackboard.com/courses/1/Inclusive_Online_Design_1/content/_2019925_1/embedded/flooded_bridge_pg22.jpg" alt="A flooded river crests over the top of a bridge, extending onto the banks and roadway of a nearby community, where onlookers stand in frustration." style="margin: 5px; border: 1px solid black;" height="445" width="642" />
Blackboard's content editor uses the dimensions of the image to set the height and width in pixels. You can delete the height value (height="445") and change the width value (width="642") to a percentage. You would choose a percentage that you want the width of the image to take up within its containing element, which is the central white background content area in Blackboard. For the flooding image, I chose 40%. Percent is a relative unit that will allow the image to scale up and down with a resize of the browser window. This minimizes scrolling horizontally, which helps with accessibility and usability. So, in HTML Code View, the final code for the image would look like:
<img src="https://bb-csuohio.blackboard.com/co...ridge_pg22.jpg" alt="A flooded river crests over the top of a bridge, extending onto the banks and roadway of a nearby community, where onlookers stand in frustration." style="margin: 5px; border: 1px solid black;" width="40%" />
Banner Example
In the case of a logo or banner for a department, you can simply use the text within the image, or the department or school name that it represents as the alternative text. For the School of Nursing banner, the alt text would be “School of Nursing.” The code would look like the following:
<img src="https://bb-csuohio.blackboard.com/bb...xid-11788111_1" alt="School of Nursing" width="90%" />
3.05: Creating Alt Text and a Long Description for Complex Informational Graphics
Images of charts, graphs and diagrams convey more information than can be conveyed with a short alt text attribute. In this case, we need to find a way to present a longer description of the data contained in the chart, graph or diagram. One way to do this would be to summarize the information within the body of the page. Another way is to represent the same data that is graphically displayed in a chart, within an accessible table, with headers and a summary. You’ll learn about how to add table headers, captions and summaries later in this module. However, I’ll show you a method of adding a complex image and its alternative text, including a long description, with the add Image feature in Blackboard Learn.
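For example, a bar chart showing hypothetical enrollment counts by year could be paired with an accessible HTML table like this sketch (the numbers are placeholders), using a caption and header cells:

<table>
<caption>Enrollment by year, as shown in the bar chart</caption>
<tr><th scope="col">Year</th><th scope="col">Students enrolled</th></tr>
<tr><td>2015</td><td>120</td></tr>
<tr><td>2016</td><td>135</td></tr>
<tr><td>2017</td><td>150</td></tr>
</table>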
To add an image of a bar chart with an alternative text and a long description:
1. With Edit mode set to On, there is an Actions Bar at the top center of the content region which will allow you to add content. Click on the Build Content option.
2. Within the Build Content menu of this Actions Bar, select to add an Image. This menu option is shown in the image below.
3. In the Create Image dialogue box that pops up, give your data image a Name or title in the text entry box next to Name. This should express what the data or information is about. In the final display, it will show above the image of the chart, graph or diagram.
4. Next, you can locate the image file by browsing your computer, the course files (provided you’ve already uploaded the file to the course), or select a Flickr Photo from the Mashups feature. Select one of the Browse buttons below the Color of the Name box (defaults to black).
5. Enter a short, concise alternative description of the data represented by the chart, graph or diagram in the text entry box next to Alt Text. The Alt Text will not be seen by sighted users viewing the image on the page, but will be read by screen readers.
6. Next, in an accurate way, describe all of the information conveyed by the chart, graph or diagram. This will be written in the Long Description text entry box. In the final display, the Long Description will show beneath the image of your chart, graph or diagram. Write the long description as if you are conveying the information to someone on the other end of a phone call.
The final chart shows with its name above it and its description below, as represented in the image below. | textbooks/socialsci/Education_and_Professional_Development/Book%3A_Best_Practices_in_Accessible_Online_Design_(Caprette)/03%3A_Blackboard_Learn_Accessibility/3.04%3A_Creating_Alt_Text_for_Informational_Images.txt |
A decorative image is one that doesn’t convey meaning nor add information to the page. Examples of decorative images include:
1. Images that serve as borders, spacers, corners, flourishes
2. Images adjacent to text that convey the same meaning or information as the text, e.g. an image of Abraham Lincoln next to the text link “Abraham Lincoln.” You want to avoid redundancy when a screen reader reads what is there.
3. Images adjacent to text that serve as “eye-candy,” or “ambience,” and don’t add information.
4. Images used next to link text simply to improve its appearance or increase its clickable area
Decorative images require the addition of an empty or null alt attribute in the HTML Code View that will tell the screen reader to skip over the image. An empty alt attribute is written as alt=" " with a space between the double quotes. A null alt text is written as alt="" with no space between the set of quotes. Without a null or empty alt attribute, the screen reader will announce that there is an image, and possibly give its dimensions, but leave the audience wondering what the image is and if it is important.
You can produce an empty alt attribute (alt=" ") by typing one space with your spacebar within the Image Description field when inserting an image with Blackboard's content editor. This is the easy route. When the content is read back with JAWS 17 in Firefox, it will skip over the images without announcing them at all. NVDA will announce "graphic" but give no dimensions or other information. If you use a null alt attribute on an image, NVDA will skip over it without announcing it. JAWS 17 will also skip over an image without announcing it when there is a null alt attribute set. Therefore, it's best practice to create a null alt attribute on a decorative image or one that doesn't add information beyond what is already on the page.
Below is an example of a decorative image of a flourish that might be used at the top of a page or a separator for sections of a document. It would require the null alt attribute on the image element in HTML. The HTML code for this image would look like the following:
<img src="flourish.jpg" alt="">
***IMPORTANT: Blackboard's content editor won't create null alt text attributes on an image if you simply leave the Image Description blank when inserting an image. That is, if you leave the Image Description field blank and then click on the HTML Code View button after you have inserted the image, you won't see <img alt=""> anywhere.
To setup an image with null alt text, insert the image with the Insert/Edit Image icon as you would normally, but leave the Image Description field empty, as seen in the image below. Don’t worry about this right now.
When you get a pop-up dialogue window warning you that you need to include an image description to make the image accessible, click OK. When that dialogue disappears, click Insert to finish inserting your image.
Next, you’ll need to toggle into HTML Code View using the HTML icon which is second from the right in the third row, to edit the code.
You’ll see the code that Blackboard’s content editor created for the image. It will look like the following:
<img src="https://bb-csuohio.blackboard.com/sessions/3/4/2/0/8/8/4/7/session/ff9f70436e4a4ab9a27a283871716715/flourish%281%29.gif" height="105" width="659" />
You can see the really long location and name change that Blackboard created for the image, ending in the file format, which is gif. You'll also see height and width attributes. You can add a null alt attribute anywhere after img plus a space. We'll add it just after the source (src) pathway to the gif. Place your cursor just before the h in height and type alt="", and then a space. The code will now look like:
<img src="https://bb-csuohio.blackboard.com/sessions/3/4/2/0/8/8/4/7/session/ff9f70436e4a4ab9a27a283871716715/flourish%281%29.gif" alt="" height="105" width="659" />
Creating a Fluid Image Size
We'll make the image respond to changes in the browser window size by changing its width to a percent value. Setting its width to a percentage of the container it is in will allow it to scale proportionately downward with a decrease in window size. This technique can also avoid the need for horizontal scrolling if the window gets smaller than the width of the picture. We'll delete height="105" and width="659" from the code above, so that it looks like the following:
<img src="https://bb.blackboard.com/sessions/3...ish%281%29.gif" alt="" />
We'll now create a width that uses percent as a value. I'll use 90%. I like to make the image slightly less than 100% of its container, to create some white space on the left and right of the image. To enter the width="90%", I leave a space after alt="" and type the new property. The final code will look like:
<img src="https://bb-csuohio.blackboard.com/sessions/3/4/2/0/8/8/4/7/session/ff9f70436e4a4ab9a27a283871716715/flourish%281%29.gif" alt="" width="90%" />
A note about image sizing: If your image's width in pixels is smaller than the percentage of the parent container that you write, there will be some scaling up, which could lead to a less detailed display of the image. Blackboard's content area scales down to a minimum width of 1150 pixels before it stops responding to a decrease in the width of the browser's display.
Below is a comparison chart you can use to determine the percentage that an image of a certain numbers of pixels will take up when Blackboard’s content area is scaled down to its minimum width. If the browser window is scaled up, the image will still occupy the percentage of space that you designate, but pixels will not be added to your image to maintain image quality.
Width of image in pixels Width of image in percent
1150 100%
1092.5 95%
1035 90%
977.5 85%
920 80%
862.5 75%
805 70%
747.5 65%
690 60%
632.5 55%
575 50%
517.5 45%
460 40%
402.5 35%
345 30%
287.5 25%
230 20%
172.5 15%
115 10%
57.5 5%
There is another method to make the image respond to a decreased screen size and eliminate the need for horizontal scrolling, but the execution isn’t perfect within Blackboard. Blackboard clips a small portion of the image on the right side. This method uses the max-width property for sizing the image. With the HTML Code View, you set the max-width of the image to be 100% of the containing parent element, and set the height to auto. In this case, the image will scale down to fit a window smaller than its original width in pixels (down to the minimum width of 1150 pixels in Blackboard), but it won’t scale up if the window size is larger than its width in pixels. The code would look like the following:
<img alt="" src="https://bb-csuohio.blackboard.com/bbcswebdav/pid-2020261-dt-content-rid-11186514_1/xid-11186514_1" style="max-width: 100%; height: auto;" />
Decorative Image Example #2:
The following example shows the use of an image (the Start button) beside text to draw the user's eye to important information. This helps sighted users and can help people with low vision or cognitive impairments orient. The image of a round button that says "Start" repeats the same information or has the same meaning as the text heading "Start Here" which is located above it. So, it won't add meaning for someone listening to a screen reader. Therefore, we'll put a null alt attribute on this image so that a screen reader skips over it.
The yellow folder icon indicates that the author added a folder of content to her Blackboard course. To create the folder called “Start Here!” you can add a Content Folder from the Build Content menu on Blackboard’s Action Bar, with Edit mode On.
Type the name of your Content Folder, such as Start Here! in the Name text entry box.
To create the Start button image to the left of the text description of the folder, you can copy and paste the following code within HTML Code View for the Text description box:
<div style="color: #000; padding: 1.5em; font-size: 1em; font-weight: bold; font-family: Arial, Helvetica, sans-serif;"><img src="https://bb-csuohio.blackboard.com/bb...xid-11267299_1" style="margin: 0px 10px;" height="50" width="50" alt="" /> This folder contains instructions and a general overview of the course.</div>
The image below shows the code pasted into the HTML Code View dialogue window, superimposed over the WYSIWYG editor. Note that the spell checker button, outlined in red, is turned off. It is the ABC icon with the check mark. This is to avoid the unnecessary insertion of <span> elements in your code, due to a bug in the current version of Blackboard Learn. The HTML button clicked to enter into HTML Code View is also outlined in red.
*** Note: This code will work for centering the alignment vertically for one small image and one short line of text beside it. If you have a larger image and want to wrap a paragraph of text to the right of it within Blackboard Learn's content editor, please see the section on Creating a Fluid Wrap of Text Around an Image.
Functional images are images that are linked or initiate an action, such as submission of a form or a search query. If the image isn’t beside text that conveys the same function, then it will need alt text.
Below are some examples of functional image situations.
A Logo and Seal Image
In general, it’s best not to use text inside images for buttons, or links. If you can create these buttons or links with HTML and CSS, they should be created this way. Logos are exceptions. In the case of a logo, the alt text should convey what the image conveys. For example, if it is Cleveland State University’s logo or seal, you can simply use alt=”Cleveland State University” rather than describing it. You wouldn’t want to write alt=”Cleveland State University logo and seal“. The code for the image above is:
<img src="https://bb-csuohio.blackboard.com/courses/1/Inclusive_Online_Design_1/content/_2033036_1/embedded/CSU-Logo-NoTag-Split-2015.png" alt="Cleveland State University" style="margin: 5px; border: 1px solid black;" width="260" height="54" />
Logo Image that Links to a Home Page
If the Cleveland State University logo is linked and leads to the CSU home page, you can use the destination of the action the image performs, alt=”Cleveland State University Home Page”.
The code for this situation would look like the following:
<a href="http://www.csuohio.edu" title="This link takes you to Cleveland State University's Home Page" target="_blank"><img src="https://bb-csuohio.blackboard.com/bb...xid-11787785_1" alt="Cleveland State University Home Page" title="This link takes you to CSU's home page" style="margin: 5px; border: 1px solid black;" height="54" width="260" /></a>
If a College or School logo is used with the CSU seal, as in the following example, you can use the destination of the action the image performs, alt=”Cleveland State University School of Nursing Home Page”.
The code for the example above looks like the following:
<a href="http://www.csuohio.edu/nursing/home" target="_blank"><img src="https://bb-csuohio.blackboard.com/bb...xid-11787936_1" alt="Cleveland State University School of Nursing Home Page" style="margin: 5px; border: 1px solid black;" height="84" width="240" /></a>
Icons to Inform File Type
Sometimes, you’ll use icons to represent what a linked file type for download is. These icons may be for Word, PowerPoint or PDF formats, for example. The example below represents this.
The code for this example looks like the following:
<a href="http://www.csuohio.edu/nursing/sites...20Handbook.pdf" target="_blank">Nursing Graduate Student Handbook <img src="https://bb-csuohio.blackboard.com/bbcswebdav/pid-2033036-dt-content-rid-11256471_1/xid-11256471_1" alt="PDF" style="margin: 1px;" width="50" height="50" /></a>
Notice that the alt text, “PDF” doesn’t repeat text that is used for the link, which is good practice.
Icons that Represent an Action
Icons that represent an action could be a magnifying glass to submit a search, a printer icon to print a page, a phone icon to call a phone number, a Twitter icon to send a tweet, or a submit image to submit a form. In these cases, for alt text, we would not describe what the icon looks like, but what it will do. Therefore, we would not write alt="magnifying glass" for the magnifying glass icon that submits a search term, but we would write alt="search".
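As a quick sketch of this idea (the icon file name here is a placeholder, not an actual file in a course), a magnifying glass icon that submits a search could be coded so that its alt text names the action:
<button type="submit"><img src="magnifying-glass-icon.png" alt="Search" width="32" height="32" /></button>
A screen reader will announce the button as "Search," telling the user what will happen rather than what the picture looks like.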
In the example below, the phone icon launches a phone application to call someone, the email icon launches an email application to email someone, and the Twitter icon opens Twitter to send a tweet to someone. In each case, the icon is linked but the text beside it is not. If the text beside each icon were a descriptive link, we could use alt="" for the images as long as they were included inside the HTML link; giving them the same alt text as the descriptive link would be redundant.
Contact Information:
1 (216) 687-3960
@eLearningCSU
The code for these looks like the following:
<p>Contact Information:</p>
<p><a href="tel:+12166873960" title="This link telephones 12166873960"><img src="https://bb-csuohio.blackboard.com/bb...xid-11788045_1" alt="Phone elearning" style="margin: 2px 5px;" width="40" height="40" /></a>1 (216) 687-3960</p>
<p><a href="mailto:[email protected]" title="This link opens an email to the CSU Center for eLearning" target="_blank"><img src="https://bb-csuohio.blackboard.com/bb...xid-11787791_1" alt="email elearning" style="margin: 2px 5px;" width="40" height="40" /></a>[email protected]</p>
<p><a href="https://twitter.com/intent/tweet?scr...e=eLearningCSU" data-show-count="false"><img src="https://bb-csuohio.blackboard.com/bb...xid-11788137_1" alt="Send Tweet to @eLearningCSU" title="This link opens a tweet in twitter" style="margin: 2px 5px;" height="32" width="40" /></a> @eLearningCSU</p>
<script async="" src="//platform.twitter.com/widgets.js" charset="utf-8" type="text/javascript">/*<![CDATA[*/
/*]]>*/</script>
The code inside the <script> element is a JavaScript widget to open Twitter.
At some point in document or course creation, you'll want to provide links to other websites or pages. Screen reader users can bring up a list of links on a page, much like they bring up lists of all the headings on a page, or landmarks. In JAWS, a screen reader user does this by pressing the Insert key followed by F7. It's important to use descriptive text for your links that conveys where each link goes, so that the links in this list make sense to a user out of context. You don't want to use "click here" or "read more." If there is more than one of these "click here" or "read more" links on a page, a screen reader user may not remember, even after reading the page, which "read more" went with which topic. It's best to write the sentence in such a way as to provide link text that describes the purpose of the link and where it will lead. The link text should be meaningful and distinct from the other link text on the page.
In the following example I’ll show you what you should avoid and then how to rewrite and relink it for descriptive link text.
Avoid this type of writing and linking:
“For more information about Fair Use of copyrighted materials, click here. For information about a Fair Use Evaluator that can help you decide if your specific use of copyrighted material favors Fair Use or not, click here. For a chart that shows a list of conditions that favor or oppose Fair Use, click here.”
How you could reword the text to provide descriptive text links:
“More information about Fair Use of copyrighted materials can be found on the Copyright Advisory Network's website. If you have a question about whether a use falls under Fair Use or not, you can take the Fair Use Evaluator on their site. There is also a Fair Use chart at Wheaton's Buswell Library that you can use as a quick guide to see if your use of copyrighted materials favors or is opposed to Fair Use.”
The following is a screen capture of the JAWS links list showing the links on this page. You can see the three vague "click here" links followed by the three descriptive text links. The latter three are best practice.
It is best practice to also avoid linking URL text. It would be better to write out where the link is taking the person. Imagine hearing the following two links read out within a list of links by a screen reader: http://www.pcmag.com/article2/0,2817,2370354,00.asp, or http://mashable.com/2010/10/13/bit-l...9#33L46KnrOkqk. The first link doesn’t give a clue to what the article is about, and the second link is insanely long.
Blackboard’s content editor Insert/Edit Link button will give you a screen in which you can add a Title attribute to your link. Title attributes are additional advisory information about where a link leads to or its purpose. It shouldn’t duplicate the descriptive text link. Screen readers may or may not be set to read the Title text. But, if you don’t want it to show as missing on a web accessibility check, you can write Title text either in the WYSIWYG view of Blackboard’s content editor or within the HTML Code View. For sighted users, when they mouse over the link, the Title text shows as a tool tip.
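As a sketch of what this looks like in the code (the URL here is a placeholder, not a real address), a descriptive text link with a Title attribute might be written like this in HTML Code View:
<a href="https://www.example.com/fair-use" title="This link takes you to more information about Fair Use on an external website" target="_blank">Fair Use of copyrighted materials</a>
The link text describes the destination, and the Title adds advisory information without simply repeating the link text.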
Remember, for screen reader users, the text you select to make a descriptive link is more important than adding titles to links.
We’ll take a look at how to add a link within Blackboard’s content editor. Select the descriptive text you would like to turn into a link by clicking and dragging over it. Click on the Insert/Edit Link button in the second row of the content editor. It is the eighth button from the left and has a chain link on it.
In the next screen that pops-up, you would enter the URL of the web page you would like to link to. You can also link to a file that is already loaded in your course or upload a new file. For the Target, you can choose to open it in the same window or in a new browser tab. If you will be opening your links within new browser tabs, you can make a note of this in your course information folder under the Accessibility Resources section. The Title attribute below gives advisory information (a tool tip) that the link will take the user to the Copyright Advisory Network’s website.
Don't bother setting the Class style. When you are done, click on the Update button at the bottom.
In Word and HTML, you want to avoid combining tables and merging cells if possible. In this example, you'll see how the JAWS screen reader reads these combined tables in a confusing and incomprehensible manner. The tables lack header rows. The two visual headings that exist, "Assignment" and "Maximum Grade," are formatted with the Bold button on the Word ribbon instead of being made into a Table Header Row through Table Properties.
This is a recreation of a real example I spotted in a course! Remember to avoid blank table cells also.
An interactive or media element has been excluded from this version of the text. You can view it online here: https://pressbooks.ulib.csuohio.edu/accessibility/?p=180
3.11: Building a Fluid Container for Content in Blackboard
Earlier we talked about using relative units such as em and % for font size and image width to create content that scales when a user changes her font size in her browser, or content that reflows when the user changes the browser window size. This same principle applies to HTML containers. In HTML, there is a generic type of container element, called a Div, that we can use to surround our content. It has an opening <div> tag before the content, and a closing </div> tag at the end of the content. When we specify a width in percent (%) for a container, its content will reflow as a browser window is scaled down. In the past, people used tables to layout just about everything on a web page. While this works for sighted users, it causes problems for people using screen readers. Using tables for layout should be reserved for tabular data or information that has a two-way relationship, such as found in rubrics or course schedules. Sometimes, tables can be used for layout if the content within is read in a logical order. A screen reader reads linearly, from top to bottom, line by line, in the order that content appears in the HTML. This holds true for both the content in the HTML code and content within a table. If the default language is read from left to right, the screen reader also reads from left to right, line by line, from top to bottom.
Below I’ll give you some code to create fluid content containers. Their basic structure can be re-used and you can put your choice of text, images or media within.
Single container with rounded corners
Here’s a single container with rounded corners in which you can write whatever you would like. This box will respond to your screen size so that the text reflows and remains visible without scrolling, until Blackboard’s content area hits its minimum width of 1150 pixels. It doesn’t use an image in its background, so it is light weight in terms of downloading content. Its only limitation is set within Blackboard Learn’s minimum width size for the parent element (the content area).
The code that you would enter into the HTML Code View is:
<div style="border-radius: 25px; background-color: #006a4d; color: #fff; padding: 1.5em; margin: 1.5em; font-size: 1em; font-weight: bold; font-family: Arial, Helvetica, sans-serif;"><p>Here's a single container with rounded corners in which you can write whatever you would like. This box will respond to your screen size so that the text reflows and remains visible without scrolling, until Blackboard's content area hits its minimum width of 1150 pixels. It doesn't use an image in its background, so it is light weight in terms of downloading content. Its only limitation is set within Blackboard Learn's minimum width size for the parent element (the content area).</p></div>
Note the words contained within the <div> element that display in white letters. You can simply select this text, delete it or type over it, and enter whatever message you would like to appear here. The essentials to keep are: <div style="border-radius: 25px; background-color: #006a4d; color: #fff; padding: 1.5em; margin: 1.5em; font-size: 1em; font-weight: bold; font-family: Arial, Helvetica, sans-serif;"><p></p></div>. Write whatever you would like in between the paragraph (<p>) tags.
The style="border-radius: 25px; background-color: #006a4d; color: #fff; padding: 1.5em; margin: 1.5em; font-size: 1em; font-weight: bold; font-family: Arial, Helvetica, sans-serif;" is not HTML, but "inline" CSS. CSS, which stands for Cascading Style Sheets, is another web language that styles the content laid out by HTML. Most web designers use separate (external) style sheets of CSS and link to them within their web pages. We have to use "inline" CSS within a content management system in order to impose the styling that we want and override the basic styling set within the learning management system's CSS files.
Below, you can see a video of the contents within the container reflowing as I scale the viewport size down.
An interactive or media element has been excluded from this version of the text. You can view it online here: https://pressbooks.ulib.csuohio.edu/accessibility/?p=350
3.12: Creating a Fluid Textwrap Around an Image
If you would like to avoid the use of tables for layout, and save them for tabular data that has a two-way relationship, then the following code will help you create a fluid text wrap around an image. It uses a <div> element, which is a generic container for content in HTML. You would click on the HTML Code View icon in Blackboard's Content Editor and paste the following into the HTML Code View dialogue box.
<div style="clear: both; width: 90%; float: left; display: block; color: #000; padding: 2em; font-size: 1em; font-weight: bold; font-family: Arial, Helvetica, sans-serif;"><img src="https://bb-csuohio.blackboard.com/bb.../xid-7398148_1" style="float: left; margin: 0 1.5em; border: 1px solid black;" width="50%" height="auto" alt="" />This bar chart shows the distribution of preferred learning styles of fifty-eight first and final year nursing students in a study done by Flemming et al. The preferred learning style of first and final year nursing students was Reflector. The highest bar shows thirty three out of fifty-eight first year students preferred Reflector learning style. Forty out of fifty-eight final year students preferred Reflector learning style. Seven first year students and eleven final year students preferred Activist learning style. Eight first year students and five final year students preferred Theorist learning style. Ten first year students and two final year students preferred Pragmatist learning style.</div>
This code for wrapping text around a larger image will look like the following when displayed within Blackboard Learn:
You can go back into the WYSIWYG editor, select the bar chart image, and then click on the Insert/Edit Image icon. You can then browse your computer or course to find an image of your choosing. Likewise, you can highlight the text and write over it by typing in your own text. The code above provides an alternate way to layout your text and images than using a table.
Adobe Reader is a free application for reading PDF documents. Please download it from: https://get.adobe.com/reader/.
4.02: The Difference Between an Accessible P
Adobe Reader is a free program for viewing documents that end with .pdf. Adobe Reader does not have the full functionality of Acrobat Professional, which is used to create accessible PDFs.
If you have a PDF you would like to use online, you can open it with Adobe Reader to check for some accessibility features. If a PDF is accessible, it will have tags and searchable text. Tags are the structural elements, similar to HTML elements, that provide semantic meaning to the content on the page. There are tags for headings, paragraphs, images, lists, tables, and table headers for example. It is these tags, much like Word Styles and properly marked up HTML, that help people using assistive technology and devices navigate the page and understand what content is.
You can check to see if the PDF has been tagged by opening the document in Adobe Reader, going under the File menu, and selecting Properties. In the Document Properties pop-up window, look for the word "Yes" next to Tagged PDF at the bottom of the screen. Please be aware that even though a document may have tags, they may not be semantically correct. Problems arise when automatic tagging produces the wrong tags or the wrong parenting order, breaking the meaning that is conveyed visually. A common problem with automatic tagging is the breaking up of one list into multiple lists. Another is the addition of empty tags produced by spaces created by pressing the return key multiple times within a Word document before it was converted to PDF.
To see if a document has searchable text, you can also check it within Adobe Reader by pressing your CTRL key plus “F.” Then in the “Find” search box that pops up, type a word that you see in the PDF and click the “Next” button. If the word becomes highlighted on the page, then it has searchable text as opposed to being a scanned image of text.
You can check for correct read order within an accessible, tagged PDF or one that has had optical character recognition done on it by going under the “View” menu and checking off “Read Mode.” You then go under the “View” menu, select “Read Out Loud,” and “Activate Read Out Loud.” Go back under the “View” menu and select “Read Out Loud,” and either “Read This Page Only,” or “Read To End Of Document.” Listen to see if the order of presentation makes sense for meaning. Pay attention to areas where diagrams with text have been inserted into pages. The text within these gets converted to searchable text that can be read, but often they get read in the middle of something else, breaking logical read order.
Often, you will find "full text PDF" versions of publisher's articles on library databases. These articles have had optical character recognition run on them, and have searchable text, but don't necessarily have correct read order, especially when figures or tables are inserted within the body of the text, or there are multi-column layouts. These are partially accessible, but better than a scanned image of an article.
Often, you'll find passages from books or articles in a journal that you would like to make available to your students. Provided the usage falls under Fair Use guidelines, these hard copies of the content can be scanned on the 8th floor of the Michael Schwartz Library in Electronic Course Reserves. Though an office copy machine may scan to PDF and do optical character recognition, these machines can't match the quality of scanning that the BookEye scanner in the Michael Schwartz Library can do. Optical Character Recognition, or OCR, is the process of recognizing images of letters in a document and converting them to editable and searchable text. The BookEye scanner, with 600 dpi optical resolution, produces a high quality scan that is optimized for the optical character recognition process. It adjusts the baselines of the text on the page to make sure they are straight and running at a 90 degree angle to the top of the page. It also prevents shadows from page curvature at the seams. ECR has the ability to create accessible, tagged PDFs, which can be used in your course.
More information about Fair Use of copyrighted materials can be found on the Copyright Advisory Network’s website. If you have a question about whether a use falls under Fair Use, you can take the Fair Use Evaluator on their site. There is also a Fair Use chart at Wheaton’s Buswell Library, that you can use as a quick guide to see if your use of copyrighted materials is favoring or opposed to Fair Use.
Note: If you have a student who needs an entire textbook scanned for accommodation due to disability, and they have purchased a copy of the text, the Office of Disability Services can do this scanning.
4.04: Free OCR When You Can't Afford Other
Some would agree, the best things in life are free, like kittens, or software that will do optical character recognition.
Most people have access to Microsoft Word. The most recent version, in Office 365, will import a PDF and convert it to native Word format for editing. You simply open the PDF in Word. There are some issues that you may need to fix and PDF forms do not convert well. For more information, see the tutorial on How to Convert & Edit PDF Documents in Microsoft Word on envatotuts+ site.
If you don’t have Word and are still looking for a free way to perform optical character recognition on a PDF document, you can also try the OCR feature in Google Drive. It will convert files that are 2MB or under and in PDF, JPG, GIF, or PNG formats. Anyone can get access to a Google Drive by creating a Gmail account. After uploading the PDF to Google Drive, you right click on it and select to Open with Google Docs. The OCR process will take place. You’ll need to format the text in Google Docs after that point. For more information about this, see the tutorial on How to OCR Documents for Free in Google Drive on envatotuts+ site.
There is also a Free Online OCR site that will allow you to upload a PDF and select Word (doc), Rich Text Format (rtf), or a Text Document (txt) as the output.
There are a variety of free web accessibility checkers available on the web. They can give you an idea of how accessible a webpage is, if you are requiring students or users to go there. I’ll provide a list of free resources at the end of this lesson. We’ll focus on WAVE web accessibility evaluation tool here.
WAVE, hosted by webaim.org, is available as a website and as an extension for Chrome browser. The WAVE web accessibility evaluation tool website is located at http://wave.webaim.org. Once on this page, you can enter the url of a web page to check its accessibility. The same tool exists in a form that will check the accessibility of password protected or internal pages, as well as public web pages. For this, install the WAVE extension for Chrome. The WAVE Chrome extension will also allow you to evaluate pages that use JavaScript. JavaScript is stripped out of the page when displayed on the WAVE website evaluator. There is more information about WAVE Chrome Extension at http://wave.webaim.org/extension/. Webaim has a help page that explains how to use WAVE at http://wave.webaim.org/help.
WAVE checks for compliance with many WCAG 2.0 guidelines, but no automated tool can check for all issues with the guidelines.
Once the analysis is run, WAVE presents a report on the top left side of the screen as well as embeds icons and indicators within the web page. You don’t need extensive knowledge about web accessibility to benefit from the results. The RED icons indicate accessibility errors. If you get red icons, it is likely that someone with a disability will have difficulty accessing the content.
Below is a screen shot of a WAVE report for the internal landing page for a personal account within our library's system. This is a very clean page with few errors and warnings. You can see the "W" icon within a circle at the top left of the browser window. After I logged into my account at our library, I clicked on the WAVE "W" icon at the top to run the web accessibility report. I've added red arrows to point out embedded icons which WAVE added to the page. Red icons indicate problem areas. Yellow icons indicate warnings. Green icons indicate accessibility features that may or may not be helpful to someone using a screen reader. The blue icons indicate structural elements that have been included that can potentially help people using a screen reader or other assistive technology navigate your page more efficiently.
I’ll include a view of what the page looks like without the WAVE embedded icons below:
When you click on the icons embedded in the web page, you get more information about what each means. The top red icon with the white bubble, for example, says that the language for the page isn’t specified. Setting the language is important for screen readers to read in the correct language. I’ve personally heard JAWS switch from reading in English to reading in French once it ran into the word “prerequisites” in a syllabus. The author never intended for this to happen. If you are authoring an HTML page, you can set a language attribute on the HTML tag at the top of the document. The code would look like this for United States English:
<html lang="en-us">
The yellow “h1” warning icon to the right of the red icon states that there is no heading level 1 for this page. It’s best to put a heading level 1 at the top of a web page to tell what the main topic of the page is. The visual headings on the page should be represented in the heading tag structure on the page, in the correct hierarchical order. There should be one h1 at the top of the page followed by heading level 2. Any subheadings would be represented by h3 through h6.
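As a sketch of this hierarchy (the page content here is hypothetical), a properly nested heading structure might look like the following in the code:
<h1>Course Syllabus</h1>
<h2>Course Description</h2>
<h2>Grading</h2>
<h3>Late Work Policy</h3>
<h2>Course Schedule</h2>
A screen reader user could then jump from heading to heading and understand how the subtopics relate to the main topic of the page.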
Form Labels As An Example of An Important Accessibility Guideline
Forms are vital to business and education. It's impossible to get through college without having to fill out forms online. Even within courses, students are asked to register with professional organizations, or register online with services that check for plagiarism, such as Turnitin. If you are directing your students or users to fill out a form, it's important that each input item in the form be coded with an "id" attribute that directly corresponds with a "for" attribute of a form <label> element. The WCAG 2.0 technique for using label elements to associate text labels with forms also tests for visibility of the label for conformance. With explicit labeling, this code looks like the following for a text entry box asking for a first name:
<label for="firstname">First Name:</label>
<input id="firstname" name="firstName" type="text">
On the web page this displays like the image below.
With explicit labeling, the user can click on the text “First Name” and the cursor will insert into the text entry box to the right of it, which helps people with mobility impairments who may have trouble clicking in small areas. Not only do these labels increase the area for a person with a mobility impairment to click on, in order to fill an input element such as a text entry box, they also help screen reader users know what they are supposed to be entering in a form input, such as their first name or last name. Fieldsets and legends are elements that should be used on pages with multiple forms to help the person filling out the forms distinguish between them. Fieldset tags group related form items and legends provide a title or short description of what the grouped form items are for, e.g. a home address. A good use of this would be when there is a set of inputs for a mailing address and another set of form inputs for a billing address.
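A minimal sketch of this grouping (the field names are made up for illustration) might look like the following:
<fieldset>
<legend>Billing Address</legend>
<label for="billing-street">Street</label>
<input id="billing-street" name="billingStreet" type="text">
<label for="billing-city">City</label>
<input id="billing-city" name="billingCity" type="text">
</fieldset>
A screen reader can announce the legend, "Billing Address," along with each label, so the user knows which address these inputs belong to.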
Simply putting the text "First Name:" next to the text entry box is not a reliable form of labeling. The same is true for forms that save space by placing the label inside the form input, such as shown below. Screen readers may read these, but there's no explicit connection between the labels and their inputs. Also, putting a form label inside of a text entry box is not good for users with cognitive issues involving short-term memory or attention. The label often disappears as soon as the user clicks within the box. If the person forgets what was needed for the input, she or he will have to backtrack out of the form input to discover what it is. I know it is hard to imagine, for some of us, what this is like. Keep in mind that we have a legal obligation to make our programs accessible to these users. Nielsen Norman Group has an article about the harmful impacts of placeholders in form fields for usability, accessibility and conversion. Below is a screen shot of WAVE flagging form fields with labels inside the text entry boxes.
Leaving form labels as visible within the code is important for passing accessibility checks. In the WAVE analysis of the Scholar search form above, WAVE and other accessibility auditing applications yield an error for the form labels for the central search inputs. In a couple cases, this is not due to form labels not being present in the code, but because they aren’t visible. Their display is set to “none.” Some screen readers will ignore content with styling that sets the visibility to “hidden,” or the display to “none.”
Listen to the following demonstration of NVDA reading the central form fields on the CSU MyAccount page. Listen with your eyes closed and imagine never having seen the page, as I tab through the form elements in order.
The image below shows the red and white warning tags next to the form inputs that need visible form labels.
The code for the inputs above looks like the following:
`<label for="searcharg" style="display:none;">Search Type</label>`
<input id="searcharg" name="searcharg" size="30" onchange="return searchtoolSubmitAction()" maxlength="75" value="" type="text">
A way to get around the issue of hiding a label visually, but making it available for screen reader users, is to use WAI ARIA. WAI ARIA is an extension of HTML. An aria-label attribute can be added to an input element to give it a label that is detected by assistive technology. Important to note here, is that there is no added benefit for people with mobility impairments, such as mouse users with tremors or spastic hand movements, needing a larger clickable area. The code for an aria-label would look like the following:
<input type="text" name="search" aria-label="Search">
<button type="submit">Search</button>
In another instance of a label error, the code lacks a form label. This is for the check box input next to the text, “Limit to items which are not checked out.” In this case, there is no label element above the input element. The code looks like the following:
<div>
<input name="availlim" value="1" type="checkbox"></input>
Limit to items which are not checked out
<br/><br/>
</div>
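One way this could be fixed, assuming you are able to edit the page's markup, is to wrap the visible text in an explicit label whose for attribute matches an id added to the input:
<div>
<input id="availlim" name="availlim" value="1" type="checkbox">
<label for="availlim">Limit to items which are not checked out</label>
</div>
With this markup, a screen reader announces the purpose of the checkbox, and mouse users gain a larger clickable area.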
The following example shows problems with a form to join the American Nursing Association, an organization Nursing students are asked to join. The form lacks properly coded labels to associate form fields with their text descriptions. The labels are present, but the form inputs don’t have corresponding id attributes to associate them with their labels. The yellow tag warning icons are next to the labels that have no corresponding input, and the red tag error icons are next to the form fields that aren’t marked up to associate them with their labels. The ANA has a paper form that is meant to be printed and mailed in. It is an untagged PDF, with no form elements, and therefore not accessible to the blind and visually impaired. It’s best to look for sites that have contact phone numbers with people that can help register a user who can’t use their site. Alternatively, someone at CSU would need to help a blind or visually impaired person complete a form.
WAVE extension for Chrome also has a tab at the upper left side to turn off presentation styles to see the structure of the page. This button is called No Styles. When you click it, you'll see navigation as lists, hopefully headings to provide structure, as well as paragraphs of text. This structure is similar to what a screen reader sees and reads out. If the structure lacks lists for navigation, headings, or fieldsets with legends for multiple forms, this can be a warning sign that the page will have to be read through from beginning to end by someone using a screen reader, and will be more time consuming to navigate. Below is a screen shot of what the form page above looks like with styles turned off.
WAVE also has a tab for checking color contrast, called Contrast, at the upper left. When you click on this, you'll get warning icons for areas where the text doesn't have a high enough contrast ratio with its background to pass WCAG 2.0 standards. The icons in red below, that have an ABC on them, are areas with color contrast problems. There's no eye dropper tool to sample colors, so the Paciello Group's Colour Contrast Analyser would be a much stronger tool to use to find color and contrast issues, and find color combinations that will pass WCAG 2.0 level AA.
Deque Labs developed an accessibility engine called aXe to test the accessibility of web pages. aXe exists as an extension for Chrome and Firefox. I'll focus on instructions for adding it to Firefox, since WAVE only installs in Chrome browser. You can install aXe Developer Tools to add an Accessibility Audit tool to Mozilla Firefox's Web Developer Tools. When this tool is active, it will allow you to analyze a web page and see the accessibility issues in a panel below the web page. When you are on a page you would like to check the accessibility of, you can activate aXe's Accessibility Audit. This is a great tool if you want not only to identify the problem areas but also to inspect the code of the page. If you prefer not to look at the code, you are still made aware of the issues that will arise for people with disabilities landing on the page you've chosen to share.
Below is a video overview of aXe Accessibility Audit tool, used with its Highlight and Inspect features.
An interactive or media element has been excluded from this version of the text. You can view it online here: https://pressbooks.ulib.csuohio.edu/accessibility/?p=549
The aXe add-on for Firefox recently upgraded to version 3. It looks like it will install in CSU's mandated Firefox version 48, but there's a notice that it shouldn't be used with an outdated version of Firefox. If you want to keep using Firefox and have a version that will work with aXe Developer Tools, you can install Firefox Developer Edition. The colors in the interface are different, but you can search for the site you want in much the same way as with the standard Firefox.
To search for aXe in Firefox, click the menu icon with the three horizontal bars in the upper right corner. This type of icon is typically referred to as a “hamburger” because it resembles a sandwich. The arrow in the image below points to the menu icon.
Once open, click on Add-ons.
In the next screen, click on Extensions and type “aXe” in the Search box at the upper right.
When aXe comes up at the top of the search results, click on it. On the page with the details about aXe, click the Add to Firefox button.
Once aXe is added to Firefox, you can access it under the web developer tools. These tools are opened by clicking the wrench icon at the top right of the browser, or going to the hamburger menu and selecting Web Developer and then Toggle Tools. See the method for accessing the web developer tools under the wrench icon below.
Enter the URL for the page you would like to analyze for accessibility. I’ll go to the updated version of the American Nursing Association’s membership form. The Developer Tools will likely open at the bottom of the browser window or on the right side. Below is an image of the tools open at the bottom of the screen. You will also need to navigate to the aXe tab or view of the developer tools. Once there, click the Analyze button.
The analysis results will come up at the bottom of the window. On the left are a list of problem items on the page. In the middle are details about the issue, such as a problem with text being too light against the background color and not creating enough contrast to meet the WCAG 2.0 AA standard. When the Highlight button is clicked on, the item will have an outline around it, which helps people with sight recognize what it is on the page. This will help non-coders to see what the problem is, along with the written description of the problem. The end user can arrow through all of the items on the page with the same issue, e.g. not enough color contrast between text and background, using the arrows at the far right of the aXe screen. To look at different issues, click the next item down in the list on the left, e.g. documents must have a title.
On the right, you'll see information about how to fix the problem and how critical the impact is. With the color contrast problem below, we see that its impact is serious. We're told that the light gray text has a contrast of 2.32 to 1 against the background color. We're told that we need to raise the contrast to a ratio of 4.5:1.
Use a Web Developer Extension in Firefox to Disable all CSS Styles and Turn Off all Images
Web Developer add-on for Firefox by Chris Pederick is also a good extension that will allow you to turn off all CSS styles and see the structure of the page. This structure is what a screen reader will detect and read back. This extension for Firefox will also allow you to turn off all images, to check for alternative text which the screen reader will read back. WebAIM’s page on how you can use the Disable feature of the Web Developer extension to check for accessibility is at http://webaim.org/resources/webdev/.
HTML Based Text Enlargement and Reflow
Because people with low vision will increase their default browser font size, text should be formatted with relative font size units such as em, or %. Content should also respond to a reduced window size and the user’s set font-size by reflowing in a logical order, and not clipping content. You can check this by entering Responsive Design Mode with Mozilla Firefox’s Web Developer tool under the Tools menu and then dragging the lower right corner of the browser window to the left and upward. The page content, including the text shouldn’t just scale down or shrink to fill the reduced viewport window. Text size should remain readable, which typically involves the text becoming slightly larger, but it could possibly stay the same size.
It’s also possible to check page response to a reduced window size in Chrome browser. You do this by first clicking on the middle “Restore Down” icon at the top right of the screen that looks like a window frame over the top of another window frame. Then, hover your mouse over the lower right corner of the browser window until it becomes a diagonal arrow. Click and drag the lower right corner up and to the left. The text should reflow and possibly change font size. If the developer used CSS media queries to layout the page, the font size may increase when the window approaches the size of a phone or small tablet. This would be an example of a best practice.
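As an illustration of that best practice (the breakpoint and sizes here are made up, not taken from any particular site), a developer's CSS might use relative units and a media query like the following:
body {
font-size: 1em;
}
@media (max-width: 600px) {
body {
font-size: 1.125em;
}
}
When the viewport narrows to 600 pixels or less, the base font size increases slightly, keeping the text readable instead of letting it shrink along with the window.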
Look at the Page Info
A user can look at a web page’s info within Mozilla Firefox browser by right clicking on a web page and selecting View Page Info. This is a quick way to see what information is coded into the meta elements within the head of a web page. These meta tags give information about the page that help screen reader users know what it is about when they find the page within Google results or on Social Media. A properly marked up page will have a unique title at the top of the General tab. A meta “description” is particularly helpful and shows highlighted in blue in the image below. The image below is a screen capture of the page information for The Ohio State University’s home page. It has a unique title at the top that says, “The Ohio State University.” The “og” tags provide information to FaceBook users when the page is shared on Facebook. The “twitter” tags provide a title, description, and an image for a Twitter Card when the page is shared on Twitter.
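As a sketch of what this markup can look like in the head of a page (the titles and descriptions here are invented for illustration), the tags might include:
<title>Introduction to Accessible Design</title>
<meta name="description" content="An overview of best practices in accessible online design." />
<meta property="og:title" content="Introduction to Accessible Design" />
<meta property="og:description" content="An overview of best practices in accessible online design." />
<meta name="twitter:card" content="summary" />
<meta name="twitter:title" content="Introduction to Accessible Design" />
Tags like these give screen reader users, search engines, and social media sites a meaningful summary of the page before they visit it.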
6.01: How to Take Advantage of Youtube's Auto-generated Captio
Youtube will generate captions automatically for many videos. There are some exceptions. They have more information about what could cause captions to not auto-generate on their support site (See: https://support.google.com/youtube/a.../6373554?hl=en). Below is a captioned tutorial on How to Correct Youtube’s Auto-Generated Captions. Sometimes, it will take a day or two for automatic captions to become available on a video once you upload it. But, once there, you can select them and edit them.
An interactive or media element has been excluded from this version of the text. You can view it online here: https://pressbooks.ulib.csuohio.edu/accessibility/?p=600
Learning Outcomes
• Identify several different emerging technologies.
• Incorporate emerging technologies in teaching and learning activities to engage learners.
• Explain how emerging technologies will affect education, and vice versa.
• Identify the challenges organizations face in adopting emerging technologies.
As the capacity of the Internet evolves and expands, the potential for online teaching and learning also evolves and expands. The increasing number of new technology tools and expanding bandwidth are changing all facets of online activity, including e-learning. As technologies become more sophisticated and as they begin to converge (for example, cell phones becoming multimedia-capable and Internet-connected), educators will have more options for creating innovative practices in education.
The shift occurring in the Web from a static content environment where end users are the recipients of information—defined as Web 1.0—to one where they are active content creators—defined as Web 2.0—can be described as a transition to a more distributed, participatory, and collaborative environment (Wikipedia, 2005). Web 2.0 is considered to be a platform where “knowledge-working is no longer thought of as the gathering and accumulation of facts, but rather, the riding of waves in a dynamic environment” (Downes, 2005, para. 14). Web 2.0 is defined not only by technologies such as blogs, wikis, podcasts, vodcasts, RSS feeds, and Google Maps, but also by the social networking that it enables. As these communication-enabling technologies conjoin text, voice, and video using CoIP (communications over Internet protocol), they will provide a seamless integration with cell phones, personal digital assistants (PDAs), and computers (Yarlagadda, 2005). Web 2.0 technologies can bring people together in ways Web 1.0 did not.
At the beginning of any technological change, several definitions often encompass a new concept. This is also true with Web 2.0. In an interview with Ryan Singel (2005), Ross Mayfield, CEO of a company that creates wiki software, offered this simple definition: “Web 1.0 was commerce. Web 2.0 is people” (Singel, 2005, para. 6). Tim O’Reilly, who wrote one of the seminal articles on Web 2.0, saw it as an “architecture of participation” (O’Reilly, 2005, para. 26) and “not something new, but rather a fuller realization of the true potential of the web platform” (para. 88). Web 2.0 is centred on communication—the ability to interconnect with content, ideas, and with those who create them. Social networking is a key phrase for Web 2.0. The Web 2.0 framework sets the stage for a student-centred collaborative learning environment. Using existing communication tools in a way that encourages collaboration can be a step in the direction of incorporating the spirit of Web 2.0 philosophies in online learning environments.
A parallel can be drawn between the shift from Web 1.0 to Web 2.0 and the shift many instructors are making in online learning from an instructor-centred (Web 1.0) approach to a student-centred (Web 2.0) approach where students have more control over their learning. The effects of Web 2.0 may influence how online courses are conceptualized, developed, and taught. The use of Web 2.0 technologies and philosophies in education and training are sometimes referred to as “e-learning 2.0” (Cross, 2005; Downes, 2005; Wilson, 2005).
Currently, Web 2.0 technologies are just beginning to affect online teaching and learning. As the Web becomes more interactive, instructors will want to incorporate these technologies effectively. It is likely that Web 2.0 technologies will affect student-to-student communications in project-based learning, as it will affect ways in which instructors conceptualize, develop, and teach their courses. Incorporating Web 2.0 technologies and philosophies can make courses more student-centred.
Web 2.0 technology emphasizes social networking. Online learning environments can be used for enhanced communication among students, as well as between students and the instructor. Creating learning opportunities that harness the power of Web 2.0 technologies for collaborative learning, distributed knowledge sharing, and the creation of media-rich learning objects can further the scope of what students can learn by “placing … the control of learning itself into the hands of the learner” (Downes, 2005, para. 12). These tools provide an avenue for students to spend more time on task, from sharing ideas and their understanding of the course content to collaborating in creating artifacts that represent their learning, whether in a traditional or an online classroom.
A few ways Web 2.0 technologies can support project-based learning include: blogs for journaling assignments, wikis for creating content in collaborative group projects, podcasts for audio-based assignments, vodcasts for video-based assignments, and RSS feeds for syndication. The creativity and remixing of technologies is an exciting new direction for both instructors and students. Several chapters in this book address these ideas in greater detail.
Creating online courses in which students construct their own meaning with hands-on activities may radically change how teaching and learning is designed. Delivering an online course with content created by either a publisher or an instructor alone is no longer considered an effective strategy. Students working in environments that shift learning to knowledge construction rather than by assimilating what the instructor delivers will create courses that “resemble a language or conversation rather than a book or manual” (Downes, 2005, para. 32).
Web 2.0 technologies and their use in teaching and learning are currently in a nascent state. Further research on the adoption and use of Web 2.0 technologies, and their effects on teacher philosophies with respect to teaching and learning, will deepen our understanding of how to use these technologies to design courses that engage and retain students. | textbooks/socialsci/Education_and_Professional_Development/Book%3A_Education_for_a_Digital_World_-_Advice_Guidelines_and_Effective_Practice_from_Around_Globe_(Hirtz)/01%3A_Emerging_Technologies_in_E-learning/1.1%3A_Introduction.txt |
For some instructors, integrating technology into their teaching can be an overwhelming task. Adding the word "emerging" can make these technologies seem impractical, unnatural, or counter-intuitive, as well as implying that the technology is transient. Although technology is constantly changing, using it for instructional goals can make a difference in a successful adoption and implementation.
As the authors of this chapter, we firmly believe in the use of technology for teaching and learning purposes. In this section, we will describe several currently emerging technologies. Johnson (2006) provides a list of emerging technology links on his website. Using his list as a base, we provide definitions, as well as examples of how these technologies can be used in teaching and learning. The list below is not in any particular order.
Digital storytelling
Storytelling is one of the oldest teaching methods. By using digital video cameras and software such as iMovie, almost anyone can extend a story’s reach to a much wider audience. In education, instructors can ask students to create digital stories to demonstrate knowledge of a topic. Websites such as the Center for Digital Storytelling emphasize that the technology is “always secondary to the storytelling” (Banaszewski, 2002, para. 18). See Chapter 25, Tools for Online Engagement and Communication, for more information on digital storytelling.
Online meetings
Synchronous meetings of online classes can be facilitated by the use of web conferencing/virtual classroom tools such as WebEx, Wimba, Elluminate, Skype, Microsoft Live Meeting, Adobe Breeze, Centra, and Interwise. These technologies add presentation and group interaction tools. Most of them provide both voice and text chat functionality. Their synchronous nature appeals to many people and complements other asynchronous activities. Huge savings in travel costs can be realized by conducting meetings over the Internet. For a geographically widespread class or working group, occasional online meetings can help to keep people on track and provide a valuable opportunity for synchronous discussions.
Communities of practice
Much of social computing revolves around the formation of communities of practice, which are groups with a common interest. With technologies that ease the sharing of experiences, information, and resources, whether across the hall or around the world, many communities of practice are developing spontaneously, or are intentionally created by an individual or organization to meet a specific purpose. Communities of practice use social computing tools and often form as a result of the availability of the tool. They can contribute greatly to the dissemination of knowledge and skills within an organization, as when, for example, the group serves as mentor to a new member.
Communities of practice are not a technology, but rather a learning theory that can make use of many of the emerging technologies available today. For more information on communities of practice, see Chapter 30, Supporting Learning Through Communities of Practice.
Personal broadcasting
Personal broadcasting tools include: blogs (web logs), moblogs (mobile blogs), vlogs (video blogs), podcasts, vodcasts (video podcasts), and RSS feeds with uploaded images from cell phones. Instructors can use these technologies to bring diverse elements into a course to assist in meeting a variety of learning styles. These technologies can also be used for updating students on current activities and projects.
Podcasting and videoblogs can assist learners whose learning style is primarily auditory. Some uses include recording lectures for students to review, providing more clarity for difficult concepts, and supplementing lecture information such as, for example, guest lectures and interviews.
RSS feeds allow students to selectively download updates from targeted sources, personalizing the information and news they want to receive. Tools such as Suprglu allow multiple RSS feeds on one Web page. Stead, Sharpe, Anderson, Cych & Philpott (2006) suggest the following learning ideas for Suprglu:
• Aggregate all of a student’s production in one page.
• Bring a range of different search feeds together for easy viewing.
• Create a class site that aggregates whatever content feeds you are providing for students.
• Create a collaborative project site.
• Bring teacher lesson plans or ideas together on one page (p. 37).
Personal broadcasting technologies give students an opportunity to participate in the creative construction of knowledge and project-related work. People can share their broadcasts on their own websites or through sites that specialize in specific types of broadcasting, such as wordpress.com for blogs or youtube.com for vlogs. YouTube’s tagline captures the essence of personal broadcasting: “Broadcast Yourself.”
Wikis
Wikis are a type of website that allows visitors to easily add, remove, and otherwise edit the content. This ease of interaction makes wikis an effective tool for collaborative authoring. In a short time Wikipedia (Wikipedia, 2006d) has become a primary reference tool for many students, though by the readily editable nature of its information, it cannot be considered authoritative. Wikis can be useful as a tool for students to build their own knowledge base on specific topics and for sharing, comparing, and consolidating that knowledge.
Educational gaming
Despite the vast interest in video and computer games, the educational game market still has a long way to go. Many people have heard of Warcraft, a strategy game, and Halo, a battlefield simulation game, but how many people have heard of Millie’s Math House, a learning game? However, as Web 2.0 puts more power in the hands of mere mortals, teachers will start making better learning games than the commercial game producers. These games will also take advantage of new technologies. For example, low-cost virtual reality gloves give middle school students the ability to play “Virtual Operation.” John Shaffer (2002) describes a variety of educational learning experiences that virtual reality could present to middle school, high school and even college students.
Several renowned organizations have turned to educational games to attract young people to their disciplines or movements. The Nobel Foundation uses educational games on its website to teach different prize-winning concepts in the areas of chemistry, physics, medicine, literature, economics, and world peace. The Federation of American Scientists has created engaging games that ask players to discover Babylon as archaeologists and to fight off attacks as part of the human immune system. Instructors do not have to be game designers to incorporate existing educational games into their curriculum. They may want to play the games first, both to make sure they address course concepts and to have fun!
Massively multiplayer online games (MMOGs)
Interacting online within the same game environment, hundreds, if not thousands, of people gather together to play in MMOGs. In World of Warcraft, one popular game, players can choose roles as a human, elf, orc, or other creature that works with others to accomplish goals. In the future, students will choose whether they will play as red blood cells, white blood cells, viruses, or anti-viral drugs to learn how viruses affect the body, and how to stop them. Currently, gamers seek treasures to score points and gain levels in an MMOG called EverQuest. In the future, students will use MMOGs in an online environment depicting the historical period to seek answers to instructors’ questions about World War II such as, “How did women influence the end of World War II?”
Extended learning
Also known as hybrid or blended learning, extended learning mixes instructional modalities to provide an ideal learning solution, using e-learning and classroom training where each is most appropriate. It may also be a mix of synchronous and asynchronous technologies. Using both online and in-person methodologies allows instruction to be designed to address diverse learning styles, as well as meet the course’s learning objectives. For example, learners might use e-learning for the basic content, but meet face-to-face in a laboratory, or in a classroom.
Intelligent searching
Google and other search engines are already the most used learning tools around. Many people use them daily to do research and to find all kinds of information. Some librarians have noticed that students are not learning how to use journal databases and other sources of materials because of their over-reliance on Google. Search engines will evolve to provide more concept- and context-sensitive searching. Early examples have emerged in specific areas: Google Maps and Google Scholar; Gnooks, a self-adapting community system; Blinx, for video and audio; and StumbleUpon, which uses ratings to form collaborative opinions on website quality.
Intelligent searching will use such tools as vision technology (for images), natural language processing, and personalization by users to make searches more usable and useful. Ask.com uses what it calls ExpertRank (Ask.com, 2006). This technology ranks a page by the number of links pointing to it from pages on the same subject, rather than by its raw overall popularity. Known as subject-specific popularity, this approach identifies topics as well as experts on those topics. Search engines will also become learning and content management systems that will help us organize, catalogue, and retrieve our own important information more easily.
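The following toy sketch illustrates the subject-specific popularity principle: a page is credited only for links arriving from pages on the same topic, not for its raw link count. This is a simplified illustration of the idea, not Ask.com's actual ExpertRank algorithm, and the pages and topics are invented.

```python
# Toy sketch of "subject-specific popularity": a page is ranked by the
# number of links it receives from pages on the same topic, not by raw
# link counts. An illustration of the principle, not ExpertRank itself.

# page -> (topic, set of pages it links to); all data is made up.
pages = {
    "genetics-intro": ("biology", {"dna-structure", "cell-division"}),
    "dna-structure":  ("biology", {"genetics-intro"}),
    "cell-division":  ("biology", {"dna-structure"}),
    "stock-basics":   ("finance", {"dna-structure"}),  # off-topic link, ignored
}

def topic_score(target, topic):
    """Count inbound links to `target` from pages sharing `topic`."""
    return sum(
        1
        for source, (source_topic, links) in pages.items()
        if source != target and source_topic == topic and target in links
    )

print(topic_score("dna-structure", "biology"))  # 2: the finance link does not count
```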
Webcams and video from cell phones
Digital cameras, video cameras, webcams, and video from cell phones have become almost ubiquitous as ways to capture personal history. But they have gone far beyond that and have become a means of communication. People have captured events like weather, subway bombings, and funny incidents that have become part of television entertainment and news. Thanks to sites like Flickr and YouTube, sharing photos and video online has become a pervasive feature of the web.
Examples of educational uses include gathering data for student projects, practising skills, documenting events, recording interviews, and adding video to videoblogs (vlogs). Instructors might use them to emphasize or explain important or difficult-to-understand concepts. The use of video provides learners with an alternative medium for grasping concepts when text or images alone don’t convey the necessary information.
Mashups
(Lightweight, tactical integration of multi-sourced applications.) “A mashup is a website or web application that seamlessly combines content from more than one source into an integrated experience” (Wikipedia, 2006a, para. 1). Mashups take advantage of public interfaces or application programming interfaces (APIs) to gather content together in one place.
Tracking the Avian Flu, which tracks global outbreaks, is an example of how content is integrated with Google Maps. Top City Books is another example; this site shows the top 10 books in a city for eight subjects.
SecretPrices.com is a comparison-shopping site with customer reviews, information on deals, and more. It uses APIs from Amazon.com, Shopping.com, and A9 and gathers information from Amazon.com and Epinions.com.
Cookin’ with Google aggregates several databases. Type in a few ingredients you have on hand and Google searches databases with recipes containing those ingredients and presents a list of recipes you can consider cooking for dinner tonight.
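At its core, a mashup is a small program that calls two or more public APIs and merges the results into one view. The sketch below shows the pattern; the endpoints, query parameters, and field names are hypothetical placeholders, not real services.

```python
# Minimal mashup sketch: pull JSON from two (hypothetical) public APIs and
# combine them into one integrated view, in the spirit of the examples above.
# The URLs and field names are placeholders, not real services.
import json
import urllib.request

RECIPE_API = "https://api.example.com/recipes?ingredient={item}"
PRICE_API = "https://api.example.com/prices?item={item}"

def fetch_json(url):
    with urllib.request.urlopen(url) as response:
        return json.load(response)

def mashup(ingredient):
    recipes = fetch_json(RECIPE_API.format(item=ingredient))
    prices = fetch_json(PRICE_API.format(item=ingredient))
    # The "integrated experience": each recipe paired with the current price
    # of the ingredient pulled from a different source.
    return [
        {"recipe": r["name"], "ingredient": ingredient, "price": prices["unit_price"]}
        for r in recipes["results"]
    ]

if __name__ == "__main__":
    print(mashup("eggs"))
```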
Social computing
Social computing is the essence of Web 2.0. It is the use of technologies such as wikis, blogs, and podcasting by individuals and groups to create content, instead of simply being content recipients. Web 1.0 was about downloading; Web 2.0 is about uploading.
Forrester Research describes social computing as “[e]asy connections brought about by cheap devices, modular content, and shared computing resources [that] are having a profound impact on our global economy and social structure. Individuals increasingly take cues from one another rather than from institutional sources like corporations, media outlets, religions, and political bodies. To thrive in an era of social computing, companies must abandon top-down management and communication tactics, weave communities into their products and services, use employees and partners as marketers, and become part of a living fabric of brand loyalists” (Charron, Favier & Li, 2006, para. 1).
In an e-learning context, social computing is about students becoming the creators as well as the consumers of content. In a formal setting, students can be encouraged to use social computing technologies to share their experiences and collaborate on assignments and projects. In informal situations, people will be able to find great treasuries of information on almost any imaginable topic and contribute their own knowledge to it.
A new category of software has emerged called social networking software. This web-based software assists people to connect with one another. Examples of social networking software include Flickr, MySpace, Facebook, YouTube, Plaxo, and LinkedIn.
Peer-to-peer file sharing
In a peer-to-peer (P2P) network, files are shared directly between computers without going through a central server. Many communication and collaboration applications build on this model. Some examples include online meeting (web conferencing), instant messaging, Skype, Groove, Festoon, and BitTorrent. “P2P merges learning and work, shedding light on team processes that used to disappear when a project’s participants dispersed. For example, P2P applications can create an audit trail” (Cross, 2001, para. 13).
Despite the copyright controversy around music file sharing on Napster, Kazaa, and others, P2P is a useful technology that offers opportunities for e-learning. P2P file sharing can support students working together on collaborative projects. Having one central location for group members to access and edit a master copy of a shared document can help with version control. Another benefit in collaborative work is the ability to view and mark up a master copy instead of sending documents as attachments through email. This can help avoid confusion over who has the master copy and the problem of edits accidentally missed or overwritten. P2P technologies also enable chatrooms and online groups, where students can talk synchronously about their project. Using a P2P application such as Groove, students can create a shared virtual office space for group projects (Hoffman, 2002). In this way, P2P technologies can encourage project-based learning.
Another technology related to both P2P and podcasting is swarmcasting. Because a file is broken into smaller pieces that peers exchange in parallel, swarmcasting is a more efficient way to send large files such as video files. Swarmcasting provides the possibility of Internet broadcasting much like a television station does (tvover.net, 2005).
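The essence of P2P is that the file travels directly from one peer to another. A bare-bones sketch of that idea, assuming both peers can reach each other over TCP, is shown below; real systems such as BitTorrent-style swarmcasting add peer discovery and fetch file pieces from many peers at once.

```python
# Bare-bones sketch of the P2P idea: one peer serves a file directly to
# another over a plain TCP socket, with no server in between. Real P2P
# systems add peer discovery, many simultaneous peers, and (as in
# swarmcasting/BitTorrent) splitting files into pieces fetched in parallel.
import socket

CHUNK = 64 * 1024  # send the file in 64 KB pieces

def serve_file(path, host="0.0.0.0", port=9000):
    """Run on the peer that has the file; handles one download, then exits."""
    with socket.create_server((host, port)) as server:
        conn, _addr = server.accept()
        with conn, open(path, "rb") as f:
            while chunk := f.read(CHUNK):
                conn.sendall(chunk)

def fetch_file(peer_host, save_as, port=9000):
    """Run on the peer that wants the file; connects directly to the other peer."""
    with socket.create_connection((peer_host, port)) as conn, open(save_as, "wb") as f:
        while chunk := conn.recv(CHUNK):
            f.write(chunk)
```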
Mobile learning
Also called m-learning, this represents an evolution of e-learning to the almost ubiquitous mobile environment for laptop computers, cell phones, PDAs, iPods, and RFID (radio frequency identification) tags. Technologies like GPS and Bluetooth will also enable the adoption of m-learning.
Learning will be in smaller chunks and designed as just-in-time (performance support) to accommodate wireless form factors, the flood of available information, and multi-tasking users. It is an opportunity for people to learn anytime, anywhere. An executive heading to a meeting can brush up on his or her facts, and students can study for an upcoming test or access information needed for a research project.
Using mobile devices for learning is the logical next step for e-learning. It will require some new strategies— smaller chunks of information, shorter modules, efficient searching for learning objects, and an orientation to performance support rather than information dumps (Wagner, 2006).
Examples of m-learning include:
• SMS (text messaging) as a skills check or for collecting feedback
• audio-based learning (iPods, MP3 players, podcasting)
• Java quizzes to download to colour-screen phones
• specific learning modules on PDAs
• media collection using camera-phones
• online publishing or blogging using SMS, MMS (picture and audio messages), cameras, email, and the Web
• field trips using GPS and positional tools (Stead et al., 2006, p. 12)
Mobile learning is already making an impact. In a recent survey conducted by the eLearning Guild, Pulichino (2006) reported that 16 percent of the responding organizations are currently using mobile learning and 26 percent expect to do so over the next 12 months. He also observed that colleges and universities are ahead of corporations in its adoption.
Context-aware environments and devices
Environments and devices that are tuned into the needs of those using them and automatically adjust to the situation are considered to be context-aware. Everyday devices such as phones, personal digital assistants (PDAs), and multimedia units equipped with built-in software and interfaces can be made context-aware. The strength of this technology is that it lets learners extend their interaction with an environment. One example is the integration of student services with a PDA device. A student points a PDA at a computing device, and information about the service is beamed to the PDA. For more information on context-aware environments and devices, use a search engine with the parameters “Cooltown + HP.”
Augmented reality and enhanced visualization
Augmented reality (AR) is an evolution of the concept of virtual reality. It is a hybrid environment, which is a combination of a physical environment with virtual elements added by computer input. This computer input augments the scene with additional information. While virtual reality strives for a totally immersive environment, an augmented reality system maintains a sense of presence in the physical world. Augmented reality’s goal is to blur both worlds so the end user doesn’t detect the differences between the two.
Augmented reality may use some of the following technologies:
Display technologies:
• high-definition, wall-sized display screens
• three-dimensional displays
• handheld mini-projectors
• glasses-mounted, near-to-eye displays
• flexible, paper-like displays
• full-face virtual-reality (3D) helmets
Multi-sensory inputs and outputs (see Stead, Sharpe, Anderson, Cych & Philpott, 2006):
• speech
• smell
• movements, gestures, and emotional states
• tangible user interfaces using the direct manipulation of physical objects
• handheld PCs for user input and data
• GPS (global positioning system) units
• wearable sensors
Examples of augmented reality applications include:
• image-guided surgery in medicine
• movie and television special effects
• airplane cockpit training
• computer-generated images for engineering design
• simulation of major manufacturing environments
Augmented reality is most often used to generate complex, immersive simulations. Simulations are powerful learning tools that provide a safe environment for learners to practise skills and conduct experiments.
Integrating the physical world and computer input is obviously an expensive technical challenge, and it is mainly a research field at this time. Up to now, the potential training applications are limited to medical, military, and flight training; but as costs come down, the possibilities for simulations in all fields are limited only by the imagination.
Many research projects are being carried out in this area. For more information on augmented reality, see Sony’s Computer Science Laboratory (www.csl.sony.co.jp/project/ar/ref.html) and the thesis abstract at http://www.se.rit.edu/~jrv/research/...roduction.html.
Smart mobs
Rheingold, the author of Smart Mobs, considers smart mobs to be “the next social revolution” (Rheingold, 2006, para. 1), combining “mobile communication, pervasive computing, wireless networks, [and] collective action” (para. 1).
Two well-known examples of smart mobs involved events in the US as well as in the Philippines: “Street demonstrators in the 1999 anti-WTO protests used dynamically updated websites, cell phones, and ‘swarming’ tactics in the ‘battle of Seattle.’ A million Filipinos toppled President Estrada through public demonstrations organized through salvos of text messages” (Rheingold, 2006, para. 2).
In education, instead of smart mobs protesting a political decision, smart study groups will form to prepare for quizzes or to provide feedback about written assignments before submitting them for a grade.
In his “lost novel,” Paris in the 20th Century, science fiction author Jules Verne predicted gasoline-powered automobiles, high-speed trains, calculators, the concept of the Internet, and several other technologies invented well after 1863. Verne believed strongly that humans could realize all such predictions: “Anything one man can imagine, other men can make real” (Verne, n.d., para. 1). As scientists in various fields may have taken their cues from Jules Verne, we too can get some ideas about the future of technology and education from science fiction.
Looking at some science fiction within the past 15 years, we will start with predictions that are less far-reaching than those contained within Jules Verne’s works. For example, in 1993 a low-grade action movie called Demolition Man depicted a teacher in the year 2023 talking to distance learners who attended class via individual video monitors placed around an empty table. The students’ heads, as shown on the monitors, followed the instructor’s movements as he paced around the room. Most or all aspects of this scenario are already possible with today’s videoconferencing solutions, high bandwidth connectivity, and cameras that use infrared beams to automatically follow a moving subject. Three years ago, Florence Olsen (2003) depicted immersive videoconferencing solutions with virtual students beamed into another classroom hundreds of miles away. In some cases, perhaps, Moore’s Law—computer-processing power, measured by the number of transistors on integrated circuits, doubling every 18 months—makes it more difficult to look too far into the future because the future arrives so much more quickly.
At the same time, when we read Neal Stephenson’s The Diamond Age, we can see the potential to realize some of his predictions in less dramatic fashion. For example, when people first study sign language, they may dream about signing in full sentences, even though they cannot yet sign in the waking world. In this scenario, the brain contains the previously learned phrases in a mental “database” and stitches them together in new ways during the dream. Soon some instructional designer will put a comprehensive set of sign language video clips into an online database that will allow anyone to learn full sentences quickly by typing text and watching the dynamically generated compilation of the sign language equivalent. Additionally, education and technology have been combined to create tutoring software that learns what you know and steers you to specific lesson components that will fill your learning gaps. These “intelligent tutors” exist for math, accounting, physics, computer science, and other disciplines.
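The clip-database idea can be sketched very simply: map each word to a stored video clip and return the ordered playlist for a typed sentence. The clip filenames below are invented for illustration; a real system would also handle grammar, signs that span whole phrases, and missing vocabulary.

```python
# Sketch of the "type a sentence, get a stitched sign-language clip" idea:
# look each word up in a clip database and return the playlist to render.
# The clip filenames are invented for illustration.
CLIP_DB = {
    "hello": "clips/hello.mp4",
    "my": "clips/my.mp4",
    "name": "clips/name.mp4",
    "is": "clips/is.mp4",
}

def build_playlist(sentence):
    """Return the ordered clips for known words, plus any words we lack clips for."""
    playlist, missing = [], []
    for word in sentence.lower().split():
        if word in CLIP_DB:
            playlist.append(CLIP_DB[word])
        else:
            missing.append(word)
    return playlist, missing

print(build_playlist("Hello my name is Ada"))
# (['clips/hello.mp4', 'clips/my.mp4', 'clips/name.mp4', 'clips/is.mp4'], ['ada'])
```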
A final set of educational predictions in science fiction is too far out to tell if they are possible. In 1999, a film called The Matrix strongly contradicted William Butler Yeats, who said, “Education is not the filling of a pail, but the lighting of a fire” (Yeats, n.d., para. 1). In the film, the characters plug a cable into the back of their heads and go through “programs” that embed knowledge and skills directly into their brains. The lead character, Neo, becomes a martial arts expert in hours instead of years. Another character, Trinity, learns how to pilot a helicopter in seconds. In reality, humans have had little success linking computers to the brain. Recent developments, such as real-time brain control of a computer cursor (Hochber, Serruya, Friehs, Mukand, Saleh, Caplan, Branner, Chen, Penn & Donoghue, 2006), allow us to believe that some day Matrix-style education may be possible. By then, hopefully, we will have mastered how to teach higher level thinking skills, since this futuristic just-in-time learning presumably will let us skip over lower level skills.
Following Stephenson’s example from The Diamond Age, we will imagine how emerging technologies from the foreseeable future can help us meet instructional needs in the online environment. Being educators, we will start with the instructional needs when making predictions. To do this, we will focus on needs related to helping students successfully meet the learning objectives: sharing resources, facilitating activities, and conducting assessment strategies.
Sharing Resources
Almost all online instructors begin the teaching and learning process with sharing resources with students. Currently, this process requires instructors to create new and/or find existing resources that relate to the topics being studied and then to disseminate them to the students. Unfortunately, some end the process with just sharing resources instead of going further to facilitate interactivity or to assess student performance. Students may miss opportunities to participate in robust, collaborative learning experiences. Here are some ways in which we think the resource sharing process will change.
User-created content
Learners will not only have the opportunity to add value to structured courses through the use of emerging technologies such as blogs and wikis; many of them will create their own content which can be massaged and developed through group participation. Ordinary people will become creators and producers. Learners will truly begin to take control. Examples can be seen at the website called Wifi Cafés, where Internet users can add the locations of their favourite Internet cafe to an open list, and Current TV, where people—mostly nonprofessionals—create television segments and shows. Similarly, students, parents, teachers, and others will continue to create and disseminate educational content on a large scale. Instructors will require students to create content to share with their peers.
User-created content provides a challenge, in that it will be difficult to verify the accuracy of each educational resource. Educators often comment that Wikipedia, while very useful, is made by experts and non-experts alike, potentially decreasing its credibility. While research conducted by Nature magazine determined that Wikipedia comes close to the Encyclopedia Britannica in terms of accuracy of science entries (Giles, 2005), it also shows that collaborative approaches to knowledge sharing require facilitation and editing. No matter what print-based or online source students use to substantiate their course work, they should use multiple sources to check the validity, reliability, and potential bias of information.
To counter this problem, educators will adopt a practice used by eBay and other commercial websites (see the description of similar rating systems in Intelligent Searching above). Namely, people can rate individual pieces of educational content. Users who share educational content will have a dynamic profile that changes each time someone rates their contributions. For example, someone with high ratings would have the title of “trusted content provider”. Experts would have an equal opportunity to check the accuracy of user-created content.
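A minimal sketch of such a rating profile follows. The one-to-five star scale and the threshold for earning the "trusted content provider" title are assumptions made for illustration, not values proposed in this chapter.

```python
# Sketch of an eBay-style reputation profile for educational content:
# each rating updates the contributor's running average, and a simple
# threshold (assumed here for illustration) marks "trusted" providers.
ratings = {}  # contributor -> list of 1-5 star ratings

def rate(contributor, stars):
    ratings.setdefault(contributor, []).append(stars)

def profile(contributor, min_ratings=10, trusted_avg=4.5):
    scores = ratings.get(contributor, [])
    avg = sum(scores) / len(scores) if scores else 0.0
    trusted = len(scores) >= min_ratings and avg >= trusted_avg
    return {"average": round(avg, 2), "count": len(scores),
            "title": "trusted content provider" if trusted else "contributor"}

for s in [5, 5, 4, 5, 5, 4, 5, 5, 5, 5]:
    rate("pat", s)
print(profile("pat"))  # average 4.8 over 10 ratings -> trusted
```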
The “Long Tail”
In October 2004, Chris Anderson of Wired magazine published an article outlining the long tail of business. The term “long tail” refers to a statistical concept of the very low part of a distribution where the population “tails off.” The long tail marketing idea is that the Internet is capable of reaching tiny markets, which were previously ignored by marketers because they were too expensive to reach. Online companies can use the Web to sell a vast range of products from mainstream popular items right down to the singularity of one unique unit (Anderson, 2004). Statistically, the sum of the less popular items can outnumber the sum of the popular items.
This “long tail” will also apply to learning. More resources—commercial, instructor- and user-created—are already increasingly available for learners who have, up to now, been somewhat marginalized. English as a second language, international learners, gifted, learning disabled, and physically challenged students, and people with behavioural disorders will all benefit. For example, a website that offers resources for learning disabled students is http://www.npin.org. An excellent site for gifted students is www.hoagiesgifted.org.
In general, more user-created educational content becomes available every day. Of course, these user-created resources will draw fewer learners than popular websites like Discovery School or the Exploratorium. However, the accumulated total of learners who use the less popular educational resources—the long tail—will outnumber the learners who visit the popular sites.
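A quick back-of-the-envelope illustration of the long-tail claim, with invented figures: a few popular sites each attract large audiences, yet the accumulated audience of the many small niche resources is larger.

```python
# Toy numbers to illustrate the long-tail claim: a handful of popular
# sites each draw many learners, but the accumulated audience of the many
# niche resources is larger. The figures are invented for illustration.
popular_sites = [50_000, 40_000, 30_000]   # 3 "hit" sites
niche_sites = [150] * 1_000                # 1,000 small resources

head = sum(popular_sites)   # 120,000 learners
tail = sum(niche_sites)     # 150,000 learners
print(head, tail, tail > head)  # the tail outnumbers the head
```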
Facilitating Interactivity
How instructors approach the design of their courses is profoundly affected by their teaching styles (Indiana State University, 2005). The lecture-based approach to teaching is most often used in on-campus courses, and it is what instructors are most familiar with. Findings from research have shown that the lecture-based approach often fails to engage students in online courses (Ally, 2004; Conrad, 2004; Gulati, 2004). Instructors unfamiliar with other instructional strategies need time to explore them while conceptualizing how they will design their online course.
The opportunity to design, develop, and teach in a new medium opens the door to learning new pedagogies. Applying new approaches may affect how instructors perceive their teaching role. In distance education this role shift is often described as a transition from a lecturer to a facilitator (Brown, Myers & Roy, 2003; Collison, Elbaum, Haavind & Tinker, 2000; Conrad, 2004; Maor & Zariski, 2003; Young, Cantrell & Shaw, 2001). This transition is a process that takes time and support, and often it isn’t considered when instructors are asked to develop an online course. During the development process, instructors are often surprised at how much is involved in course development and in conceptualizing their role and how they will teach. If the design of the support infrastructure takes this transitional process into consideration, it can positively influence how instructors view their role and, subsequently, how they design their course. This in turn may also affect student success rates in online courses.
As instructors design or redesign their courses to incorporate emerging technologies they may find that their role and that of their students change. In the example of an online course where there is “no there there,” a student cannot sit passively at the back of the classroom. To be present and seen in an online class, students must be active and involved. Similarly, an online instructor cannot stand in front of the class and conduct a lecture. Because the online environment differs from a physical classroom, the instructor’s role changes as well. For some instructors, shifting from a lecturer to a facilitator role can be a major change in teaching style. Facilitating interactivity in an online course places the instructor alongside the students instead of in front of the classroom.
Designing courses with activities that encourage collaboration, communication, and project-based learning can help instructors step out of the lecturer role. Web 2.0 technologies can be a resource for instructors as they construct new modalities in how they teach and how their students learn. Interactivity can be stimulated by a variety of techniques, ranging from posing questions to be discussed in groups to involving students in projects that include the creation of wikis, blogs, and podcasts.
Forum participation via cell phone
In the future, learners will use cell phones to participate in threaded discussion forums. Instructors and students will use cell phone web browsers to navigate and read threads. Text-to-voice software will read threads to users, giving options such as press 1 to reply, press 2 to hear next message, press 3 to hear previous message, etc. Teachers and learners will use cell phone text message capabilities or voice-to-text software to dictate the thread content. The latter concept requires voice-to-text technology to improve.
For students who prefer it or who don’t have a computer, this technology has the potential to provide more flexibility for learning. ClearTXT is a good example of a company that has already started working in this direction. However, voice recognition software still needs to be dramatically improved.
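Behind such a service sits a simple keypad-to-action mapping. The sketch below shows only that dispatch logic; an actual deployment would run behind a telephony gateway with text-to-speech and voice recognition, none of which is modelled here.

```python
# Sketch of the keypad navigation described above: a tiny dispatch table
# mapping phone keys to forum actions. A real system would sit behind a
# telephony/IVR service and a text-to-speech engine.
def reply(thread):
    print(f"Recording a reply to '{thread}' ...")

def next_msg(thread):
    print(f"Reading the next message in '{thread}' ...")

def prev_msg(thread):
    print(f"Reading the previous message in '{thread}' ...")

MENU = {"1": reply, "2": next_msg, "3": prev_msg}

def handle_keypress(key, thread):
    action = MENU.get(key)
    if action:
        action(thread)
    else:
        print("Sorry, that option is not available.")

handle_keypress("2", "Week 3 discussion")
```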
Assessing Performance
Chapter 14, Assessment and Evaluation, discusses various assessment strategies, so we will focus on how emerging technologies will enable instructors to assess student performance in new, more authentic, ways. As audio, video, and computer applications improve, it will be easier to assess certain knowledge, physical skills, and even attitudes. Virtual reality technologies will also enable students to demonstrate knowledge, skills, and attitudes, and to evaluate themselves using methods that they choose (for more, see Chapter 11, Accessibility and Universal Design).
Voice recognition and intelligent tutoring applications
Today, students can record MP3 audio files to demonstrate proficiency in speaking another language. Tomorrow, students will be able to hold conversations with intelligent tutoring programs that use voice recognition software to analyze their phrases before responding, making corrections, or changing levels of difficulty to accommodate their needs. In non-language situations, instructors can use the same combination of applications to assess law student responses in mock court cases or drama student responses during readings.
At other levels, voice recognition and intelligent tutoring will provide multiple avenues for assessing students’ true abilities, reducing the overemphasis on standardized, written tests. Primary school students can demonstrate proficiencies such as spelling aloud or reciting poetry, and secondary students, by answering questions about government or literature.
Electronic portfolios
An e-portfolio is a digitized collection of documents and resources that represent an individual’s achievements. The user can manage the contents, and usually grant access to appropriate people. Currently, there are a variety of e-portfolio types with varied functionality. E-portfolios are increasingly being used for coursework and other assessment purposes.
While electronic portfolios exist today, very few, if any solutions have reached their full potential. Administrators want a tool that allows them to aggregate student results for accreditation audits and other institutional assessments. Principals, deans, and department chairs want a tool that lets them assess program effectiveness via student work. Namely, they want to see if students can achieve program objectives, and, if not, where the department, college, or school falls short. Instructors, advisors, and counselors want to assess student performance and to guide students through the learning process over time. This could be throughout a four-year period at a university, or during a particular degree program. Finally, students want to be able to bridge to careers by using electronic portfolios to demonstrate their skills, knowledge, and attitudes that pertain to job opportunities.
Emerging technology will enable us to make such a tool, or a collection of tools, and integrate them with other infrastructure pieces that improve workflow. For example, students transferring from a two-year community college to a four-year university can use an electronic portfolio to demonstrate required competencies. By this means a student can avoid taking unnecessary classes, and advisors can help the student plot a course after a quick review of the materials and reflections.
Some of the challenges raised by this idea revolve around the electronic portfolio process, rather than the tool or tools. For instance, organizations may need to clarify what constitutes evidence of competence or even what learning objectives and prerequisites are critical in a particular field. Electronic portfolios may very well inspire changes to long-standing articulation agreements that will not work in the future.
The Learning Environment and E-learning 2.0
Whether a classroom is on ground or online, for the learning environment to be stimulating, reinforcing, easy to access, relevant, interactive, challenging, participatory, rewarding, and supportive, it should provide input, elicit responses, and offer assessment and feedback. In an online learning environment, these elements are even more critical because learners are working outside of the usual classroom social environment.
The Internet itself has always had the capacity to be a learning medium. Services such as Google and Wikipedia are probably used more frequently as learning tools than any formal courses or learning management systems. Web 2.0 provides new opportunities for learners through participation and creation. In a 2.0 course, instructors will no longer be able to rely simply on presenting material; they will be involved in a mutually stimulating, dynamic learning environment.
E-learning 2.0 is the application of the principles of Web 2.0 to learning. Through collaboration and creation, E-learning 2.0 will enable more student-centred, constructivist, social learning with a corresponding increase in the use of blogs, wikis, and other social learning tools.
Rosen (2006) offers a perspective on what 2.0 courses would look like: they “should never be a hodge-podge assembly of old methodologies delivered through new technologies. They should be a true ‘2.0 course,’ rather than a self-propelled PowerPoint presentation or CBT training presented on a PDA. 2.0 courses provide just-in-time training. They are used as a resource—not a one-time event. A 2.0 course lasts 15 to 20 minutes, runs smoothly on any configuration of device (high resolution, portable) or PDA, and delivers smoothly on all versions of web browsers. Finally, 2.0 courses incorporate the best-of-breed techniques from web design and instructional design” (p. 6).
The term e-learning
Distance learning, distributed learning, online learning, e-learning, virtual learning, asynchronous learning, computer supported collaborative learning, web-based learning . . . these are a few of the many terms used to describe learning in environments in which students and instructors are not physically present in the same location. In burgeoning fields, it is commonplace that a variety of terminology is used to describe a new phenomenon. Clark and Mayer (2003) chose the word e-learning and described its functionality:
[T]he “e” in e-learning refers to the “how”—the course is digitized so it can be stored in electronic form. The “learning” in e-learning refers to the “what”—the course includes content and ways to help people learn it—and the “why”—that the purpose is to help individuals achieve educational goals. (p. 13)
The term e-learning, as well as some of the other terms, will eventually disappear. Electronic delivery will become just one of the options which we will consider to optimize learning for people.
Broadband
What we call broadband today is just a beginning of the kind of network access we will see in the future. Universities are connected by a fibre optic network that works up to 10 gigabits/second. That is 10,000 times faster than the typical broadband download of 1 megabit/second. A next generation of broadband will enable speeds 10 times greater than we have now, making possible downloads of high-definition movies and TV shows, VoIP, video telephony, full-resolution streamed video and audio, and the creation of unimagined learning environments.
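Rough arithmetic makes the comparison tangible. Assuming a 5 GB high-definition movie, the sketch below computes the download time at a typical 1 Mbit/s connection versus a 10 Gbit/s research network.

```python
# Rough arithmetic behind the speed comparison: download time for a
# 5-gigabyte high-definition movie at typical vs. research-network speeds.
FILE_BITS = 5 * 8 * 10**9          # 5 GB expressed in bits

for label, bits_per_second in [("1 Mbit/s", 10**6), ("10 Gbit/s", 10 * 10**9)]:
    seconds = FILE_BITS / bits_per_second
    print(f"{label}: about {seconds / 3600:.1f} hours" if seconds > 3600
          else f"{label}: about {seconds:.0f} seconds")
# 1 Mbit/s: about 11.1 hours; 10 Gbit/s: about 4 seconds
```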
Learning management
E-learning 2.0 will be a challenge for learning management systems (LMS, also called course management systems). At the time of this writing, most LMS solutions are designed for Web 1.0, with minimal capability for a fully functioning interactive environment. Nevertheless, LMS vendors will gradually incorporate Web 2.0 capabilities. At this time, education LMS solutions are ahead of corporate solutions in this respect. In the immediate future, LMS solutions will continue to be primarily administrative tools and only secondarily real learning tools. Users will be challenged to find ways to use them so that they facilitate learning. For more information on learning management systems, see Chapter 7, Learning Management Systems.
Eventually, we will be able to find almost anything online. Ten years ago, a colleague said that everything current and worthwhile was already online. This is more true now with Project Gutenberg and Google Books putting libraries of books online, universities making their course materials available (e.g., MIT’s Open CourseWare), communities creating knowledge repositories with wikis, and blogs making almost everyone’s opinions available whether we want them or not.
The challenge will be for learners (all of us) to manage information overload. Much of this will happen beyond the scope of any locally installed learning management system. Google and other search engines will evolve to provide tools for people to manage it all.
Content will be organized as reusable learning objects, much as they are in learning content management systems but on a much broader scale. Wikis and folksonomies may help solve this. Simply put, a folksonomy is a collaborative method of categorizing online information so that it can be easily searched and retrieved. More commonly, it is called tagging. This term is often used in websites where people share content in an open community setting. The categories are created by the people who use the site. To see how tagging operates, go to sites such as Flickr or Del.icio.us. Learning object repositories such as ARIADNE and learning object referratories such as MERLOT facilitate the exchange of peer-reviewed learning materials in a more structured way.
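At its simplest, a folksonomy is just an index from user-chosen tags to resources. The sketch below shows that structure; the URLs and tags are invented, and real services add per-user tag clouds, tag suggestions, and popularity counts.

```python
# Minimal sketch of a folksonomy: users attach free-form tags to resources,
# and retrieval is just a lookup of everything carrying a given tag.
from collections import defaultdict

tag_index = defaultdict(set)  # tag -> set of resource URLs

def tag(resource, *tags):
    for t in tags:
        tag_index[t.lower()].add(resource)

def find(tag_name):
    return sorted(tag_index.get(tag_name.lower(), set()))

tag("http://example.edu/photo1", "volcano", "geology", "fieldtrip")
tag("http://example.edu/notes2", "geology", "week3")
print(find("geology"))  # both resources come back
```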
Personalization and context-aware devices such as GPS (global positioning system) units will also help. Personalization is the ability of a website to adapt to its users, like Amazon.com does when it suggests other books you may like, or for the user to adapt the website for his or her own purposes like Google does when it allows you to customize what you see on its website. RSS feeds are a way of personalizing information you receive from the Internet. GPS units can locate the user so that information can be customized for that location. For example, a user who lives in Chicago but is visiting New York would receive weather information for New York.
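Location-aware personalization can be sketched as a preference for the detected location over the profile's home city. The weather strings below are placeholders; a real service would query a weather API using coordinates reported by the GPS unit.

```python
# Sketch of location-aware personalization: the same request is answered
# with content for wherever the learner currently is, not their home city.
# The weather values are placeholders.
WEATHER = {"Chicago": "3°C, windy", "New York": "8°C, light rain"}

def personalized_weather(home_city, current_city):
    # Prefer the detected (e.g., GPS-reported) location over the profile.
    city = current_city or home_city
    return f"Weather for {city}: {WEATHER.get(city, 'no data')}"

print(personalized_weather(home_city="Chicago", current_city="New York"))
```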
There are, however, some barriers to the adoption of these emerging technologies. While learners may embrace them, it may take longer for institutions and corporations to adopt and implement them. Administrative policies as well as an organization’s culture can slow down or halt their adoption. Some policy makers may misunderstand the usefulness of these technologies in teaching and learning. As learners adopt new technologies, they will take more control over their own learning, which may challenge the status quo. This may gradually influence corporations and institutions to accept this new paradigm of learning. The consequences of not serving the needs of learners to keep up-to-date with these new ways of learning challenge the relevance of formal training and learning in our organizations.
Perceptions about the quality of certain technology-mediated instructional activities or environments provide additional challenges. As a prime example, the US-based College Board questions “whether Internet-based laboratories are an acceptable substitute for the hands-on culturing of gels and peering through microscopes that have long been essential ingredients of American laboratory science” (Dillon, 2006, para. 3). While emerging technologies allow us to extend nearly unlimited possibilities to those who previously did not have access to them, there may always be a group of people who feel online instruction cannot replace direct experience. Who would not want to see lions and zebras in their natural habitat in Africa instead of going to a zoo or watching a video clip online? Similarly, if it were possible to set up expensive chemistry labs in every school or college, then the virtual environments would not be necessary. They would only serve as a way to refresh knowledge, rather than to obtain it. An alternate solution may be to allow students to learn virtually, but to require them to demonstrate proficiencies in person as appropriate (e.g., before moving to a certain level of difficulty).
Intellectual property (IP) rights and digital rights management will be major challenges. Short-sighted, large corporations who expect to profit from sales (particularly in the entertainment sector) will fight widespread distribution of their product. Solutions like Creative Commons licensing will become the new way of doing business. See Chapter 15, Understanding Copyright.
1.6 Summary
“Web 2.5, Web 3.0, Web 4.5, Web n: whatever it is, I’m enjoying the ride. The pieces are coming together. Glue, indeed.” (Cross, 2006).
Traditional teaching and learning methods and institutions will not go away. They will still be necessary to provide research-based knowledge, structure, and social context for learning. The new technologies will not replace traditional learning but complement it. The history of technology shows us that few technologies replace previous technologies; instead they emerge to coexist and complement them. Television did not kill radio or movies. The Internet has not replaced books. The new technologies discussed in this chapter will be used primarily for extending the ability to create, communicate, and collaborate.
Create
With Web 1.0, almost everyone was a consumer. Only technology wizards had the power to create. Now that online technologies have advanced, Web 2.0 enables almost anyone to be a producer as well as a consumer. Pushing this to education, Web 2.0 tools such as blogs and wikis create a level playing field, where faculty, parents, and even students compete with vendors to produce educational content. Going beyond Web 2.0, technology will raise the bar yet again so that everyone can produce educational activities and assessment strategies that incorporate or go beyond the static content.
With this new equality, we face some familiar challenges. Web 1.0 brought us information overload. It still is not easy for everyone to consistently and quickly find the information they seek online. The same holds true for Web 2.0 information, if not more so, since there are so many more information providers. As the quantities of both producers and products grow, quality becomes more difficult to distinguish as well. Instructors today do their students a great service by asking them to consider validity, reliability, and bias of online information. Looking forward to Web 2.5, Web 3.0, and beyond, we will rely on context-sensitive searching, intelligent searching, peer review ratings, and content expert review ratings to separate the digital chaff from the digital wheat. Finding instructional content and activities to meet almost any learning objectives will continue to become easier, but finding quality instruction will take more effort.
Communicate
In many countries around the world today, communication by cell phones is ubiquitous. Trends in mobile and social computing will make it possible for learners to create and interact with learning communities. For example, using course rosters as “buddy lists” in connection with wireless, mobile devices such as personal digital assistants (PDAs), students will be able to identify if their peers are nearby on campus. Someone in a large section class with more than 100 students will be able to use technology to create a sense of community. The social computing phenomenon will move beyond using static Web pages to share party pictures with peers to using digital storytelling to share competencies with future employers. Instead of smart mobs protesting a political decision, “smart study groups” will form to prepare for quizzes or to provide feedback about written assignments before submitting them for a grade.
Communication challenges in education will include infrastructure, resources, and freedom of speech. Maintaining an adequate communication infrastructure for learning means setting up wireless networks throughout a campus or even throughout a metropolitan area. This work is expensive, labour intensive, and requires a great deal of planning. Educational organizations do not always have the right amount of resources to keep communications running smoothly. Chapter 26, Techno Expression, covers bridging the gap between allowing freedom of expression and setting boundaries to restrict inappropriate behaviour. Despite the power of emerging technologies in education, this balance is difficult to achieve.
Collaborate
With both current and emerging technologies, people sometimes collaborate without the intention or knowledge of doing so. Mashups, for instance, require multiple parties to play a role, but only the person who creates the final product really knows what pieces were required to make it work. Even people who make APIs to enable others to use their tools do not know how they will be used. The makers of Google Maps probably did not predict WeatherBonk (http://weatherbonk.com), a popular mashup that lets people view real-time weather on top of a detailed satellite map. Similarly, wikis require contributions from several parties to be successful. The strength of Wikipedia is in the number of people who contribute ideas and who police the site. For evidence of the power of collaboration, note the number of Wikipedia references in this collaboratively written book!
The future of collaboration involves repurposing the emerging technologies to meet educational goals. Instead of weather map mashups with live webcams, we will see underground railroad map mashups with links to writings from former slaves and re-enactments. Students in certain cities could see whether their neighbourhood had any homes that helped slaves escape to the Northern states.
Collaboration poses its own challenges. If not facilitated well, it can devolve into anarchy or, at the very least, into the specter of unmet potential. While constructivist theory has become more popular, completely unguided group learning can lead to large groups of people who collaboratively teach each other with misinformation and groupthink. Facilitating educational collaboration requires both structure and flexibility. You can provide structure by defining expectations, writing clear instructions, setting deadlines for each assignment or project component, and being consistent in how you facilitate online collaboration. You can provide flexibility by allowing students to take turns moderating online discussions, giving students choices about which project they pick or which group they join and being willing to move in new directions that emerge during the collaborative exchanges.
Teaching and learning still rely on people—expert learners and beginning learners—more than technology.
Learning Outcomes
• Demonstrate the knowledge and understanding of the emerging issues of diversity for online learning.
• Explain different definitions of diversity with references from literature.
• Identify the different parameters of diversity.
• Analyze different learner characteristics and their online behaviour.
• Prioritize different parameters of diversity according to their importance for designing online courses.
• Design learning environments to sustain motivation in online courses.
“In the life of the human spirit, words are action, much more so than many of us may realize who live in countries where freedom of expression is taken for granted. The leaders of totalitarian nations understand this very well. The proof is that words are precisely the action for which dissidents in those countries are being persecuted”. – Carter (1977)
The world is shrinking rapidly. The Internet has brought the world together in ways that nobody would have expected. You can now attend a college halfway around the world, with students from any country with Internet access. People will telecommute to their jobs more in the future, while their companies compete globally (elearners.com). Many countries around the world are experiencing increasing diversity amongst their populations (Wentling & Palma-Rivas, 2000). While this is having a major impact on organizations within the business sector (Thomas, 1995), higher education institutions are also feeling the effects of increasing diversity within student populations (Smith, 1995). The last decade in particular has seen an increasing trend towards globalization (Farrell, 2001), especially with the introduction of the World Wide Web and the Internet. As a result, the tertiary education landscape has changed considerably as institutions seek new and innovative ways to meet the needs of a growing and increasingly diverse student population (Rumble & Latchem, 2004). Online learning, or e-learning, is an increasingly popular method being used by institutions to meet the requirements of the changing learning landscape (Dimitrova, Sadler, Hatzipanagos & Murphy, 2003).
4.2: Diversity
Within any group of people there will be many aspects of diversity. Whether the focus of investigation is a sports team, a school class, a work group within an organization, or a group of online learners, these groups are made up of individuals who differ on at least some dimensions of diversity (Maznevski, 1994). While many would acknowledge that no two persons are alike in every respect and therefore can be regarded as diverse relative to each other, it is the similarities between some specified group of people and differences to other groups that has been the focus of much research on diversity (Cox, 1993; Hofstede, 2004; Thomas, 1995; Triandis, 1995b). Indeed it is this ability to identify meaningful distinctions that make diversity a useful and extensively studied concept (Nkomo, 1995).
4.3: Defining
That diversity is a complex issue is reflected in the difficulty in defining what diversity is (Smith, 1995). In order to make some sense of the countless potential sources of diversity among groups of people numerous definitions have arisen. Within organizations diversity is “typically seen to be composed of variations in race, gender, ethnicity, nationality, sexual orientation, physical abilities, social class, age, and other such socially meaningful categorizations” (Ferdman, 1995, p. 37). In other words diversity measures are assumed to capture a perception of similarities and differences among individuals in a group or organization (Wise & Tschirhart, 2000).
Wentling and Palma-Rivas (2000) point out that there are many definitions of diversity that range from narrow to very broad. Narrow definitions of diversity tend to focus on observable or visible dimensions of difference (Milliken & Martins, 1996) which Lumby (2006) asserts are likely to evoke bias, prejudice, or the use of stereotypes leading to disadvantage. These include ethnicity, race, gender, disability, and age. Indeed much of the organizational diversity research has tended to focus on the identification of differences between the cultural majority and particular minorities in the workplace with regard to race, culture, and gender (Thomas, 1995). As a result of this somewhat narrow focus some argue that the term diversity should only pertain to particular disadvantaged groups (Wise & Tschirhart, 2000). A direct consequence of this is the current politicised nature of the discussion which has seen diversity become synonymous with affirmative action where diversity is seen as a means of fostering the recruitment, promotion, and retention of members of a particular group (Thomas, 2006).
Not all agree with this view and argue that the definition of diversity is much broader and is continually changing and evolving (Smith, 1995). Broader meanings of diversity tend to encompass a greater variety of characteristics that are not immediately observable or public. These include dimensions such as educational background, national origin, religion, sexual orientation, values, ethnic culture, education, language, lifestyle, beliefs, physical appearance, economic status, and leadership style (Cox, 1993; Lumby, 2006; Thomas, 1995, 1996; Wentling & Palma-Rivas, 2000). Still others take account of additional dimensions such as political views, work experience/professional background, personality type and other demographic socioeconomic, and psychographic characteristics (Gardenswartz & Rowe, 1998; Thomas, 1995; Wise & Tschirhart, 2000).
Maznevski (1994) differentiates between two main types of diversity characteristics, namely, role-related diversity such as occupation, knowledge, skills, and family role; and inherent (to the person) diversity such as gender, age, nationality, cultural values, and personality. In contrast, McGrath, Berdahl & Arrow (1995) developed a more comprehensive framework of diversity attributes using clusters.
What these different definitions highlight is the breadth and variety of understanding of what diversity is and can encompass. Thomas’ (1996, pp. 5–8) definition of diversity is an attempt to reflect this broadness as well as acknowledge that any discussion about diversity must make explicit the dimensions being explored. He defines diversity as “any mixture of items characterized by differences and similarities”. Key characteristics of diversity include:
• Diversity is not synonymous with differences, but encompasses differences and similarities.
• Diversity refers to the collective (all-inclusive) mixture of differences and similarities along a given dimension.
• The component elements in diversity mixtures can vary, and so a discussion of diversity must specify the dimensions in question.
A significant diversity dimension that has received considerable attention and research is that of culture (Cox, 1993; Hofstede, 2004; Triandis, 1994). Much of the drive for this has come from the increasing types and degrees of diversity occurring within organizations in an increasingly globalized marketplace and the need to manage this process to achieve effective functioning of work groups (Maznevski, 1994).
Historically the definition of culture has been contentious, resulting in numerous definitions by researchers (Erez & Earley, 1993; Triandis, 1996). Shweder and LeVine (1984) and D’Andrade (1984) defined culture as a shared meaning system within a group of people. Hofstede (1980), on the other hand, described culture as a set of mental programs that control an individual’s responses in a given context. Still others (Triandis, 1972; 1995b) have viewed it as consisting of shared elements of subjective perception and behaviour where the subjective aspects of culture include the categories of social stimuli, associations, beliefs, attitudes, norms, and values, and roles of individuals who share a common language and live during the same historical time period in a shared geographical location. Triandis (1996) also identified subjective culture as being a function of the ecology (terrain, climate, flora and fauna, natural resources) linked to the maintenance system (subsistence and settlement patterns, social structures, means of production) within which it is situated.
Even though there are multiple definitions, most agree that culture consists of shared elements “that provide the standards for perceiving, believing, evaluating, communicating, and acting among those who share a language, a historic period, and a geographic location” (Triandis, 1996, p. 408). It’s important to note that most countries consist of hundreds of cultures and subcultures (Triandis, 1995b) and that culture is not synonymous with nations, although it is often discussed this way in the literature (Erez & Earley, 1993).
One of the most widely used and quoted studies on culture is the seminal work of Hofstede (1980; Hofstede, 2001), which studied cultural differences in a large multinational organization with data from more than 40 countries. He developed a five-dimensional model that took account of cultural variation in values. According to this research, the five dimensions on which cultures vary are power distance, uncertainty avoidance, individualism versus collectivism, masculinity versus femininity, and long-term versus short-term orientation.
Power distance describes the way in which members of the culture accept inequality of power, that is, the unequal sharing of power; uncertainty avoidance reflects the degree to which a culture emphasizes the importance of rules, norms, and standards for acceptable behaviour; individualism versus collectivism relates to the degree to which individuals are integrated into primary groups or in-groups (Triandis, 2001); masculinity versus femininity refers to the division of roles based on gender; and long-term versus short-term orientation highlights the predominant focus of people within the group, namely the future or the present (Hofstede, 2001, p. 29). Of these five dimensions, most of the variance in the data was accounted for by the individualism and collectivism (I-C) dimension. Since the publication of the original work in 1980, a multitude of research and theory has focused on the I-C dimension (Church, 2000; Triandis, 2004).
Triandis (1995b) defines individualism as “a social pattern that consists of loosely linked individuals who view themselves as independent of collectives; are primarily motivated by their own preferences, needs, rights, and the contracts they have established with others; give priority to personal goals over the goals of others; and emphasize rational analyses of the advantages and disadvantages to associating with others”. Collectivism on the other hand is “a social pattern consisting of closely linked individuals who see themselves as parts of one or more collectives (family, co-workers, tribe, nation); are primarily motivated by the norms of, and duties imposed by, those collectives; are willing to give priority to the goals of these collectives over their own personal goals; and emphasize their connectedness to members of these collectives” (p. 2). These differences can be summarised as:
• A sense of self as independent versus a self that is connected to in-groups. Markus and Kitayama (1991) view this as the independent versus the interdependent self-construal.
• Personal goals have priority versus group goals have priority
• Social behaviour guided by attitudes, personal needs and rights versus social behaviour guided by norms, obligations, and duties (Church, 2000; Triandis, 1995b)
In addition to these general contrasts the following attributes tend to be reflective of the I-C dimension (see Table 4.1).
It is important to note that to this point the terms individualism and collectivism and the corresponding attributes refer to the cultural level where the unit of analysis is the culture (i.e., between-culture analyses) and individualism is the opposite of collectivism (Hofstede, 1980). To make the distinction between the cultural and individual level of analysis (i.e., within-culture analyses), Triandis, Leung, Villareal & Clack (1985) used the terms idiocentrism and allocentrism (I-A), which describe individual personality attributes (Triandis and Suh, 2002, p. 140).
Table 4.1: Attributes of individualist and collectivist cultures

| Attributes | Individualist | Collectivist |
|---|---|---|
| Self-perception | individual | group |
| Attributions | internal causes | external causes |
| Prediction of behaviour | more accurate when based on internal dispositions such as personality traits or attitudes | more accurate when based on social roles or norms |
| Identity & emotions | ego-focused | relationships & group membership; other focused |
| Motivation | emphasize abilities | emphasize effort |
| Cognition | see themselves as stable and the environment as changeable | see their environment as stable and themselves as changeable/flexible |
| Attitudes | self-reliance, hedonism, competition, emotional detachment from in-groups | sociability, interdependence, family integrity |
| Norms | curiosity, broadminded, creative, having an exciting and varied life | family security, social order, respect for tradition, honouring parents and elders, security and politeness |
| Social behaviour | personality more evident | influenced by behaviour and thoughts of others; shifts depending on context |
| Attitudes towards privacy | personal business is private | personal business is also business of the group |
| Communication | direct; emphasizes content and clarity; frequent use of “I” | message is indirect and reliant on hints, eyes, bodies, etc.; emphasizes context and concern for feelings and face-saving; frequent use of “we” |
| Conflict resolution | more direct | obliging, avoiding, integrating, & compromising styles |
| Morality | prefer attitudes and behaviour to be consistent | contextual and focused on welfare of the collective; linked to adherence to many rules |
| Responsibility | individual | collective |
| Professional behaviour | promotion based on personal attributes | promotion on the basis of seniority & loyalty |
Idiocentrics emphasize self-reliance, competition, uniqueness, hedonism, and emotional distance from ingroups. Allocentrics emphasize interdependence, sociability, and family integrity; they take into account the needs and wishes of in-group members, feel close in their relationships to their in-group, and appear to others as responsive to their needs and concerns.
At the individual level of analysis, idiocentrism and allocentrism are often orthogonal to each other, meaning that individuals can and often do exhibit attributes of both. In addition, idiocentrics and allocentrics are found in all cultures (Triandis & Suh, 2002). It has also been found that idiocentrism tends to increase with affluence, leadership, education, international travel, and social mobility; is more likely if migration to another culture has occurred; and is more likely in cases of high exposure to Western mass media. Allocentrism is more likely if individuals are financially dependent, of low social class, have limited education, have undertaken little travel, were socialized in a traditionally religious environment, and were acculturated in a collectivist culture (Triandis & Trafimow, 2001, cited in Triandis, 2006). Additionally, allocentric and idiocentric attributes are dependent on context (Triandis, 1995a). Triandis (2006) also notes that globalization is essentially compatible with individualism and idiocentrism. This has the effect of complicating the discussion about I-C cultures and, in turn, the discussion on diversity.
Ferdman (1995) also discussed the gap between group differences and individual uniqueness using the concept of cultural identity. He argued that “culture is by definition a concept used to describe a social collective” (p. 41) but that values, norms and behaviours ascribed to a particular culture are expressed by individuals who vary in their image of the group’s culture as reflected in individual-level constructions. In other words, diversity does not just apply to differences between groups but also to within-group differences, and the “concept of cultural identity suggests that simply having some representatives of a particular group may not adequately reflect the full range of diversity” (p. 56). Cox (1993) argues that many individuals belong to multiple groups and that group identity develops when there is an affiliation with other people who share certain things in common. Indeed “various group identities play a part in how we define ourselves” (p. 43) and how we behave as individuals. The growing recognition that globalization is giving rise to more multicultural or complex hybrid identity development of young people is a case in point (Lam, 2006). This in turn “shifts our understanding of culture from stable identities, categorical memberships, and holistic traits to ways of acting and participating in diverse social groups and the heterogeneous sets of cultural knowledge, skills, and competence that are required in the process” (p. 217).
While some have warned against describing both cultural and individual characteristics using a broad dichotomy such as I-C (Church & Lonner, 1998), and have noted that different selves are accessible in different contexts (Trafimow, Triandis & Goto, 1991), given the accumulated research in this area and the continuing dominance of the I-C dimension, it seems an appropriate and valid dimension to consider when attempting to address issues of diversity in the online learning environment.
In the design of online learning environments, it is not only the diversity among people that is of utmost importance; it is also the diversity among available resources and technologies, subject areas, methods of assessment, and the capabilities of both faculty and students to handle the technologies, as well as their expectations of each other and of the course (Bhattacharya and Jorgensen, 2006).
In reality, all the parameters or aspects of diversity are intermingled and intertwined with each other. The ideas or solutions cannot be presented as stand-alone responses to a single aspect of diversity; they are as complex and interlinked as a kaleidoscope, with the pieces connected to all the other pieces and to the whole. They interact with one another, and in that interaction change the dynamics. Make one small twist on the kaleidoscope, and the pieces shift into another pattern (Thomas and Woodruff, 1999). Therefore, knowledge about diversity and the related issues is useful for developing online learning environments, but it is not enough to design courses which will suit individual needs, expectations, interests, and so on. There are definitely no simple solutions or ideal conditions for designing online courses to address the issues of diversity.
In the following section we have identified some of the design principles for creating online learning environments to cater to diversity and discussed some of the innovations we have tried in this regard. Our motto is to “address diversity through variety”.
4.6: Learning
The success or effectiveness of any learning environment design is judged by students’ satisfaction and success rates. Both satisfaction and success depend on sustaining interest and motivation for learning. Much research is needed to identify the different motivating factors for learning and the strategies for sustaining learners’ motivation in online courses. Most online courses are attended by students who are busy professionals or who do not have access to face-to-face education. These students are highly motivated to learn, although they have different motivations or objectives for learning. So our challenges are to sustain students’ motivation in the online environment, provide challenges, provide support, and facilitate learning. One of the primary aspects of sustaining interest in online courses is to provide opportunities for interactions. People are, above all else, members of social groups and products of the historical experiences of those groups (Wood, 2004).
Some of the basic principles of instructional (interaction) design are:
• Design and use learning activities that engage students in active learning.
• Provide meaningful and authentic learning experiences that help learners apply course concepts and achieve course objectives.
• Use strategies that consider the different learning styles of students.
The teacher as the leader and designer of the learning environment must possess and inculcate fundamentals of embracing diversity (Sonnenschein, 1999) which include:
• Respect—for others, for differences, for ourselves.
• Tolerance—for ambiguities in language, style, behaviour.
• Flexibility—in situations that are new, difficult and challenging.
• Self-awareness—be sure you understand your reactions and know what you bring to the diverse workplace (learning environment).
• Empathy—to feel what someone different from you might be feeling in new and strange surroundings.
• Patience—for change that can be slow, and diversity situations that might be difficult.
• Humour—because when we lose our sense of humour, we lose our sense of humanity, as well as perspectives (p. 9).
The instructor or designer has to be creative and use several different activities and forms of interactivity to engage students and enhance their learning experience in an online course. This can be done through the introduction of case studies, reflective journals, research reports, e-portfolios, wikis, blogs, podcasts, simulations and games, authentic group projects using problem-based or inquiry-based learning, tests, quizzes, synchronous chat and asynchronous discussion forums, audio- and videoconferences via the Internet, etc. The instructor will have to develop strategies and techniques for establishing and maintaining learning communities among distance learners through the use of learning technologies. This will help to overcome the isolation that students can experience when taking an online course and also provide opportunities for collaboration and sharing knowledge and expertise.
We have conducted online collaborative problem-based learning for distance students. It was initially very difficult for the students to adapt to the new learning environment. By the end of the course, students realized that much learning had taken place by working in collaborative groups and participating in synchronous and asynchronous interactions using Internet tools. Student reflections revealed that the learning environment allowed them to choose their own problem to work on. They could schedule their work in negotiation with other group members. Students felt a sense of ownership of their work. Some students indicated that they were so involved in finding solutions for a problem or resolving an issue that at times they forgot that they were doing the activities for a course assignment. Assessment was done for the acquisition of higher-order cognitive skills, e.g., critical thinking, decision-making, reflection, problem solving, and scientific and research skills. Self-reflection, peer assessment, and feedback are also part of the peer-based learning process. In the process students also acquired valuable social and interpersonal skills through collaborative activities (Bhattacharya, 2004).
We have introduced e-portfolios in various courses and programs over the years. E-portfolios allow students to integrate and identify the links between the various activities they do in and outside of their formal education. Students can bring in their personal experiences and demonstrate how they have applied the knowledge and skills acquired in actual practice through e-portfolios. Developing e-portfolios and reflecting on the activities allow students to learn about their strengths, weaknesses, and interests, and provide them with direction for the future. It also provides opportunities for teachers to learn about their students: their motivations, their previous experiences, their background, their skills, their attitudes, etc. Students can personalize their learning and develop communication, organization, presentation, and design skills through the development of e-portfolios (Bhattacharya, 2006).
In recent times we have used a combination of freeware for conducting interactive sessions in our online courses. Students were consulted before combining and using the technologies. A quick survey revealed the pros and cons of different technologies. Students and faculty agreed upon a set of tools which would work for them. The process of selecting tools (particularly the criteria for selection, preferences, and justifications for using particular tools) provided useful data for identifying tools and technologies to mash up to suit different purposes. Examples include Skype, Googledoc, Googlechat, or Skypechat for collaborative group assignments in an online and distance education course. WebCT discussion forums were used for asynchronous interactions among group members. In this course all the synchronous interactions were recorded for future reference and feedback.
4.7: Conclusion
In this chapter we have discussed different approaches to designing online courses to address the issues of diversity where diversity is viewed as a strength to be exploited rather than a problem to be solved.
We envisage that in the near future mashups of different technologies will be easier, and students will be able to create their own learning environment by dragging and dropping different tools into one common platform, and access their personalized learning environment with one login.
The online learning environment should be flexible with respect to time and pace of learning. It should provide different forms of active learning and ways of assessment, and give control and choices to the learner. It should allow for the synthesis of formal, informal, and non formal learning to address the issues of diversity.
There is a major issue in that everyday informal learning is disconnected from the formal learning that takes place in our educational institutions. For younger people there is a danger that they will increasingly see school as a turn off—as something irrelevant to their identities and to their lives. Personal learning environments have the potential to bring together these different worlds and inter-relate learning from life with learning from school and college (Pontydysgu, 2007).
Social software and Web 2.0 technologies are increasingly allowing people to create their own learning environments, creating and publishing material, sharing ideas with people, and receiving feedback from not only the teacher or peers but from anyone, anywhere. Our future online courses will have to be dynamic and process-oriented to address the fast-changing nature of the electronic age.
More research, innovation, and developmental work are needed to cater to the demands of future learners. We need to work on developing theories of e-learning to guide teachers and developers of online learning environments (Bhattacharya, 2007). In future students will develop their own personalized learning environments and build their learning communities. Students will be equal partners with teachers in designing assessment activities. Students will have the freedom and right to choose how and when they would like to be assessed.
Learning Outcomes
• Trace the history of instructional technologies in education.
• Select the best emerging technologies in e-learning.
• Develop design guidelines for learning materials to be delivered via emerging technologies.
• Provide support for learners taking courses at a distance using emerging technologies.
• Identify trends in e-learning and emerging technologies.
Learners, educators, and workers in all sectors are increasingly using emerging technologies such as cell phones, tablet PCs, personal digital assistants (PDAs), web pads, and palmtop computers. As a result, these tools make learning and training materials accessible anywhere, anytime.
Today, the trend is towards learning and working “on the go”, rather than having to be at a specific location at a specific time. As learners become more mobile, they are demanding access to learning materials wherever they are and whenever they need them. This trend will increase because of ubiquitous computing, where computing devices, wireless connectivity, and transparent user interfaces are everywhere.
Educators must be prepared to design and deliver instruction using these emerging technologies. In addition to delivering learning materials, emerging technologies can be used to interact with learners, especially those who live in remote locations. At the same time, learners can use the technologies to connect with each other to collaborate on projects and to debate and discuss ideas.
This chapter provides a brief history of technology in education, outlines the benefits of using emerging technologies in e-learning, provides design guidelines for developing learning materials, describes the support required for these technologies, and discusses future trends in e-learning.
6.2: The History of Instructional Technologies
In the early ages, before formal schools, family members educated younger members with one-to-one coaching and mentoring. Early instructional technologies were sticks to draw on the ground and rocks to draw on walls. Information was not recorded permanently. With the invention of paper and the printing press, information was recorded, and learners could refer to documents as needed for learning. The paper revolution was followed much later by the invention of computer hardware and the software that makes computers do what we want, including developing electronic learning materials.
In the early 1960s, these learning materials were designed and developed on mainframe computers. In the 1970s, computer-based training systems used minicomputers to teach. With the invention of the microcomputer in the late 1970s and early 1980s educators and learners had more control over the design and delivery of learning materials. As learners determined for themselves what they wanted to learn, the instructor’s role changed from that of a presenter of information to that of a facilitator. The microcomputer revolutionized the way educational materials were developed and delivered. The instructor was able to design learning materials using authoring systems, and learners were able to learn when and where they wanted.
Rumble (2003) identified four generations of distance education systems: correspondence systems; educational broadcasting systems; multimedia distance education systems; and online distance education systems. In early distance education learning materials were mailed to learners and the learners mailed assignments back to the instructor. The first attempt to use computers for instruction was by the military, who designed instruction to train military staff. About the same time, educational institutions started to use broadcast television to deliver instruction to learners. With the invention of the microcomputer in the 1970s, there was a shift to microcomputer-based learning systems. Because the different microcomputer systems then in use did not communicate with each other, there was limited flexibility in developing and sharing learning materials. Also, the early microcomputer systems did not provide features such as audio, video, and special effects. As instructional technology improved, educators developed learning materials in less time and with more control over the product.
Until the late 1970s, educational institutions used face-to-face classroom instruction. This was followed by a shift to a more individualized format using self-study workbooks, videotapes, and computer software. As technology advanced, the group-based classroom mode shifted to the one-to-one mode of delivery. The combination of the Internet and mobile technology has moved e-learning to the next generation, allowing educators to design and deliver learning materials for learners living in remote locations, or who cannot attend face-to-face schools for other reasons. The available computing power of these technologies allows educators to better meet the needs of individual learners.
Because of the rapid development of information technology, there is a shift from print-based learning to elearning to mobile learning (m-learning). M-learning refers to the use of electronic learning materials with built-in learning strategies for delivery on mobile computing devices (Ally, 2004a). Mobile devices offer many benefits. Thanks to wireless technology, mobile devices do not have to be physically connected to networks to access information. They are small enough to be portable, allowing users to take the devices anywhere. Users can interact with each other to share information and expertise, complete a task, or work collaboratively on a project.
Benefits of emerging technologies for education:
• Education is scalable, since educational institutions do not have to build classrooms and infrastructure to hold face-to-face classes. To accommodate more learners, educational institutions need only expand the network and hire more instructors to facilitate additional courses.
• Electronic learning materials are easy to update. Because learners use their mobile devices to access the learning materials from a central server, they can receive these updates as soon as they are made.
• The same learning materials can be accessed by students from different regions and countries.
• Learners can complete their education from any location as long as they have access to the learning materials, possibly through a wireless connection.
• Because learners can access the learning materials anytime, they can select the time they learn best to complete their coursework. This increases the success rate in learning, and facilitates informal learning.
• Designers of learning materials for emerging technologies can leverage the computing power of the technology to personalize the learning experience for individual learners.
• Since learning with emerging technologies is learner-focused, learners will be more involved with their learning, and thus motivated to achieve higher level learning.
• For businesses, mobile learning can be integrated into everyday work processes, which promotes immediate application. The emerging technologies allow workers to access learning materials for just-in-time training.
• Because most learners already have mobile technology, educational institutions can design and deliver courses for different types of mobile technology (Ally & Lin, 2005).
Mobile technologies such as Blackberries, Treos, iPods, and cell phones are being used in the classroom and in distance education to reach out to students and to deliver learning materials to them. Instructors are taping their lectures and making them available for students to listen to whenever they like. Providing lectures and learning materials in audio format is important for some subject areas, such as language learning and English literature. Mobile technologies are also used to connect with students, informing them when course requirements are due and notifying them of updates to courses. Mobile learning technologies can be used in any discipline whose content can be broken down into small segments of instruction. This will allow students to complete one segment at a time. In addition to playing a support role in classroom instruction, mobile technologies can play a major role in distance education by delivering instruction anywhere and at any time. Books and course information will have to be formatted for reading on computer and mobile device screens. A good example of how this is being realized is the screen on the one hundred dollar laptop (OLPC, 2006): information on the screen can be read in daylight as well as in the dark. The small screens on mobile devices are becoming more advanced for reading, and with the development of the virtual screen, students will be able to project information and images onto a surface that is the same size as a regular computer screen.
However, before these benefits can be realized, the learning materials must be designed specifically for emerging technologies.
In developing learning materials for any technology, learning theories must be used for effective and efficient instruction. This section will address theories and design principles for emerging technologies.
Early learning materials development was influenced by behaviourist learning theory. Behaviourists claim that it is the observable behaviour of the learner that indicates whether or not they have learned, not what is going on in the learner’s head. Early instructional methods, such as the teaching machine, were influenced by behaviourist theory. The teaching machine taught by drill and practice, and transferred the repetitiveness of teaching from the instructors to the machine.
Cognitive learning theory influenced the development of learning materials with the introduction of computer-based instruction. Cognitive psychologists see learning as a process involving the use of memory, motivation, and thinking, and that reflection plays an important part in learning. Cognitivists perceive learning as an internal process and claim that the amount learned depends on the processing capacity of the learner, the amount of effort expended during the learning process, the quality of the processing, and the learner’s existing knowledge structure. Cognitive theory was influenced by information processing theory, which proposes that learners use different types of memory during learning.
As technology emerged, there was more emphasis on learner-centred education, which promoted the use of constructivist theory in the development of learning materials. Constructivists claimed that learners interpret information and the world according to their personal reality, and that they learn by observation, processing, and interpretation, and then personalize the information into their own worldview. Also, learners learn best when they can contextualize what they learn for immediate application and to acquire personal meaning. The learner-centred approach allows learners to develop problem-solving skills and learn by doing rather than by being told.
There are many instructional design models for developing learning materials. Dick et al. (2001) proposed a design model with the major components being design, development, implementation, and evaluation of instruction. Another widely used model is that of Gagné et al. (1991), who claimed that strategies for learning should be based on learning outcomes. Gagné specifies nine types of instructional events:
• gain the learner’s attention;
• inform the learner of the lesson objectives;
• stimulate recall of prior knowledge;
• present stimuli with distinctive features to aid in perception;
• guide learning to promote semantic encoding;
• elicit performance;
• provide informative feedback;
• assess performance; and
• enhance retention and learning transfer.
Most of the current and past instructional design models were developed for classroom and print-based instruction rather than for learner-centred instruction and e-learning. New instructional design models are needed to develop learning materials for delivery on emerging technologies.
According to Jacobs and Dempsey (2002), some emerging influences that will affect future instructional design include object-oriented distributed learning environments, the use of artificial intelligence techniques, cognitive science, and neuroscience. Below are guidelines for educators to develop learning materials for delivery via emerging technologies.
Tips and Guidelines
• Information should be developed in “chunks” to facilitate processing in working memory since humans have limited working memory capacity. Chunking is important for mobile technologies that have small display screens, such as cell phones, PDAs, etc.
• Content should be broken down into learning objects to allow learners to access segments of learning materials to meet their learning needs. A learning object is defined as any digital resource that can be re-used to achieve a specific learning outcome (Ally, 2004b). Learning materials for emerging technologies should be developed in the form of information objects, which are then assembled to form learning objects for lessons. A learning session using a mobile device can be seen as consisting of a number of information objects sequenced in a pre-determined way, or sequenced, according to the user’s needs. The learning object approach is helpful where learners will access learning materials just in time, as they complete projects using a self-directed, experiential approach. Also, learning objects allow for instant assembly of learning materials by learners and by intelligent software agents to meet learners’ needs.
• Use the constructivist approach to learning to allow learners to explore and personalize the materials during the learning process. Learning should be project-based to allow learners to experience the world by doing things, rather than passively receiving information, to build things, to think critically, and to develop problem-solving skills (Page, 2006). Mobile technologies facilitate project-based learning since learners can learn in their own time and in their own environments. For example, as learners complete a project they can use wireless mobile technology to access just in time information and the instructor as needed.
• Simple interfaces prevent cognitive overload. For example, graphic outlines can be used as interfaces and as navigational tools for learners. The interface should allow the learner to access learning materials with minimal effort and navigate with ease. This is critical for emerging technologies since some output devices are small.
• Use active learning strategies that allow learners to summarize what they learn and to develop critical thinking skills. For example, learners can be asked to generate a concept map to summarize what they learned. A concept map or a network diagram can show the important concepts in a lesson and the relationship between them. Learner-generated concept maps allow learners to process information at a high level. High-level concept maps and networks can represent information spatially, so learners can see the main ideas and their relationships.
• Learning materials should be presented so that information can be transferred from the senses to the sensory store, and then to working memory. The amount of information transferred to working memory depends on the importance assigned to the incoming information and whether existing cognitive structures can make sense of the information. Strategies that check whether learners have the appropriate existing cognitive structures to process the information should be used in emerging technologies delivery. Pre-instructional strategies, such as advance organizers and overviews, should be used if relevant cognitive structures do not exist.
• There should be a variety of learning strategies to accommodate individual differences. Different learners will perceive, interact with, and respond to the learning environment in different ways, based on their learning styles (Kolb, 1984).
According to Kolb, there are four learning style types:
1. Divergers are learners who have good people skills. When working in groups, they try to cultivate harmony to assure that everyone works together smoothly.
2. Assimilators like to work with details, and are reflective and relatively passive during the learning process.
3. Convergers prefer to experiment with, and apply new knowledge and skills, often by trial and error.
4. Accommodators are risk-takers, who want to apply immediately what they learn to real-life problems or situations.
Examples of strategies to cater for individual learning preferences include:
• Use visuals at the start of a lesson to present the big picture, before going into the details of the information.
• For the active learners, strategies should provide the opportunity to immediately apply the knowledge.
• To encourage creativity, there must be opportunities to apply what was learned in real-life situations so that learners can go beyond what was presented.
• The use of emerging technologies will make it easier to cater to learners’ individual differences by determining preferences, and using the appropriate learning strategy based on those preferences.
• Provide learners the opportunity to use their meta-cognitive skills during the learning process. Meta-cognition is a learner’s ability to be aware of their cognitive capabilities and to use these capabilities to learn. This is critical in e-learning, since learners will complete the learning materials individually. Exercises with feedback throughout a lesson are good strategies to allow learners to check their progress, and to adjust their learning approach as necessary.
• Learners should be allowed to construct knowledge, rather than passively receive knowledge through instruction. Constructivists view learning as the result of mental construction where learners learn by integrating new information with what they already know.
• Learners should be given the opportunity to reflect on what they are learning and to internalize the information. There should be embedded questions throughout the learning session to encourage learners to reflect on, and process the information in a relevant and meaningful manner. Learners can be asked to generate a journal to encourage reflection and processing. Interactive learning promotes higher-level learning and social presence, and personal meaning (Heinich et al., 2002).
Intelligent agents should be embedded in the technology to design instruction and deliver it based on individual learner needs. An intelligent agent gathers information about learners and then responds based on what it has learned about the student. For example, if a learner consistently gets a question on a concept wrong, the intelligent agent will prescribe other learning strategies until the learner masters the concept. As the user interacts with the system, the agent learns more about the learner. This is critical, as learning materials may be accessed by people globally. These agents can be proactive so that they can recognize and react to changes in the learning process. As the intelligent agent interacts with the learner, it gains more experience by learning about the learner and then determining the difficulty of materials to present, the most appropriate learning strategy based on the learner’s style, and the sequence of the instruction (Ally, 2002). The intelligent learning system should reason about a learner’s knowledge, monitor progress, and adapt the teaching strategy to the individual’s learning pattern (Woolf, 1987). For example, the intelligent system could monitor learning and build a best-practice database for different learning styles. It could also track common errors and prescribe strategies to prevent similar errors in the future.
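To make the adaptive behaviour described above more concrete, the following minimal sketch shows one way an agent could track errors per concept and escalate to an alternative strategy. The class name, strategy labels, and error threshold are illustrative assumptions for this example, not features of any system cited in this chapter.

```python
from collections import defaultdict

class SimpleIntelligentAgent:
    """Toy learner model: counts consecutive errors per concept and
    escalates to an alternative learning strategy when errors persist."""

    # Ordered fallback strategies tried as errors accumulate on a concept.
    STRATEGIES = ["text explanation", "worked example", "interactive simulation"]
    ERRORS_BEFORE_SWITCH = 2  # arbitrary threshold for this sketch

    def __init__(self):
        self.errors = defaultdict(int)  # concept -> consecutive error count

    def record_answer(self, concept, correct):
        """Update the learner model after each question."""
        if correct:
            self.errors[concept] = 0
        else:
            self.errors[concept] += 1

    def next_strategy(self, concept):
        """Prescribe a strategy, escalating when the learner keeps erring."""
        level = min(self.errors[concept] // self.ERRORS_BEFORE_SWITCH,
                    len(self.STRATEGIES) - 1)
        return self.STRATEGIES[level]

if __name__ == "__main__":
    agent = SimpleIntelligentAgent()
    for correct in (False, False, False, True):
        agent.record_answer("fractions", correct)
        print(agent.next_strategy("fractions"))
```

A production system would, of course, also weigh the learner’s style, prior knowledge, and pace, as discussed above; the point here is only the monitor-and-adapt loop.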
Good planning and management are necessary for developing and delivering successful learning materials. E-learning development projects tend to be interdisciplinary, requiring a team effort. No one person has the expertise to complete the development project. The different types of expertise required include subject matter, technical support, instructional design, project management, multimedia, and editing. Educational organizations should be thinking long-term and strategically to make sure that learning systems are aligned with the goals of the institution.
Some of the factors that lead to successful e-learning follow.
Tips and Guidelines
• Involve key players from the start of the project. One group that should be involved is instructors, who may be developers or reviewers of the learning materials.
• Inform stakeholders of the progress so that they will continue to fund the project.
• Determine team members’ roles and responsibilities so that they can be productive and cooperative.
• Involve all team members in the project, with team members interacting with each other.
• Keep learners’ needs foremost in mind during the development of learning materials.
• Establish standards of quality control and quality assurance to ensure the learning materials are of high quality.
• Assess skills and expertise of team members, and provide the appropriate training if needed.
• Start with a small project to build success before moving on to larger projects.
• Ensure proper support during the implementation of the learning systems.
• Maintain the learning materials to ensure they are current, and address any problems with the delivery system.
6.6: Providing Support in E-learning
When instruction is delivered to learners at a distance, learners must have adequate support to be successful. Instructors can use synchronous or asynchronous communication tools to communicate and interact with learners. In synchronous learning, support is provided in real time, using two-way text, audio, and/or video. The learner and the instructor are able to interact with each other synchronously. In the asynchronous mode, there is a delay in communication between the instructor and learner. For example, in computer conferencing learners post their comments in real time while other learners and the instructor may respond at a later time. Hence, as instructors move from face-to-face delivery to e-learning, their roles change drastically, shifting from that of dominant, front-of-the-class presenter to facilitator, providing one-to-one coaching to learners, and supporting and advising them. Since the learner and instructor are not physically present in the same location, the instructor has to use strategies to compensate for the lack of face-to-face contact.
How should the instructor function in the e-learning environment? In a study conducted by Irani (2001), faculty suggested that training for online delivery should include instructional design, technology use, and software use. Keeton (2004) reported that the areas faculty see as important for e-learning are those that focus on the learning processes. The instructor should be prepared both to facilitate and to provide support for learning.
Tips and Guidelines
• Instructors must be trained to be good facilitators of e-learning. The instructor has to facilitate learning by modelling behaviour and attitudes that promote learning, encourage dialogue, and demonstrate appropriate interpersonal skills. Good facilitation skills are important to compensate for the lack of face-to-face contact in e-learning and to connect to the learner and create a high sense of presence (Hiss, 2000).
• The instructor should be trained to recognize different learning styles and adapt to them. An effective e-learning instructor must recognize that learners have different styles and prefer certain strategies (Ally & Fahy, 2002).
• The e-learning instructor should understand the importance of feedback, and be able to provide effective, constructive, and timely feedback to learners (Bischoff, 2000). Learners should feel comfortable and motivated by the instructor’s enthusiasm about the course materials. Learners can be motivated by challenging them with additional learning activities, and by emphasizing the benefits of what they are learning.
• The e-learning instructor must be able to advise learners when they encounter academic and personal problems during their studies. The instructor has to acknowledge the problems and, in some cases, address them. For personal problems, the learner should be referred to a professional counselor. One of the key competencies for training instructors is deciding when to help a learner with a problem and when to refer the learner for professional help.
• The e-learning instructor must interpret learners’ academic problems and provide resolutions. This implies that the instructor has the expertise to solve content problems. The instructor solves these problems by staying current in the field, interpreting learners’ questions, communicating at the level of the learner, providing remedial activities, and following up on help provided. Interaction with learners requires good oral and written communication skills. E-learning instructors are required to develop and revise courses on an ongoing basis. Part of the tutoring process is to provide written feedback. The instructor needs good listening skills to understand what the learner is saying in order to respond appropriately. A training program for e-learning instructors must include effective listening skills. As part of the tutoring and coaching process, the instructor needs to know how to ask questions to elicit information from learners and to diagnose their problems.
• Instructors must be trained in using e-learning technologies to develop and deliver learning materials. This is critical, as the instructor must model proper use of the technologies for the learners. Instructors should be patient, project a positive image, enjoy working with learners, and be a good role model.
With learners at a distance, some in remote locations, one way to connect them is to use online discussion forums.
Guidelines for Moderating Online Discussions in E-learning
Well-moderated discussion sessions allow learners to feel a sense of community and to develop their knowledge and skills in the subject area. The moderator should have good written and oral communication skills, be a good facilitator, be able to resolve conflict, and should be an expert in the subject field. Below are some specific guidelines for moderating online discussion forums using emerging technologies.
• Welcome the learners to the forum, and invite them to get to know each other.
• Provide appropriate feedback to forum postings. Learners expect the instructor to be subject matter experts, and to provide feedback on their comments and questions on the course content. Foster dialogue and trust with comments that are conversational.
• Build group rapport by encouraging learners to share ideas and help each other. Learners could, perhaps, form small groups to address certain issues and report back to the larger group.
• Respond to learners’ questions promptly. In synchronous conferencing, learners will see or hear the responses right away. In asynchronous computer conferencing, as a guideline, the instructor should post responses to questions within twenty-four hours.
• Set the tone of the discussion. Providing sample comments is helpful for new learners to model their own comments. Keep the forum discussion on topic. Some learners might stray off topic during the discussion. If learners want to discuss another topic, create another forum where participation is voluntary. If a learner continually stays off topic, the instructor should consult with the learner individually.
Educators need to develop innovative models of teaching and delivery methods tailored to emerging technologies. Future learning systems should contain intelligent agents to duplicate one-to-one tutoring. Multiple intelligent agents could also monitor learners’ progress, and cater to individual needs and styles. Intelligent learning systems will allow learners to be more active and will place more responsibility on them in the learning process. Research is needed on how to empower learners to learn on their own and how to activate learners’ metacognitive skills.
Content will be designed as small chunks in the form of information and learning objects. This will allow intelligent agents to prescribe the most appropriate materials based on the learner's learning style, progress, and needs. The intelligent agents will assemble these chunks into a larger instructional sequence so that learners can achieve the learning outcomes of the lesson. More work is needed on how to develop learning objects and how to tag them for easy retrieval by intelligent agents.
Future technologies will use intelligent agents to assemble courses and modules of instruction immediately by accessing learning objects from repositories. Because of the changing nature of content, models are needed to develop learning materials in as short a time as possible using techniques similar to rapid application development (Lohr et al., 2003). Smart learning systems in emerging technologies will be able to assemble unique courses for each learner, based on the learner's prior knowledge, learning preferences, and needs.
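As a rough illustration of how tagged learning objects might be retrieved and sequenced for an individual learner, the sketch below uses a few made-up metadata fields (outcome, difficulty, style tags) and a simple selection rule. The class and field names are assumptions for illustration only; real repositories typically rely on richer metadata schemes such as IEEE LOM.

```python
from dataclasses import dataclass, field

@dataclass
class LearningObject:
    title: str
    outcome: str              # learning outcome the object supports
    difficulty: int           # 1 = introductory, 3 = advanced
    styles: set = field(default_factory=set)  # e.g. {"visual", "hands-on"}

@dataclass
class LearnerProfile:
    mastered_outcomes: set
    preferred_style: str
    max_difficulty: int

def assemble_lesson(repository, learner):
    """Select objects for outcomes the learner has not yet mastered, at or
    below the learner's difficulty ceiling, preferring the learner's style
    and ordering easier material first."""
    candidates = [
        lo for lo in repository
        if lo.outcome not in learner.mastered_outcomes
        and lo.difficulty <= learner.max_difficulty
    ]
    candidates.sort(key=lambda lo: (learner.preferred_style not in lo.styles,
                                    lo.difficulty))
    return candidates

if __name__ == "__main__":
    repository = [
        LearningObject("Fractions overview", "fractions", 1, {"visual"}),
        LearningObject("Fraction drills", "fractions", 2, {"hands-on"}),
        LearningObject("Decimals overview", "decimals", 1, {"visual"}),
    ]
    learner = LearnerProfile(mastered_outcomes={"decimals"},
                             preferred_style="visual", max_difficulty=2)
    for lo in assemble_lesson(repository, learner):
        print(lo.title)
```

The selection rule here is deliberately naive; the point is that consistent tagging is what makes automated assembly by an agent possible at all.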
Pervasive computing is making it possible for computing power to be included everywhere, thanks to tiny microprocessors and wireless access. As a result, educators must design for pervasive computing where learners will access learning materials using everyday objects and environments. For example, learners might be able to access course materials using kitchen appliances, or their clothing.
The trend in hardware development is towards virtual devices, such as the virtual keyboard and virtual screen. With these devices, learners are able to turn on the device, use it, and then turn it off. For example, for input into a computer, a learner can press a button to turn on a virtual keyboard on a temporary surface, use it, then turn it off. When developing learning materials for emerging technologies, educators must design for delivery on these virtual devices.
6.8: Summary
As we continue to use such technologies as cell phones, PDAs, palmtops, and virtual devices for everyday activities, educators will need to develop and deliver learning materials on these devices. Educators must proactively influence the design and development of emerging technologies to meet learners’ needs. A good example of this is the one hundred dollar laptop that is being developed by a multidisciplinary team of experts, including educators (OLPC, 2006). The one hundred dollar computer is a global device that will be used by learners around the world since it is affordable.
E-learning materials must be modular to allow for flexibility in delivery. Modular learning materials allow learners to complete a module of instruction at a time rather than an entire course. The learning time for a module of instruction is between two and four hours. The content must be broken down into small chunks and developed as learning objects. The modular format allows the segments of instruction to be tagged and placed in learning object repositories for easy retrieval by learners and instructors. When designing learning materials for emerging technologies, educators must think globally and must design for the future so that the materials do not become obsolete.
Learning systems of the future must develop intelligent systems to relieve tutors from routine decision-making so that they can spend time on issues concerning the learning process. Intelligent systems will be able to design, develop, and deliver instruction to meet learners’ needs. For example, an intelligent agent will be able to identify learners who need extra help and provide an alternative learning strategy. The intelligent agent should anticipate learners’ requirements and respond immediately to take corrective action or to present the next learning intervention based on learners’ characteristics and style to maximize learning benefits. In other words, the intelligent agent should form dynamic profiles of the learner and guide the learner to the next step in the learning process.
One of the major challenges educators will face is how to convert existing learning materials for delivery on emerging technologies rather than redeveloping courses from scratch. It is important to develop learning materials in electronic format so that the information can be readily delivered by newer technologies.
“Real learning occurs when learners learn by doing and making things”. – Ally
Learning Outcomes
• Describe the functions of learning management systems (LMS) for formal education and corporate training.
• Conduct a needs analysis, select an appropriate LMS for your environment and manage the implementation and change process successfully at least 50 percent of the time. A higher success rate will depend upon the political environment and the diligence of the needs analysis and research that is done.
"I truly believe that the Internet and education are the two great equalizers in life, leveling the playing field for people, companies, and countries worldwide. By providing greater access to educational opportunities through the Internet, students are able to learn more. Workers have greater access to e-learning opportunities to enhance and increase their skills. And companies and schools can decrease costs by utilizing technology for greater productivity”. – John Chambers, CEO of Cisco Systems (Chambers, 2002)
What are Learning Management Systems?
Learning management systems (LMSs) are electronic platforms that can be used to launch and track e-learning courses and enhance face-to-face instruction with online components. Some also manage classroom instruction. Primarily they automate the administration of learning by facilitating and then recording learner activity. They may or may not include tools for creating and managing course content. As the systems grow, they also add new features such as e-commerce, communications tools, skills tracking, performance management and talent management.
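The “launch and track” function can be pictured as simple bookkeeping of learner attempts against courses. The sketch below is a minimal illustration only; the field names and status values are assumptions and do not reflect the schema of any product mentioned in this chapter (commercial systems typically follow tracking standards such as SCORM).

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, Optional, Tuple

@dataclass
class AttemptRecord:
    learner_id: str
    course_id: str
    status: str = "not attempted"   # e.g. "in progress", "completed"
    score: Optional[float] = None
    last_accessed: Optional[datetime] = None

class ActivityTracker:
    """Toy record-keeper for which learner launched which course and with
    what result; a stand-in for the tracking core of an LMS."""

    def __init__(self):
        self._records: Dict[Tuple[str, str], AttemptRecord] = {}

    def launch(self, learner_id, course_id):
        """Create or update a record when a learner opens a course."""
        key = (learner_id, course_id)
        record = self._records.setdefault(key, AttemptRecord(learner_id, course_id))
        record.status, record.last_accessed = "in progress", datetime.now()
        return record

    def complete(self, learner_id, course_id, score):
        """Mark a course completed and store the score for reporting."""
        record = self.launch(learner_id, course_id)
        record.status, record.score = "completed", score
        return record

if __name__ == "__main__":
    tracker = ActivityTracker()
    tracker.launch("S123", "MOBILE-101")
    print(tracker.complete("S123", "MOBILE-101", 87.5))
```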
LMSs have evolved quite differently for formal education and corporate training to meet different needs. The most common systems used in education are WebCT, Blackboard (these are now effectively one) and Moodle. They often use the term course management system to describe themselves. The term course management system, however, is easily confused with content management system, so we will use the term LMS to describe the solutions for both educational and corporate environments. We will distinguish between them by discussing corporate or business LMS versus education LMS. Education LMSs are also known as virtual learning environments (VLE).
This chapter will be a non-technical look at the features of these systems and the processes of selecting and implementing them. It will address the different functionalities of the systems and consider open-source systems as an option to commercial proprietary ones. It will discuss needs analysis to help you begin the process of selecting an appropriate system, and the change management process to address the implementation issues. Case studies will be provided for illustration. Open source systems will be discussed in Chapter 8, Exploring Open Source for Educators.
Occasionally certain vendors and products or services are mentioned by name. These are not intended to be endorsements in any way but simply to serve as familiar examples. We do not endorse any products or services. Vendors and products that are mentioned are usually the best known or the ones with the greatest market penetration. There is no single “best” solution. The ideal solution is the one that fits your needs and environment.
7.2: Learning Management - The Two Cultures
There are two main thrusts in formal learning: academic education, and corporate training (including government and the non-profit sector). In educational institutions, the learning model uses courses of fairly long duration (weeks to months) for the long-term educational benefit of the learner. In corporate training, the model is usually short courses (hours to days) for immediate updates, with specific focus on job functions and objectives. Some corporations try to emphasize the importance of their training services by calling them “universities” such as McDonald’s University and General Motors University. As part of their long-term development plans, many businesses also provide support for their employees to attend educational institutions for longer courses and degree programs. For centuries, both systems have relied upon classroom-based, instructor-led facilitation in which a live teacher leads the process.
Distance learning by correspondence has been with us now for many decades. When e-learning became a reality over 10 years ago (first on CD-ROM and then over the Internet), it extended the opportunities for distance learning, and new options and models became possible. The education and corporate training models have evolved separately and somewhat differently.
In the online education environment, it is generally assumed that an instructor leads the course, is available by chat (synchronous), via email and discussion groups (asynchronous), and sometimes via virtual classrooms. In the corporate online learning environment, there is a high degree of dependence on self-directed learning often using courses that have been purchased off-the-shelf from third-party vendors. Only occasionally is an instructor present. As a result, the communication/collaboration tools for email, chat, and group activity are well developed in education LMSs while they are less so in corporate LMSs.
Education LMSs are primarily for the delivery of instructor designed online learning and include course content creation (or course authoring) capability as well as some tools to manage the content. While corporate LMSs provide features to help manage classroom instruction, the e-learning is often assumed to be primarily asynchronous, self-directed courses. Many of these courses are purchased from off-the-shelf courseware vendors. As a result, corporate LMSs do not typically include course authoring or content management features. The larger corporate vendors do often offer suites of tools that do include these capabilities.
In most educational institutions, computer systems for registration already exist, so the registration features in education LMSs are limited, while many corporate LMSs offer full capabilities for managing classroom learning from registration to assessment, as well as e-learning. It is highly desirable that an educational institution's LMS can exchange data with the registration system, and that a corporate training LMS can communicate with the human resources information system.
The focus of both education and corporate LMSs often tends to be more on the administration and technical requirements of the organization rather than on the dynamic facilitation of learning. Some instructors and designers are frustrated by the constraints (both technical and learning) of using these systems and would prefer more dynamic learning support systems such as student weblogs and learning wikis. (See Chapters 25 and 26 for further discussion of these tools). Some of the open-source systems, especially when combined with social learning tools, are more student-centred than the commercial ones.
Online and classroom learning each offer different advantages for different learners. Many people argue that classroom learning is better. Some believe that the classroom offers interactivity—a dynamic exchange of information, questions, and opinions between students and instructor and among students. Unfortunately, classroom interactivity often involves only a minority of students who choose to participate; for the others it may not be interactive at all. We have been conditioned since the age of five to believe that learning only happens in a classroom, but the reality is that we are continuously learning in all situations. One benefit of the classroom is the social structure and support of schedules, deadlines, and the physical presence of the instructor and other learners. Self-directed online courses offer the obvious advantage of time flexibility—they can be done almost anywhere, at any time convenient to the learner, and they can be repeated several times if necessary. Well-designed online courses can be more effectively interactive than many classrooms because they require active learning from each student—responding to questions, doing an activity, getting feedback. There is no back of the classroom in an online course, and learners also gain freedom from time and place constraints.
Tip
There are at least 100 LMSs available for business and at least 50 available for education. Many of the latter are open-source. Although they offer different features, it is best not to ignore the LMSs from the other sector.
Features of Education Learning Management Systems
The original educational learning management system was probably PLATO, which was developed in the early 1960s. In the late 1970s there were initiatives such as the UK Open University's Cyclops system and CICERO project, Pathlore's Phoenix software, and Canada's Telidon project. Wikipedia has an extensive listing of initiatives in its article, History of Virtual Learning Environments.
In formal education, LMSs were first used to support distance education programs by providing an alternative delivery system. They are now also used as platforms to provide online resources that supplement regular course material, and to offer courses to students who require additional flexibility in their schedules, allowing them to take courses during semesters when they are not physically present or are not attending on a full-time basis. This also benefits students who are disabled or ill and unable to attend regular classes.
Education LMSs primarily support e-learning initiatives; systems for regular classroom support are usually already in place.
The model for an LMS designed for education is that an instructor creates a course using web-based tools to upload the necessary materials for the students, and sets up collaborative tools such as:
• email
• text chat
• bulletin board presentation tools (e.g., a whiteboard for collaborative drawing and sketching)
• group web page publishing
Students access the course materials on the Web, do both individual and collaborative assignments, and submit them to the instructor. Most education LMSs offer the following features:
Tools for instructors:
• course development tools—a web platform for uploading resources (text, multimedia materials, simulation programs, etc.), including calendar, course announcements, glossary, and indexing tools
• course syllabus development tools with the ability to structure learning units
• quiz/survey development tool for creating tests, course evaluation, etc.
• grade book
• administrative tools to track student activity both as individuals and in groups
Tools for students:
• password protected accounts for access to course materials
• course content bookmarking and annotation
• personal web page publishing
• accounts for access to the collaborative tools (email, discussion groups, collaborative web page publishing)
• access to grades and progress reports
• group work areas for collaborative web page publishing
• self-assessment tools
Administrative tools:
• management of student and instructor accounts and websites
• monitoring and reporting activity
• e-commerce tools for sale of courses
• communication and survey tools
Some LMSs may also offer, sometimes at extra cost, the following features:
• learning object management (course content management for reusability)
• e-portfolios
• file and workflow management
• streaming audio and video
• access to electronic libraries
Blackboard now offers an e-commerce module, and Moodle integrates with PayPal to allow for customers to pay online.
Although LMSs often claim a learner-centred approach involving active collaboration between the instructor and students, both as individuals and in groups, there are some social networking tools such as wikis and weblogs (blogs) that most of these systems do not (as of this writing) support. There are numerous initiatives underway to develop add-on tools and to integrate social learning tools with open-source platforms.
In most cases it is assumed that the teacher provides the content, but some system vendors are now selling content as “e-Packs” or “cartridges” that can be uploaded by teachers. It is also possible to purchase course materials from other institutions. Using courses from other sources, however, may be challenging if they are not compatible with your LMS, consistent with the instructor’s approach, or accessible by students with disabilities. This may improve with the development and application of operating and accessibility standards.
Commercial systems
The most widely adopted commercial systems are WebCT and Blackboard. WebCT was originally developed by Murray Goldberg at the University of British Columbia, beginning in 1995. In 1999 the company was purchased by Universal Learning Technology of Boston, and became WebCT, Inc. Blackboard was originally developed at Cornell University. The company was founded in 1997 by Matthew Pittinsky and is based in Washington, DC. WebCT and Blackboard currently control about 80 percent of the LMS market in higher education (Sausner, 2005, p. 9). Blackboard purchased WebCT in 2005, making them the dominant force in the market. The WebCT products are currently being merged and re-branded as Blackboard products.
In 2006 Blackboard was granted a patent on course management technology; after considerable controversy, the US Patent and Trademark Office (USPTO) ordered re-examination of the patent. On February 1, 2007, Blackboard announced its patent pledge, which is a promise by the company to never assert its issued or pending course management system software patents against open-source software or homegrown course management systems.
It is hard to say what the effect of this will be on current and potential WebCT and Blackboard customers. Some will want to go with the market leader regardless, others will stay with what they have, and many may move to open-source solutions. Cornell University, the birthplace of Blackboard, is reconsidering whether Blackboard is the most appropriate software for Cornell professors and students.
Some other education oriented systems offered by commercial vendors:
• Desire2Learn
• eCollege
• Jenzabar
• Odyssey Learning Nautikos
• WBT Systems Top Class (now appears to be targeting the corporate sector)
• ANGEL
• Centrinity First Class (now a division of Open Text)
• Geometrix Training Partner (primarily a corporate LMS but often used by educational institutions for distance learning programs with a business orientation).
Notes:
• IBM/Lotus Learning Space no longer seems to be a viable contender in the education market. It is now called Workplace Collaborative Learning, and appears to be targeted to the business market.
• Prometheus has been purchased by Blackboard and no longer seems to be supported.
Tip
If you are currently using a commercial education LMS, you may find costs escalating and a continual demand for upgrades. For these and other reasons, many educational institutions are considering open-source systems as an alternative.
Open-source systems
Open-source software is computer software whose source code is available free “under a copyright license … that permits users to study, change, and improve the software, and to redistribute it in modified or unmodified form” (http://en.wikipedia.org/wiki/Open-source_software, February 2007). Open-source LMSs are gaining ground in the education market as a reaction to increasing costs for the commercial systems, and because of the greater flexibility and more student-centred learning approaches in the open-source systems. Some instructors, particularly those with technical expertise, will prefer these systems because of fewer constraints, a greater sense of control, and generally better communication tools. Other instructors won't like them because they prefer more rule-based systems with full administrative features.
There are numerous open-source systems available. Some of the better known ones are:
• Moodle
• ATutor
• Sakai
• Bodington
• Claroline
• Magnolia
Although the software is free, open-source solutions are not without their costs. They need continuous support and maintenance, which require either a strong and supportive internal IT group, very dedicated instructors, or a contract with outside vendors who will do it for you. Open-source software is maintained by an active community of users who are constantly upgrading the code. These code changes can affect the operability of courses unexpectedly, and require more local maintenance. The “hidden” costs of the time of the IT people and the instructors may or may not outweigh the cost of a licence for a commercial system.
There are useful discussions of open-source systems at http://www.funnymonkey.com, http://openacademic.org/ and in Chapters 8 and 12 of this book.
Other aspects of LMSs
Some educational institutions have built their own LMS, and have not chosen to market them. Although it is possible for anyone to do the same, it is an expensive process, and it may be vulnerable if one person is the primary developer. Some of the open-source systems have been built by an institution or a group of institutions, and then shared. ATutor was developed at the University of Toronto. The Sakai initiative is a collective effort by 65 academic partners.
Course development: Course development tools (also called course-authoring tools) are an integral part of most education LMSs. Some instructors also like to use some of their own tools such as web authoring/HTML editors (e.g., Dreamweaver, FrontPage, GoLive), word processing (e.g., Microsoft Word) and presentation tools (e.g., Flash, PowerPoint). The LMS should be capable of working with such tools.
Virtual classrooms/web conferencing: Virtual classrooms (also known as web conferencing tools) add audio, video, and graphics to synchronous classes over the Internet. Such tools are not usually included as part of an LMS but are available separately.
Learning content management systems (LCMS) provide a means of storing developed courseware in learning repositories (databases) as learning objects where it can be retrieved and used by others. Most education LMSs have at least some learning content management capabilities.
Most LMSs are primarily administrative tools; it is up to the instructors and designers developing the courses to address the learning model, yet many LMSs lack the tools to support more student-centred learning. The integration of social learning tools such as wikis and blogs with an LMS can help create a more dynamic learning environment.
Social learning is closely related to social networking and social computing and is the essence of what is being called Web 2.0. It is the use of wikis, blogs, podcasting, etc., by individuals and groups to create content instead of simply being the recipients. Web 1.0 was about downloading; Web 2.0 is about uploading.
Web 2.0 is defined not only by technologies (blogs, wikis, podcasts, vodcasts, RSS feeds, and Google Maps are a few examples), but also by the social networking that it enables. Web 2.0 tools can scaffold learning environments for enhanced communication among students as well as between students and the instructor. Creating learning opportunities that harness the power of Web 2.0 technologies for collaborative learning, distributed knowledge sharing, and the creation of media-rich learning objects can further the scope of what students can learn by fostering a constructivist environment, and putting learning in the control of the students. Both students and instructors are embracing these tools at a phenomenal rate. Examples are Wikipedia and YouTube. LMSs will need to catch up.
Initiatives to include social learning into LMS include:
• Learning Objects is a commercial product that targets users of large-scale course management platforms.
• Elgg http://elgg.net/ (February 2007)—open-source
• Drupal http://drupal.org/ (February 2007)—open-source
• MediaWiki http://www.mediawiki.org/ (February 2007)—open-source
It is interesting to note that the University of Phoenix, one of the largest e-learning organizations in the world with nearly 200,000 students online, simply uses Outlook Express newsgroups for its courses, along with other tools it has developed internally. Other early online universities, such as Pepperdine University, also use newsgroups extensively.
Tip
Adult and continuing education departments tend to follow more of a business model. If you are seeking an LMS for this application and need registration and payment features, consider some of the more reasonably priced business LMSs (see below).
7.3: Features of Corporate Learning Management Systems
The major business-oriented LMSs manage classroom and blended learning as well as e-learning, and are intended to function as the full registration systems for corporate training departments. Some of the larger ones, such as SumTotal Systems, Saba, and GeoMetrix Training Partner, actually evolved from registration systems. A few very basic corporate LMSs manage only e-learning, and then usually only for pre-packaged, self-directed courses.
Corporate LMSs usually offer the following features:
Classroom course management:
• registration
• course scheduling and set-up (instructors, facilities, equipment)
• email status notification
• tracking.
E-learning management:
• registration
• delivery
• email status notification
• tracking
• interoperability with third-party and custom courseware
• testing and evaluation
• communication tools.
Blended learning management combines e-learning course content with classroom activities and communication tools such as discussion groups and virtual classrooms.
Support for e-learning standards such as AICC (Aviation Industry Computer-Based Training Committee) and SCORM (Shareable Content Object Reference Model) enables interoperability between third-party courseware and the LMS, and between different LMSs. These standards do not guarantee interoperability, but they are a step in the right direction. Many of these standards originated in engineering, the airline industry, and the US military, which operate on a corporate training model, so they are less relevant to education courseware, but they may help if you are switching platforms or making courses available to others using different platforms. See Appendix D, Course Authoring Tool Features, and Chapter 17, E-learning Standards.
Competency and performance management:
• Identify needed competencies for individuals and groups in order to perform the necessary work.
• Track performance for both individuals and groups and identify where improved performance is needed.
• Link to human resource systems. This is another feature not directly relevant to an education environment.
Reporting and analytics:
• Ability to generate reports on participation, assessments, etc.
• Includes standard and custom reports.
• Reports generated in graphical form.
• Financial analysis.
• Survey generation and analysis.
• Regulatory compliance tracking.
Multiple language support: Multinational corporations usually require different languages. Many LMSs now provide for multiple languages, but this does not necessarily include true localization, which requires adapting the content and design to fit local cultures. True localization is far more extensive than translation and requires substantial additional work.
The following functions are usually offered as separate capabilities or as part of a suite. Often the course authoring and web conferencing tools are supplied by separate vendors.
• Course development/authoring: A means of creating online courses. Many of the tools used in business are designed for creating interactive, self-directed courses complete with tests and assessments. Examples of such tools include Authorware, ToolBook, Lectora, ReadyGo, and Outstart Trainer. Other tools offer so called rapid e-learning development—conversion of Word, PowerPoint, etc. documents into interactive courseware. Examples include Articulate, Elicitus, Impatica and KnowledgePresenter.
• Virtual classrooms/Web conferencing: Synchronous instructor-led classes over the Web. Tools include Microsoft Live Meeting, Elluminate, Adobe Acrobat Connect Professional (formerly Macromedia Breeze), LearnLinc, Webex, Interwise and Saba Centra.
• Learning content management/learning object repository: A means of storing developed courseware in learning object repositories (databases) so that it can be retrieved and reused. In addition to suites offered by the major LMS vendors, notable others include Eedo, Chalk Media Chalkboard, and Cornerstone OnDemand.
One of the main distinguishing features between corporate and education LMSs is that most business LMSs provide fairly complete registration systems for classroom instruction as well as e-learning, whereas full-scale registration systems usually already exist in educational institutions.
LMSs sometimes offer e-commerce capabilities that allow both internal and external people to pay for courses. These features for managing both classroom instruction and e-commerce are not usually part of education LMSs. The exceptions to this rule are Blackboard, which does offer a commerce solution for educational institutions, and Moodle, which integrates with PayPal for this purpose.
In the corporate environment, there is a great deal of reliance on pre-packaged, self-directed courses. Many of these will likely be generic courseware available from such suppliers as SkillSoft, Thomson NETg (Skillsoft now owns NETg), ElementK, and others. The off-the-shelf courseware usually covers such topics as information technology (IT) skills, communication skills, business processes, and sales training. In most cases there is also the need for custom courseware for training on proprietary products and solutions, and unique situations. It is extremely important that the LMS can work with all possible third-party courseware and tools used to create custom courseware.
Most corporate LMSs are limited in their use of communication tools. Unlike education LMSs, there is no assumption that an instructor will be available via email. This will probably change somewhat as businesses recognize the value of communication tools, communities of practice, mentoring, blogs, wikis, etc.
As corporate LMSs expand their capabilities, they begin to overlap with human resources functions, with terms like performance management, human capital management and talent management becoming frequently used by the major vendors.
The major vendors of corporate LMSs are:
• Generation21
• GeoLearning
• GeoMetrix Training Partner
• Intelladon
• KnowledgePlanet
• Learn.com
• OutStart
• Plateau
• Saba
• SumTotal Systems
These are the ten largest vendors in the corporate LMS market. Open-source systems are not yet a major factor in the corporate environment, but as Linux becomes more popular this may change.
As with any enterprise software system purchase, there are generally two approaches—“best-of-breed”, in which companies look for the best possible tools in each category, and the single-vendor approach, in which all the tools are obtained from a single vendor. The former can give the organization all the functions it needs while creating some integration challenges in getting the tools to work with each other. The latter will probably simplify integration, but may sacrifice some functionality.
Tip
Business LMSs typically include classroom registration features and do not include course development tools. Education LMSs are just the opposite. Education LMSs are also strong on communication tools.
For a detailed comparison of the features of education and corporate LMSs, see Appendix A, LMS Comparison Matrix.
Tip
Corporate LMSs tend to be very expensive for an educational environment but some of the more modestly priced ones may be suitable, particularly in a continuing education application where registration and e-commerce features may be needed.
Standards
E-learning standards
Technical, design, and accessibility standards for e-learning are in a constant state of flux. Technical standards continue to be developed to provide for compatibility between systems and courseware, and for the definition and use of learning objects. See Appendix B, Standards Bodies and Links, for a list of standards bodies and links. Several different international organizations are working on these standards. The AICC (Aviation Industry Computer-Based Training Committee) standard was developed more than 10 years ago when the aviation industry (one of the early adopters) recognized the problem of interoperability among systems. SCORM (Shareable Content Object Reference Model) is a collection of technical standards for different purposes, developed by the Advanced Distributed Learning (ADL) initiative of the US Department of Defense. The SCORM effort began in 1997, and the standards continue to evolve. Many LMS vendors and courseware vendors claim to be standards-conformant, but that does not yet guarantee that the systems will be interoperable. Some course designers are against standards altogether, claiming that they constrain creativity and the facilitation of learning.
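To make interoperability a little more concrete, the sketch below—written in TypeScript and purely illustrative rather than taken from any vendor's documentation—shows the kind of calls a SCORM 1.2 course typically makes to the hosting LMS at run time. The function names and data elements (LMSInitialize, cmi.core.lesson_status, and so on) come from the SCORM 1.2 run-time specification, but the wrapper, the discovery loop, and the reported values are hypothetical examples only.

```typescript
// Minimal, illustrative SCORM 1.2 run-time wrapper. SCORM 1.2 assumes the LMS
// exposes an "API" object somewhere in the frame hierarchy of the course window.
interface Scorm12API {
  LMSInitialize(arg: string): string;
  LMSSetValue(element: string, value: string): string;
  LMSGetValue(element: string): string;
  LMSCommit(arg: string): string;
  LMSFinish(arg: string): string;
}

// Search up the parent frames for the API object the LMS provides.
function findAPI(win: Window): Scorm12API | null {
  let w: Window = win;
  for (let i = 0; i < 10; i++) {
    const api = (w as any).API;
    if (api) return api as Scorm12API;
    if (w.parent === w) break;
    w = w.parent;
  }
  return null;
}

// Typical reporting sequence for a self-directed course unit.
const api = findAPI(window);
if (api) {
  api.LMSInitialize("");                                // start the session
  api.LMSSetValue("cmi.core.score.raw", "85");          // learner's quiz score
  api.LMSSetValue("cmi.core.lesson_status", "passed");  // completion status
  api.LMSCommit("");                                    // ask the LMS to persist the data
  api.LMSFinish("");                                    // end the session
}
```

If courseware and LMS both implement this exchange correctly, scores and completion status recorded by one vendor's course should appear in another vendor's tracking reports—which is exactly the kind of behaviour that should be verified in a hands-on test rather than assumed from a conformance claim.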
Instructional design standards
At least as important as technical standards is the quality of the instructional design. Instructional design certification is offered by the ASTD (American Society for Training and Development): “Designed for asynchronous Web-based and multimedia courses, the E-Learning Courseware Certification (ECC) recognizes courses that excel in usability and instructional design” (American Society for Training and Development, n.d., para. 4).
ISPI (International Society for Performance Improvement) offers numerous publications and awards addressing design standards for e-learning.
E-learning design can also be certified by eQcheck. “The eQcheck is designed to ensure that a product will give satisfactory performance to the consumer. The standards on which the eQcheck is based are the Canadian Recommended E-Learning Guidelines—the Can-REGs, published and copyrighted by FuturEd Inc. and the Canadian Association for Community Education (2002)” (eQcheck, n.d., para. 2).
Accessibility standards
These relate directly to general Web accessibility, particularly for the visually impaired. The initiative is led by the Web Accessibility Initiative (WAI) of the World Wide Web Consortium (http://www.w3.org/WAI/). There is also the Web Standards Project, which “is a grassroots coalition fighting for standards which ensure simple, affordable access to web technologies for all.” (http://www.webstandards.org/). In the US, Section 508 of the Rehabilitation Act requires access to electronic and information technology procured by Federal agencies. See Chapter 11, Accessibility and Universal Design, where this is discussed extensively.
Tip
Claims of standards conformance do not yet guarantee interoperability. Tools and courseware should be tested with the LMS to be sure.
7.4: Course Development
Course development is also referred to as course authoring. Courses made available on the Web are simply collections of web pages designed to help people learn. They may be a group of resources to which a learner is referred, or they may be carefully crafted sequences of learning events that include interactivity, tests and assessments, animations, screen simulations, video, and audio. It is possible to create web-based learning courses by using templates or by programming directly in HTML or Flash but there are course authoring tools available which are designed to simplify the process.
In education LMSs some course authoring capability is usually included, though some instructors may prefer to use additional tools. Course authoring is not usually included in corporate LMSs, but is available separately, as part of an LCMS, or as part of a suite of products.
Course authoring tools like Adobe/Macromedia Authorware and SumTotal ToolBook have been around since before the World Wide Web, and have evolved with it. Not all the tools do everything. The more complex ones require considerable expertise and can benefit from programming experience. Simpler ones are easier to use but may be somewhat limited in capability. Some are tools for converting PowerPoint presentations or Word documents to web code. They are often referred to as “rapid e-learning” development tools. Others are specialized to produce software simulations, or tests, and assessments.
In education LMSs course development tools provide the means for teachers to perform the following types of activities:
• Provide and organize resources related to the learning objectives: Most education solutions allow instructors to create simple text pages or web pages. These can be used for a syllabus, a project outline, assignment instructions, grading guidelines, and much more. LMSs usually provide support for multi-media materials such as video and audio streaming or modules or simulations built in other software tools. If instructors are using tools such as Dreamweaver, Flash, or other authoring tools, it is important to obtain an LMS that supports the code generated by these products particularly for any rich media, interactivity, and for recording scores on tests.
• Set up communication tools for the students to use: LMSs often give instructors and students the ability to send email to one another via the LMS. Instructors can also set up group areas, discussion forums, wikis, and other tools to allow students to communicate about general topics with little to no facilitation by the instructor or teaching assistant. For example, you can use a discussion forum as a way for students to introduce themselves, to provide technical support to each other, or to continue an interesting discussion if you run out of time in the classroom. Many LMSs also provide a calendar to which students, instructors, and the LMS itself can add events. Students can schedule study groups, instructors can remind students of special events such as field trips, and the LMS itself will mark events such as quiz dates or assignment due dates.
• Facilitate and manage online interactivity related to the learning objectives: Those same communication tools, and several others, can be used to facilitate online interactivity related to coursework. Depending on the LMS, instructors can use single-question polls to gauge student attitudes or knowledge about a reading, discussion forums to have students analyze a lab procedure before entering the lab, wikis to have students collaboratively solve a problem or work on a project, or chat to let small groups discuss required field work in real time.
• Assess student performance (skills, knowledge, and attitudes): LMSs provide avenues for students to submit assignments and for instructors to evaluate different types of student performance. For example, students can submit written essays in several ways, including, but not limited to, digital drop boxes, discussion forum threads, discussion forum attachments, wikis, or “assignment” modules. Instructors can require students to use different submission pathways to create different types of assignments. You might use a discussion forum to allow peer review, wikis to engage students in collaborative writing exercises, or assignment modules to make it easy to collect all the essays. LMSs usually provide tools for creating and delivering quizzes as part of the courses. Instructors may also use other tools for this purpose such as Questionmark Perception, Respondus, Hot Potatoes, and test banks that publishers provide. If you plan to use these tools, it is important to be sure that your LMS can work with the code generated by these third-party software solutions.
• Assess teaching effectiveness: Many LMSs contain survey tools to allow instructors to collect feedback about specific topics, including teaching effectiveness (see Chapter 24, Evaluating and Improving Your Online Teaching Effectiveness, for more information on this topic). The different LMSs vary the possibilities for instructors and students. Some allow anonymous student responses and some contain specific survey instruments for teaching effectiveness. If the LMS does not do everything you want, you can always link to an external survey tool on the Web. For example, the Free Assessment Summary Tool (http://getfast.ca) allows instructors to use a database of more than 350 teaching effectiveness questions, to create twenty questions per survey, and to download the results as an Excel spreadsheet, all for free.
Tip
Be sure your LMS will work with the additional tools that instructors are likely to use for course development.
Course Development in Corporate LMSs
Course authoring tools are not usually included as part of a corporate LMS, but are available separately or as part of an LCMS.
For corporate training there is a strong reliance on pre-packaged, self-directed courses. These can be purchased from third-party vendors like Skillsoft, Thomson NETg (now a part of Skillsoft, making Skillsoft the single largest vendor of such courseware by a substantial margin), ElementK (now owned by NIIT), and Harvard Business School Publishing. Generic courseware is available for learning skills in communication, business, leadership, management, finance, information technology (IT), sales, health and safety, and more specialized topics.
Most companies also need to develop courses for unique situations and proprietary products and services. There are many tools available for this purpose. Most of these are designed primarily for creating self-directed online courses, but they can also be used to develop classroom materials.
Some examples of popular course authoring tools:
• SumTotal ToolBook
• Adobe Authorware, Flash, Dreamweaver, and Acrobat Connect Presenter
• Trivantis Lectora
• ReadyGo Web Course Builder
• MaxIT DazzlerMax
• Outstart Trainer
Course development can be very time consuming. There is a lot of material already available in Microsoft Word or PowerPoint. So-called rapid development, or rapid e-learning tools are designed to quickly convert these documents to e-learning courses. Examples include:
• Articulate
• Impatica
• Adobe Presenter (formerly Macromedia Breeze Presenter)
• KnowledgePresenter
Most of these tools (with the exception of Impatica) convert PowerPoint and Word documents to Flash because it is web-friendly and so widespread. (According to Adobe, Flash is already installed in 97 percent of browsers.)
7.5: Software Simulation Tools
There are numerous tools designed specifically for the simulation of computer screens by recording screen interactions. For example:
• Adobe Captivate (formerly Macromedia RoboDemo)
• TechSmith Camtasia
• Qarbon ViewletBuilder
Many of these also do PowerPoint to Flash conversion.
Test and Assessment Tools
Most course authoring tools can create and deliver tests and quizzes as part of the courses. Instructors may also want to use test banks that publishers provide, and/or other, more powerful tools built specifically for testing. For example:
• Questionmark Perception
• Respondus
• Hot Potatoes
There are well over 100 available sources for software that can be categorized as course authoring tools. When choosing an LMS, be sure that it can support any third-party generic courseware or content authoring tools being used. Particular attention should be paid to the LMS’s ability to launch the courses, and track and record interactions and responses to quizzes. Support for standards helps, but it is no guarantee. You should test the LMS with the tools and courseware that you will be using. You should also determine how accessible the file formats are for students with disabilities. (See Chapter 11, Accessibility and Universal Design, for more information about accessibility.)
Tip
Be careful with rapid development tools. Speed of delivery can be very important but make sure you are not just making bad Word or PowerPoint documentation into even worse e-learning courses.
7.6: Virtual Classrooms/Web Conferencing
Web conferencing tools can bring a new dimension to your programs. They add presentations, audio, video, graphics, synchronous chat and voice interactions to meetings and classes at a distance. They can effectively complement online courses where some live interaction is called for and where there is an immediate need for new information or skills. Recordings can often be made to enhance asynchronous distance education programs. In an education/training mode, they are often referred to as virtual classrooms.
With a few exceptions, virtual classrooms are not included as part of an LMS, either for education or business, but are available separately. Some LMS vendors partner with web conferencing software vendors to integrate the products so they will work well together.
There are more than 50 vendors of these products. In most cases, these systems can support either corporate or education needs. Some of the best known include:
• Centra Live (now owned by Saba)
• Citrix GoToMeeting
• Elluminate
• Horizon Wimba
• iLinc LearnLinc
• Interwise Connect
• Adobe Acrobat Connect Professional (formerly Macromedia Breeze Live)
• Microsoft NetMeeting (free but apparently no longer supported)
• Microsoft Live Meeting (formerly Placeware)
• Tapped-In (a free text-only based conferencing system)
• WebEx Training Center
Licensing of these products varies from annual subscriptions (Elluminate) to pay-as-you-go (WebEx) to free (TappedIn). If they are only used occasionally, then the pay-as-you-go option is probably the best choice. However, that can rapidly get very expensive.
Tip
As with any software or instructional approach, it takes considerable skill to facilitate an online session effectively.
Learning Content Management
The management of learning content involves saving developed courseware as learning objects in a learning object repository (database). It is catalogued using metadata (descriptive tags) so that it can be easily found and retrieved by anyone who has access to it. It supports institutional or corporate reuse of the learning objects. Systems that do this are often called learning content management systems (LCMS). They are specialized content management systems.
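As a rough illustration of what such descriptive tags might look like, the sketch below (in TypeScript, with field names chosen for illustration and loosely inspired by elements found in metadata standards such as IEEE LOM and Dublin Core) models one possible record for a learning object. It is not an authoritative schema—any real repository would follow its own metadata profile.

```typescript
// Illustrative (not authoritative) metadata record for a learning object.
interface LearningObjectMetadata {
  identifier: string;           // unique ID within the repository
  title: string;
  description: string;
  keywords: string[];           // used for search and retrieval
  language: string;             // e.g., "en", "fr"
  format: string;               // e.g., "text/html"
  typicalLearningTime?: string; // e.g., "PT30M" for a 30-minute unit
  rights: string;               // copyright / licensing terms
}

// Example: cataloguing a reusable unit so others can find and reuse it.
const safetyModule: LearningObjectMetadata = {
  identifier: "lo-2007-0042",
  title: "Lockout/Tagout Basics",
  description: "A 30-minute self-directed unit on lockout/tagout procedures.",
  keywords: ["health and safety", "lockout", "tagout"],
  language: "en",
  format: "text/html",
  typicalLearningTime: "PT30M",
  rights: "Internal use only",
};
```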
Most education LMSs include at least some capability for content management. Some even call themselves learning content management systems.
Learning content management is not usually a feature of the corporate LMS, but some of the major corporate LMSs include content management as part of a suite of programs. It is also available separately. Most separate LCMSs include content authoring and some learning management features as well.
Performance support: Some corporate LCMSs provide for a feature called performance support. Also called JIT (just in time) learning, performance support allows employees to immediately access information (courses, units, and learning objects) that enables them to do their job better “in the moment”. For example, if an employee working on a task cannot remember exactly how to do something, he or she can quickly access a course, or parts of a course, that will show how to perform the operation. This requires managing the course content as learning objects, and making them easily accessible to all learners. Such systems when available separately are often called EPSS (electronic performance support systems) but are now sometimes included as part of an LCMS. This is another concept which does not really apply in the education environment. See Appendix C, LCMS Features.
LMSs that include this capability as part of a suite include:
• Cornerstone OnDemand
• Generation21
• GeoLearning
• KMx
• LearnCenter
• Plateau
• Saba
• SumTotal Systems
Some examples of separate systems are:
• Chalk Media Chalkboard
• dominKnow LCMS (formerly Galbraith Media)
• Eedo
• Outstart
Tip
Be careful about learning content management. Everyone thinks, “What a great idea—save the course materials in a way that they can be reused easily.” But too often it doesn't happen. Some organizational cultures do not support the value of sharing. This is a great tool if it is used, but an expensive mistake if not.
7.7: Needs Assessment
Choosing an LMS is not a technology decision. It is primarily a leadership and change management decision. No matter what system you adopt, it will change the way you do things. Even if you adopt a system that supports your basic learning model, procedures will change. This is a major decision that calls for a careful assessment of your needs.
Before you even talk to LMS vendors or open-source LMS community members, form an expert committee consisting of educational leaders, administrators, and instructors—people who understand how online learning works. Be sure to include some IT personnel to enlist their ideas, their support, and their understanding of the technology.
Consult with end users, both instructors and students, by questionnaires, surveys, interviews, and/or focus groups to determine their needs, desires, willingness, and abilities. They can identify the desirable features of the system, and give some indication of the change management factors that need to be addressed. Be careful of scope creep. When asking people what they would like to see, they will tend to ask for everything. Distinguish between the things that are truly needed and the “nice-to-haves”.
Consult with people in other organizations like yours that have already gone through the process. Find out what they are using and how they like it. Read the literature and attend conferences.
Are you looking at an LMS to initiate e-learning? You may not actually need to do this. Online courses are just a collection of web pages that do not require an LMS to run them. The primary purpose of an LMS is to provide a working platform and administration for tracking the results. If you don’t need to track the results, or if instructors will do it manually, then you don’t need an LMS.
LMSs tend to constrain people to do things in certain ways. Some instructors and designers are frustrated by the constraints (both technical and learning) of using these systems and would prefer more dynamic learning support systems such as student weblogs and learning wikis, and even just email or newsgroups. You may prefer to give them more creative freedom. Wikis and blogs don’t require an LMS but they are hard to track. Instructors can track activity manually and assign grades but it limits the analysis you can do, for example to find out to what degree students participate, how students perform on individual questions, etc. Wikis and blogs can be altered easily, so are not ideal for formal assignments (other than perhaps a team assignment to build a wiki). Individual and team essay assignments are probably best submitted to instructors via direct email messages and attachments. This would still not require an LMS to track as the instructors would be marking and tracking such assignments manually.
Tip
Obtaining an LMS will change the way you work. Choosing one is not a technology decision. It is about leadership and change.
Steps in The Needs Assessment Process
Conduct primary research
Survey, interview and conduct focus groups among your expert committee, instructors, and students to determine the primary needs of your system. Don’t ask general questions like, “What do you need?” or you will get a wish list that may not be practical. See Appendix F, Needs Assessment Questions, for suggestions about questions to ask.
Conduct secondary research
1. What LMSs are other organizations using?
• Is the organization similar to your own, or have similar needs?
• What made them choose that particular solution?
• How satisfied are they with it?
• What features do they like and not like?
• What feedback have they had from students and instructors?
2. What does the literature say?
If you are looking for an education LMS, a good source of information is the website of the Western Cooperative for Educational Telecommunications: Online Educational Delivery Applications: A Web Tool for Comparative Analysis ( http://www.edutools.info/). This website contains reviews and comparative data on a large number of education learning management systems.
You may also wish to attend conferences where LMS are featured and profiled.
Good corporate conferences are:
• Learning 2007 (formerly TechLearn) (www.learning2007.com/)
• Training (http://www.trainingconference.com/)
• American Society for Training and Development (ASTD) (astd2007.astd.org/)
• International Society for Performance Improvement (ISPI) (www.ispi.org/ac2008/)
• Association for Educational Communications and Technology (AECT) (http://www.aect.org/events/)
• ED-MEDIA (Association for the Advancement of Computing in Education—AACE) (www.aace.org/conf/)
• Association for Media and Technology in Canada (AMTEC)/Canadian Association for Distance Education (CADE) (www.cade-aced.ca/conferences/2007/)
• Canadian Association for University Continuing Education (CAUCE) (www.cauce2007.ca)
You can expedite the process by attending virtual trade shows and online demonstrations. Check out the possibilities at www.virtualtechfair.com/ and vendors’ websites.
Tip
For reviews of education LMS software, check out www.edutools.info.
If you are looking for a corporate LMS, you can check out the reports by Brandon Hall at www.brandon-hall.com, Bersin & Associates at www.bersin.com/, or by using the comparison tool at http://learning-management.technologyevaluation.com/.
Other good sources of information include the eLearning Guild (http://www.elearningguild.com/) and Chief Learning Officer magazine (www.clomedia.com/).
Once you have determined your requirements and have documented them carefully, prioritize them to determine the critical needs.
Tip
Be careful of scope creep. When asking people what they would like to see, they will tend to ask for everything. Distinguish between the things that are truly needed and the “nice-to-haves”.
System selection
Now you can begin to research vendors and/or open-source solutions. Looking at different products can open up new possibilities, but, again, be careful of scope creep, and of being sold something just because it is the latest hot item.
Use your documented requirements and priorities to identify a manageable list of solutions (perhaps 10) from the more than 100 vendors. An evolving, fairly complete list of such vendors can be found at www.trimeritus.com/vendors.pdf.
Request for proposal (RFP)
Requests for proposals (RFP) follow fairly standard industry forms. At www.geolearning.com/rfp there is a template specifically for LMS selection but be careful about templates that are just lists of features. Include only those features that you really require. Use your documented requirements and develop use case scenarios and scripts to paint a clear picture of your LMS vision so that a vendor can provide a proposal focused on your specific environment/culture. Include reporting functions in your scenarios. Poor reporting capability is a great source of customer dissatisfaction. Be sure to ask questions about post implementation customer service because it is also a key factor in customer satisfaction.
Ask vendors for references especially those for organizations similar to your own. Ask the vendors from your list to submit proposals. When you contact vendors, the more clearly you have identified your requirements, the more attention you will get from suppliers—they will see you as a qualified prospect. A full formal RFP process may not be practical in all situations unless it is required by your organization.
Review the proposals
Develop a rubric for scoring the proposals you receive from vendors. Make a short list of the top three to ten vendors to be invited to provide demonstrations.
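One possible way to consolidate rubric scores—a hypothetical sketch, not a prescribed method—is to weight each requirement by its priority, have each reviewer score every proposal against each requirement, and compare the weighted totals. The TypeScript illustration below uses made-up criteria, weights, and vendor scores purely as an example; substitute your own documented requirements.

```typescript
// Hypothetical weighted-rubric calculation for comparing LMS proposals.
interface Criterion { name: string; weight: number } // weight reflects priority

const criteria: Criterion[] = [
  { name: "Registration integration", weight: 5 },
  { name: "Communication tools",      weight: 4 },
  { name: "Reporting",                weight: 4 },
  { name: "Total cost of ownership",  weight: 3 },
];

// Each reviewer scores each criterion from 1 (poor) to 5 (excellent).
type Scores = Record<string, number>;

function weightedTotal(scores: Scores): number {
  return criteria.reduce((sum, c) => sum + c.weight * (scores[c.name] ?? 0), 0);
}

const vendorA: Scores = { "Registration integration": 4, "Communication tools": 3, "Reporting": 5, "Total cost of ownership": 2 };
const vendorB: Scores = { "Registration integration": 3, "Communication tools": 5, "Reporting": 3, "Total cost of ownership": 4 };

console.log("Vendor A:", weightedTotal(vendorA)); // 20 + 12 + 20 + 6 = 58
console.log("Vendor B:", weightedTotal(vendorB)); // 15 + 20 + 12 + 12 = 59
```

Totals like these should inform, not replace, the committee's judgment; a proposal that scores well overall may still fail on a single must-have requirement.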
Schedule meetings and demonstrations
Ask your short list of vendors or open-source community representatives (who may be members of your own organization) to demonstrate their products either at your location or online. Ask them to demo directly to the use case scenarios and demonstration scripts you developed in the RFP. Invite students, instructors, and IT people to the demos, as well as members of your core committee.
Most vendors will have pre-packaged online demonstrations of their products, but remember that these are mostly designed to show off the good features of the product that may not be relevant in your situation.
Use your rubric to have each participant evaluate the solutions. At the meetings, discuss specific details about how the vendor provides service, maintenance, etc. Try to arrange for a free, in-house trial. If possible, run a small pilot program with a small sample before rolling a solution out to the entire organization.
Note that the needs assessment and selection strategies are also part of your change management strategy. The more input people have in the decision, the more likely they will adopt it.
Make the selection
Meet with your review team to consolidate the rubrics and make a selection. The bottom line is selecting the LMS that meets your needs.
“The average company doesn’t get excited about buying an LMS; it gets excited about managing learning. It doesn’t get excited about buying a new e-learning course; it gets excited about changing an employee’s performance.” (Elliott Masie as quoted by Ellis, 2004) | textbooks/socialsci/Education_and_Professional_Development/Book%3A_Education_for_a_Digital_World_-_Advice_Guidelines_and_Effective_Practice_from_Around_Globe_(Hirtz)/07%3A_Untitled_Chapter_7/7.7%3A_Needs_Assessment.txt |