Learning Objectives After you have read this section, you should be able to answer the following questions: 1. What is the life-cycle model of consumption? 2. What are the effects of changes in a pay-as-you-go system? Section 28.2 "Individual and Government Perspectives on Social Security" examined an explicit example of what Social Security implies for households and for the government. We can take away the following insights from this example: • Households decide on consumption and saving taking into account their lifetime income. • Lifetime income includes both taxes paid during working years and benefits received during retirement. • From the government’s view, taxes received and benefits paid need to balance, at least over long periods of time. • In the example, the Social Security program was irrelevant: individuals had the same lifetime income and thus consumption opportunities regardless of the Social Security taxes paid and benefits received. We now go beyond our numerical example and give a more general analysis of how an individual’s lifetime consumption choices are influenced by Social Security. Household Budget Constraints We first consider the budget constraints faced by an individual or household (remember that we are using the two terms interchangeably). There are two household budget constraints. The first applies in any given period: ultimately, you must either spend the income you receive or save it; there are no other choices. That is, $disposable\ income = consumption + household\ savings.$ Households also face a lifetime budget constraint. They can save in some periods of their life and borrow/dissave in other periods, but over the course of any household’s lifetime, income and spending must balance. The simplest case is when real interest rates equal zero, which means that it is legitimate simply to add together income and consumption in different years. In this case the lifetime budget constraint says that $total\ lifetime\ consumption = total\ lifetime\ income.$ If real interest rates are not zero, then the budget constraint must be expressed in terms of discounted present values. The household’s lifetime budget constraint is then $discounted\ present\ value\ of\ lifetime\ consumption = discounted\ present\ value\ of\ lifetime\ income.$ If the household begins its life with some assets (say a bequest), we count this as part of income. If the household leaves a bequest, we count this as part of consumption. As in our earlier numerical example, we can think about the lifetime budget constraint in terms of the household’s assets. Over the course of a lifetime, the household can save and build up its assets or dissave and run down its assets. It can even have negative assets because of borrowing. But the lifetime budget constraint says that the household’s consumption and saving must result in the household having zero assets at the end of its life. Toolkit: Section 31.4 "Choices over Time" and Section 31.5 "Discounted Present Value" You can review both the household’s intertemporal budget constraint and the concept of discounted present value in the toolkit. To see how this budget constraint works, consider an individual who knows with certainty the exact number of years for which she will work (her working years) and the exact number of years for which she will be retired (her retirement years). While working, she receives her annual disposable income—the same amount each year.
During retirement, she receives a Social Security payment that also does not change from year to year. As before, suppose that the real interest rate is zero. Her budget constraint over her lifetime states that $total\ lifetime\ consumption = total\ lifetime\ income = working\ years \times disposable\ income + retirement\ years \times Social\ Security\ payment.$ Our numerical example earlier was a special case of this model, in which $disposable\ income = 34,000$, $working\ years = 45$, $retirement\ years = 15$, and $Social\ Security\ payment = 18,000$. Plugging these values into the equation, we reproduce our earlier calculation of lifetime income (and hence also lifetime consumption) as $(45 \times 34,000) + (15 \times 18,000) = 1,800,000.$ The Life-Cycle Model of Consumption Economists often use a consumption function to describe an individual’s consumption/saving decision: $consumption = autonomous\ consumption + marginal\ propensity\ to\ consume \times disposable\ income.$ The marginal propensity to consume measures the effect of current income on current consumption, while autonomous consumption captures everything else, including past or future income. The life-cycle model explains how households make consumption and saving choices over their lifetime. The model has two key ingredients: (1) the household budget constraint, which equates the discounted present value of lifetime consumption to the discounted present value of lifetime income, and (2) the desire of a household to smooth consumption over its lifetime. Toolkit: Section 31.32 "Consumption and Saving" and Section 31.34 "The Life-Cycle Model of Consumption" You can review the consumption function, consumption smoothing, and the life-cycle model in the toolkit. Let us see how this model works. According to the life-cycle model of consumption, the individual first calculates her lifetime resources as $working\ years \times disposable\ income + retirement\ years \times Social\ Security\ payment.$ (We continue to suppose that the real interest rate is zero, so it is legitimate simply to add her income in different years of her life.) She then decides how much she wants to consume in every period. Consumption smoothing starts from the observation that people do not wish their consumption to vary a lot from month to month or from year to year. Instead, households use saving and borrowing to smooth out fluctuations in their income. They save when their income is high and dissave when their income is low. Perfect consumption smoothing means that the household consumes exactly the same amount in each period of time (month or year). Going back to the consumption function, perfect consumption smoothing means that the marginal propensity to consume is (approximately) zero. With perfect consumption smoothing, changes in current income will lead to changes in consumption only if those changes in income lead the household to revise its estimate of its lifetime resources. If a household wants to have perfectly smooth consumption, we can easily determine this level of consumption by dividing lifetime resources by the number of years of life. Returning to our equations, this means that $consumption = \frac{lifetime\ resources}{working\ years + retirement\ years}.$ This is the equation we used earlier to find Carlo’s consumption level. We took his lifetime income of $1,800,000, noted that lifetime income equals lifetime consumption, and divided by Carlo’s 60 remaining years of life, so that consumption each year was $30,000. That is really all there is to the life-cycle model of consumption.
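To make the arithmetic concrete, here is a minimal sketch of the calculation in Python. It is only an illustration of the formula above (the function and variable names are ours, not part of the model), using Carlo's numbers from the text.

```python
# Perfect consumption smoothing with a zero real interest rate:
# lifetime income is spread evenly over all remaining years of life.

def smoothed_consumption(working_years, disposable_income,
                         retirement_years, social_security_payment):
    lifetime_income = (working_years * disposable_income
                       + retirement_years * social_security_payment)
    return lifetime_income / (working_years + retirement_years)

# Carlo's numbers from the example in the text.
print(smoothed_consumption(working_years=45, disposable_income=34_000,
                           retirement_years=15, social_security_payment=18_000))
# 30000.0: lifetime income of 1,800,000 spread over 60 years of life
```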
Provided that income during working years is larger than income in retirement years, individuals save during working years and dissave during retirement. This is a stylized version of the life-cycle model, but the underlying idea is much more general. For example, we could extend this story and make it more realistic in the following ways: • Households might have different income in different years. Most people’s incomes are not constant, as in our story, but increase over their lifetimes. • Households might not want to keep their consumption exactly smooth. For example, if the household expects to have children, then it would probably anticipate higher consumption—paying for their food, clothing, and education—and it would expect to have lower consumption after the children have left home. • The household might start with some assets and might also plan to leave a bequest. • The real interest rate might not be zero. • The household might contain more than one wage earner. Working through the mathematics of these cases is more complicated—sometimes a lot more complicated—than the calculations we just did, and so is a topic for advanced courses in macroeconomics. In the end, though, the same key conclusions continue to hold even in the more sophisticated version of the life-cycle model: • A household will examine its entire expected lifetime income when deciding how much to consume and save. • Changes in expected future income will affect current consumption and saving. The Government Budget Constraint The household’s budget constraints for different years are linked by the household’s choices about saving and borrowing. Over the household’s entire lifetime, these individual budget constraints can be combined to give us the household’s lifetime budget constraint. Similar accounting identities apply to the federal government (and for that matter, to state governments and local governments as well). In any given year, money flows into the government sector, primarily from the taxes that it imposes on individuals and corporations. We call these government revenues. The government also spends money. Some of this spending goes to the purchase of goods and services, such as the building of roads and schools or payments to teachers and soldiers. Whenever the government actually buys something with the money it spends, we call these government purchases (or government expenditures). Some of the money that the government pays out is not used to buy things, however. It takes the form of transfers, such as welfare payments and Social Security payments. Transfers mean that dollars go from the hands of the government to the hands of an individual. They are like negative taxes. Social Security payments are perhaps the most important example of a government transfer. Any difference between government revenues, on the one hand, and government transfers and expenditures, on the other, represents saving by the government. Government saving is usually referred to as a government surplus: $government\ surplus = government\ revenues − government\ transfers − government\ expenditures.$ If, as is often the case, the government is borrowing rather than saving, then we instead talk about the government deficit, which is the negative of the government surplus: $government\ deficit = −government\ surplus = government\ transfers + government\ expenditures − government\ revenues.$ Toolkit: Section 31.33 "The Government Budget Constraint" and Section 31.27 "The Circular Flow of Income" You can review the government budget constraint in the toolkit.
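As a quick illustration of these accounting definitions, the sketch below restates the surplus identity in code. The numbers are hypothetical and chosen only to show that a deficit is simply a negative surplus.

```python
# Government budget accounting:
#     surplus = revenues - transfers - expenditures
# and the deficit is the negative of the surplus.

def government_surplus(revenues, transfers, expenditures):
    return revenues - transfers - expenditures

# Hypothetical numbers (say, in billions of dollars).
surplus = government_surplus(revenues=2_000, transfers=800, expenditures=1_400)
print(surplus)   # -200: the government is borrowing, not saving
print(-surplus)  # 200: equivalently, a deficit of 200
```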
Applying the Tools to Social Security The life-cycle model and government budget constraint can be directly applied to our analysis of Social Security. Let us go back to Carlo again. Carlo obtains pretax income and must pay Social Security taxes to the government. Carlo’s disposable income in any given year is given by the equation $disposable\ income = income − Social\ Security\ tax.$ Imagine that he receives no retirement income other than Social Security. Carlo’s lifetime resources are given by the following equation: $lifetime\ resources = working\ years \times income − working\ years \times Social\ Security\ tax + retirement\ years \times Social\ Security\ payment.$ Now let us examine Social Security from the perspective of the government. To keep things simple, we suppose the only role of the government in this economy is to levy Social Security taxes and make Social Security payments. In other words, the government budget constraint is simply the Social Security budget constraint. The government collects the tax from each worker and pays out to each retiree. For the system to be in balance, the government surplus must be zero. In other words, government revenues must equal government transfers: $number\ of\ workers \times Social\ Security\ tax = number\ of\ retirees \times Social\ Security\ payment.$ Now, here is the critical step. We suppose, as before, that all workers in the economy are like Carlo, and one worker is born every year. It follows that $number\ of\ workers = working\ years$ and $number\ of\ retirees = retirement\ years.$ But from the government budget constraint, this means that $working\ years \times Social\ Security\ tax = retirement\ years \times Social\ Security\ payment,$ so the second and third terms cancel in the expression for Carlo’s lifetime resources. Carlo’s lifetime resources are just equal to the amount of income he earns over his lifetime before the deduction of Social Security taxes: $lifetime\ resources = working\ years \times income.$ No matter what level of Social Security payment the government chooses to give Carlo, it ends up taking an equivalent amount away from Carlo when he is working. In this pay-as-you-go system, the government gives with one hand but takes away with the other, and the net effect is a complete wash. We came to this conclusion simply by examining Carlo’s lifetime budget constraint and the condition for Social Security balance. We did not even have to determine Carlo’s consumption and saving during each year. And—to reiterate—the assumption that there is just one person of each age makes no difference. If there were 4 million people of each age, then we would multiply both sides of the government budget constraint by 4 million. We would then cancel the 4 million on each side and get exactly the same result. We have gained a remarkable insight into the Social Security system. The lifetime income of the individual is independent of the Social Security system. Whatever the government does to tax rates and benefit levels, provided that it balances its budget, there will be no effect on Carlo’s lifetime income. Since consumption decisions are made on the basis of lifetime income, it also follows that the level of consumption is independent of variations in the Social Security system. Any changes in the Social Security system result in changes in the level of saving by working households but nothing else.
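This irrelevance result is easy to verify numerically. The sketch below is our own illustration, not part of the original example: it assumes a pretax income of $40,000 (so that a $6,000 tax reproduces the $34,000 disposable income and $18,000 payment used earlier), imposes the balance condition to compute the payment, and shows that lifetime resources do not move as the tax varies.

```python
# Pay-as-you-go neutrality: with the budget balanced and one person per age,
# lifetime resources equal pretax income from working, whatever the tax.

def lifetime_resources(income, tax, working_years, retirement_years):
    # Balance condition: workers * tax = retirees * payment, with
    # workers = working_years and retirees = retirement_years.
    payment = working_years * tax / retirement_years
    return (working_years * income
            - working_years * tax
            + retirement_years * payment)

for tax in [0, 2_000, 6_000, 10_000]:
    print(tax, lifetime_resources(income=40_000, tax=tax,
                                  working_years=45, retirement_years=15))
# Lifetime resources are 1800000.0 at every tax level: the tax and
# benefit terms cancel exactly.
```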
As we saw in our original numerical example, individuals adjust their saving in a manner that cancels out the effects of the changes in the Social Security system. The model of consumption and saving we have specified leads to a very precise conclusion: the household neither gains nor loses from the existence of the Social Security system. The argument is direct. If the well-being of the household depends on the consumption level over its entire lifetime, then Social Security is irrelevant since lifetime income (and thus consumption) is independent of the Social Security system. Key Takeaways 1. The life-cycle model of consumption states that the household chooses its consumption during each period of life subject to a budget constraint that the discounted present value of lifetime income must equal the discounted present value of lifetime consumption. 2. If the household chooses to perfectly smooth consumption, then consumption during each period of life is equal to the discounted present value of income divided by the number of years in a lifetime. 3. In general, a household’s lifetime income and consumption are independent of the taxes and benefits of a pay-as-you-go Social Security system. Changes to the system lead to adjustments in saving rather than consumption. Exercises 1. What are the two types of budget constraints that a household faces? 2. If working years increased by five and retirement years decreased by five, what would happen to lifetime income?
Learning Objectives After you have read this section, you should be able to answer the following questions: 1. What is the current state of the Social Security system in the United States? 2. What are some of the policy choices being considered? The Social Security system in the United States went into deficit in 2010: tax receipts were insufficient to cover expenditures. This was largely because the recession led to reduced receipts from the Social Security tax. However, the Social Security Board of Trustees warns that “[a]fter 2014, cash deficits are expected to grow rapidly as the number of beneficiaries continues to grow at a substantially faster rate than the number of covered workers.” “A Summary of the 2011 Annual Reports: Social Security and Medicare Boards of Trustees,” Social Security Administration, accessed June 24, 2011, http://www.ssa.gov/OACT/TRSUM/index.html. It is hard to reconcile these statements with the model that we developed in Section 28.2 "Individual and Government Perspectives on Social Security" and Section 28.3 "A Model of Consumption". If Social Security is an irrelevance, why is there so much debate about it, and why is there so much concern about its solvency? The answer is that our model was too simple. The framework we have developed so far is a great starting point because it tells us about the basic workings of Social Security in a setting that is easy to understand. Don’t forget, though, that our discussion was built around a pay-as-you-go system in a world where the ratio of retirees to workers was not changing. Now we ask what happens if we complicate the demography of our model to make it more realistic. The Baby Boom During the period directly following World War II, the birthrate in many countries increased significantly and remained high for the next couple of decades. People born at this time came to be known—for obvious reasons—as the baby boom generation. The baby boomers in the United States and the United Kingdom are clearly visible in Figure 28.4.1 "The Baby Boom in the United States and the United Kingdom", which shows the age distribution of the population of those countries. If babies were being born at the same rate, you would expect to see fewer and fewer people in each successive age group. Instead, there is a bulge in the age distribution around ages 35–55. (Interestingly, there is also a second baby boomlet visible, as the baby boomers themselves started having children.) Figure 28.4.1 caption: the United States and the United Kingdom had a “baby boom”: an unusually large number of children were born in the decades immediately following World War II. In 2010, this generation was in late middle age. Source: US Census Bureau, International Data Base, www.census.gov/population/international/data/idb/informationGateway.php. Figure 28.4.2 "The US Baby Boom over Time" presents the equivalent US data for 1980, 1990, and 2000, showing the baby boom working its way through the age distribution. Figure 28.4.2 caption: these pictures show the age distribution of the population as the baby boom generation gets older. The “bulge” in the age distribution shifts rightward. In 1980, the baby boomers were young adults. By 2000, even the youngest baby boomers were in middle age.
Source: US Census Bureau, International Data Base, www.census.gov/population/international/data/idb/informationGateway.php. As the baby boom generation makes its way to old age, it is inevitable that the dependency ratio—the ratio of retirees to workers—will increase dramatically. In addition, continuing advances in medical technology mean that people are living longer than they used to, and this too is likely to cause the dependency ratio to increase. The 2004 Economic Report of the President predicted that the dependency ratio in the United States would increase from 0.30 in 2003 to 0.55 in 2080. Economic Report of the President (Washington, DC: GPO, 2004), accessed July 20, 2011, www.gpoaccess.gov/usbudget/fy05/pdf/2004_erp.pdf. Roughly speaking, in other words, there are currently about three workers for every retiree, but by 2080 there will be only about two workers per retiree. A Baby Boom in Our Model In our framework, we assumed that there was always one person alive at each age. This meant that the number of people working in any year was the same as the working life of an individual. Likewise, we were able to say that the number of people retired at a point in time was the same as the length of the retirement period. Here is a simple way to represent a baby boom: let us suppose that, in one year only, two people are born instead of one. When the extra person enters the workforce, the dependency ratio will decrease—there is still the same number of retirees, but there are more workers. If Social Security taxes are kept unchanged and the government continues to keep the system in balance every year, then the government can pay out higher benefits to retirees. For 45 years, retirees can enjoy the benefits of the larger workforce. Eventually, though, the baby boom generation reaches retirement age. At that point the extra individual stops contributing to the Social Security system and instead starts receiving benefits. What used to be a boon is now a problem. To keep the system in balance, the government must reduce Social Security benefits. Let us see how this works in terms of our framework. Begin with the situation before the baby boom. We saw earlier that the government budget constraint meant that Social Security revenues must be the same as Social Security payments: $number\ of\ workers \times Social\ Security\ tax = number\ of\ retirees \times Social\ Security\ payment.$ If we divide both sides of this equation by the number of retirees, we find that $Social\ Security\ payment = \frac{number\ of\ workers}{number\ of\ retirees} \times Social\ Security\ tax.$ The first expression on the right-hand side (number of workers/number of retirees) is the inverse of the dependency ratio. • When the baby boom generation is working. Once the additional person starts working, there is the same number of retirees, but there is now one extra worker. Social Security revenues therefore increase. If the government continues to keep the system in balance each year, it follows that the annual payment to each retiree increases. The dependency ratio has gone down, so payments are larger. The government can make a larger payment to each retired person while still keeping the system in balance. Retirees during this period are lucky: they get a higher payout because there are relatively more workers. • When the baby boom generation retires. Eventually, the baby boom generation will retire, and there will be one extra retiree each year until the baby boom generation dies. Meanwhile, we are back to having fewer workers. So when the baby boom generation retires, the picture is reversed, as the short simulation below illustrates.
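A minimal simulation of this story, again our own construction rather than the text's: it applies the balance condition above in each of the three phases, assuming the $6,000 tax that reproduces the $18,000 baseline payment from the earlier example.

```python
# Balanced-budget payment per retiree:
#     payment = (number of workers / number of retirees) * tax

def payment_per_retiree(workers, retirees, tax):
    return workers / retirees * tax

TAX = 6_000
phases = {
    "before the boom": (45, 15),  # one person per age
    "boomer working":  (46, 15),  # one extra worker, same retirees
    "boomer retired":  (45, 16),  # workers revert, one extra retiree
}
for name, (workers, retirees) in phases.items():
    print(f"{name}: {payment_per_retiree(workers, retirees, TAX):,.0f}")
# before the boom: 18,000
# boomer working: 18,400
# boomer retired: 16,875
```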
When the baby boom generation retires, there are more retirees than in our baseline case, while revenues are back to where they were before the baby boomers started working. Because there are now more retirees relative to workers—that is, the dependency ratio has increased—retirees see a cut in Social Security benefits. If the Economic Report of the President figures are to be believed, the coming increase in the dependency ratio means that Social Security payments would have to decrease by about 45 percent if the Social Security budget were to be balanced every year. (The balanced-budget payment is proportional to the inverse of the dependency ratio, so a rise in that ratio from 0.30 to 0.55 scales the payment by 0.30/0.55 ≈ 0.55, that is, a cut of roughly 45 percent.) The reality is that this simply will not happen. First, the Social Security system does not simply calculate payouts on the basis of current Social Security receipts. In fact, there is a complicated formula whereby individuals receive a payout based on their average earnings over the 35 years during which they earned the most. Kaye A. Thomas, “Understanding the Social Security Benefit Calculation,” Fairmark, accessed July 20, 2011, www.fairmark.com/retirement/socsec/pia.htm. Of course, that formula could be changed, but it is unlikely that policymakers will completely abandon the principle that payments are based on past earnings. Second, retired persons already make up a formidable political lobby in the United States. As they become more numerous relative to the rest of the population, their political influence is likely to become even greater. Unless the political landscape changes massively, we can expect that the baby boom generation will have the political power to prevent a massive reduction in their Social Security payments. Social Security Imbalances To completely understand both the current situation and the future evolution of Social Security, we must make one last change in our analysis. Although the Social Security system was roughly in balance for the first half-century of its existence, that is no longer the case. Because payments are calculated on the basis of past earnings, it is possible for revenues to exceed outlays or to fall short of them. This means that the system is not operating on a strict pay-as-you-go basis. When the government originally established Social Security, it set up something called the Social Security Trust Fund—think of it as being like a big bank account. Current workers pay contributions into this account, and the account also makes payments to retired workers. Under a strict pay-as-you-go system, the balance in the trust fund would always be zero. In fact, in some years payments to retirees are smaller than tax receipts, in which case the extra goes into the Trust Fund. In other years, payments exceed receipts, and the difference is paid for out of the Trust Fund. To be more precise, $tax\ revenues = number\ of\ workers \times Social\ Security\ tax = number\ of\ workers \times tax\ rate \times income$ and $Social\ Security\ payments = number\ of\ retirees \times Social\ Security\ payment.$ If tax revenues exceed payments, then the system is running a surplus: it is taking in more in income each period than it is paying out to retirees. Conversely, if payments exceed revenues, the system is in deficit. In other words, $Social\ Security\ surplus = number\ of\ workers \times tax\ rate \times income − number\ of\ retirees \times Social\ Security\ payment.$ For the first half-century of Social Security, there was an approximate match between payments and receipts, although receipts were usually slightly larger than payments.
In other words, rather than being exactly pay-as-you-go, the system typically ran a small surplus each year. “Trust Fund Data,” Social Security Administration, January 31, 2011, accessed July 20, 2011, http://www.ssa.gov/OACT/STATS/table4a1.html. Over the first half-century of the program, the Trust Fund accumulated slightly less than $40 billion in assets. This might sound like a big number, but it amounts to only a few hundred dollars per worker. The Social Security Trust Fund contains the accumulated surpluses of past years. It gets bigger or smaller over time depending on whether the surplus is positive or negative. That is, $Trust\ Fund\ balance\ this\ year = Trust\ Fund\ balance\ last\ year + Social\ Security\ surplus\ this\ year.$ (Strictly, that equation is true provided that we continue to suppose that the real interest rate is zero.) If tax revenues exceed payments, then there is a surplus, and the Trust Fund increases. If tax revenues are less than payments, then there is a deficit (or, to put it another way, the surplus is negative), so the Trust Fund decreases. The small surpluses that have existed since the start of the system mean that the Trust Fund has been growing over time. Unfortunately, it has not been growing fast enough, and in 2010, the fund switched from running a surplus to running a deficit. There are still substantial funds in the system—three-quarters of a century’s worth of accumulated surpluses. But the dependency ratio is so high that those accumulated funds will disappear within a few decades. Resolving the Problem: Some Proposals We can use the life-cycle model of consumption/saving along with the government budget constraint to better understand proposals to deal with Social Security imbalances. We saw that the surplus is given by the following equation: $Social\ Security\ surplus = number\ of\ workers \times tax\ rate \times income − number\ of\ retirees \times Social\ Security\ payment.$ The state of the Social Security system in any year depends on five factors: 1. The level of income 2. The Social Security tax rate on income 3. The size of the benefits 4. The number of workers 5. The number of retirees Other things being equal, increases in income (economic growth) help push the system into surplus. The effect of economic growth is lessened because Social Security payments are linked to past earnings. Higher growth therefore implies higher payouts as well as higher revenues. Still, on net, higher growth would help Social Security finances. A larger working-age population also tends to push the system into surplus, as does a higher Social Security tax. On the other hand, if benefits are higher or there are more retirees, that tends to push the system toward deficit. Increasing Taxes or Decreasing Benefits Many of the proposals for reforming Social Security can be understood simply by examining the equation for the surplus, as the sketch below illustrates. Remember that the number of workers × the tax rate × income is the tax revenue collected from workers, whereas the number of retirees × the Social Security payment is the total transfer payments to retirees. If the system is running a deficit, then to restore balance, either revenues must increase or payouts must be reduced. The tax rate and the amount of the payment are directly under the control of the government. In addition, there is a ceiling on income that is subject to the Social Security tax ($106,800 in 2011). At any time, Congress can pass laws changing these variables.
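Here is a minimal sketch of those two levers. All numbers are hypothetical; the point is only that, in the surplus equation, either a higher tax rate or a lower payment can close a given deficit.

```python
# Social Security surplus:
#     surplus = workers * tax_rate * income - retirees * payment

def surplus(workers, tax_rate, income, retirees, payment):
    return workers * tax_rate * income - retirees * payment

# A hypothetical system in deficit; round() just cleans up floating-point noise.
print(round(surplus(workers=150, tax_rate=0.10, income=40_000,
                    retirees=50, payment=18_000)))   # -300000

# Restore balance by raising the tax rate from 10 percent to 15 percent...
print(round(surplus(workers=150, tax_rate=0.15, income=40_000,
                    retirees=50, payment=18_000)))   # 0

# ...or by cutting the payment from 18,000 to 12,000 instead.
print(round(surplus(workers=150, tax_rate=0.10, income=40_000,
                    retirees=50, payment=12_000)))   # 0
```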
Congress could increase the tax rate, increase the income ceiling, or decrease the payment. If we simply think of the problem as a mathematical equation, then the solution is easy: either increase tax revenues or decrease benefits. Politics, though, is not mathematics. Politically, such changes are very difficult. Indeed, politicians often refer to increases in taxes and/or reductions in benefits as a political “third rail” (a metaphor that derives from the high-voltage electrified rail that provides power to subway trains—in other words, something not to be touched). Another way to increase revenue is through increases in GDP. If the economy is expanding and output is increasing, then the government will collect more tax revenues for Social Security. There are no simple policies that guarantee faster growth, however, so we cannot plan on solving the problem this way. Delaying Retirement We have discussed the tax rate, the payment, and the level of income. This leaves the number of workers and the number of retirees. We can change these variables as well. Specifically, we can make the number of workers bigger and the number of retirees smaller by changing the retirement age. This option is frequently discussed. After all, one of the causes of the Social Security imbalance is the fact that people are living longer. So, some ask, if people live longer, should they work longer as well? Moving to a Fully Funded Social Security System The financing problems of Social Security stem from a combination of two things: demographic change and the pay-as-you-go approach to financing. Suppose that, instead of paying current retirees by taxing current workers, the government were simply to tax workers, invest those funds on their behalf, and then pay those workers back when they retire. Economists call this a fully funded Social Security system. In this setup, demographic changes such as the baby boom would not be such a big problem. When the baby boom generation was working, the government would collect a large amount of funds so that it would later have the resources to pay the baby boomers their benefits. As an example, Singapore has a system known as the Central Provident Fund, which is in effect a fully funded Social Security system. Singaporeans make payments into this fund and are guaranteed a minimum return on their payments. In fact, Singapore sets up three separate accounts for each individual: one specifically for retirement, one that can be used to pay for medical expenses, and one that can be used for specific investments such as a home or education. Some commentators have advocated that the United States should shift to a fully funded Social Security system, and many economists would agree with this proposal. “Economic Letter,” Federal Reserve Bank of San Francisco, March 13, 1998, accessed July 20, 2011, http://www.frbsf.org/econrsrch/wklyltr/wklyltr98/el98-08.html. This letter discussed the transition to a fully funded Social Security system. Were it to adopt such a system, the US government would not in the future have the kinds of problems that we currently face. Indeed, the Social Security reforms of the 1980s can be considered a step away from pay-as-you-go and toward a fully funded system. At that time, the government stopped keeping the system in (approximate) balance and instead started to build up the Social Security Trust Fund. But this is not a way to solve the current crisis in the United States.
It is already too late to make the baby boomers pay fully for their own retirement. Think about what happened when Social Security was first established. At that time, old workers received benefits that were much greater than their contributions to the system. That generation received a windfall gain from the establishment of the pay-as-you-go system. That money is gone, and the government cannot get it back. Suppose the United States tried to switch overnight from a pay-as-you-go system to a fully funded system. Then current workers would be forced to pay for Social Security twice: once to pay for those who are already retired and then a second time to pay for their own retirement benefits. Obviously, this is politically infeasible, as well as unfair. Any realistic transition to a fully funded system would therefore have to be phased in over a long period of time. Privatization Recent discussion of Social Security has paid a lot of attention to privatization. Privatization is related to the idea of moving to a fully funded system but with the additional feature that Social Security evolves (at least in part) toward a system of private accounts in which individuals have more control over their Social Security savings. In particular, individuals would have more choice about the assets in which their Social Security payments would be invested. Advocates of this view argue that individuals ought to be responsible for providing for themselves, even in old age, and suggest that private accounts would earn a higher rate of return. Opponents of privatization argue, as did the creators of Social Security in the 1930s, that a privatized system would not provide the assistance that elderly people need. Some countries already have social security systems with privatized accounts. In 1981, Chile’s pay-as-you-go system was virtually bankrupt and was replaced with a mandatory savings scheme. Workers are required to establish an account with a private pension company; however, the government strictly regulates these companies. The system has suffered from compliance problems, with much of the workforce not actually contributing to a plan. In addition, it turns out that many workers have not earned pensions above the government minimum, so in the end it is not clear that the private accounts are really playing a very important role. Recent reforms have attempted to address these problems, but it remains unclear how successful Chile’s transition to privatization will be. As with the move to a fully funded Social Security system, a big issue with privatization is the transition period. If, for example, the government announced a plan today to privatize Social Security, it would have to deal with the fact that many retired people would no longer have Social Security income. Furthermore, many working people would have already paid into the program. Thus proposals to privatize Social Security must include a plan for dealing with existing retirees and those who have paid into the system through payroll taxes. Some recent discussion has suggested, implicitly or explicitly, that privatization would help solve the current Social Security imbalance. This is misleading. By cutting off payroll tax revenues, privatization makes the problem worse in the short run, not better. Although privatization is certainly a proposal that can be discussed on its own merits, it should be kept separate from the debate about how to balance existing Social Security claims with revenues. Key Takeaways 1.
Many studies predict that, if there are no policy changes, the Social Security system will be bankrupt by the middle of this century. A main cause of this problem is demographic change: fewer workers are supporting more retirees, and life expectancies have increased. 2. Some possible policy remedies include raising taxes on workers, reducing benefits, and increasing the retirement age. Exercises 1. What is the dependency ratio? Why might it change over time? 2. What is the Social Security Trust Fund?
Learning Objectives After you have read this section, you should be able to answer the following questions: 1. What are the benefits of having a Social Security system? 2. How does a Social Security system help someone deal with the uncertainties of life? 3. What are the effects of Social Security on national saving? We have seen how demographic changes in the economy, combined with the pay-as-you-go form of Social Security, are leading to funding problems within the US system. The United States is not alone; many other countries also have pay-as-you-go systems and are facing similar demographic challenges. We have also examined some ways of resolving these financing problems. Yet we have not addressed another more basic question: why have a Social Security system at all? After all, our analysis suggests that people may adjust their private saving behavior in a way that undoes the effects of Social Security. What advantages and disadvantages of Social Security have we so far missed? The Uncertainties of Life A century or two ago, if you were unlucky enough to fall into serious poverty, there was very little in the way of government help, even in the richest countries. You were likely to end up in the poorhouse (sometimes called a workhouse or an almshouse), where you obtained the bare minimum of shelter and food in exchange for grueling work. For those who were old and poor, the poorhouse was a place to die an ignominious death: Numerous as are the old men’s homes, old ladies’ homes, and homes for aged couples that are supported by private charity, they are yet, as every worker among the poor knows, too few to meet the demand. Our almshouses are also practically homes for the aged poor. Some almshouse inmates became paupers before they were aged, but many of them led independent and self-respecting lives, and even put by something for the future while physically able to earn wages. When wages ceased, savings, if any were made, were used up or else lost in unwise investments, and at the end almshouse relief and the pauper’s grave were preferred to exposure and starvation. Henry Seager, Social Insurance: A Program of Social Reform, Chapter V—“Provision of Old Age,” 1910, accessed August 9, 2011, http://www.ssa.gov/history/pdf/seager5.pdf. Social Security in the United States and other countries was set up largely to save old people from this fate. Carlo did not face any of the problems suggested by the quotation. In Carlo’s world there was no uncertainty: working and retirement income were known at the start of his working life, and his dates of retirement and death were also known with certainty. Carlo had no risk of using up all his savings before he died, or of losing his money in “unwise investments.” But Carlo’s world is not the world in which we live. In practice, individuals face enormous uncertainty both about their lifetime income and their consumption needs in retirement. The mere fact that we live in an uncertain world is not, in and of itself, a reason for the government to intervene. Private insurance markets might be available that allow individuals to purchase insurance to cover themselves against these kinds of risks. As an example, many people have disability insurance that they either purchase individually or obtain through their employer. Disability insurance means that if you are unlucky enough to suffer an accident or illness that prevents you from working, you will still receive income. 
It is also possible to purchase annuities (which are a sort of reverse life insurance): these are assets that pay out a certain amount each year while you are alive and allow you to insure yourself against the uncertain time of your death. Early discussions of Social Security highlighted the insurance role of the program. During the Great Depression, it became clear that insurance provided through markets was woefully incomplete. Thus the government created a variety of safety nets, financed by public funds. Social Security was one of these programs. An early pamphlet on Social Security summarizes this view: In general, the Social Security Act helps to assure some income to people who cannot earn and to steady the income of millions of wage earners during their working years and their old age. In one way and another taxation is spread over large groups of people to carry the cost of giving some security to those who are unfortunate or incapacitated at any one time. The act is a foundation on which we have begun to build security as States and as a people, against the risks which families cannot meet one by one. Mary Ross, “Why Social Security?” Bureau of Research and Statistics, 1937, accessed July 20, 2011, http://www.ssa.gov/history/whybook.html. Financial sophistication has increased markedly since the 1930s, but insurance markets are still far from perfect, so most people agree that the government should continue to provide the insurance that private markets fail to deliver. As President George W. Bush’s Council of Economic Advisers wrote, “To protect against this risk [of living an unusually long time], a portion of the retirement wealth that a worker has accumulated must be converted into an annuity, a contract that makes scheduled payments to the individual and his or her dependents for the remainder of their lifetimes.” Economic Report of the President (Washington, DC: GPO, 2004), accessed July 20, 2011, www.gpoaccess.gov/usbudget/fy05/pdf/2004_erp.pdf, p. 130. Once we acknowledge two things—(1) there is major uncertainty in life, and (2) insurance markets are lacking—we see a clear role for Social Security. The Complexity of Optimization There is another reason to think that our analysis of Carlo was much too simple. For Carlo, it was quite straightforward to determine his optimal level of consumption: all he had to do was to calculate his lifetime income, divide by the number of years of life that he had left, and he knew his optimal level of consumption. We said earlier that the basic idea of this life-cycle model continues to hold even in a more complicated world, where incomes are not constant, real interest rates are not zero, and consumption needs may vary over one’s lifetime. If you have a PhD in economics, you even learn to solve these problems in a world of uncertainty. Yet when one considers all the uncertainties of life, the problem certainly becomes very complex. Most individuals do not have PhDs in economics, and most people—even including those with economics PhDs—are not able to forecast their income and consumption needs very accurately. As a result, it seems likely that many people are not capable of making good decisions when they are thinking about consumption and saving over their entire lifetimes.
As stated in the 2004 Economic Report of the President, “Some individuals may not be capable of making the relevant calculations themselves and may not be able to enlist the service of a financial professional to advise them.” Economic Report of the President (Washington, DC: GPO, 2004), accessed July 20, 2011, www.gpoaccess.gov/usbudget/fy05/pdf/2004_erp.pdf, p. 130. Social Security can therefore be seen as a program that provides assistance to individuals unable to make optimal decisions on their own. That said, figuring payments under the current Social Security system is not easy either. To understand why, check out the information on benefits at the Social Security Administration website. “Your Retirement Benefit: How It Is Figured,” Social Security Administration, January 2011, accessed July 20, 2011, http://www.ssa.gov/pubs/10070.html. In general, economists believe both that people are aware of their own self-interest and that they are capable of making good decisions. Economists tend to be suspicious of arguments that suggest that the government can make better decisions for people than they can make for themselves. At the same time, research by economists and psychologists suggests that individuals are subject to biases and errors of judgment in their decision making. And if government paternalism makes sense anywhere, then it is likely to be in the context of lifetime saving decisions. After all, we are not talking about deciding which kind of coffee to buy or what price to set for a product this month. There is no room for learning from your mistakes, there are no second chances, and the consequences of error are enormous. In life, you only get old once. Distortions and Administrative Costs The key arguments in favor of Social Security are therefore that it provides some insurance that may not be available through private markets and protects people in the face of their inability to make sound decisions when they are planning for the distant future. But just because there are some shortcomings of private insurance and annuity markets, we should not presume that government can do things better. Against the benefits of the Social Security system must also be set some costs. First, any government program requires resources to operate. It costs about 1 percent of the benefits paid to administer the Social Security system. This is a direct cost of the program. Second—and more interestingly in terms of economics—whenever we have a government scheme that affects the taxes that people pay, there will be some distortionary effects on people’s willingness to work. Taxes lower the relative price of leisure compared to consumption goods, which may induce people to work less. Because Social Security imposes a tax on the incomes of working people, it distorts their choices. This is another cost of the Social Security system. The Effect on National Savings There is another effect of Social Security that is much more subtle. It reduces the savings of the nation as a whole. This means less capital and ultimately lower living standards. The intuition is as follows. When individuals save, they make funds available in the financial markets for firms to borrow. Thus saving leads to investment and a buildup of the economy’s capital stock. But as we saw, Social Security reduces the individual incentive to save. People don’t need to save if the government will provide for them in retirement.
Furthermore, the taxes being collected by the government are not being used to finance capital investment either; they are being paid out to retired workers. A pay-as-you-go system thus tends to reduce overall national saving. In a fully funded Social Security system, this is not an issue, and indeed this is one of the most compelling arguments in favor of a gradual shift to a fully funded system. Redistributive Effects of Social Security Social Security redistributes income in ways that may not be desirable. After all, those who benefit the most from Social Security are those who live the longest. Thus the scheme effectively redistributes money from the unlucky people who die young to the lucky ones who live for a very long time. This is a politically charged argument, for life expectancy is correlated with poverty, race, and sex. The life expectancy of poor African American men is significantly lower than the life expectancy of rich white American women, for example. Social Security may therefore redistribute resources from poor African American men to rich white American women. Key Takeaways 1. Some benefits of a Social Security system arise from the provision of insurance against the uncertainties of life and from help in making once-in-a-lifetime choices that are very complex. 2. Through the Social Security system, retirees receive benefits until they die. This is a form of insurance to deal with the uncertainties of life. 3. Since a pay-as-you-go Social Security system provides income during retirement years, it reduces the incentive for households to save. Exercises 1. How does Social Security help people who are unable to make choices on their own? 2. In what ways does Social Security redistribute resources across households?
Learning Objectives After you have read this section, you should be able to answer the following questions: 1. What aspects of the real world are highlighted, and which are missed in our simple framework? 2. Why do people disagree about Social Security reform? Our discussion of Social Security deliberately used a simple framework. Using that framework, we first showed that, in the simplest case, the Social Security system actually has no effect on the lifetime consumption of households. We also explained that, once we move away from this simple setup, there are some arguments both for and against a Social Security system. Complications The world is much more complicated than our simple framework, and we need to make sure that our analysis has not left out some important feature of the real world that would change our conclusion. In this section, we briefly discuss some complications to our model. Some of these complications provide some additional reasons to support a Social Security system; others identify additional costs of the system. However, these additional costs and benefits are much less important than those we have already identified. Positive Real Interest Rates We based all our discussion on the assumption that the real interest rate is zero. When the real interest rate is zero, it is legitimate simply to add together income and consumption in different years. If the real interest rate is positive, this is no longer correct: to add income and consumption in different years, we have to calculate discounted present values. Toolkit: Section 31.5 "Discounted Present Value" You can review discounted present value in the toolkit. Suppose you will receive some income next year. Its value this year is given by the following equation: $value\ this\ year = \frac{income\ next\ year}{1 + real\ interest\ rate}.$ Income earned in the future has a lower value from the perspective of today. The mathematics of the lifetime budget constraint is harder once we allow for nonzero interest rates, so we will not go through the formal calculations here. Without going through all the details of the analysis, what can we conclude? The main observation is a rather surprising one. Once we introduce a positive real interest rate, the Social Security system makes people worse off. Remember that we concluded earlier that the system had no effect on the total resources in the hands of the household. Households are taxed when they are young, though, and get that money returned to them when they are old. With positive real interest rates, they would strictly prefer to have the money when they were young. This result seems odd. A Social Security system allows the government, in effect, to borrow from the future, taxing younger generations to pay older generations. So how does it end up making people worse off? A key part of the answer is that, when the system was first introduced, the first generation of old people obtained benefits without having to make contributions. In the past, therefore, the introduction of the Social Security system did make one group of people better off. Economic Growth As we know, most economies grow over time. We neglected this in our analysis. Economic growth has two implications for Social Security: one unimportant and one more significant. First, economic growth is another reason why individuals’ incomes increase over the course of their lifetimes.
We have already observed that this does not change the fundamental idea of lifetime consumption smoothing: you still add lifetime income in both working and nonworking years and then divide by the number of years of life to find the optimal level of consumption. More interestingly, economic growth also means that Social Security payments increase over time. As the income of workers increases because of economic growth, so too does the amount of tax collected by the government. If the Social Security system is in balance at all times, Social Security payments must also increase. Thus when workers are retired, they continue to enjoy the benefits of economic growth. (In fact, if the growth rate of the economy happened to be the same as the real interest rate, the effect of positive economic growth would exactly offset the negative effect of real interest rates.) Normally, the effect of economic growth partially offsets the negative effect of positive real interest rates. Access to Credit Markets In our setting, individuals were able to save without difficulty at the market real interest rate (which was zero in our basic formulation). In the jargon of economics, individuals have good access to credit markets. Yet many individuals in reality have a limited ability to borrow and lend. In our example, individuals wanted to save and not to borrow because they obtained income early in life. If we made more realistic assumptions about the patterns of wages over the lifetime, we would typically find that people want to borrow at certain times of their lives. For example, people often borrow early in life to finance their education. There is ample evidence that many people do not actively participate in stock markets: they do not hold mutual funds or shares of individual companies’ stocks. Such individuals typically save by putting money in a bank, and the interest they earn is relatively low. In particular, it is lower than the interest that the Social Security Trust Fund can earn. Social Security in effect allows the government to do some saving on behalf of individuals at a better interest rate than they themselves can earn. Thus individuals who do not have good access to capital markets can be made better off by access to a Social Security system. This is in some ways the exact opposite of the argument for privatization. Supporters of privatization argue that if individuals can make their own investment decisions, they can earn a better interest rate than is provided by Social Security. They point out that, on average, the stock market provides a better rate of return than is provided by the system. This argument is correct: people may be able to do better. We need to recognize, though, that these higher returns would come at the cost of higher risk—which brings us right back to the original argument for why we need a Social Security system. Moral Hazard Finally, because Social Security serves as a form of insurance, it is subject to problems that are faced by all insurance systems. One of these goes by the name moral hazard, which simply means that the presence of insurance may cause people to change their behavior in bad ways. For example, if people have fire insurance, they may be less likely to keep a fire extinguisher in their homes. Similarly, because people know that the government will provide them with Social Security, they have less incentive to manage their own saving in a careful manner. Why Do People Disagree about Social Security? President George W.
Bush’s suggestions for reforming the Social Security system encountered a lot of opposition and rapidly became a partisan issue in US politics. Yet it seems as if Social Security is a program that we could analyze completely and carefully using the tools of economics. Why is a basic economic program such as Social Security so politicized? Some people, of course, will view any proposal from the perspective of politics. There are undoubtedly people who supported President George W. Bush’s proposals not on their merits or demerits but just because they support the Republican Party. Likewise, there are surely Democrats who opposed the president’s proposals simply because they came from a Republican. But leaving such extreme partisan viewpoints aside, there are still good reasons why reasonable people might have different opinions on Social Security: • People differ in their assessment of the importance of market failure in insurance markets. A key argument for Social Security is that private markets do not permit people to insure themselves against the risk of poverty in old age. Insurance and annuity markets do exist, so some people argue that this failure of markets is no longer very significant. At the same time, it requires financial sophistication to take advantage of these markets. Many people do not have the expertise to use these markets or access to financial professionals who could advise them. • People differ in their beliefs about whether individuals can make good decisions about lifetime consumption and savings. Economists generally think that individuals are the best judges of their own well-being. As a consequence, economists are suspicious of arguments that suggest that the government knows better than you do how you should make your own private decisions (such as how to manage your money). However, the decision making required for lifetime financial planning is very complicated, and the consequences of error are so severe, that many economists nonetheless think that failures of individual decision making are a good reason to support Social Security. • People differ in their beliefs about how much government should be involved in people’s lives. Some people are, in general, philosophically opposed to significant government involvement in individual decisions. Even if individuals make poor decisions about their lifetime consumption and savings and end up poor, these people would argue that individuals should bear the consequences of their own mistakes, and government should not bail them out. Others tend to the view that government has a critical role to play in protecting the unfortunate and unlucky. • People have different views about fairness and equality. Some people have the view that an important function of government is to protect the worst off in society and to redistribute some resources from those who are relatively rich to those who are poorer. Such people tend to be strong supporters of programs such as Social Security because it protects those who, through bad luck or poor decisions, would otherwise end their lives in poverty. Others disagree, saying that government should not be involved in redistribution of resources. They also point out, as we observed earlier, that Social Security, by its very nature, benefits those who live for a long time, so it is not a good deal for groups with lower life expectancies. Beyond Social Security You may have heard in the news that discussion of the need to reform Social Security applies to other government programs. 
In particular, if part of the Social Security problem is a growing imbalance in the age distribution, then other programs that support transfers to older people are potentially in trouble as well. A leading example of this is the Medicare program. You can find information about this program at Medicare.gov: http://www.medicare.gov/default.asp. This program provides health care to the elderly. A second example is Medicaid, which is also a publicly funded program, administered at the state level, to provide health care; this program is intended to provide assistance to poor people. “Medicaid Program—General Information,” US Department of Health and Human Services, June 16, 2011, accessed July 20, 2011, www.cms.hhs.gov/MedicaidGenInfo. These programs, like Social Security, entail large outlays by the government. In his testimony in June 2008 to the Senate Finance Committee, Peter Orszag, the director of the CBO, stated the following: “The Congressional Budget Office (CBO) projects that total federal Medicare and Medicaid outlays will rise from 4 percent of GDP [gross domestic product] in 2007 to 12 percent in 2050 and 19 percent in 2082, which, as a share of the economy, is roughly equivalent to the total amount that the federal government spends today. The bulk of that projected increase in health care spending reflects higher costs per beneficiary rather than an increase in the number of beneficiaries associated with an aging population.” “The Long-Term Budget Outlook and Options for Slowing the Growth of Health Care Costs,” Congressional Budget Office, June 17, 2008, accessed July 20, 2011, http://www.cbo.gov/doc.cfm?index=9385. This quote contains two key ideas. First, it seems likely that outlays for these two programs will grow rapidly over the next 50 or so years. From the CBO projections, the share of spending on Medicare and Medicaid grows while the share of spending on Social Security is basically constant after 2020. This comes from figure 1 of the following testimony: “The Long-Term Budget Outlook and Options for Slowing the Growth of Health Care Costs,” Congressional Budget Office, June 17, 2008, accessed September 20, 2011, http://www.cbo.gov/doc.cfm?index=9385. Second, in contrast to Social Security, the problem is not only demographics. Instead, as noted in the testimony, a significant part of the increased cost of these programs comes from increases in treatment per individual rather than from the number of individuals. Thus as you use the tools provided in this chapter to ponder Social Security, keep in mind that other programs have similar budgetary challenges. Long-term solutions are needed either to finance the projected increase in outlays or to reduce the costs of these programs.

Key Takeaways
1. The framework we presented captures the idea that saving is used to smooth consumption over a lifetime, and lifetime income includes taxes paid during working years together with retirement benefits. The framework did not allow for positive real interest rates or economic growth. It also ignored uncertainties of life.
2. Much of the disagreement about Social Security can be traced to a debate about its value in terms of providing insurance over uncertain lifetimes and the ability of individuals to act in their own interests when making consumption and saving choices.

Exercises
1. Give two reasons why there is disagreement about Social Security reform.
2. What does it mean not to have access to credit markets?
3.
What other government programs are facing budgetary problems? Are the sources of these problems the same as Social Security?
In Conclusion
Throughout the world, people contribute to and benefit from social security programs like that in the United States. Yet, owing to demographic changes and other factors, the US Social Security system as we currently know it is unlikely to survive. The challenges faced by the United States are present in many other countries with similar demographics. In much of the developed world, the ratio of workers to retirees will decrease over the next decades. Armed with the tools of this chapter, you are now equipped to understand the implications of proposed changes to Social Security programs, both in the United States and the rest of the world. Our analysis of Social Security combines two tools often used in macroeconomics. The first is the life-cycle model of consumption/saving, which provides insights into how individuals and households make consumption and saving decisions over long time horizons. We saw that people do not have to match their consumption to their income each year; instead they can save or borrow to keep their consumption relatively smooth over their lifetimes. However, they must still satisfy a budget constraint over their entire lifetime. The second is the government budget constraint. We first examined the case where the government kept the Social Security system in balance. In this case, revenues and payments were equal each year. Then we examined the case where the government did not necessarily match revenues and spending. In this case, there is still an accounting of government flows that links surpluses and deficits today with future obligations. Our discussion illustrates a very important fact about how the economy works: household behavior typically responds to government policy. In the case of Social Security, we saw that households reduce their saving when the government saves on their behalf.

Exercises
1. Suppose that disposable income is \$50,000, working years is 50, retirement years is 20, and the Social Security payment is \$20,000. What is the lifetime income for this household?
2. Suppose a household lives for two periods, working and earning disposable income of \$10,000 in the first period and obtaining retirement income of \$5,000 in the second period. Suppose that the real interest rate is not 0 percent (as in our example of Carlo) but rather is 10 percent. What is the discounted present value of the household’s lifetime income? (Refer to the toolkit if you need a reminder of how to calculate a discounted present value.) How would you write the lifetime budget constraint when the real interest rate is not 0 percent?
3. Some rapidly growing countries, such as China, have a very high saving rate. Everything else being the same, explain why a household in a rapidly growing economy would tend to have a low and not a high saving rate. The social security system in China is not very generous. Explain how this would help you to understand the high saving rate in China.
4. Using the life-cycle model, how would the level of consumption respond to an increase in (1) retirement relative to working years, (2) the annual labor income during working years, and (3) payments of Social Security during retirement relative to income earned during working years?
5. The equation for lifetime earnings is key to understanding the effects of Social Security. Explain in your words why the last two terms on the right side of that equation disappear using the government budget constraint.
6. Suppose you expect to live for 50 more years. Suppose also that, because the company you work for had a successful year, you get a \$50,000 bonus. If you smooth your consumption perfectly, how much of your bonus will you spend this year, and how much will you save? (You can assume the real interest rate is zero.)
7. Suppose you expect to live for 50 more years. Suppose also that, because you have done an excellent job this year, you get a \$2,000 raise. This means you expect that your income will be \$2,000 higher every year. If you smooth your consumption perfectly, how much of this raise will you spend this year, and how much will you save? (You can assume the real interest rate is zero.)
8. Suppose you expect to live for 50 more years. Suppose also that, because the company you work for had a successful year, your boss tells you (and you believe her!) that you will get a \$50,000 bonus one year from now. If you smooth your consumption perfectly, what will happen to your consumption and saving this year? (You can assume the real interest rate is zero.)
9. Why do you think that the Singaporean government allows people to withdraw funds from the government saving scheme in order to buy a house or apartment but not in order to take a vacation?
10. Suppose a government institutes a pay-as-you-go social security scheme. Explain why the first generation of recipients are clear beneficiaries from the scheme.
11. Give two reasons why households do not smooth their consumption perfectly.

Economics Detective
1. Find the most recent Social Security Administration release. What is the current status of the program? When is it forecasted to go bankrupt?
2. Pick a country other than the United States. What is the social security system like there? What is its current status?
3. Go to http://www.ssa.gov/OACT/COLA/cbb.html#Series. What does the contribution and benefits base mean? Using the correcting-for-inflation tool, what has happened to the contribution and benefit bases in real terms over the past 20 years?
4. Go to the Social Security Administration (http://www.ssa.gov/pubs/10070.html) to figure out how to calculate the benefits for someone about to retire among your family or friends.

Spreadsheet Exercises
1. Consider the life of Carlo, as summarized in Figure 28.2.1 "Lifetime Income". Write a spreadsheet program to reproduce the calculations of lifetime income and consumption made in that figure. Introduce a real interest rate of 5 percent into your program. Recalculate the discounted present value of lifetime income. What will Carlo consume each period of his life?
2. Use your spreadsheet program from Problem 1 to determine how changes in Social Security affect consumption and saving. Do this first with a real interest rate of 0 and then with a 5 percent real interest rate. Compare your results.
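The spreadsheet exercises lend themselves to any tool that can compute a discounted present value. As a starting point, here is one possible skeleton, sketched in Python rather than a spreadsheet; the parameter values are illustrative placeholders, not Carlo’s actual numbers from Figure 28.2.1.

```python
# Sketch of the lifetime-income calculation in Spreadsheet Exercise 1.
# All parameter values below are placeholders for illustration.

def lifetime_plan(disposable_income, working_years, benefit, retirement_years, r):
    """Return (dpv_income, smooth_consumption) for a simple life-cycle plan."""
    years = working_years + retirement_years
    # Income each year: disposable income while working, benefits in retirement.
    income = [disposable_income] * working_years + [benefit] * retirement_years
    # Discounted present value of lifetime income.
    dpv_income = sum(y / (1 + r) ** t for t, y in enumerate(income))
    # Constant consumption c satisfying the lifetime budget constraint:
    # sum over t of c/(1+r)^t = dpv_income, so c = dpv_income / annuity factor.
    annuity = sum(1 / (1 + r) ** t for t in range(years))
    return dpv_income, dpv_income / annuity

for rate in (0.0, 0.05):
    dpv, c = lifetime_plan(34_000, 45, 18_000, 15, rate)
    print(f"r = {rate:.0%}: lifetime income (DPV) = {dpv:,.0f}, consumption = {c:,.0f}")
```

With a zero interest rate, the program reproduces the simple rule from the chapter: consumption equals total lifetime income divided by years of life.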
…is a big number. It is the total amount of US government debt outstanding as of April 13, 2020. This number, which changes every day, is reported by the US Treasury Department at the Treasury Direct website (www.treasurydirect.gov/NP/BPDLogin?application=np). The debt of the United States is the subject of a growing political storm in Washington. Indeed, in August 2011 there seemed to be a very real possibility that the US Congress would refuse to raise the “debt ceiling”—an upper limit on the size of the government debt. Had that occurred, the government would no longer have been able to fulfill all its obligations. Many commentators believe that the US government is facing a crisis with respect to its budget policies—specifically, the fact that the government is running persistent budget deficits.

The issues are not the stuff of dry academic debate. If you are a typical reader of this book, you will be working and paying taxes over the next 50 years. Yours is the future generation that will be called on to deal with present-day deficits; debates about government deficits today are debates about your standard of living. If deficits matter to anyone, they should matter to you.

Just like a household, a government has income and outlays. If a household’s outlays exceed its income, then it must borrow to finance its spending. And if a household borrows repeatedly, it builds up debt. The same is true of governments. If a government spends more than its income, then it is running a deficit that must be financed by borrowing. Repeated government deficits lead to the existence of a stock of government debt. In recent decades, the US federal government has run a deficit much more often than not. The federal government has been in deficit for all but 4 years between 1960 and 2011. As a consequence, the stock of debt outstanding in the United States has increased from \$290 billion to more than \$14 trillion. Most of us cannot really conceptualize what this sum means. We can try visual images: if we stacked up 14 trillion dollar bills, we would get a pile half a million miles high—more than twice the distance to the moon. But it is easiest to get a handle on the deficit if we divide by the number of people in the economy to obtain the debt per person. As of August 9, 2011, according to the US National Debt Clock (www.brillig.com/debt_clock), this number is \$46,905.36. This means that if the government wanted to pay off its debt today, each and every woman, man, and child in the United States would, on average, have to pay this amount to retire the obligations of the government.

US citizens hold more than half of the debt—about 60 percent. So if the government were to pay off its debt, the majority would end up being redistributed in the economy from taxpayers to holders of US government bonds. Foreigners hold the remaining 40 percent, so this money would be transferred from US taxpayers to citizens of other countries. The US government is not proposing to pay off the existing debt, however. To the contrary, the government is projected to run budget deficits for the foreseeable future, meaning that the stock of debt, and the obligation of future generations, will continue to grow. These forecasts are available from the Congressional Budget Office (CBO; http://www.cbo.gov). In response to concern over government deficits, one proposal has arisen over and over again: a balanced-budget amendment to the US Constitution. Such a measure would simply make deficits illegal.
A balanced-budget amendment came within one vote of passing in a 1997 US Senate vote, and one was passed by the US House of Representatives in 1995. Another bill was introduced by a group of US House members in 2003. Here is part of the text of the 2003 bill:

SECTION 1. Total outlays for any fiscal year shall not exceed total receipts for that fiscal year, unless three-fifths of the whole number of each House of Congress shall provide by law for a specific excess of outlays over receipts by a rollcall vote.
SECTION 2. The limit on the debt of the United States held by the public shall not be increased, unless three-fifths of the whole number of each House shall provide by law for such an increase by a rollcall vote.
SECTION 3. Prior to each fiscal year, the President shall transmit to the Congress a proposed budget for the United States Government for that fiscal year in which total outlays do not exceed total receipts.

Although such bills are typically termed “balanced-budget amendments,” they often, as is the case here, permit surpluses. US House of Representatives, “H.J.RES.22—Proposing a Balanced-Budget Amendment to the Constitution of the United States,” February 13, 2003, accessed July 20, 2011, thomas.loc.gov/cgi-bin/query/z?c108:H.J.RES.22:. The Tea Party, which rose to some political prominence in the United States in 2010, campaigned in favor of a balanced-budget amendment as well. In July 2011, the House of Representatives passed HR 2560, called the Cut, Cap, and Balance Act, which (among other things) called for a constitutional amendment to balance the budget to be transmitted to the states for their consideration. The Cut, Cap, and Balance Bill is presented at “Bill Text Versions: 112th Congress (2011–2012) H.R.2560,” THOMAS: The Library of Congress, accessed September 20, 2011, thomas.loc.gov/cgi-bin/query/z?c112:H.R.2560:. For ongoing discussion, read Rep. Mike Coffman, “Balanced Budget Amendment Caucus,” accessed July 20, 2011, coffman.house.gov/index.php?option=com_content&view=article&id=257&Itemid=10. This bill was not passed by the Senate. Whether this political activity will ever generate a constitutional amendment remains an open question and a point of debate in the 2012 election.

The discussion of constitutional limits on budget deficits is not limited to the United States. In 2009, Germany amended its constitution to limit federal budget deficits to 0.35 percent of GDP by 2016. This limit applies when the German economy is operating near its potential output. The regulations allow the German government to run deficits during recessions but require surpluses in times of high economic activity. The fiscal situation in Germany is described in “Reforming the Constitutional Budget Rules in Germany,” Federal Ministry of Finance—Economics Department, September 2009, accessed September 20, 2011, http://www.kas.de/wf/doc/kas_21127-1522-4-30.pdf?101116013053.

Should the government be forced to balance its budget each year, as such measures suggest? There are certainly good reasons why households sometimes incur debt—to pay for a house, a new car, or advanced education. Perhaps the same is true of governments. We should not presume that deficits are harmful without first trying to understand why they occur. Others have even argued that deficits are neither good nor bad but are simply unimportant. Indeed, Vice President Cheney is reported to have said that “[President] Reagan proved that deficits don’t matter.” Quoted in Ron Suskind, The Price of Loyalty: George W.
Bush, the White House, and the Education of Paul O’Neill (New York: Simon and Schuster, 2004), 291. So are deficits bad for the economy, good for the economy, or just irrelevant? Our goal in this chapter is to understand the economic effects of government budget deficits so that we can evaluate competing claims such as these and ultimately help you answer the following question: Should the government be forced to balance its budget?

Road Map
We go through five steps in our evaluation of the merits of a balanced-budget amendment:
1. We make sure that we know what we are talking about. “Debt” and “deficit” are technical terms with precise meanings. We go through their definitions carefully.
2. We examine the causes of the deficit in an accounting sense. Specifically, we examine how and why the budget deficit depends on the state of the economy. We can then explore the implications—again in an accounting sense—of a balanced-budget law.
3. We progress to a deeper understanding of why deficits occur. We examine why governments choose to run deficits. At this point, we examine possible benefits of deficits to the economy.
4. We examine why deficits might be harmful to the economy.
5. We examine the argument for why deficits might be irrelevant.
Learning Objectives
After you have read this section, you should be able to answer the following questions:
1. What is the difference between the deficit and the debt?
2. What are the links between the deficit and the debt?
3. What are the budget constraints faced by the government?

We begin by being careful and precise about terminology. The terms deficit and debt are sometimes used sloppily in everyday discourse; as a consequence, much nonsense is spoken about fiscal policy. We must first make sure that we understand exactly what these terms mean. The CBO (http://www.cbo.gov/showdoc.cfm?index=6060&sequence=13) has a glossary of terms on its web page.

Budget Deficit: Definition
The government deficit is the difference between government outlays and government revenues. Inflows and outflows are part of the circular flow of income. Revenues flow to the government when it imposes taxes on households and firms and when it collects money through various other fees. For our purposes here, we do not need to distinguish all the different kinds of taxes, and we do not worry about whether they are paid by firms or by households. All that matters is that, in the end, some of the income generated in the economy flows to the government. The government also collects Social Security payments, which are discussed in more detail in Chapter 28 "Social Security". These are just another kind of tax. Money flows out in the form of government purchases of goods and services and government transfers. Government purchases include things like roads, streetlamps, schools, and missiles. They also include wage payments for government employees—that is, the purchase of the services of teachers, soldiers, and civil servants. Outlays also occur when government gives money to households. These are called transfer payments, or transfers for short. Examples include unemployment insurance, Social Security payments, and Medicare payments. Finally, transfers include the interest payments of the government on its outstanding obligations. The outlays of the government and its revenues are not always equal. The difference between government purchases and transfers and government revenues represents a government deficit, as set out in the following definition:

\[government\ deficit = outlays − revenues = government\ purchases + transfers − tax\ revenues = government\ purchases − (tax\ revenues − transfers) = government\ purchases − net\ taxes.\]

Often we find it useful to group taxes and transfers together as “net taxes” and separate out government purchases, as in the last line of our definition. When outflows are less than inflows, then we say there is a government surplus. In other words, a negative government deficit is the same thing as a positive government surplus, and a negative government surplus is the same thing as a positive government deficit:

\[government\ surplus = −government\ deficit.\]

A government surplus is sometimes called “government savings.” When the government runs a deficit, it finances that spending by borrowing from the financial markets. When the government runs a surplus, these funds flow into the financial markets and are available for firms to borrow. To illustrate the calculation of the deficit, we examine some made-up numbers in Table 29.2.1 "Calculating the Deficit". Our equation defining the deficit tells us that we can calculate it two ways. Look, for example, at year 3. The level of government purchases is 200, tax receipts are 160, and transfers are 20.
• We can add together purchases and transfers to get total outlays of the government, which is 220. Then we can subtract revenues of 160 to find that the deficit is 60.
• We can subtract transfers from tax receipts to get the amount of net taxes. Here, net taxes are 140. We subtract this from purchases of 200 to find a deficit of 60.
Obviously, we get the same answer either way; it is just a matter of how we group the different terms together. It might seem natural to group transfers with government expenditures (since they are both outlays). Conceptually, though, transfers are more like taxes, in that they represent a flow of dollars that is not matched by a flow of goods or services. The difference is that taxes flow into the government; transfers flow the other way. Government expenditures are very different: they represent purchases of real gross domestic product (real GDP) produced in the economy, thus contributing to the overall demand for output.

Year  Government Purchases  Tax Revenues  Transfers  Net Taxes  Deficit
1     50                    30            10         20         30
2     100                   160           40         120        −20
3     200                   160           20         140        60
4     200                   220           20         200        0
5     140                   160           20         140        0
Table \(1\): Calculating the Deficit

In Table 29.2.1 "Calculating the Deficit", the deficit varies considerably over time. It is low in year 1, negative in year 2 (in other words, there is a surplus), high in year 3, and zero in years 4 and 5. Between year 1 and year 2, government purchases and transfers increased, but tax revenues increased even more. In fact, they increased sufficiently to turn the deficit into a surplus. Between years 2 and 3, government purchases increased, and transfers decreased. However, the decrease in transfers was less than the increase in government purchases, so total government outlays increased substantially. Tax revenues stayed constant, so the government went back into deficit. In years 4 and 5 the government ran a balanced budget. If we compare year 4 to year 3, we see that the budget could be balanced by raising taxes (from 160 to 220) and leaving outlays unchanged. Conversely, by comparing year 5 to year 3, we see that the budget could be balanced by cutting spending and leaving taxes unchanged. A balanced budget is consistent with high taxes and high spending or low taxes and low spending. It is the combination of low taxes and high spending that gives us a deficit. Table 29.2.1 "Calculating the Deficit" makes it clear that changes in the deficit can be explained only by examining all components of the government budget constraint.

The Single-Year Government Budget Constraint
We begin with the government budget constraint as it operates in a single year. This budget constraint can be seen in terms of the flows into and from the government sector in the circular flow, as shown in Figure 29.2.1 "The Government Sector in the Circular Flow" (which explicitly shows that taxes come from households and firms). Later we discuss a second government budget constraint that links spending and revenues over longer periods of time. The inflows into the government sector come from taxes and borrowing from the financial sector. The outflows comprise government purchases and government transfers. You might be wondering how it is possible for the government to have outlays that exceed its revenues. The answer is given by the government budget constraint. The government budget constraint says that the deficit, which is the difference between outlays and revenues, must be financed by borrowing.
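To make the accounting concrete, here is a minimal sketch in Python (our illustration, not part of the original text), using the made-up year-3 numbers from Table 29.2.1 to compute the deficit both ways and the borrowing it implies:

```python
# Year 3 from Table 29.2.1: purchases 200, tax revenues 160, transfers 20.
purchases, tax_revenues, transfers = 200, 160, 20

# Method 1: total outlays minus revenues.
deficit_from_outlays = (purchases + transfers) - tax_revenues

# Method 2: purchases minus net taxes.
net_taxes = tax_revenues - transfers
deficit_from_net_taxes = purchases - net_taxes

assert deficit_from_outlays == deficit_from_net_taxes == 60
# Single-year budget constraint: the deficit must be financed by new borrowing.
new_borrowing = deficit_from_outlays
print(new_borrowing)  # 60
```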
If outlays exceed revenues in a given year, then the government must somehow make up the difference. It does so by borrowing from the public. In this sense, the government is no different from a household. Each of us can, like the government, spend more than we earn. When we do, we must either borrow from someone or draw on our savings from the past. The government borrows by issuing government debt. This debt can take several forms. The government has many types of obligations, ranging from short-term Treasury Bills to longer-term bonds. For our analysis, we do not need to distinguish among these different assets.

Toolkit: Section 31.33 "The Government Budget Constraint" and Section 31.27 "The Circular Flow of Income"
You can review the government budget constraint and the circular flow of income in the toolkit.

The Deficit: Recent Experience
Table 29.2.2 "Recent Experience of Deficits and Surpluses (Billions of Dollars)" shows some actual numbers for the United States: receipts, outlays, and the federal budget deficit in current dollars for fiscal years 1990 to 2010. Government budget numbers in the United States are reported for a “fiscal year,” which runs from October to September. Thus fiscal year 2000 ran from October 1, 1999, to September 30, 2000.

Fiscal Year  Receipts  Outlays  Surplus or Deficit (−)
1990         1,032.0   1,253.1  −221.0
1991         1,055.1   1,324.3  −269.2
1992         1,091.3   1,381.6  −290.3
1993         1,154.5   1,409.5  −255.1
1994         1,258.7   1,461.9  −203.2
1995         1,351.9   1,515.9  −164.0
1996         1,453.2   1,560.6  −107.4
1997         1,579.4   1,601.3  −21.9
1998         1,722.0   1,652.7  69.3
1999         1,827.6   1,702.0  125.6
2000         2,025.5   1,789.2  236.2
2001         1,991.4   1,863.2  128.2
2002         1,853.4   2,011.2  −157.8
2003         1,782.5   2,160.1  −377.6
2004         1,880.3   2,293.0  −412.7
2005         2,153.9   2,472.2  −318.3
2006         2,406.9   2,655.1  −248.2
2007         2,568.0   2,728.7  −160.7
2008         2,524.0   2,982.5  −458.6
2009         2,105.0   3,517.7  −1,412.7
2010         2,161.7   3,455.8  −1,294.1
Table \(2\): Recent Experience of Deficits and Surpluses (Billions of Dollars)
Source: “Historical Budget Tables,” Congressional Budget Office, January 2011, accessed September 20, 2011, http://www.cbo.gov/ftpdocs/120xx/doc12039/HistoricalTables[1].pdf.

In the early 1990s, the government ran a deficit of about \$200–300 billion every year. (Note that a negative number in the last column corresponds to a government deficit.) In the mid-1990s, however, the deficit began to decrease. Both outlays and receipts were increasing, but receipts were increasing more quickly. By 1998, the federal budget was in surplus, and it reached a peak of \$236 billion in 2000. Thereafter, revenues decreased for several years, while spending continued to increase. By 2002, the budget had gone back into deficit again, and by the middle of the decade, the deficit was at record levels. As is evident from Table 29.2.2 "Recent Experience of Deficits and Surpluses (Billions of Dollars)", the budgetary picture changed dramatically with the onset of the severe recession in 2008. Revenues decreased and outlays increased so that the budget deficit widened considerably, to more than \$1 trillion in both 2009 and 2010.

If you look at data on the government budget, you will see that the federal budget is divided into “on-budget” and “off-budget” items. Table 29.2.3 "On-Budget, Off-Budget, and Total Surplus, 2010 (Billions of Dollars)" shows these numbers for fiscal year 2010. The Congressional Budget Office defines off-budget items as follows: “Spending or revenues excluded from the budget totals by law.
The revenues and outlays of the two Social Security trust funds (the Federal Old-Age and Survivors Insurance Trust Fund and the Disability Insurance Trust Fund) and the transactions of the Postal Service are off-budget.” Congressional Budget Office, Glossary, accessed October 19, 2011, www.cbo.gov/doc.cfm?index=2727&type=0&sequence=14. The transactions of the US Postal Service are not that important, so you can essentially think of the off-budget items as being the Social Security system. Since the Social Security system was in surplus over much of this period, the on-budget deficit is larger than the total. From Table 29.2.3 "On-Budget, Off-Budget, and Total Surplus, 2010 (Billions of Dollars)", the total government deficit of \$1,294 billion in 2010 reflects an on-budget deficit and a small off-budget surplus. The idea behind the separate budgeting is that Social Security represents a known set of future government obligations. For this reason the government has, in effect, set aside a separate account for Social Security revenues and outlays (much as you, as an individual, might decide you want a separate account for your savings). We discussed the Social Security Trust Fund, as this account is called, in Chapter 28 "Social Security". At least in theory, this separates the debate about Social Security from the debate about current government spending and receipts. Many policy discussions do focus just on the “on-budget” accounts. In the end, though, all these monies flow either into or from the federal government. The humorist Dave Barry once remarked that what distinguishes off-budget items is that “these are written down on a completely different piece of paper from the regular budget.” Dave Barry, “The Mallomar Method,” DaveBarry.com, March 24, 1991, accessed August 28, 2011, http://www.davebarry.com/misccol/mallomar.htm. What is more, there are other known future obligations, such as Medicare, that are not treated separately. The on-budget/off-budget distinction is really no more than an accounting fiction, and in terms of the overall economic effects of the deficit, it is better to focus on the total.

            Receipts  Outlays  Surplus or Deficit (−)
On-Budget   1,530.1   2,901.1  −1,371.1
Off-Budget  631.7     554.7    77.0
Total       2,161.7   3,455.8  −1,294.1
Table \(3\): On-Budget, Off-Budget, and Total Surplus, 2010 (Billions of Dollars)
Source: US Treasury, Financial Management Service, October 2010 Statement, accessed September 20, 2011, www.fms.treas.gov/mts/mts0910.txt.

There are mixed messages to take away from Table 29.2.2 "Recent Experience of Deficits and Surpluses (Billions of Dollars)". The experience of budget surpluses in the 1990s tells us that budget balancing is possible. At the same time, more recent experience suggests that substantial changes in receipts and/or outlays are now needed to balance the budget. To explore this somewhat further, look at Table 29.2.4 "Federal Outlays, 2010 (Billions of Dollars)", which shows various outlays for 2010. As we already know, total spending for that year was \$3.5 trillion. National defense, Social Security, and health-care programs together account for \$2.2 trillion, or about 63 percent of the total outlays. Other nondiscretionary spending—largely outlays such as retirement payments to federal employees, unemployment insurance, housing assistance, and food stamps—accounts for a further \$401 billion. Interest payments account for \$196 billion. These categories together account for more than 80 percent of federal outlays.
Item                                                     Amount  Total Outlays (%)
Defense                                                  689     19.9
Nondefense Discretionary Spending                        658     19.0
Social Security                                          701     20.3
Health Care Programs (including Medicare and Medicaid)   810     23.4
Other Nondiscretionary Spending                          401     11.6
Interest Payments                                        196     5.7
Total                                                    3,456   100.0
Table \(4\): Federal Outlays, 2010 (Billions of Dollars)
Source: Compiled from data in CBO, “The Budget and Economic Outlook: An Update,” August 2011, accessed September 20, 2011, www.cbo.gov/ftpdocs/123xx/doc12316/08-24-BudgetEconUpdate.pdf. Totals do not add up because of rounding errors.

Just looking at those numbers should make it clear that it is very difficult to balance the budget simply by cutting federal spending. Almost everyone agrees that there is waste in the federal government, and there are programs that could and almost certainly should be abolished. (This is not to say that you could find even a single program that everyone would want to abolish. Every program benefits someone, after all. But there are certainly programs that most people would agree are wasteful.) However, the vast majority of the budget is taken up with either essential functions of government or programs that enjoy huge political popularity. Few politicians would sign up for closing the public schools, the abolition of unemployment insurance, or the cancellation of veterans’ benefits. The budget accounts distinguish between mandatory and discretionary spending. Many of the big items listed in Table 29.2.4 "Federal Outlays, 2010 (Billions of Dollars)" fall into the mandatory category—that is, outlays that are required by existing law. Less than 40 percent of outlays in 2010 were discretionary, and half of those were national defense spending. The remaining outlays were mandatory spending or payment of interest on the outstanding debt. “Budget and Economic Outlook: Historical Budget Data,” Congressional Budget Office, January 2011, accessed July 20, 2011, http://www.cbo.gov/ftpdocs/120xx/doc12039/HistoricalTables[1].pdf. If the government were to pass a balanced-budget amendment, in other words, the hard job of cutting spending or raising taxes would remain. Recall Section 3 of the amendment that we quoted in the chapter opener: “the President shall transmit to Congress a proposed budget…in which total outlays do not exceed total receipts.” Even with a balanced-budget amendment, the president would still have to propose either major cuts in existing popular programs or increases in taxes. However, such an amendment might provide “political cover” for the president and Congress: they could explain their support for unpopular spending cuts or tax increases by saying that the balanced-budget amendment gave them no choice.

The Intertemporal Government Budget Constraint
We discussed in the section "The Single-Year Government Budget Constraint" how the single-period government budget constraint links spending and revenues to the deficit (or surplus) of the government each year. There is a second constraint faced by the government, called the intertemporal budget constraint, linking deficits in one year to deficits in other years. When you take out a loan, you will ultimately have to repay it. The same is true of the government; when it takes out a loan, it will ultimately have to repay the loan as well. If the government chooses to pay for its expenditures today by borrowing instead of through current taxes, then it will need additional taxes at some point in the future to pay off its loan.
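To put a number on “paying off the loan,” here is a small sketch (ours, with assumed figures): a debt of 100 at a 5 percent real interest rate can be retired by a primary surplus of 105 one year from now, and the discounted present value of that surplus is exactly the debt outstanding today.

```python
debt_today = 100
r = 0.05  # assumed real interest rate

# One way to repay: run a primary surplus of 105 next year.
primary_surplus_next_year = debt_today * (1 + r)

# Its discounted present value equals the debt outstanding today.
dpv_of_surpluses = primary_surplus_next_year / (1 + r)
print(dpv_of_surpluses)  # 100.0
```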
The intertemporal budget constraint is just a fancy way of saying that, like everyone else, the government has to pay off its loans at some point. Actually, there is one way in which the government is different from private individuals. For practical purposes, we expect that the government will go on forever. This means that the government could always have a stock of outstanding debt. However, there are practical limits on this stock—for one thing, households will not lend unlimited amounts to the government. Thus it is generally fair to say that additional borrowing by the government will have to be repaid. As a consequence, tax and spending decisions at different dates are linked. Although governments can borrow or lend in a given year, the government’s total spending over time must be matched by revenues. To express the intertemporal budget constraint, we introduce a measure of the deficit called the primary deficit. The primary deficit is the difference between government outlays, excluding interest payments on the debt, and government revenues. The primary surplus is the negative of the primary deficit: it is the difference between government revenues and government outlays, excluding interest payments on the debt. In our example in Table 29.2.1 "Calculating the Deficit", the deficit in year 1 was 30. If payment of interest on outstanding debt was 5, then the primary deficit would be 25, and the primary surplus would be −25. The intertemporal budget constraint says that if the government has some existing debt, it must run surpluses in the future so that it can ultimately pay off that debt. Specifically, it is the requirement that

\[current\ debt\ outstanding = discounted\ present\ value\ of\ future\ primary\ surpluses.\]

This condition means that the debt outstanding today must be offset by primary budget surpluses in the future. Because we are adding together flows in the future, we have to use the tool of discounted present value. If, for example, the current stock of debt is zero, then the intertemporal budget constraint says that the discounted present value of future primary surpluses must equal zero.

Toolkit: Section 31.5 "Discounted Present Value"
You can review the meaning and calculation of discounted present value in the toolkit.

Linking the Debt and the Deficit
The stock of debt is linked directly to the government budget deficit. When the government runs a budget deficit, it finances the deficit by issuing new debt. The deficit is a flow, which is matched by a change in the stock of government debt:

\[change\ in\ government\ debt\ (in\ given\ year) = deficit\ (in\ given\ year).\]

If there is a government surplus, then the change in the debt is a negative number, so the debt decreases. The total government debt is simply the accumulation of all the previous years’ deficits. From this equation, the stock of debt in a given year is equal to the deficit over the previous year plus the stock of debt from the start of the previous year. (In this discussion, we leave aside the fact that the government may finance part of its deficit by issuing new money. In the United States and most other economies, this is a minor source of funding for the government. See Chapter 26 "Inflations Big and Small" for more discussion. More precisely, then, every year, \(change\ in\ government\ debt = deficit − change\ in\ money\ supply\). Written this way, the equation tells us that the part of the deficit that is not financed by printing money results in an increase in the government debt.)
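The accumulation identity is easy to simulate. Here is a minimal sketch (ours) that builds a debt path from a sequence of deficits, using the deficit numbers from Table 29.2.1; the next passage works through the same calculation by hand.

```python
# Deficits from Table 29.2.1; a negative deficit is a surplus.
deficits = [30, -20, 60, 0, 0]

debt = 0  # assume zero debt at the start of year 1
for year, deficit in enumerate(deficits, start=1):
    debt += deficit  # change in government debt = deficit (in given year)
    print(f"year {year}: end-of-year debt = {debt}")
# Prints 30, 10, 70, 70, 70.
```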
To see the interactions between deficits and the stock of debt in action, examine Table 29.2.5 "Deficit and Debt", which takes the deficit numbers from Table 29.2.1 "Calculating the Deficit" and calculates the corresponding debt. We suppose that there is initially zero debt at the beginning of year 1. The deficit of 30 in the first year means that there is outstanding debt of 30 at the end of that year. In the second year, there is a budget surplus of 20. This reduces the debt, but it is not sufficient to bring the debt all the way back to zero. Outstanding debt at the end of the year is 10. In the third year, the deficit of 60 must be added to the existing debt of 10, so the debt at the end of the year is 70.

Year  Deficit  Debt (Start of Year)  Debt (End of Year)
1     30       0                     30
2     −20      30                    10
3     60       10                    70
4     0        70                    70
5     0        70                    70
Table \(5\): Deficit and Debt

In years 4 and 5, the government runs a balanced budget: the deficit is zero. But the stock of debt stays unchanged. The debt is equal to the accumulation of all the deficits. Eliminating deficits (for example, by a balanced-budget amendment) means that the debt stays at its existing level. Eliminating deficits is not the same thing as paying off the debt.

The experience of the US deficit and debt held by the public since 1962 is summarized in Figure 29.2.2 "US Surplus and Debt, 1962–2010" (source: Congressional Budget Office). The surplus is shown in the upper figure, and the level of debt is shown in the lower figure. All values are in current dollars. At the far left of the graph, we see that the US government ran relatively small deficits (negative surpluses) in the 1960s and early 1970s. As a result, the debt increased slowly. From the mid-1970s to the mid-1990s, deficits were substantial, so the amount of debt outstanding grew rapidly. As we saw earlier, there was a brief period of surplus in the late 1990s and a corresponding decrease in the debt, but deficit spending recommenced during the George W. Bush administration (2001–2008). The debt increased again. Although an analysis of deficits and debt is often presented using data similar to those in Figure 29.2.2 "US Surplus and Debt, 1962–2010", this figure is incomplete in two ways: (1) these numbers are not corrected for inflation (they are current dollar figures), and (2) there is no sense of how large the deficit and the debt are relative to the aggregate economy. Figure 29.2.3 "US Surplus and Debt as a Fraction of GDP, 1962–2010" (source: Congressional Budget Office and Economic Report of the President) remedies both defects by showing the surplus and the debt as a fraction of nominal GDP. Because nominal GDP is also measured in dollars, these ratios are just numbers. We see that the deficit has been a relatively stable fraction of GDP, averaging about 2.7 percent. The debt level has averaged about 36 percent over the period.

The federal debt is now in excess of \$14 trillion. So if the United States were to pass a balanced-budget amendment binding on the federal government, to take effect in 2012, say, the stock of debt would thereafter remain fixed at well over \$14 trillion. To reduce the stock of debt outstanding, the deficit must be negative: the change in the stock of debt will be negative only if the government runs a surplus. Moreover, the government must pay interest on its outstanding debt. Recall that when the government runs up debt, it is borrowing from the general public.
The debt of the government is an asset from the perspective of households: it is one of the ways in which people can hold their saving. Holders of government bonds earn interest on these assets. Look again at Table 29.2.4 "Federal Outlays, 2010 (Billions of Dollars)": in 2010, interest payments on the debt amounted to \$196 billion. Balancing the budget therefore means that, once we exclude interest payments, spending plus transfers would have to be smaller than tax revenues by that amount. If there is outstanding debt, a balanced budget means that the government must run a primary surplus. To summarize, we have discovered three things about balancing the budget:
1. A balanced budget means that the deficit equals zero.
2. A balanced budget means that the debt is constant.
3. If there is existing debt, a balanced budget means that the government must run a primary surplus.

Who Holds the Debt?
Given that the US government makes such large interest payments on outstanding debt, who receives those payments? US government debt is held by households, firms, and governments in many countries. Table 29.2.6 "Foreign Holdings of US Treasury Securities as of August 2008 (Billions of Dollars)" lists some of the foreign countries holding US Treasury securities (bills, bonds, and notes) in two different months: August 2008 and May 2011.

Country        Holdings as of August 2008  Holdings as of May 2011
Japan          585.9                       912.4
China          541.0                       1,159.8
Oil exporters  179.8                       229.8
Mexico         33.5                        27.7
Canada         27.7                        90.7
Total          2,740.3                     4,514.0
Table \(6\): Foreign Holdings of US Treasury Securities as of August 2008 (Billions of Dollars)
Source: “Major Foreign Holders of Treasury Securities,” US Department of the Treasury, July 18, 2011, accessed July 20, 2011, http://www.treasury.gov/resource-center/data-chart-center/tic/Documents/mfh.txt.

In May 2011, the total foreign ownership of US Treasury securities was more than 45 percent of the total privately held US public debt (“privately held” means we are excluding debt held by the Federal Reserve System). As you can see from Table 29.2.6 "Foreign Holdings of US Treasury Securities as of August 2008 (Billions of Dollars)", the ownership of US debt has changed significantly over the past few years. Japan was the largest holder of US debt in August 2008, but more recently China has taken its place. You might wonder how these countries came to hold such a large fraction of US debt. Part of the answer goes back to the interaction between trade and capital flows between the United States and the rest of the world. The key is the link between trade deficits and borrowing from abroad:

\[borrowing\ from\ other\ countries = imports − exports = trade\ deficit.\]

This equation tells us that whenever a country runs a trade deficit, it must finance that deficit by borrowing from abroad. The United States has been running trade deficits since the early 1970s. Consequently, foreign countries have been accumulating US assets, and government debt is one important such asset. Observers sometimes comment on the fact that a substantial fraction of government debt is “owed to ourselves” (that is, it is held by US citizens) and therefore less of a cause for concern than the fraction that is owned by foreigners. Does this reasoning make sense? The answer is “not very much.” To see why, consider a US citizen who owns some US government bonds.
Now imagine that she sells those bonds to a German bank and uses the proceeds to buy some General Motors (GM) shares that are currently owned by a French investment bank. All that has happened here is some rebalancing of portfolios. One individual decided to shift her assets around, so she now owns GM shares instead of government bonds. Likewise, the German bank decided it wanted more US bonds in its portfolio, whereas the French investment bank decided it wanted fewer GM shares. These kinds of transactions go on all the time in our economy. Our hypothetical citizen is just as wealthy as she was before; she is simply holding her wealth in a different form. The same is true for the German and French financial institutions. Yet foreigners hold more of the national debt than previously. Domestic or foreign ownership of the debt can change with no implications for the overall indebtedness of individuals or the country. It is more meaningful to look at the amount of foreign debt that has been accumulated by a country as a result of its borrowing from abroad. Foreign debt represents obligations that will have to be repaid at some future date.

Commentators sometimes express worry over the fact that foreign central banks—notably those of Japan and China—own substantial amounts of US debt. There is a legitimate concern here: if one or more of those banks suddenly decided they no longer wanted to hold that debt, then there might be a large change in US interest rates and resulting financial instability. But the real issue is not that the debt is foreign owned. Rather, it is that a large amount of debt is held by individual institutions big enough to move the market. At the same time, the Chinese are equally concerned about the value of the US government debt they hold. In their view, they traded away goods and services for pieces of paper that are claims to be paid by the US government. These claims are in nominal terms (in dollars). Hence any change in the exchange rate changes the value of this debt to the Chinese. If, for example, the dollar depreciates relative to the Chinese renminbi (RMB), then the real value (in terms of Chinese goods and services) of this debt is reduced. The RMB/dollar exchange rate was 8.28 in January 2000. A holder of a US dollar bill could obtain 8.28 RMB in exchange. This rate was 8.07 in January 2006. However, by June 2011, the exchange rate was 6.48. This means that someone who exchanged RMB for dollars in 2000 and then sold those dollars for RMB in June 2011 lost about 20 percent in nominal terms.

Key Takeaways
1. The deficit is the difference between government outlays and government revenues. It is a flow. The debt is a measure of the stock of outstanding obligations of the government at a point in time.
2. The change in the debt between two dates is equal to the deficit incurred during the time between those two dates.
3. The government faces a single-year constraint that its deficit must be financed by issuing new debt. The government also faces an intertemporal budget constraint that its debt at a point in time must equal the discounted present value of future primary surpluses.

Exercises
1. What is the difference between the budget deficit and the primary deficit?
2. If the government runs a surplus, does this mean the stock of debt must be negative?
3. Is it legal for residents of other countries to hold US debt?
4. Table 29.2.2 "Recent Experience of Deficits and Surpluses (Billions of Dollars)" is in current dollars. What does that mean?
Learning Objectives
After you have read this section, you should be able to answer the following questions:
1. How does fiscal policy affect the budget deficit?
2. How does the state of the economy affect the budget deficit?
3. How do we determine whether a budget deficit results from fiscal policy or the state of the economy?

Now that we have defined budget deficits, budget surpluses, and the government debt, it is time to examine what determines these economic variables. The budget deficit reflects two forces: the stance of fiscal policy and the state of the economy. Fiscal policy refers to the choice by the government of (1) its levels of spending on goods and services, (2) its transfers to households, and (3) the tax rates it sets on households and firms. Most countries have different levels of government, so some tax and spending decisions are made for the whole country, whereas others are made locally. In principle, we can include all levels of government in our discussion. This means that, in the United States, “government” can refer to the totality of local government, state government, and the federal government. In practice, though, it is the decisions of the federal government that have the main impact on the overall fiscal policy of the country. The same is true in other countries—local government decisions are not usually very important for the overall stance of fiscal policy.

Tools of Fiscal Policy
There are two aspects of fiscal policy: government spending and tax/transfer policy. These fiscal policy choices determine the deficit. In other chapters we examine the effects of government spending on the aggregate economy. For example, Chapter 22 "The Great Depression" explained how changes in government spending can sometimes be used to stimulate the overall economy.

Government Spending
Over long periods of time, government spending increases as an economy gets richer. Over shorter periods of time, however, the level of government spending is not closely influenced by the overall level of economic activity. For this reason, we typically suppose that government spending is an exogenous variable that is determined “outside” our framework of analysis. We illustrate this in Figure 29.3.1 "Government Spending". We suppose that government spending is independent of the level of gross domestic product (GDP), which means that it shows up as a horizontal line.

Taxation
Our interest here is in deficits and the debt rather than the details of taxation, so we take a very simple approach to taxation. We assume that there is a constant tax rate that applies to all levels of income and abstract away from all the other complexities of the tax schedule. This view of the tax and transfer system is summarized by the following equation:

$net\ taxes = tax\ rate \times income.$

We illustrate this relationship in Figure 29.3.2 "The Tax Function". The slope of the line is the tax rate. In other words, for every dollar increase in income, net tax receipts increase by the amount of the tax rate. Net tax receipts depend on the state of the economy. When income is higher, the government collects more in taxes and pays out less in transfers. Taxes depend positively on income because of the way the tax code is written. Conversely, transfers (such as unemployment insurance or Medicare payments) tend to depend negatively on income: when people are richer, they are less likely to need transfers from the government.
The tax rate in the figure captures the overall effect: higher income increases net tax revenues both because people pay more taxes and because they receive fewer transfers. Table 29.3.1 "Tax Receipts and Income" provides an example of tax receipts at different levels of income, when the tax rate is 10 percent. At the level of an individual household, taxes increase and transfers decrease as the household’s income increases. At the level of the entire economy, exactly the same thing is true. As real GDP increases, tax receipts increase and transfers decrease. Increased income, holding the tax rate fixed, leads to increased tax receipts. At the same time, increases in the tax rate lead to higher tax receipts at each level of income. Thus there are two factors determining tax receipts in the economy: the tax rate and the overall level of economic activity.

Income  Tax Rate  Tax Receipts
0       0.1       0
100     0.1       10
500     0.1       50
1,000   0.1       100
2,000   0.1       200
5,000   0.1       500
Table \(1\): Tax Receipts and Income

The Budget Deficit and the State of the Economy
As the level of economic activity—real GDP—increases, the tax receipts of the government also increase. To determine the deficit, we need to know both the current fiscal policy (as summarized by the level of government purchases and the tax rate) and the level of economic activity. Building on the example in Table 29.3.1 "Tax Receipts and Income", suppose that government purchases are 200 and the tax rate is 10 percent. The relationship between the level of economic activity (GDP) and the deficit is given in Table 29.3.2 "Deficit and Income". In this example, the level of GDP must reach 2,000 before the budget is in balance (Figure 29.3.3 "Government Spending and Tax Receipts" shows tax receipts increasing with income while government spending is unaffected by the level of GDP).

GDP    Government Purchases  Tax Receipts  Deficit
0      200                   0             200
100    200                   10            190
500    200                   50            150
1,000  200                   100           100
2,000  200                   200           0
5,000  200                   500           −300
Table \(2\): Deficit and Income

The dependence of the deficit on real GDP and the stance of fiscal policy is summarized in Figure 29.3.4 "Deficit/Surplus and GDP", which graphs the numbers from Table 29.3.2 "Deficit and Income". The deficit/surplus is measured on the vertical axis, and real GDP is measured on the horizontal axis. The deficit/surplus line is drawn for a given tax rate. As real GDP increases, the deficit decreases. Thus the line in Figure 29.3.4 "Deficit/Surplus and GDP" has a negative slope. The deficit equals government purchases minus net tax receipts: it is positive when GDP is low, but the budget goes into surplus when GDP is sufficiently high. There is a particular level of economic activity such that the budget is exactly in balance. In our example, this level of GDP is 2,000. The deficit is zero when income is 2,000 because that is the point at which government purchases equal tax revenues. For levels of income in excess of this level of GDP, the government budget is in surplus. In Figure 29.3.4 "Deficit/Surplus and GDP", we see that the budget deficit/surplus line crosses the horizontal axis when GDP is 2,000. Increases in government purchases or reductions in the tax rate are examples of expansionary fiscal policy. Decreases in government purchases or increases in the tax rate are called contractionary fiscal policy. Expansionary fiscal policy increases the deficit for a given level of real GDP.
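A short sketch (ours, using the same purchases of 200 and 10 percent tax rate as in Table 29.3.2) makes the two cases described next concrete: higher purchases shift the deficit line up by the same amount at every level of GDP, while a lower tax rate leaves the intercept alone and rotates the line upward.

```python
def deficit(gdp, purchases=200.0, tax_rate=0.10):
    """Deficit = government purchases - net taxes, where net taxes = tax rate x GDP."""
    return purchases - tax_rate * gdp

for gdp in (0, 1_000, 2_000, 5_000):
    print(gdp,
          deficit(gdp),                   # baseline: balances at GDP = 2,000
          deficit(gdp, purchases=250.0),  # expansionary: line shifts up by 50
          deficit(gdp, tax_rate=0.08))    # expansionary: same intercept, flatter slope
```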
An increase in government spending shifts the deficit line upward, as shown in Figure 29.3.5 "Expansionary Fiscal Policy". With a decrease in the tax rate, by contrast, the intercept stays the same, but the line rotates upward: the effect is still to increase the deficit at all positive levels of income. Expansionary fiscal policy causes the deficit to increase at all levels of income, so the deficit line shifts upward; the figure illustrates the case of an increase in government purchases. Cyclically Adjusted Budget Deficit Given that the deficit depends on both the level of real GDP and the stance of fiscal policy, it is useful to have a way to distinguish these two influences. Put differently, it is helpful to know whether the deficit is large because of the level of economic activity or because of the choices of government spending and taxes. This distinction came to the forefront in the 2004 presidential election in the United States. One of the issues raised in the debates between President George W. Bush and Senator John Kerry was how the forecasted surplus from 2000 turned into the massive deficits of 2004. Were the deficits caused by the state of the economy or by the policy decisions undertaken by President Bush? To answer such questions, we need to decompose changes in the deficit into changes due to fiscal policy and changes due to the level of economic activity. The Congressional Budget Office (CBO; http://www.cbo.gov) produces a measure of the budget deficit, called the cyclically adjusted budget deficit, for this purpose. The CBO first calculates a measure of potential output—the level of GDP when the economy is at full employment. It then calculates the outlays and revenues of the federal government under the assumption that the economy is operating at potential GDP. The deficit is calculated by subtracting revenues from outlays. For obvious reasons, the cyclically adjusted budget deficit is also sometimes called the full-employment deficit (see “The Cyclically Adjusted and Standardized Budget Measures,” Congressional Budget Office, April 2008, accessed July 20, 2011, cbo.gov/ftpdocs/90xx/doc9074/StandBudgetTOC.2.1.htm). Figure 29.3.6 "The Cyclically Adjusted Budget Deficit" illustrates this idea. We first calculate the level of potential output and then use the deficit line to tell us the cyclically adjusted budget deficit or surplus for the economy. The figure shows two possibilities: in the first case, there is a government deficit when actual output is equal to potential output; in the second, there is a government surplus when output is equal to potential output. Of course, the practical calculations are somewhat trickier than this picture suggests, but the idea is straightforward. To determine the cyclically adjusted deficit or surplus in an economy, calculate the level of potential output and then use the deficit/surplus line to determine what the deficit or surplus would be at that level of output. In panel (a), the economy has a cyclically adjusted deficit, whereas in panel (b), it has a cyclically adjusted surplus. Figure 29.3.7 "Cyclical Deficit" and Figure 29.3.8 "Structural Deficit" show that there are two distinct reasons why a government might go from surplus into deficit—as happened in 2002, for example. Suppose that, last year, the economy was at potential output and there was a cyclically adjusted surplus (point A). Now imagine that this year there is a government deficit.
One possibility is that the economy went into recession, as in Figure 29.3.7 "Cyclical Deficit", point B. This is called a cyclical deficit because it is due to the state of the business cycle. Another is that the stance of fiscal policy has changed—for example, because of an increase in government spending, as in Figure 29.3.8 "Structural Deficit", point C. The CBO calls this a standardized deficit (or structural deficit). (A key simplification in these pictures is that the level of potential GDP is independent of taxes and government spending; Chapter 27 "Income Taxes" explains why potential output itself might be affected by the tax code.) In Figure 29.3.7, the economy went from surplus (A) to deficit (B) because of recession: real GDP declines, tax receipts decrease, and the budget goes into deficit as the economy moves along the deficit/surplus line. In Figure 29.3.8, the economy went from surplus (A) to deficit (C) because of changes in fiscal policy: real GDP does not change (it is at potential output in both cases), but the deficit/surplus line shifts upward. Cyclical Deficits and a Balanced-Budget Requirement We have identified two factors that determine the size of the deficit: the stance of fiscal policy and the state of the economy. We can use this information to learn more about the effects of a balanced-budget amendment on the economy. Suppose that the economy is at potential output. A balanced-budget requirement would say that the budget must be neither in surplus nor in deficit at this point. In other words, a balanced-budget requirement describes the overall stance of fiscal policy: the deficit/surplus line must be shifted to ensure that it passes through the horizontal axis at potential output, as shown in Figure 29.3.9 "Balanced-Budget Requirement". A balanced-budget requirement implies that the full-employment deficit/surplus must be zero, so the deficit/surplus line must pass through zero when real GDP equals potential output. Now suppose that, for some reason, the economy goes into recession. In Figure 29.3.10 "Recession with a Balanced-Budget Amendment", this means that output goes from potential output to some lower level. We know that this leads to a deficit, which is shown as a shift from point A to point B. Under a balanced-budget rule, the government is not allowed to let this situation persist. Instead, the government must respond by increasing taxes or cutting spending, moving the economy from point B to point C. Similarly, if the economy went into a boom, this would tend to lead to a surplus, and the government would be forced to cut taxes or increase spending to bring the budget back into balance. A balanced-budget amendment would therefore force the government to conduct procyclical fiscal policy. In fact, the effects of a balanced-budget amendment would be even worse than this suggests: the contractionary fiscal policy required in a recession would cause GDP to decrease even further, thus requiring even bigger cuts in spending or increases in taxes. If the economy were to go into recession, a balanced-budget requirement would force the government to increase taxes or cut spending to bring the budget back into balance. Key Takeaways 1. At a given level of GDP, an expansionary fiscal policy increases the budget deficit, and a contractionary fiscal policy decreases the budget deficit. 2. As the level of economic activity increases, tax revenues increase, transfers decrease, and the budget deficit decreases. 3.
By examining the cyclically adjusted budget deficit, it is possible to evaluate how much of the budget deficit is due to the state of the economy and how much is due to the stance of fiscal policy. Exercises 1. In Table 29.3.2 "Deficit and Income", why do tax receipts increase with real GDP? 2. What do we know about fiscal policy if the cyclically adjusted budget deficit is negative? 3. If the budget is in deficit, what do we know about the level of real GDP compared to potential GDP?
Learning Objectives After you have read this section, you should be able to answer the following questions: 1. When do countries run government budget deficits? 2. Why might a country incur a government budget deficit? To evaluate the merits of a balanced-budget amendment, we need to know why governments run deficits in the first place. After all, governments may have good reasons for these policies. We have seen one explanation for deficits: governments run deficits because of economic downturns. Reductions in gross domestic product (GDP), other things being equal, lead to increases in the budget deficit. We are more concerned with why governments choose to run persistent structural deficits, though. We first look to history for clues. Government Debt: A Historical Perspective

Figure 29.4.1: Ratio of US Debt to GDP, 1791–2009. Source: Debt data from http://www.treasurydirect.gov/govt/reports/pd/histdebt/histdebt.htm; GDP data from https://eh.net/.

Figure 29.4.1 "Ratio of US Debt to GDP, 1791–2009" shows the ratio of US federal government debt to GDP from 1791 to 2009. The US Civil War in the 1860s, World War I in 1917, and World War II in the early 1940s all jump out from this figure. These are periods in which the stock of US federal debt soared. During the Civil War, the stock of debt was \$64,842,287 in 1860 and peaked at \$2,773,236,174 in 1866. The debt level was more than 40 times higher in 1866 than in 1860. In 1915 (after World War I had started but before the United States had entered the war), the stock of debt was \$3,058,136,873.16, not much more than the level in 1866. By 1919, the level of the debt was \$27,390,970,113.12, an increase of almost 800 percent. During World War II, there was again a significant buildup of the debt. In 1940, the level of debt outstanding was \$42,967,531,037.68, or about 42 percent of GDP. By 1946, this had increased by about 527 percent to \$269,422,099,173.26. In 1946, the outstanding debt was 121 percent of GDP. There are two other periods that show a significant buildup of the debt relative to GDP. The first is the Great Depression. This buildup was not due to a big increase in borrowing by the government; rather, it was largely driven by the decline in the level of GDP (the denominator in the ratio). The second is the period from the 1980s to the present. The buildup of the debt in the 1980s was unprecedented in peacetime history. Figure 29.4.1 "Ratio of US Debt to GDP, 1791–2009" also shows a dramatic asymmetry in the behavior of the debt-to-GDP ratio: although the increases in this ratio are typically rather sudden, the decreases are much more gradual. Look again at the rapid increase in the debt-to-GDP ratio around the Civil War. After the Civil War ended, the debt-to-GDP ratio decreased, but only slowly. As seen in the figure, the debt-to-GDP ratio decreased for about 45 years, from 1870 to 1916. Part of this decrease was due to the growth in GDP over those 45 years, and part was due to a decrease in nominal debt outstanding until around 1900. Why Do Governments Run Deficits? It is evident that during periods of war the debt is higher. What underlies this relationship between wars and deficits? War is certainly expensive. Take, for example, the conflicts in Iraq and Afghanistan. Congress has already appropriated about \$1 trillion for these wars, and a Congressional Budget Office study projected the conflicts would eventually cost the United States about \$2.4 trillion.
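The historical growth figures quoted above are easy to check. The short sketch below is our own; it simply redoes the arithmetic from the debt levels cited earlier in this section.

```python
# Quick check of the debt growth figures quoted above (debt levels in dollars).
debt = {1860: 64_842_287, 1866: 2_773_236_174,
        1915: 3_058_136_873.16, 1919: 27_390_970_113.12,
        1940: 42_967_531_037.68, 1946: 269_422_099_173.26}

print(debt[1866] / debt[1860])              # about 43: "more than 40 times higher"
print(100 * (debt[1919] / debt[1915] - 1))  # about 796: "almost 800 percent"
print(100 * (debt[1946] / debt[1940] - 1))  # about 527 percent
```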
When government purchases increase due to a war, a government can either increase taxes to pay for the war or issue government debt. Remember that when the government runs a deficit to pay for a war, it is borrowing from the general public. The government’s intertemporal budget constraint reminds us that—since government debt is ultimately paid for by taxes in the future—the choice is really between taxing households now or taxing them later. History tells us that deficits have been the method of choice: governments have chosen to tax future generations to pay for wars. There are two arguments in favor of this policy: 1. Fairness. Any gains from winning a war will be shared by future generations. Hence the costs should be shared as well: the government should finance the war with debt so that future generations will repay some of the obligations. To take an extreme case, suppose a country is fighting for its right to exist. If it wins the war, future generations will also benefit. 2. Tax smoothing. A good fiscal policy is one where tax rates are relatively constant. In the face of a rapid increase in spending, such as a war, the best policy is one that pays for the spending increase over many periods of time, not in one year. Taxation is expensive to the economy because it distorts economic decisions, such as saving and labor supply. The amount people want to work depends on their real wage, after taxes, so if tax rates are increased to finance government spending, this reduces the benefit from working. Put differently, increased income taxes increase the price of consumption relative to leisure. The fact that people work less when taxes increase is a distortionary effect of taxation. Instead of bunching all this distortionary taxation into a short amount of time, such as a year, it is more efficient for the government to spread the taxation over many years. This is called tax smoothing (a numerical sketch appears at the end of this subsection). By running a budget deficit, the government imposes relatively small distortions over many years rather than imposing large distortions within a single year. Toolkit: Section 31.3 "The Labor Market" For more analysis of the choice underlying labor supply, you can review the labor market in the toolkit. Similar arguments apply to other cases in which governments engage in substantial spending. Imagine that the government is considering putting a large amount of resources into cancer research. The discovery of a successful cancer treatment would, of course, benefit many generations of citizens. Because households would share the gains in the future, the costs should be shared as well. By running a budget deficit, the government is able to distribute the costs across generations of citizens in parallel with the benefits. From the perspective of both fairness and efficiency, there are some gains to deficit spending. More generally, we might want to make a distinction among different types of government purchases, just as we do among private purchases. We know that the national accounts distinguish consumption purchases (broadly speaking, things from which we get short-run benefit, such as food and movies) from investment purchases (things that bring long-term benefit, such as factories and machinery). Likewise, we might want to distinguish government consumption, such as the wages of employees at the Department of Motor Vehicles, from government investment, such as spending on cancer research.
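Here is the numerical sketch of tax smoothing promised above. It is our own illustration: the quadratic deadweight-loss function and the particular rates are standard textbook assumptions, not results from this chapter.

```python
# Tax smoothing: if the deadweight loss from taxation rises with the square of
# the tax rate, two years at 25 percent distort less than one year at 40 percent
# plus one at 10 percent, even though both plans raise the same total revenue
# (50 percent of one year's income, holding the tax base fixed). The quadratic
# form and the 0.5 scale factor are illustrative assumptions.

def distortion(tax_rate, scale=0.5):
    return scale * tax_rate ** 2

bunched = distortion(0.40) + distortion(0.10)   # tax heavily during the war year
smoothed = distortion(0.25) + distortion(0.25)  # borrow, then tax evenly over time

print(f"bunched taxation:  total distortion = {bunched:.4f}")   # 0.0850
print(f"smoothed taxation: total distortion = {smoothed:.4f}")  # 0.0625
```

The smoothed plan raises the same revenue at a lower total distortion, which is the efficiency case for war deficits. Now return to the distinction just drawn between government consumption and government investment.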
We could then argue that it makes more sense to borrow to finance government investment rather than government consumption. Although a very nice idea in principle, this approach to the government accounts often founders on the practicalities and the politics of implementation. First, it is not at all clear how to classify many government expenditures. Was a launch of the space shuttle consumption or investment? What about the wages of teachers in the public schools? What about the money spent on national parks? Second, politicians would have a strong incentive to classify expenditures as investment rather than consumption, to justify deferring payment. Another benefit of deficits is that they can play a role in economic stabilization. Chapter 22 "The Great Depression" spelled out in detail the role for fiscal policy in stabilizing output. In the short run, the level of economic activity can deviate from potential GDP. As a consequence, aggregate expenditures play a role in determining the level of output. Fiscal policy influences the level of aggregate expenditures. Changes in government purchases directly affect aggregate expenditures because they are a component of spending, and changes in taxes indirectly affect aggregate demand through their effect on consumption. Hence deficit spending can help to stabilize the economy. In summary, there are several arguments for allowing governments to run deficits. We would forswear these benefits if we were to adopt a balanced-budget amendment. (One of the arguments for deficits—funding wars—is an explicit exception, and the only such exception, written into the bill that we quoted earlier.) But we conclude by noting that there is a further, much less benign, reason for government deficits: they may benefit politicians even if they do not benefit the country as a whole. Deficits allow politicians to provide benefits to constituents today and leave the bill to future generations. If politicians and voters care more about current benefits than future costs, then they have a strong incentive to incur large deficits and let future generations worry about the consequences. Deficits around the World Do other countries also run deficits in the way that the United States does? Table 29.4.1 "Budget Deficits around the World, 2005*" summarizes the recent budgetary situation for several countries around the world. With the exception of Argentina, all the countries were running deficits in 2005. (The table deliberately does not express the deficits relative to any measure of economic activity in the country, so it is hard to say whether these deficits are large or small. An Economics Detective exercise at the end of the chapter encourages you to look at this question.)

Country     Revenues   Expenditures   Deficit
Argentina   42.6       39.98          −2.62
China       392.1      424.3          32.2
France      1,006      1,114          108
Germany     1,249      1,362          113
Italy       785.7      861.5          75.8

\(^{*}\) Data are in billions of US dollars.

Table \(1\): Budget Deficits around the World, 2005\(^{*}\)

Source: CIA Fact Book, http://www.cia.gov/cia/publications/factbook/fields/2056.html.

France, Germany, and Italy are of particular interest. These three countries are part of the European Union (EU). In January 1999, when the Economic and Monetary Union was formed, a restriction on the budget deficits of EU countries went into effect. This measure was contained in legislation called the Stability and Growth Pact.
This pact is discussed in detail in “Stability and Growth Pact,” European Commission Economic and Financial Affairs, accessed September 20, 2011, http://ec.europa.eu/economy_finance/sgp/index_en.htm. Its main component is a requirement that member countries keep deficits below a threshold of 3 percent of GDP. The threshold is set above zero, rather than at zero, to give countries room to deal with fluctuations in real GDP. In other words, although the EU does not impose a strict balanced-budget requirement, it does impose limits on member countries. In recent years, however, these limits have been exceeded. For example, in 2005, Germany’s deficit was more than 4.5 percent of its GDP. During the preceding few years, Germany had been in a recession and, as highlighted by Figure 29.3.4 "Deficit/Surplus and GDP", its deficit grew considerably. Instead of imposing contractionary fiscal policies to reduce its deficit, Germany allowed its deficit to grow outside the bounds set by the Stability and Growth Pact. The economic crisis of 2008 and the subsequent recession that hit many of the world’s economies had a further effect on the budget deficits of countries in Europe, contributing to severe debt crises and bailouts in Greece, Ireland, and Portugal. We examine what happened in these countries in Chapter 30 "The Global Financial Crisis". Key Takeaways 1. Countries run government budget deficits when faced with large expenditures, such as a war. 2. By running a deficit, a government is able to spread distortionary taxes over time. Also, a deficit allows a government to allocate tax obligations across generations of citizens who all benefit from some form of government spending. Finally, stabilization policy often requires the government to run a deficit. Exercises 1. What does it mean to say that a tax is “distortionary”? 2. What is the political benefit to deficit spending? 3. When does “fairness” provide a basis for running a deficit?
Learning Objectives After you have read this section, you should be able to answer the following questions: 1. What is the crowding-out effect? 2. When is the crowding-out effect of government deficits large? We now turn to the costs of deficit spending. (Although we refer to this as “deficit spending,” the same arguments apply if we analyze the effects of a reduction in the government surplus.) First, we need to understand what happens in the financial sector of the economy if the government runs a deficit. Savings and Investment Earlier, we examined the circular flow of income in the government sector. Now we turn our attention to the circular flow in the financial sector, which is shown in Figure 29.5.1. We also examined this sector in Chapter 20 "Globalization and Competitiveness". As with all sectors in the circular flow, the flows into and from the sector must match. In the case of the government sector earlier in the chapter, the balance of these flows is another way of saying that the government must satisfy its budget constraint. The rules of accounting tell us that, in the financial sector, the flows in must likewise match the flows out, but what is the underlying economic reason for this? The answer is that the flows are brought into balance by adjusting interest rates in the economy. We think of the financial sector of the economy as a large credit market in which the price is the real interest rate. Toolkit: Section 31.24 "The Credit (Loan) Market (Macro)" You can review the credit market in the toolkit. The Credit Market The supply of loans in the credit market comes from (1) private savings of households and firms, (2) savings or borrowing of governments, and (3) savings or borrowing of foreigners. Households generally respond to an increase in the real interest rate by saving more. Higher real interest rates also encourage foreigners to send funds to the domestic economy. National savings are defined as private savings plus government savings (or, equivalently, private savings minus the government deficit). The total supply of savings is therefore equal to national savings plus the savings of foreigners (that is, borrowing from other countries). The demand for credit comes from firms who borrow to finance investment. As the real interest rate increases, investment spending decreases. For firms, a high interest rate represents a high cost of funding investment expenditures. The matching of savings and investment in the aggregate economy is described by the following equations: \[investment = national\ savings + borrowing\ from\ other\ countries\] or \[investment = national\ savings − lending\ to\ other\ countries.\] The response of savings and investment to the real interest rate is shown in Figure 29.5.2. In equilibrium, the quantity of credit supplied equals the quantity of credit demanded. We have assumed that the country is borrowing from abroad, but nothing at all would change—other than the way we describe the supply curve—if the domestic economy were instead lending to other countries. Crowding Out Armed with this framework, we can determine what happens to saving, investment, and interest rates when the deficit increases. Figure 29.5.3 begins with the credit market in equilibrium at point A. The increased government deficit is shown as a leftward shift of the national savings line. At each level of the real interest rate, the increased government deficit means that national savings is lower. 
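Before tracing this shift through the figures, a small linear sketch may help fix ideas. All the functional forms and numbers below are our own illustrative assumptions, not values from the text.

```python
# A linear sketch of the credit market. The supply of credit is national savings
# plus lending from abroad; the demand for credit is investment. An increase in
# the government deficit reduces national savings one-for-one at every interest
# rate. All parameter values here are illustrative assumptions.

def equilibrium(deficit):
    # investment demand:    I(r) = 1000 - 40 r
    # national savings:     S(r) = 700 + 20 r - deficit
    # lending from abroad:  F(r) = 100 + 20 r
    # equilibrium requires I(r) = S(r) + F(r); solving for r gives:
    r = (1000 - 700 - 100 + deficit) / (40 + 20 + 20)
    investment = 1000 - 40 * r
    return r, investment

r0, i0 = equilibrium(deficit=0)
r1, i1 = equilibrium(deficit=80)
print(f"no deficit:  r = {r0:.2f}, investment = {i0:.1f}")   # r = 2.50, I = 900.0
print(f"deficit 80:  r = {r1:.2f}, investment = {i1:.1f}")   # r = 3.50, I = 860.0
```

In this sketch, the deficit raises the interest rate, and investment falls, but by less than the deficit: higher private saving and extra lending from abroad make up the difference. The paragraphs that follow trace exactly this logic.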
This shift in the savings line implies that the market for loans is no longer in equilibrium at the original interest rate. Real interest rates increase in response to the excess of investment over savings until the market is once again in equilibrium, at point B in Figure 29.5.3. Comparing A to B, we can see there are two consequences of the government deficit: (1) real interest rates increase, and (2) the amount of credit, and hence the level of investment, is lower. The reduction in investment spending caused by an increase in the government deficit is called crowding out. In addition, household spending on durable goods also decreases when interest rates increase: this, too, is an example of crowding out. To the extent that household spending on durables and investment are sensitive to changes in real interest rates, the crowding-out effect can be substantial. Crowding out also operates through net exports. From Figure 29.5.3, we know that an increase in the deficit leads to an increase in interest rates. Increased interest rates have three effects: 1. They cause investment to decrease. This is the crowding-out effect. 2. They cause private saving to increase. Higher interest rates encourage people to save rather than consume. 3. They attract funds from other countries. Investors in other countries see the higher interest rates and decide to invest in the domestic economy. The second and third effects explain why the supply of credit slopes upward in Figure 29.5.2. As a result, the decrease in investment is not as large as the increase in the deficit: the decrease in government saving is partly offset by an increase in private saving and an increase in borrowing from abroad. Increased borrowing from abroad must result in a decrease in net exports to keep the flows into and from the foreign sector in balance. To understand these linkages, imagine that the United States sells additional government debt, some of which is purchased by banks in Europe, Canada, Japan, and other countries. These purchases of government debt require transactions in the foreign exchange market. If a bank in Europe purchases US government debt, there is an increased demand for dollars in the euro–dollar foreign exchange market, which leads to an appreciation of the dollar. When the dollar appreciates, US citizens find that European goods and services are cheaper, whereas Europeans find that US goods and services are more expensive. US imports increase and exports decrease, so net exports decrease. To summarize, an increased government deficit leads to the following: • An increase in the real interest rate • An appreciation of the exchange rate • A reduction in investment and in purchases of consumer durables • An increase in the trade deficit Table 29.5.1 "Investment, Savings, and Net Exports (Billions of Dollars)" shows the US experience during the 1980s, when the US federal government ran a large budget deficit (the negative entries in the federal budget surplus column). The table also reveals that the United States ran a sizable trade deficit starting in 1983. This phenomenon became known as the twin deficits.
Year   Investment   Trade Surplus   National Saving   Budget Surplus   Error
1980   579.5        11.4            549.4             −23.6            41.5
1981   679.3        6.3             654.7             −19.4            30.9
1982   629.5        0.0             629.1             −94.2            0.4
1983   687.2        −31.8           609.4             −132.3           46.0
1984   875          −86.7           773.4             −123.5           14.9
1985   895          −110.5          767.5             −126.9           17.0
1986   919.7        −138.9          733.5             −139.2           47.3
1987   969.2        −150.4          796.8             −89.8            22.0
1988   1,007.7      −111.7          915.0             −75.2            −19
1989   1,072.6      −88.0           944.7             −66.7            39.9

Table \(1\): Investment, Savings, and Net Exports (Billions of Dollars)

Source: Economic Report of the President (Washington, DC: GPO, 2004), table B-32.

Even though recent years have also seen high deficits in the United States, interest rates have not increased, so we have not seen crowding out. This is because the Federal Reserve has also been operating in credit markets to keep interest rates low. Although crowding out is associated with fiscal policy, it also depends on what policies the monetary authority chooses to pursue. When crowding out does occur, its long-term consequences may be significant. Lower investment translates, in the long run, into a lower standard of living. Chapter 21 "Global Prosperity and Global Poverty" explained how investment feeds into long-run economic growth. An increase in government spending means that the country has chosen to consume more now and less in the future. Similarly, crowding out of net exports means that the economy is borrowing more from other countries. This again means that the country has chosen to consume more now in exchange for debt that must be paid back later. The crowding-out effect is perhaps the most powerful argument in favor of a balanced-budget requirement. Key Takeaways 1. Crowding out occurs when government deficits lead to higher real interest rates and lower investment. The high interest rates can also cause the domestic currency to appreciate, leading to a decrease in net exports. 2. The crowding-out effect is large when spending by households on durables and investment spending are sensitive to variations in the real interest rate and when exports are sensitive to changes in the exchange rate. Exercises 1. Why do higher interest rates cause the currency to appreciate? 2. In using the credit market to study the effects of government deficits on real interest rates, what did we assume about household saving?
Learning Objectives After you have read this section, you should be able to answer the following questions: 1. What is the Ricardian theory about the effects of deficits on interest rates and real gross domestic product (GDP)? 2. What is the evidence on the Ricardian theory? Buried in our analysis of the crowding-out effect is a critical assumption. We argued that an increase in the government deficit would reduce national savings at every level of the interest rate. Implicitly, we assumed that the change in government behavior had no direct effect on private savings. Instead, there was an indirect effect: savings increased when the interest rate increased. But at any given level of interest rates, we assumed that private saving was unchanged. Perhaps that is not the most reasonable assumption. Consider the following thought experiment: • The government sends you and everyone else a check for \$1,000, representing a tax cut. • The government finances this increase in the deficit by selling government bonds. • The government announces that it will increase taxes next year by the amount of the tax cut plus the interest it owes on the bonds that it issued. What will be your response to this policy? A natural reaction is just to save the entire tax cut. After all, if the government cuts taxes in this fashion, then all it is doing is postponing your tax bill by one year. Your lifetime resources have not increased at all. Hence you can save the entire tax cut, accumulate the interest income, and use this income to pay off your increased tax liability next year. The Household’s Lifetime Budget Constraint The household’s lifetime budget constraint tells us that households must equate the discounted present values of income and expenditures over their lifetimes. We use it here to help us understand how households behave when there are changes in the timing of their income. In general, the budget constraint must be expressed in terms of discounted present values: \[discounted\ present\ value\ of\ lifetime\ consumption = discounted\ present\ value\ of\ lifetime\ disposable\ income.\] When the real interest rate is zero, life is simple: it is legitimate simply to add together income and consumption in different years. In this case, the lifetime budget constraint says that \[total\ lifetime\ disposable\ income = total\ lifetime\ consumption.\] The measure of income used in the household’s budget constraint is lifetime disposable income. You can think of discounted lifetime disposable income as the difference between the discounted present value of income (before taxes) and the discounted present value of taxes. A government’s tax policy therefore matters to the household through the discounted present value of its taxes. Toolkit: Section 31.34 "The Life-Cycle Model of Consumption" You can review the life-cycle model of consumption in the toolkit. Private Savings and Government Savings In our earlier thought experiment, the increase in the government deficit was exactly offset by an increase in private savings. This implication is shown in Figure 29.6.1 "Ricardian Equivalence": nothing happens. The composition of national savings changes: public savings decrease, and private savings increase. But these two changes exactly offset each other since the private sector saves the entire amount of the tax cut. As a result, the supply curve does not shift. Since national savings do not change, the equilibrium remains at point A, and there is no crowding-out effect.
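The arithmetic behind this neutrality result is worth making explicit. The sketch below is our own; the 5 percent interest rate is an arbitrary assumption, and the conclusion holds for any rate.

```python
# A debt-financed tax cut of 1,000 today, repaid with interest next year, leaves
# the discounted present value (DPV) of the household's taxes unchanged.
r = 0.05  # assumed real interest rate
tax_cut_today = 1000.0
tax_increase_next_year = tax_cut_today * (1 + r)  # the announced repayment

change_in_dpv_of_taxes = -tax_cut_today + tax_increase_next_year / (1 + r)
print(change_in_dpv_of_taxes)  # 0.0: lifetime resources are unaffected
```

Because lifetime resources are unchanged, the household has no reason to change its consumption; it simply saves the tax cut.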
Economists call this idea Ricardian equivalence, after David Ricardo, the 19th-century economist who first suggested such a link between public and private saving. Ricardian equivalence occurs when an increase in the government deficit leads to an equal increase in private saving and no change in either the real interest rate or investment.

Figure 29.6.1: Ricardian Equivalence. An increase in the government deficit is equivalent to a decrease in government savings, which shifts national savings leftward. In a Ricardian world, private savings increase by an offsetting amount, so the final result is no change in national savings.

The Ricardian perspective can be summarized by two related claims: 1. The timing of taxes is irrelevant. 2. If government purchases are unchanged, tax cuts or increases should have no effect on the economy. These claims follow from the government’s intertemporal budget constraint and the household’s lifetime budget constraint, taken together. The government’s constraint tells us that a given amount (that is, a given discounted present value) of government spending implies a need for a given (discounted present value) amount of taxes. These taxes could come at all sorts of different times, with different implications for the deficit, but the total amount of taxes must be enough to pay for the total amount of spending. The household’s lifetime budget constraint tells us that the timing of taxes may be irrelevant to households as well: they should care about the total lifetime (after-tax) resources that they have available to them. The implications of the Ricardian perspective are not quite as stark if the increased deficit is due to increased government spending. Households should still realize that they have to pay for this spending with higher taxes at some future date. Lifetime household income will decrease, so consumption will decrease. However, consumption smoothing suggests that the decrease in consumption will be spread between the present and the future. The decrease in current consumption will be less than the increase in government spending, so national savings will decrease, as in the analysis in Section 29.5 "The Costs of Deficits". Since the Ricardian perspective says that the timing of taxes is irrelevant, the effect is the same as it would be if the taxes were also imposed today. So one way of thinking about this is to suppose that the government increases spending and finances that increase with current taxes. If the Ricardian perspective is an accurate description of how people behave, then much of our analysis in this chapter becomes irrelevant. Deficits are not needed to spread out the costs of major government expenditures because households can do this smoothing for themselves. Changes in taxes have no effect on aggregate spending, so there is no crowding-out effect. As for a balanced-budget amendment, it too would be much less significant in such a world. Ricardian households effectively “undo” government taxation decisions. However, the exact effect of an amendment would depend on how the government chose to ensure budget balance. Suppose the economy went into recession, so tax revenues decreased. There are two ways to restore budget balance. One is to increase taxes; according to the Ricardian perspective, this would have no effect on the economy at all. The other is to cut government purchases; as we have seen, this would have some effects.
Evidence The Ricardian perspective seems very plausible when we consider a thought experiment such as a tax cut this year matched by a corresponding tax increase next year. At the same time, a typical tax cut is not matched by an explicit future tax increase at a specified date. Instead, a tax cut today means that at some unspecified future date taxes will have to be increased. Furthermore, the Ricardian perspective requires that households have a sophisticated economic understanding of the intertemporal budget constraint of the government. It is therefore unclear whether this Ricardian view is relevant when we evaluate government deficits. Do households understand the government budget constraint and adjust their behavior accordingly, or is this just an academic idea—theoretically interesting, perhaps, but of limited relevance to the real world? This is an empirical question, so we turn to the data. There are two natural ways to examine this question: the first is to look at the relationship between government deficits and real interest rates; the second is to look at the relationship between government deficits and private saving. Deficits and Interest Rates We want to answer the following question: do increases in government deficits cause real interest rates to increase?

Figure 29.6.2: US Surplus/GDP Ratio and Real Interest Rate, 1965–2009. There is some evidence that declines in the government surplus are associated with higher real interest rates, contrary to the Ricardian view. Source: Economic Report of the President, 2010, Tables B-63 and B-72.

Figure 29.6.2 "US Surplus/GDP Ratio and Real Interest Rate, 1965–2009" shows two series. The first is the ratio of the US budget surplus to GDP, measured on the left axis. (Be careful—this is the surplus, not the deficit. The economy is in deficit when this series is negative.) The second is a measure of the real interest rate, measured on the right axis. The figure shows that interest rates do seem to increase when the surplus decreases and vice versa. We can compute the correlation between the surplus-to-GDP ratio and the real interest rate. For these data, the correlation is −0.16. The minus sign means that when the surplus is above average, the real interest rate tends to be below its average value, consistent with the impression we get from the graph. However, the correlation is not very large. The 1980s stand out in the figure. During this period, the budget deficit grew substantially, reflecting low economic activity as well as tax cuts that were enacted during the early years of the Reagan administration. Starting in 1982, real interest rates increased substantially, just as the budget deficit was widening. This is consistent with crowding out and contrary to the Ricardian perspective. We must be cautious about inferring causality, however. It is false to conclude from this evidence that an increase in the deficit caused interest rates to increase. It might be that some other force caused high interest rates and low economic activity. For example, as explained in Chapter 25 "Understanding the Fed", tight monetary policy (such as that enacted in the 1980s) leads to high interest rates and can push the economy into recession, leading to a deficit. Toolkit: Section 31.23 "Correlation and Causality" You can review the definition of a correlation in the toolkit.
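For readers who want to see how a correlation like the −0.16 above is computed, here is a minimal sketch. The two series below are made-up illustrative numbers, not the actual data plotted in the figure.

```python
# Computing a correlation between the surplus-to-GDP ratio and the real
# interest rate. These series are invented for illustration only.
import numpy as np

surplus_to_gdp = np.array([-2.5, -4.0, -1.0, 0.5, 2.0, -3.5, -1.5])
real_interest_rate = np.array([3.0, 4.5, 2.5, 2.0, 1.5, 3.0, 4.0])

corr = np.corrcoef(surplus_to_gdp, real_interest_rate)[0, 1]
print(f"correlation = {corr:.2f}")  # negative: larger surpluses, lower rates
```

The calculation with the actual surplus and interest rate series proceeds in exactly the same way.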
Government Deficits and Private Saving According to the Ricardian perspective, increases in the government deficit should be matched by increases in private saving and vice versa. Private and government savings rates for the United States are shown in Figure 29.6.3 "US Government and Private Savings Rates". These calculations rely on data from the Economic Report of the President (Washington, DC: GPO, 2011), table B-32, accessed September 20, 2011, www.gpoaccess.gov/eop. The private saving rate equals private saving as a percentage of real GDP. The government saving rate essentially equals the government surplus as a percentage of GDP (there are some minor accounting differences that we do not need to worry about).

Figure 29.6.3: US Government and Private Savings Rates. There is some evidence that private and government saving move in opposite directions, as suggested by the Ricardian view. Source: Calculations based on Economic Report of the President, Table B-32.

Private savings increased over the 1980–1985 period and decreased thereafter. Large deficits (negative government savings) emerged during the early 1980s, and at this time there was an increase in the private savings rate. The government savings rate increased steadily during the 1990s, and, during this period, the private savings rate decreased. These data are therefore more supportive of the Ricardian view: private and government savings were moving in opposite directions. Turning to international evidence, an Organisation for Economic Co-operation and Development study that examined 21 countries between 1970 and 2002 found that changes in government deficits were associated with partially offsetting movements in private saving. On average, the study found that changes in private savings offset about one-third to one-half of changes in the government deficit. See Luiz de Mello, Per Mathis Kongsrud, and Robert Price, “Saving Behaviour and the Effectiveness of Fiscal Policy,” Economics Department Working Papers No. 397, Organisation for Economic Co-operation and Development, July 2004, accessed September 20, 2011, www.oecd.org/officialdocuments/displaydocumentpdf/?cote=eco/wkp(2004)20&doclanguage=en. Figure 29.6.4 "Government and Private Savings Rates in Spain and Greece" and Figure 29.6.5 "Government and Private Savings Rates in France and Ireland" reproduce some figures from this study. In Spain and Greece, for example, we see patterns of savings that are consistent with the Ricardian perspective: private savings and government savings move in opposite directions. By contrast, the pictures for Ireland and France show little evidence of such an effect.

Figure 29.6.4: Government and Private Savings Rates in Spain and Greece. Source: Economic Report of the President, 2010, Tables B-63 and B-72.

Figure 29.6.5: Government and Private Savings Rates in France and Ireland. Source: Calculations based on Economic Report of the President, Table B-32.

The data from the United States and other countries indicate that this is almost certainly one of those questions where the truth is in the middle. We do not observe households behaving completely in accordance with the Ricardian perspective. As a result, we conclude that deficits do have the real effects on the economy that we discussed at length in this chapter. At the same time, there is evidence suggesting that households pay attention to the government budget constraint. The Ricardian perspective is more than just an academic curiosity: some households, some of the time, adjust their behavior to some extent. Key Takeaways 1.
According to Ricardian theory, a government deficit will be offset by an increase in household saving, leaving real interest rates and the level of economic activity unchanged. The key to the theory is that households anticipate future taxes when the government runs a deficit. 2. There is some evidence that interest rates are high when deficits are high, contrary to the prediction of the Ricardian view. But during some periods of large deficits, the household saving rate is high as well. The evidence on Ricardian equivalence is not conclusive. Exercises 1. If the government cuts taxes, what happens to public saving, private saving, and national saving according to the Ricardian theory? 2. What is the difference between causation and correlation when we examine the relationship between budget deficits and real interest rates?
In Conclusion We started this chapter by asking whether the United States should adopt a balanced-budget amendment to the constitution. This question has both political and economic ramifications. It is not our purpose in this book to answer this question, or others like it, for you. Most interesting questions do not have easy answers. Instead, they come down to assessments of costs and benefits and judgments about which frameworks best describe the world that we live in. Our intent here was to provide you with the ability to assess the arguments about a balanced-budget amendment and, more generally, the effects of deficit spending on the economy. We saw in this chapter that there are certainly both benefits and costs associated with deficit finance. Key benefits include the ability to spread out the payments for large government purchases and the opportunity to use deficits to stimulate economies in recession. The main cost of deficits is that they increase real interest rates, thus crowding out investment and slowing long-term growth. As we also saw, these effects might be tempered by an increase in household savings in response to government deficits. The evidence suggests that the Ricardian perspective on deficits has partial validity: changes in government savings are likely to be partially, but not completely, offset by changes in households’ saving behavior. We also noted that a balanced-budget amendment would not absolve government of the difficult choices involved in balancing the budget. It is one thing to pass a law saying that the budget must be balanced. It is quite another to come up with the spending cuts and tax increases that are needed to make it happen. Meanwhile, time is passing. Go and look again at the size of the debt outstanding reported at the US Treasury (www.treasurydirect.gov/NP/BPDLogin?application=np). How much has it changed since you first checked it? How much has your share of the debt changed?

Exercises

1. The following table has the same form as Table 29.2.1 "Calculating the Deficit" but with some missing entries. Complete the table. In which years was there a balanced budget? The columns are Year, Government Purchases, Tax Revenues, Transfers, Net Taxes, and Deficit; the entries given for each year are as follows:

Year 1: 60, 10, 20, −10
Year 2: 80, 100, 20
Year 3: 120, 20, 100, −20
Year 4: 140, 180, 0
Year 5: 20, 140, 40

Table \(1\): Calculating the Deficit

2. The following table lists income and the tax rate at different levels of income. In this exercise, the tax rate is different at different levels of income: for income below 500, the tax rate is 20 percent; for income in excess of 500, the tax rate is 25 percent. Calculate tax receipts for this case.

Income   Marginal Tax Rate   Tax Receipts
0        0.2
100      0.2
500      0.2
1,000    0.25
2,000    0.25
5,000    0.25

Table \(2\): Tax Receipts and Income

3. Consider the following table. Suppose that government purchases are 500, and the tax rate is 20 percent. Furthermore, suppose that real gross domestic product (GDP) takes the values indicated in the table. If the initial stock of debt is 1,000, find the level of debt for each of the 5 years in the table.

Year   GDP     Deficit   Debt (Start of Year)   Debt (End of Year)
1      3,000             1,000
2      2,000
3      4,000
4      1,500
5      2,500

Table \(3\): Exercise

4. For the example in the preceding table titled “Exercise”, are the deficits and surpluses due to variations in the level of GDP or fiscal policy? Suppose you were told that potential GDP was 4,000. Is there a full-employment deficit or surplus when actual GDP is 3,000?
Design a fiscal policy so that the budget is in balance when real GDP is equal to potential GDP. 5. Draw a version of Figure 29.3.4 "Deficit/Surplus and GDP" using the data for tax receipts you calculated in the table titled “Tax Receipts and Income”, and assuming government purchases equal 475. At what level of GDP is the budget in balance? 6. The text says that expansionary fiscal policy increases the deficit given the level of GDP. Would an expansionary fiscal policy necessarily increase the deficit if GDP changes as well? 7. Compare Figure 29.4.1 "Ratio of US Debt to GDP, 1791–2009" (from 1940 onward) with Figure 29.2.2 "US Surplus and Debt, 1962–2010". Why do the figures look so different from each other? 8. Suppose that investment is very sensitive to real interest rates. What does this mean for the slope of the demand curve in the credit market? Will it make the crowding-out effect large or small? Economics Detective 1. The price of government debt during the Civil War makes for a fascinating case study. Both the Union and the Confederacy were issuing debt to finance their expenditures. Try to do some research on the value of Civil War debt to answer the following questions. 1. How much did the Union and the Confederacy rely on deficits rather than taxes to finance the war efforts? 2. What do you think happened to the value of the Union and Confederacy debt over the course of the war? 3. Do you think these values were positively or negatively correlated? 4. A starting point for your research is a website (http://www.tax.org/Museum/1861-1865.htm) that summarizes the way in which the North and the South financed their war efforts. 2. What happened to the budget deficits of European Union member countries during the financial crisis that started in 2008? Were these cyclically adjusted budget deficits? 3. Using the CBO as a source, make a table of the budget deficits for the period 1990 to the present in constant rather than current dollars (that is, obtain figures for real receipts, outlays, and deficits). Describe the behavior of real receipts, real outlays, and the real deficit over this period. Does it differ qualitatively from the description in the text? (If necessary, check the toolkit for instructions on how to convert nominal variables into real variables.) 4. Using the CBO as a source, make a table of the on-budget deficits for the period 1990 to the present. Compare these calculations with those reported in Table 29.2.2 "Recent Experience of Deficits and Surpluses (Billions of Dollars)". Explain the main differences between these tables. 5. Each month, the Congressional Budget Office (CBO) posts its monthly budget review. Look for the most recent monthly budget review. What are the largest outlays and revenues? How large are interest payments on the debt? 6. We saw that the government budget went from surplus to deficit in 2002. Based on the discussion in the text, try to find two different things that happened around this time that might explain this change. 7. This exercise builds on Table 29.4.1 "Budget Deficits around the World, 2005*". 1. Find the levels of GDP in 2005 for each country listed in Table 29.4.1 "Budget Deficits around the World, 2005*". Using this information, find the ratio of the deficit to GDP for each of the countries. 2. Which country in the world has the highest ratio of debt to GDP? How do the countries listed in Table 29.4.1 "Budget Deficits around the World, 2005*" compare in terms of the debt-to-GDP ratio? 3.
For the countries listed in Table 29.4.1 "Budget Deficits around the World, 2005*", find the growth rate of real GDP in 2005. Do countries that grow faster have smaller deficits? Hint: The CIA Fact Book (https://www.cia.gov/library/publications/the-world-factbook/index.html) will be useful. Spreadsheet Exercises 1. Suppose that government purchases are 500 and the tax rate is 20 percent. Create a table to calculate the budget deficit for each level of income from 0 to 1,000, increasing by 50 each time. At what level of income does the budget balance? Compare your results to those in Table 29.3.2 "Deficit and Income". 2. Create a spreadsheet to study the debt as in Table 29.2.5 "Deficit and Debt" using the data from Table 29.2.1 "Calculating the Deficit", but assume that the level of debt outstanding at the start of the first period was 100, not 0, and that the interest rate is 2 percent each year. Add a column to Table 29.2.1 "Calculating the Deficit" to indicate the payment of interest on the debt. Calculate the deficit for each year and then the debt outstanding at the start of the next year. Also calculate the primary deficit in your spreadsheet. What happens to these calculations when the interest rate increases to 5 percent?
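A possible starting point for Spreadsheet Exercise 2 is sketched below in Python. The purchases and net-tax values are placeholders of our own; substitute the actual numbers from Table 29.2.1 "Calculating the Deficit".

```python
# Debt accumulation with interest (placeholder data, not Table 29.2.1):
#   deficit(t)  = purchases(t) + interest_rate * debt(t) - net_taxes(t)
#   debt(t + 1) = debt(t) + deficit(t)
# The primary deficit excludes the interest payments.

purchases = [60, 80, 120, 140, 100]   # placeholder values
net_taxes = [50, 90, 100, 140, 120]   # placeholder values
interest_rate = 0.02                  # try 0.05 to answer the last question
debt = 100.0                          # debt at the start of year 1

for year, (g, t) in enumerate(zip(purchases, net_taxes), start=1):
    interest = interest_rate * debt
    primary_deficit = g - t
    deficit = primary_deficit + interest
    debt += deficit
    print(f"year {year}: interest = {interest:6.2f}, "
          f"primary deficit = {primary_deficit:6.2f}, "
          f"deficit = {deficit:6.2f}, end-of-year debt = {debt:7.2f}")
```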
The following quotation describes a meeting held in Washington, DC, among the G-20 countries, a group of 20 major economies (Associated Press, “World Leaders Pledge to Combat Global Crisis,” Minnesota Public Radio News, November 17, 2008, accessed July 25, 2011, http://minnesota.publicradio.org/display/web/2008/11/17/financial_meltdown). President George W. Bush, who served as host for the G-20 discussions, said it was the seriousness of the current crisis that had convinced him that massive government intervention was warranted. He said he felt “extraordinary measures” were needed after being told “if you don’t take decisive measures then it’s conceivable that our country could go into depression greater than the Great Depression.” As we wrote this chapter in 2011, the world economy was slowly emerging from the worst financial crisis since the Great Depression. Economists and others formerly thought that the Great Depression was an interesting piece of economic history and nothing more. After all, they thought, we now understand the economy much better than did the policymakers at that time, so we could never have another Great Depression. But this belief that monetary and fiscal policymakers around the world knew how to ensure economic stability was shattered by financial turmoil that began in 2007, blossomed into a full-fledged global crisis in the fall of 2008, and led to sustained downturns in many economies in the years that followed. That was the background to the November 2008 meeting of the G-20 countries. The world leaders attending that meeting were attempting to cope with economic problems that they had never even contemplated. The events that led to this meeting were unprecedented since the Great Depression, in part because of the magnitude and worldwide nature of the crisis. As the quotation from President George W. Bush attests, extraordinary times prompted extraordinary action. The US government passed an “emergency rescue plan” in October 2008 to provide \$700 billion in funding to (among other things) buy up assets of troubled banks and firms. This was followed by a large stimulus package, called the American Recovery and Reinvestment Act of 2009, which was passed during the first year of the Obama administration. Other countries brought in similar stimulus packages: increased government expenditures and cuts in taxes were enacted by governments around the world. Monetary authorities also took extraordinary steps, with many countries rapidly reducing interest rates to very low levels. In addition, the US Federal Reserve and other monetary authorities engaged in other unprecedented policies in an attempt to provide liquidity to the financial system. Although the roots of the crisis can be traced to 2007 or before, and although the implications of the crisis are still being felt, the full-fledged crisis began in 2008. As shorthand, we therefore refer to all these events as the “crisis of 2008,” and the question we ask in this chapter is as follows: What happened during the crisis of 2008? Road Map In this chapter, we explore the policies enacted by governments to deal with the crisis. First we need a framework to understand these events. We make sense of the events of the past few years by drawing on the tools that we have developed in this book. We aim to do more than just give a narrative account of what happened; we also offer explanations of what happened.
Whereas other chapters in this book are largely self-contained, this chapter is designed as a capstone. We therefore make frequent references to topics discussed in other chapters. The crisis of 2008 was a highly complex event, with many different and imperfectly understood causes. Moreover, some of the details involve highly arcane aspects of financial markets. We are not going to give you a comprehensive account of the crisis. But we will show you how you can use the tools you have learned in this book to make some sense of what happened. We highlight three themes in particular. 1. As emphasized in Chapter 19 "The Interconnected Economy", markets in the economy and around the world are interconnected. Various connections among markets caused the crisis to spill over across different financial markets, from financial markets into the real economy, and from the United States to economies all around the world. These are sometimes called “contagion problems.” 2. There were coordination failures in addition to contagion problems. 3. Monetary and fiscal policies are interconnected. We will see that responses to the crisis around the globe often required monetary and fiscal authorities to work together. We start by summarizing events in the United States. In doing so, we use a tool from game theory to study how financial instability might arise. We use this framework to consider both recent events in the United States and events from the Great Depression. We then look specifically at the housing market at the start of the 21st century. After understanding the experience in the United States, we study how the crisis spread from the United States to other countries. We stress both financial and trade links across countries as ways in which the crisis spread. We look at a few countries in particular, such as the United Kingdom, China, Iceland, and the countries of the European Union. The crisis in the European Union is particularly interesting to economists because the interconnections between the monetary and fiscal authorities are very different from those in other places. Finally, we consider exchange rates and currency crises.
Learning Objectives

After you have read this section, you should be able to answer the following questions:

1. What was the role of coordination games in the crisis?
2. What was the monetary policy response to the crisis?
3. What was the fiscal policy response to the crisis?

Starting in 2007 and stretching well into 2008, the United States and other countries experienced financial crises that resembled those of the Great Depression. Through the summer of 2011 (when this chapter was written), unemployment remained high, and real gross domestic product (real GDP) growth was low in the US economy. Some countries in Western Europe, such as Greece, were close to defaulting on their government debt.

One indicator of the seriousness of these events is the dramatic action that policymakers took in response. For example, on October 3, 2008, President George W. Bush signed into law the Emergency Economic Stabilization Act of 2008, which authorized the US Treasury to spend up to $700 billion for emergency economic stabilization. The full text of the bill and related facts are available at “Bill Summary & Status: 110th Congress (2007–2008) H.R.1424,” THOMAS: The Library of Congress, accessed September 20, 2011, thomas.loc.gov/cgi-bin/bdquery/z?d110:h.r.01424:. As stated in the bill,

The purposes of this Act are—

1. to immediately provide authority and facilities that the Secretary of the Treasury can use to restore liquidity and stability to the financial system of the United States; and
2. to ensure that such authority and such facilities are used in a manner that—
   1. protects home values, college funds, retirement accounts, and life savings;
   2. preserves homeownership and promotes jobs and economic growth;
   3. maximizes overall returns to the taxpayers of the United States; and
   4. provides public accountability for the exercise of such authority.

This was an extraordinary amount of funding—equivalent to more than $2,000 for every man, woman, and child in the United States. Perhaps even more strikingly, the funding was to allow the Treasury to do something it had never done before: to purchase shares (that is, become part owners) of financial institutions, such as banks and insurance companies. The United States, unlike some other countries, has never had many cases of firms being owned by the government. Moreover, in previous decades, the trend around the world had been toward less government ownership of business—not more. It would have been almost unthinkable even a few months previously for a Republican president to have put in place mechanisms to permit this extent of government involvement in the private economy.

News accounts at the time made many different claims about the financial crisis, including the following:

• Banks and other financial institutions were failing.
• Housing prices had plummeted.
• So-called subprime mortgage loans had been made to borrowers in the early part of the decade, and the default rate on mortgages was rising because borrowers were no longer able to repay the loans.
• Low interest rates fueled asset bubbles that eventually burst.
• The financial crisis started in the United States but then spread to other countries.
• Stock markets around the world fell substantially.
• The next Great Depression might be around the corner.

Each news item has an element of truth, yet each can also mislead. We first sort through the events of 2008 and the policy responses. Then we look at the current state of the economy and at more recent policy actions.
Coordination Games and Coordination Failures

As discussed in Chapter 22 "The Great Depression", the United States and other economies experienced severe economic downturns in the early 1930s, together with instability in financial markets. It was little wonder that news accounts in 2008 and 2009 were filled with discussions of the parallels and differences between then and now. When we looked at financial instability during the Great Depression in Chapter 22 "The Great Depression", we studied a “bank-run game”—a strategic situation where depositors had to decide whether to leave their money in the bank or take it out. The bank-run problem is a leading example of a coordination game—a game with two key characteristics:

1. The game has multiple Nash equilibria.
2. These Nash equilibria can be ranked.

In a Nash equilibrium, everyone pursues his or her own self-interest given the actions of others. This means that no single individual has an incentive to change his or her behavior, given the choices of others. In a coordination game, there is more than one such equilibrium, and one of the Nash equilibria is better than the others. When the outcome of the coordination game is one of the outcomes that are worse than other possible equilibrium outcomes, we say that a coordination failure has occurred.

Toolkit: Section 31.18 "Nash Equilibrium"

Nash equilibrium is explained in more detail in the toolkit.

The possibility of coordination failure suggests two more fundamental questions:

1. What gives rise to these coordination games?
2. What can the government do about them?

Economists know that there are many situations that give rise to coordination games. Bank runs are just one example. In the crisis of 2008, actual bank runs did not occur in the United States, but they did happen in other countries. More generally, the financial instability that arose was similar in nature to a bank run. A recent article by Russell Cooper and Jonathan Willis explores in more detail the significance of coordination problems and beliefs during the recent crisis: “Coordination of Expectations in the Recent Crisis: Private Actions and Policy Responses,” Federal Reserve Bank of Kansas City Quarterly Review, First Quarter 2010, accessed July 25, 2011, http://www.kansascityfed.org/PUBLICAT/ECONREV/PDF/10q1CooperWillis.pdf. Instead of failures of small neighborhood banks, we saw the failure or near failure of major financial institutions on Wall Street, many of which had other banks as their clients. As noted by the then president of the Federal Reserve Bank of New York, Timothy Geithner, the process of intermediation has gone beyond traditional banks to create a parallel (shadow) financial system in the United States: “The scale of long-term risky and relatively illiquid assets financed by very short-term liabilities made many of the vehicles and institutions in this parallel financial system vulnerable to a classic type of run, but without the protections such as deposit insurance that the banking system has in place to reduce such risks.” From an address given by Geithner to the Economic Club of New York: “Timothy F. Geithner: Reducing Systemic Risk in a Dynamic Financial System,” Bank for International Settlements, June 9, 2008, http://www.bis.org/review/r080612b.pdf?frames=0. But the use of coordination games does not stop with bank runs. We can think of the decline in housing values as coming from a coordination failure.
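To make the structure of a coordination game concrete, here is a minimal sketch in Python. The payoff numbers are illustrative only; they anticipate the bank-run payoffs used later in this section, where each person deposits $100, a sound bank pays 110, and a failed bank returns only 20 to those who run.

```python
# A sketch of the bank-run coordination game with illustrative payoffs.
# Strategies: "stay" = leave money in the bank, "run" = withdraw it.
# payoff[(mine, others)] is my payoff given what everyone else does.
payoff = {
    ("stay", "stay"): 110,  # bank is sound; deposits earn interest
    ("stay", "run"):  0,    # bank fails before I withdraw; I get nothing
    ("run",  "stay"): 100,  # I withdraw my deposit from a sound bank
    ("run",  "run"):  20,   # bank fails; only partial recovery for all
}

def best_response(others):
    """My payoff-maximizing action, given everyone else's action."""
    return max(("stay", "run"), key=lambda mine: payoff[(mine, others)])

# A symmetric pure-strategy Nash equilibrium is an action that is a best
# response to itself: no one wants to deviate when everyone plays it.
equilibria = [a for a in ("stay", "run") if best_response(a) == a]
print(equilibria)  # ['stay', 'run']: two Nash equilibria
```

The two equilibria can be ranked: everyone staying (110 each) beats everyone running (20 each), so the all-run equilibrium is a coordination failure.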
Even more strikingly, the circular flow of income itself can generate something that looks very much like a coordination game. Imagine a situation where the economy is in a recession, with high unemployment and low levels of income. Because income is low, households choose low levels of spending. Because spending is low, firms choose low levels of production, leading to low income. By contrast, when income is high, households engage in lots of spending. This leads firms to choose high levels of production, leading to high income.

What can governments do in the face of coordination games? One feature of these games is that the outcome of the game depends on the beliefs that people hold. An important aspect of economic policy may therefore be to support optimism in the economy. If people believe the economy is in trouble, this can be a self-fulfilling prophecy. But if they believe the economy is strong, they act in such a way that the economy actually is strong.

Crisis in the United States

There was no single root cause of the crisis of 2008. Economists and others have pointed to all sorts of factors that sowed the seeds of the crisis; we will not go through all of these here. What is clear is that the housing market in the United States played a critical early role. As we saw in Chapter 19 "The Interconnected Economy", events in the housing market were linked to events in the credit market, the labor market, and the foreign exchange market.

We begin with an equation that teaches us how the value of a house is determined. In Chapter 24 "Money: A User’s Guide", we explained that houses are examples of assets and that the value of any asset depends on the income that the asset generates. More specifically, the value of a house this year is given by the discounted value of the services provided by the house plus the price of the house next year:

$value\ of\ house = \frac{service\ flow\ from\ house + price\ of\ house\ next\ year}{interest\ factor}.$

This equation tells us that three factors determine the value of a house. One is the flow of services that the house provides over the course of the coming year. In the case of a house that is rented out, this flow of services is the rental payment. If you own the home that you live in, you can think of this flow of services as being how much you would be willing to pay each year for the right to live in your house. That value reflects the size of the house, its location, and other amenities. The higher the flow of services from a house, the higher is its current price. The second factor is the price you would expect to receive were you to choose to sell the house next year. If you expect housing prices to be high in the future, then the house is worth more today. This is true even if you do not actually plan to sell the house next year. One way of seeing this is to recognize that if you choose not to sell the house, its worth to you must be at least as large as that price. The third factor is the interest rate—remember that the interest factor equals $(1 + interest\ rate)$. The flow of services and next year’s price both lie in the future, and we know that income in the future is worth less than income today. We use the technique of discounted present value to convert the flow of services and the future price into today’s terms. As the formula shows, we do so by dividing by the interest factor. One implication is that a change in the interest rate affects the current value of a house.
In particular, a reduction in interest rates leads to higher housing prices today because a reduction in interest rates tells us that the future has become more relevant to the present. Although we have written the equation in nominal terms, we could equally work with the real version of the same equation. In that case, the value of the service flow and the future price of the house must be adjusted for inflation, so we would use the real interest factor rather than the nominal interest factor.

Toolkit: Section 31.5 "Discounted Present Value"

You can review discounted present value in the toolkit.

Now that you understand what determines the current value of a house, imagine you are making a decision about whether or not to buy a house. Unless you have a lot of cash, you will need to take out a mortgage to make this purchase. If interest rates are low, then you are more likely to qualify for a mortgage to buy a house. In the early 2000s, mortgage rates were relatively low, with the consequence that large numbers of households qualified for loans. In addition, many lenders offered special deals with very low initial mortgage rates (which were followed by higher rates a year or so later) to entice borrowers. The low interest rates encouraged people to buy houses. We saw this link between interest rates and spending in Chapter 25 "Understanding the Fed".

Lenders are also more willing to give you a mortgage if they think the price of a house is going to increase. Normally, you need a substantial down payment to get a loan. But if your mortgage lender expects housing prices to rise, then the lender will think that it will have the option of taking back the house and selling it for a profit if you cannot repay your mortgage in the future. Thus, the expectation of rising housing prices in the future increases the current demand for houses and thus the current price of houses. In the early and mid-2000s, rising housing prices were seen in many markets in the United States and elsewhere. The rise in prices was fueled at least in part by expectations, in a manner that is very similar to a coordination game. However, the optimism that underlies the price increases can at some point be replaced by pessimism, leading instead to a decrease in housing prices.

Looking back at our equation for the value of a house, how can we explain the decrease in housing prices in 2007 and 2008? Interest rates did not rise over that time. It also seems unlikely that the service flow from a house decreased dramatically. This suggests that the main factor explaining the collapse of housing prices was a drop in the expected future price of houses. Notice the self-fulfilling nature of expectations: if everyone expects an asset to decrease in value in the future, it decreases in value today.

But what happens when housing prices start to decrease? Suppose you had put down $20,000 and borrowed $200,000 from a bank to buy a $220,000 home. If the price of your house decreases to, say, $150,000, you might just walk away from the house and default on the loan. Of course, default does not mean that the house disappears. Instead, it is taken over by the bank. But the bank does not want the house, so it is likely to try to sell it. When lots of banks find themselves with houses that they do not want, then the supply of houses increases, and the price of houses decreases. We now see that there is a vicious circle operating:

1. Housing prices decrease.
2. People default on their loans.
3. Banks sell more houses in the market.
4. Housing prices decrease even more.

This again looks a lot like a coordination game. If housing prices are low, there are more mortgage defaults and thus more houses put on the market for sale. The increased supply of houses drives down housing prices even further.

The crisis of 2008 may have begun in the housing market, but it did not stop there. It spread beyond housing to all corners of the financial markets. As explained in Chapter 24 "Money: A User’s Guide", a loan from your perspective is an asset from the perspective of the bank. Banks that held mortgage assets did not simply hold on to those assets, but neither did they merely sell them on to other banks. Instead, they bundled them up in various creative ways and then sold these bundled assets to other financial institutions. These financial institutions in turn rebundled the assets for sale to other financial institutions and so forth. The bundling of assets was designed to create more efficient sharing of the risk in financial markets. Fannie Mae (www.fanniemae.com/kb/index?page=home) and Freddie Mac (http://www.freddiemac.com), two government-created and government-supported enterprises, were among those involved in the bundling and reselling of mortgages to facilitate this sharing of risks. (These companies are currently in conservatorship.) But there were also costs: (1) it became harder to evaluate the riskiness of assets, and (2) the original bank had a reduced incentive to carefully evaluate the loans that it made because it knew the risk would be passed on to others. This incentive problem made the bundles of mortgage loans riskier.

The Policy Response in the United States

The US government did not stand idle as these events were unfolding. Policymakers took the following actions: (1) they provided more deposit insurance, (2) they decreased interest rates, (3) they facilitated various mergers and acquisitions of financial entities, and (4) they bailed out some financial institutions. Some of these actions were an outgrowth of policies enacted after the Great Depression. The most important of these, deposit insurance, is discussed next.

Guarantee Funds and the Role of Deposit Insurance

In Chapter 22 "The Great Depression", we explained that, during the Great Depression, much of the disruption to the financial system came through bank runs. But in 2007 and 2008, we did not see bank runs in the United States. This was a striking difference between the crisis of 2008 and the Great Depression. The absence of bank runs is almost certainly because deposit insurance “changes the game.” To see how, look at the bank-run coordination game in part (a) of Figure 30.2.1 "The Payoffs in a Bank-Run Game with and without Deposit Insurance". In particular, look at the outcome if other players run and you do not run. In that case you get zero, so this would be a bad decision. You do better if you choose to participate in the run, obtaining 20. If everybody else chooses to run on the bank, you should do the same thing. In this case, the bank fails. But if everyone else leaves their money in the bank, you should do likewise. In this case, the bank is sound. The fact that there are two possible equilibrium outcomes is what makes this a coordination game.

Deposit insurance, which is run by the Federal Deposit Insurance Corporation (FDIC; http://www.fdic.gov/deposit), insures the bank deposits of individuals (up to a limit). Suppose that deposit insurance provides each depositor who leaves money in the bank a payoff of 110 even if everyone else runs.
Now the game has the payoffs shown in part (b) of Figure 30.2.1 "The Payoffs in a Bank-Run Game with and without Deposit Insurance". The strategy of “do not run” is now better than “run” regardless of what other people do. You choose “do not run”—as does everyone else in the game. The outcome is that nobody runs and the banks are stable. Remarkably, this policy costs the government nothing. Because there are no bank runs, the government never has to pay any deposit insurance. By changing the rules of the game, the government has made the bad equilibrium disappear.

Figure 30.2.1 "The Payoffs in a Bank-Run Game with and without Deposit Insurance": You deposit $100 in the bank. Part (a) shows the payoffs without deposit insurance. There are two Nash equilibria: if all people leave their money in the bank, then you should do the same, but if all people run on the bank, you are better off running as well. In part (b), deposit insurance means that the game has a unique equilibrium.

Decreasing Interest Rates

Deposit insurance may have prevented bank runs, but credit markets still did not function smoothly during the crisis of 2008. So what else was going on in credit markets? During the financial crisis, the Federal Reserve (the Fed) decreased its target interest rate. The way in which it does this and its implications for the aggregate economy are covered in Chapter 25 "Understanding the Fed". The Federal Open Market Committee (FOMC) reduced the target federal funds rate from 4.75 percent in September 2007 to 1.0 percent by the end of October 2008 and 0.25 percent by the end of the year. The target rate is indicated in the last column of Table 30.2.1 "The Federal Funds Rate: Target and Realized Rates". However, the Fed lost its usual ability to tightly control the actual federal funds rate. We see this in the other columns of Table 30.2.1. The column labeled “Average” is the average federal funds rate over the day; the highest and lowest rates during the day are indicated as well. Prior to September 2008, the average and target rates were very close, but from mid-September onward, the average rate frequently diverged from the target. In addition, the difference between the high and low rates was much larger after the middle of September 2008.

Table 30.2.1: The Federal Funds Rate: Target and Realized Rates (percent)

Date               | Average | Low  | High | Target
October 14, 2008   | 1.10    | 0.25 | 2.00 | 1.50
October 7, 2008    | 2.97    | 0.01 | 6.25 | 2.00
September 29, 2008 | 1.56    | 0.01 | 3.00 | 2.00
September 15, 2008 | 2.64    | 0.01 | 7.00 | 2.00
July 16, 2008      | 1.95    | 0.50 | 2.50 | 2.00

Source: Data summarized from “Federal Funds Chart,” Federal Reserve Bank of New York, 2008, http://www.newyorkfed.org/charts/ff.

As we explained in Chapter 25 "Understanding the Fed", these low interest rates meant that the Fed had hit the zero lower bound on monetary policy. Because nominal interest rates cannot be less than zero, the Fed was no longer able to stimulate the economy using the normal tools of monetary policy. Because its traditional tools were proving less effective than usual, the Fed turned to other, unusual, policy measures. The Fed created several lending facilities through which it provided funds to financial markets. For example, a commercial paper funding facility was created on October 7, 2008, to promote liquidity in a market that is central to the credit needs of both businesses and households. These are summarized by the Board of Governors, “Information Regarding Recent Federal Reserve Actions,” Federal Reserve, accessed September 20, 2011, http://www.federalreserve.gov/newsevents/recentactions.htm.
They are also discussed in the press release: “Press Release,” Federal Reserve, October 7, 2008, accessed September 20, 2011, http://www.federalreserve.gov/newsevents/press/monetary/20081007c.htm. The Board of Governors listed these as tools of the Fed in addition to three familiar tools: open-market operations, discount-window lending, and changes in reserve requirements.

Short-term interest rates, such as the overnight rate on interbank loans (the so-called LIBOR [London Interbank Offered Rate]), followed the federal funds rate down for much of 2008. (As the name suggests, this is the rate on loans that banks make to each other overnight.) But when the crisis became severe in September and October 2008, the LIBOR rose sharply. “US Dollar LIBOR Rates 2008,” accessed September 20, 2011, http://www.global-rates.com/interest-rates/libor/american-dollar/2008.aspx. This rate averaged just about 5 percent for the month of September and 4.64 percent in October. The data are from the British Bankers’ Association. Short-term rates increased despite the Fed’s attempts to reduce interest rates. Why did these rates not decrease along with the Fed’s targeted federal funds rate? One explanation comes from the following equation:

$loan\ rate \times probability\ of\ loan\ repayment = cost\ of\ funds\ to\ the\ bank.$

On the left-hand side is the loan rate charged by a bank—for example, the interest rate on a car loan, a household improvement loan, or a small business loan. The other term is the likelihood that the loan will actually be repaid. Together these give the expected return to the bank from making a loan. The right-hand side is the cost of funds to the bank. This might be measured as the rate paid to depositors or the rate paid to other banks for loans from one bank to another. When this equation holds, the cost of the input into the loan process, measured as the interest cost of funds to the bank, equals the return on the loans made. The bank does not then expect to make any profits or losses on the loan.

In Chapter 25 "Understanding the Fed", we argued that interest rates on loans usually follow the federal funds rate quite closely. If the Fed reduces the targeted federal funds rate, this reduces the cost of funds to banks. Banks typically follow by decreasing their lending rates. This close connection between the cost of funds and the loan rate holds true provided there is a stable probability of loan repayment. In normal times, that is (approximately) true, so variations in the federal funds rate lead directly to variations in loan rates. During the fall of 2008, the link was weakened. Though the Fed reduced its targeted interest rate so that the cost of funds decreased, loan rates did not decrease. The reason was a fall in the perceived probability of loan repayment: banks perceived the risk of default to be much higher. Banks were cautious because they had suffered through the reduction in the value of mortgage-based assets and had seen some financial institutions fail. The state of the economy, with increasing unemployment and decreasing asset prices, led banks to be more prudent. In terms of our equation, the probability of repayment was decreasing at the same time as the cost of funds was decreasing. As a consequence, loan rates did not decrease as rapidly as banks’ costs of funds. There was also a reduction in the amount of lending: the quantity of loans decreased because banks became more careful about whom to lend to.
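Here is a minimal numerical sketch of that mechanism. The numbers are invented, and—as an assumption on our part—we read “loan rate” and “cost of funds” as gross interest factors, $(1 + rate)$, so the zero-profit condition becomes (1 + loan rate) × repayment probability = (1 + cost of funds) and the arithmetic yields realistic magnitudes.

```python
# Illustrative sketch: a falling repayment probability can keep loan rates
# high even as the central bank drives down banks' cost of funds.

def breakeven_loan_rate(cost_of_funds, prob_repayment):
    """Loan rate at which the bank expects neither profit nor loss,
    from (1 + loan_rate) * prob_repayment = 1 + cost_of_funds."""
    return (1 + cost_of_funds) / prob_repayment - 1

# Normal times: funds cost 4 percent, and 99 percent of loans are repaid.
normal = breakeven_loan_rate(0.04, 0.99)   # about 0.051, i.e., 5.1 percent

# Crisis: funding costs fall to 1 percent, but banks now believe
# only 95 percent of loans will be repaid.
crisis = breakeven_loan_rate(0.01, 0.95)   # about 0.063, i.e., 6.3 percent

print(round(normal, 3), round(crisis, 3))
# The cost of funds fell by 3 points, yet the breakeven loan rate rose.
```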
When you go to a bank to borrow, it makes an evaluation of how likely you are to repay the loan. During the fall of 2008, bank loans became much more difficult to obtain because many customers were viewed as higher credit risks. Even more significantly, the uncertainty of repayment was not limited to loans from banks to households. Many of the loans in the short-term market are from banks to other banks or to firms. Uncertainty over asset valuations, growing out of the belief that some mortgage securities were overvalued, permeated the market, making lenders less willing to extend credit to other financial institutions.

Another factor in keeping interest rates high was the behavior of investors who held deposits that were not covered by federal deposit insurance—particularly deposits in “money market funds.” In the early part of October 2008, there were huge outflows from money market funds into insured deposits as investors sought safety. This was a problem for the banking system because it left the financial system with fewer funds to provide to borrowers. It also led short-term interest rates to rise.

So while the presence of deposit insurance was valuable in reducing the risk faced by individual households, banks still perceived higher lending risks. They therefore looked for ways to limit this risk. One prominent device they used is known as a credit default swap. This is a fancy term for a kind of financial insurance contract. The buyer of the contract pays a premium to the seller of the contract to cover bankruptcy risk. For example, suppose an institution owns some risky bonds issued by a bank. To shed this risk, the institution engages in a credit default swap with an insurance provider.

These swaps played a big role in the stories of two key players in the financial crisis: American International Group (AIG) and Lehman Brothers. The US government eventually bailed out AIG, but Lehman Brothers went bankrupt. Because Lehman Brothers was an active trader of credit default swaps, its exit severely curtailed the functioning of this market. Without the added protection of these default swaps, lenders directly faced default risk and hence decided to charge higher loan rates. AIG was also a prominent player in the credit default swap market. PBS Newshour provides a full report on AIG and credit default swaps, stating that “AIG wrote some $450 billion worth of credit default swap insurance” out of the roughly $62 trillion market in credit default swaps. “Risky Credit Default Swaps Linked to Financial Troubles,” PBS Newshour, October 7, 2008, accessed September 20, 2011, www.pbs.org/newshour/bb/business/july-dec08/solmancredit_10-07.html. AIG sold insurance to cover many defaults, some linked directly to the holding of mortgages. As the mortgage crisis loomed, likely claims against AIG increased, putting it, too, on the brink of bankruptcy.

So although there were no bank runs during the crisis in the United States, credit markets were still severely disrupted. Given the centrality of the financial sector in the circular flow, disruption in the credit markets led to a downturn in overall economic activity. Put yourself in the place of a builder of new homes. Your customers are finding it hard to qualify for mortgages. As a result, the demand for your product is lower (the demand curve has shifted inward). Meanwhile (since the construction of a new home takes time), you need to borrow from a bank to finance payments to your suppliers of raw materials and to pay your carpenters and other workers.
Tight credit markets mean that you find it more expensive to obtain funds: interest rates are higher, and the terms are less generous. Not surprisingly, disruption in the credit markets shows up particularly starkly in the market for new houses.

Facilitating Takeovers of Financial Firms

The problems of AIG, Lehman Brothers, and other financial firms led policymakers to worry about such firms going bankrupt. In some cases, these firms had too many bad assets on their books and were not able to continue in the market. One example is Bear Stearns, which was heavily involved in the trading of assets that were backed by mortgages. In March 2008, it became clear that those assets were highly overvalued. When the prices of these assets decreased, Bear Stearns was close to bankruptcy. With the help of a loan (http://www.federalreserve.gov/newsevents/press/other/other20080627a2.pdf) from the Board of Governors of the Federal Reserve (operating through the Federal Reserve Bank of New York), JPMorgan Chase and Company acquired Bear Stearns.

It is perhaps remarkable that the Fed took such an active role in this acquisition. When a local grocer goes out of business, you simply shift your business to another seller; nobody expects the government to take a role in rescuing the store. But when we are talking about large financial firms, shifting from one financial intermediary to another may not be as easy. When a large institution fails, it is highly disruptive to the financial system as a whole. The minutes of the March 16, 2008, meeting of the Board of Governors confirm this view:

The evidence available to the Board indicated that Bear Stearns would have difficulty meeting its repayment obligations the next business day. Significant support, such as an acquisition of Bear Stearns or an immediate guarantee of its payment obligations, was necessary to avoid serious disruptions to financial markets.

Thus the Fed thought it was necessary to ensure the takeover of Bear Stearns and hence the continuation of its operations. In fact, prior to this takeover, Bear Stearns was listed among a small set of financial firms as “primary dealers.” These are financial intermediaries that are viewed as central to the orderly operation of financial markets and the conduct of monetary policy. The list of primary dealers is available at “Primary Dealers List,” Federal Reserve Bank of New York, accessed July 25, 2011, http://www.newyorkfed.org/markets/pridealers_current.html.

AIG received a loan of up to $85 billion from the Fed in September 2008. The monetary authority was concerned that a failure of AIG would further destabilize financial markets. As part of this deal, the US government acquired a 79.9 percent equity ownership in AIG. See the announcement by the Board of Governors: “Press Release,” Board of Governors of the Federal Reserve System, September 16, 2008, accessed July 25, 2011, http://www.federalreserve.gov/newsevents/press/other/20080916a.htm. As authority for this action, this announcement by the Fed cites Section 13(3) of the Federal Reserve Act, which allows the Fed to provide this type of funding in “unusual and exigent” circumstances: “Section 13: Powers of Federal Reserve Banks,” Board of Governors of the Federal Reserve System, December 14, 2010, accessed July 25, 2011, http://www.federalreserve.gov/aboutthefed/section13.htm. AIG was special enough to warrant this government loan because of its role in providing insurance (through credit default swaps) against default on debt by individual companies.
Without this insurance, the debt of these companies becomes riskier, and they find it harder to borrow. AIG was a large enough actor in this market for its departure to have meant severe disruptions in the provision of insurance. In contrast to these actions for Bear Stearns and AIG, the Fed did nothing to help Lehman Brothers, a 158-year-old financial firm. It went out of business in September 2008. There was no bailout for this company from the Fed or the US Treasury. It simply disappeared from the financial markets.

A $700 Billion Bailout

In October 2008, Congress passed and the president signed legislation called the Emergency Economic Stabilization Act of 2008 to make $700 billion in funding available to the Department of the Treasury. The legislation authorized the Treasury (www.house.gov/apps/list/press/financialsvcs_dem/essabill.pdf) to purchase mortgages and other assets of financial institutions (including shares) to create a flow of credit within the financial markets. The Treasury Department then set up the Troubled Asset Relief Program as a vehicle for making asset purchases. In addition to these measures, the legislation called for an increase in FDIC deposit insurance to cover deposits up to a cap of $250,000 instead of the standard cap of $100,000. See “Insured or Not Insured?,” Federal Deposit Insurance Corporation, accessed July 25, 2011, http://www.fdic.gov/consumers/consumer/information/fdiciorn.html. The FDIC does not insure money market funds, though these were protected under a temporary Treasury program: “Frequently Asked Questions about Treasury’s Temporary Guarantee Program for Money Market Funds,” US Department of the Treasury, September 29, 2008, accessed September 20, 2011, http://www.treasury.gov/press-center/press-releases/Pages/hp1163.aspx.

One interesting element of the bailout legislation was the explicit interaction of the Treasury and the Fed. A joint statement issued after the passage of this act indicated that these players in the conduct of fiscal and monetary policy were working together to resolve the crisis. In the United States, the Treasury and the Fed each contributed to the financing of these rescue packages. In Chapter 26 "Inflations Big and Small", we pointed out that

$government\ deficit = change\ in\ government\ debt + change\ in\ money\ supply.$

In other words, when the government runs a deficit, it must finance this deficit by either issuing more debt or printing money. This equation is consistent with the institutional structure in the United States, where the Treasury and the Fed are independent entities. In effect, the Treasury issues debt to finance a deficit, and then some of that debt is purchased by the Fed. When the Fed purchases debt, it injects new money into the economy.

Key Takeaways

1. Though there were no bank runs in the United States during the crisis of 2008, the structure of coordination games is useful for thinking about instability of the housing sector, the interactions of banks within the financial system, and the interaction between income and spending.
2. During the crisis, the Fed moved aggressively to decrease interest rates and provide liquidity to the system.
3. The George W. Bush administration created a $700 billion program to purchase or guarantee troubled assets, such as mortgages and shares of financial firms.

Exercises

1. As the probability of default increases, what happens to the lending rate?
2. What is a credit default swap?
3. Why does the sale of bank-owned houses cause the price of houses to decrease?
4. We said that deposit insurance was available in 2008. Was it available during the Great Depression?
Learning Objectives

After you have read this section, you should be able to answer the following questions:

1. How did the financial crisis spread to the aggregate economy?
2. What was the fiscal policy response?
3. What was the monetary policy response?

So far we have focused on the financial side of the crisis of 2008 because the initial stage of the crisis was within the financial sector. As in the Great Depression, though, the disruptions in the financial sector then spread to the rest of the economy.

From Housing to the Aggregate Economy

The crisis of 2008 saw financial disruptions spread from financial markets to the economy at large. In Chapter 22 "The Great Depression", we introduced the aggregate expenditure model to understand the reduction in economic activity in the early 1930s. That same framework is useful in understanding recent events.

Toolkit: Section 31.30 "The Aggregate Expenditure Model"

You can review the aggregate expenditure model in the toolkit.

The aggregate expenditure model takes as its starting point the fact that gross domestic product (GDP) measures both total spending and total production. When planned and actual spending are in balance,

$real\ GDP = planned\ spending = autonomous\ spending + marginal\ propensity\ to\ spend \times real\ GDP.$

Autonomous spending is the intercept of the planned spending line. It is the amount of spending that there would be in the economy if income were zero. Solving this equation for real GDP, the equilibrium level of real GDP is as follows:

$real\ GDP = \frac{autonomous\ spending}{1 - marginal\ propensity\ to\ spend}.$

The framework tells us that a reduction in autonomous spending leads to a decrease in real GDP. Just as in the Great Depression, the two leading candidates for the decrease in autonomous spending are consumption and investment. Specifically, the crisis in the housing market had two significant implications for the rest of the economy. First, the decrease in housing prices starting in 2008 reduced the wealth of many households. Because households were poorer, they reduced their consumption. Second, the disruptions in the financial system made it difficult for firms to obtain financing, which meant that there was less investment. The aggregate expenditure model teaches us that these reductions in consumption and investment can lead to a reduction in real GDP. Reductions in autonomous spending are magnified through the circular flow of income: as spending decreases, income decreases, leading to further reductions in spending. This is the multiplier process; it shows up as the term $1/(1 - marginal\ propensity\ to\ spend)$, which multiplies autonomous spending in the expression for real GDP.

Toolkit: Section 31.27 "The Circular Flow of Income"

You can review the circular flow of income and the multiplier in the toolkit.

Stabilization Policy

We have already observed that, in contrast to the Great Depression, policymakers in the crisis of 2008 took several actions to try to address the economic problems. In addition to the measures aimed specifically at dealing with problems in the financial markets, policymakers turned to monetary and fiscal policy in an attempt to counteract the economic downturn. To begin our discussion of this stabilization policy, it is useful to start with a summary of the state of the economy in the 2006–10 period. By so doing, we are making life somewhat easier for ourselves than it was for policymakers because they did not know in early 2009 what would happen in the aggregate economy during that year.
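Before turning to the data, here is a minimal sketch of the multiplier arithmetic described above. All numbers are made up for illustration.

```python
# Equilibrium in the aggregate expenditure model:
#   real GDP = autonomous spending / (1 - marginal propensity to spend)

def equilibrium_gdp(autonomous_spending, mps):
    return autonomous_spending / (1 - mps)

mps = 0.6  # assumed marginal propensity to spend

before = equilibrium_gdp(3000, mps)  # autonomous spending of 3,000
after = equilibrium_gdp(2900, mps)   # consumption and investment fall by 100

print(before, after)  # 7500.0 7250.0
# A 100-unit drop in autonomous spending lowers equilibrium GDP by 250
# units: the multiplier is 1 / (1 - 0.6) = 2.5.
```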
The annual growth rates of the main macroeconomic variables during the crisis are highlighted in Table 30.3.1 "State of the Economy: Growth Rates from 2006 to 2010". All variables are in percentage terms. From Table 30.3.1, you can see how US real GDP growth slowed in 2007, stalled in 2008, and turned negative in 2009. The recovery in 2010 had a positive growth rate slightly larger in magnitude than the decline in 2009. Had these growth rates been identical in absolute value, the economy would have recovered, roughly speaking, to the 2008 level of real GDP. The annual growth rate of real GDP in the last quarter of 2010 was a robust 3.1 percent, but the growth rate in the first quarter of 2011 was only 1.8 percent. Concerns remain over the viability of the current recovery.

The next four columns of Table 30.3.1 show that the declines in real GDP came largely from spending on investment and durables by firms and households. Housing played a particularly significant role. This fits with the theory of consumption smoothing that we discussed in Chapter 27 "Income Taxes" and Chapter 28 "Social Security". The last column shows the unemployment rate. Although the economy enjoyed positive real GDP growth in 2010, the unemployment rate remained high. A recent BLS publication looked at job creation and job destruction up to 2009 to try to understand the slow recovery of unemployment. “Business Dynamics Statistics Briefing: Historically Large Decline in Job Creation from Startup and Existing Firms in the 2008–2009 Recession,” U.S. Census Bureau’s Business Dynamics Statistics, March 2011, accessed September 20, 2011, http://www.ces.census.gov/docs/bds/plugin-BDS%20March%202011%20single_0322_FINAL.pdf.

Table 30.3.1: State of the Economy: Growth Rates from 2006 to 2010 (all figures in percent)

Year | Real GDP | Consumption | Household Durables | Investment | Housing | Unemployment Rate
2006 | 2.7      | 2.9         | 4.1                | 2.7        | −7.3    | 4.4
2007 | 1.9      | 2.4         | 4.2                | −3.1       | −18.7   | 5.0
2008 | 0.0      | −0.3        | −5.2               | −9.5       | −24.0   | 7.4
2009 | −2.6     | −1.2        | −3.7               | −22.6      | −22.9   | 10.0
2010 | 2.9      | 1.7         | 7.7                | 17.1       | −3.0    | 9.6

Source: Bureau of Economic Analysis, Department of Commerce (www.bea.gov/newsreleases/national/gdp/2010/txt/gdp2q10_adv.txt and www.bea.gov/newsreleases/national/gdp/2011/pdf/gdp1q11_2nd.pdf) and Bureau of Labor Statistics (http://www.bls.gov/cps).

Fiscal Policy

One of the priorities of the Obama administration after taking office in January 2009 was to formulate a stimulus package to deal with the looming recession. As is clear from Table 30.3.1, growth in the economy was near zero for the preceding year, and the unemployment rate was much higher than it had been in the previous two years. Although the financial rescue plans of the George W. Bush administration may have stemmed the financial crisis, the aggregate economy was now limping along at best. The American Recovery and Reinvestment Act of 2009 (ARRA) was signed into law on February 17, 2009. The stimulus package contained approximately $800 billion in spending increases and tax cuts. These numbers are approximate for a couple of reasons: (1) parts of the package depend on the state of the economy in the future, so the exact outlays are not determined in the legislation, and (2) the disbursements were not all within a single year, so the timing of the outlays and thus their discounted present value could not be precisely known at the time of passage.
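To illustrate the second point, here is a minimal sketch of how the timing of disbursements affects a package's size in present-value terms. The disbursement schedule and the interest rate below are hypothetical.

```python
# Hypothetical disbursement schedule for a stimulus package, in billions,
# starting in year 0 and spread over four years. Illustrative numbers only.
outlays = [300, 250, 150, 100]
interest_rate = 0.03  # assumed annual rate used for discounting

# Discounted present value: divide each year's outlay by the interest
# factor (1 + r) once for every year of delay.
dpv = sum(x / (1 + interest_rate) ** t for t, x in enumerate(outlays))

print(sum(outlays))   # 800: the headline figure
print(round(dpv, 1))  # 775.6: the same outlays valued in today's dollars
```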
The package contained a mixture of spending increases and tax cuts. According to a Congressional Budget Office (CBO) study (www.cbo.gov/ftpdocs/106xx/doc10682/Frontmatter.2.2.shtml) from November 2009, federal government purchases of goods and services were to increase by about $90 billion over the 2009–19 period. Transfer payments to households were set to increase by about $100 billion, and transfers to state and local governments were to increase by nearly $260 billion. This last category of outlays was quite visible, taking the form of road projects and other construction in towns across the United States. Interestingly, the federal government was investing in infrastructure, thus building up the public component of the capital stock.

In the same publication, the CBO provided a summary of ARRA’s macroeconomic effects in November 2009. At that point, due to ARRA, the CBO estimated that federal government outlays (not only spending on goods and services) had increased by about $100 billion, and tax collections were lower by about $90 billion. So clearly some, but not all, of the stimulus went into the US economy within seven months of ARRA’s passage.

The CBO also produced its own assessment of the effects of ARRA through September 2009. To do so, it had to use an economic model to calculate the effects of the increases in outlays and reductions in taxes. In many ways, the framework for the assessment is quite similar to the analysis of the Kennedy tax cuts we discussed in Chapter 27 "Income Taxes". According to the CBO, ARRA meant that real GDP in the United States was between 1.2 percent and 3.2 percent higher than it would otherwise have been, whereas the unemployment rate was between 0.3 and 0.9 percentage points lower. These numbers were obtained by attaching a multiplier to each component of the stimulus package and calculating the change in real GDP from that component. For example, the CBO estimated that the multiplier associated with federal government purchases of goods and services was between 1 and 2.5. The effect of this federal spending on real GDP is simply the spending of the federal government funded under ARRA times the multiplier. The CBO did this calculation for each component of the stimulus package and then added up the effects on real GDP. The range of the estimated effects reflects the range for each multiplier used in the analysis. The CBO also calculated that 640,000 jobs were either created or retained due to ARRA. This calculation underlies its estimate of how much ARRA reduced the unemployment rate in the United States.

Some economists have disputed the effects of ARRA on economic activity, however. John Taylor, a Stanford University economist, argued that the short-term nature of the tax cuts meant that most households simply saved the tax cut, as the theory of consumption smoothing predicts. This argument was supported by evidence of increasing saving rates by households in the United States during the period of the tax cuts. Testimony reproduced in John B. Taylor, “The 2009 Stimulus Package: Two Years Later,” Hoover Institution, February 16, 2011, accessed September 20, 2011, http://media.hoover.org/sites/default/files/documents/2009-Stimulus-two-years-later.pdf.

During 2010 and 2011, there were some calls for further stimulus. The unemployment rate in the United States remained high despite the stimulus; it was 9.5 percent in July 2010.
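Returning to the CBO calculation described above, its multiplier-times-component method can be sketched as follows. Only the $90 billion of federal purchases and its multiplier range of 1 to 2.5 come from the text; the other outlay figures are the rough amounts quoted above, and their multiplier ranges are assumptions made purely for illustration.

```python
# Sketch of a CBO-style calculation: multiply each stimulus component by
# a low and a high multiplier, then add up the effects on real GDP.
# Amounts are in billions of dollars.
components = {
    # name: (outlay, low multiplier, high multiplier)
    "federal purchases":       (90, 1.0, 2.5),   # range quoted in the text
    "transfers to households": (100, 0.8, 2.1),  # assumed range
    "transfers to states":     (260, 0.7, 1.8),  # assumed range
}

low = sum(amount * lo for amount, lo, hi in components.values())
high = sum(amount * hi for amount, lo, hi in components.values())

print(low, high)  # 352.0 903.0: a wide range, because the estimated effect
# is only as precise as the multipliers attached to each piece.
```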
The Bureau of Labor Statistics (http://www.bls.gov/news.release/empsit.b.htm) tells us that while job creation had been brisk in May 2010 at 432,000 jobs, total job destruction in June and July 2010 was 350,000. Further, real GDP growth was only 2.4 percent in the second quarter of 2010, down from 3.7 percent in the first quarter. Together, this news put more pressure on policymakers to conduct further attempts at stabilization policy. But at the same time, policymakers became increasingly concerned about the long-run fiscal health of the government. In effect, they began to worry about the government budget constraint, which we explained in Chapter 29 "Balancing the Budget". The attention of policymakers moved away from stimulus and toward “fiscal consolidation.” This culminated in a political battle in the summer of 2011 over an increase in the debt ceiling, a limit on the amount of US debt outstanding. Ultimately an agreement was reached to allow an increase in the ceiling, but this agreement was combined with a reduction in government spending of nearly $900 billion over the coming 10 years and an agreement to seek further cuts in spending amounting to another $1.5 trillion. The bill passed by the House of Representatives is contained here: “Text of Budget Control Act Amendment,” House of Representatives Committee on Rules, accessed September 20, 2011, http://rules.house.gov/Media/file/PDF_112_1/Floor_Text/DEBT_016_xml.pdf. This agreement was not enough to avert a downgrade of US debt from AAA to AA+ by Standard & Poor’s. The decision to downgrade the debt is discussed at “Research Update: United States of America Long-Term Rating Lowered to ‘AA+’ on Political Risks and Rising Debt Burden; Outlook Negative,” Standard & Poor’s, August 5, 2011, accessed September 20, 2011, www.standardandpoors.com/servlet/BlobServer?blobheadername3=MDT-Type&blobcol=urldata&blobtable=MungoBlobs&blobheadervalue2=inline%3B+filename%3DUS_Downgraded_AA%2B.pdf&blobheadername2=Content-Disposition&blobheadervalue1=application%2Fpdf&blobkey=id&blobheadername1=content-type&blobwhere=1243942957443&blobheadervalue3=UTF-8.

Monetary Policy

The current state of monetary policy is well summarized in the Federal Open Market Committee (FOMC) statement of August 10, 2010. Here is an excerpt: “Press Release,” Federal Open Market Committee, August 10, 2010, accessed July 26, 2011, http://www.federalreserve.gov/newsevents/press/monetary/20100810a.htm.

Press Release
Release Date: August 10, 2010
For immediate release

Information received since the Federal Open Market Committee met in June indicates that the pace of recovery in output and employment has slowed in recent months.… Nonetheless, the Committee anticipates a gradual return to higher levels of resource utilization in a context of price stability, although the pace of economic recovery is likely to be more modest in the near term than had been anticipated. Measures of underlying inflation have trended lower in recent quarters and, with substantial resource slack continuing to restrain cost pressures and longer-term inflation expectations stable, inflation is likely to be subdued for some time. The Committee will maintain the target range for the federal funds rate at 0 to 1/4 percent and continues to anticipate that economic conditions, including low rates of resource utilization, subdued inflation trends, and stable inflation expectations, are likely to warrant exceptionally low levels of the federal funds rate for an extended period.
To help support the economic recovery in a context of price stability, the Committee will keep constant the Federal Reserve’s holdings of securities at their current level by reinvesting principal payments from agency debt and agency mortgage-backed securities in longer-term Treasury securities.…

Voting for the FOMC monetary policy action were: Ben S. Bernanke, Chairman; William C. Dudley, Vice Chairman; James Bullard; Elizabeth A. Duke; Donald L. Kohn; Sandra Pianalto; Eric S. Rosengren; Daniel K. Tarullo; and Kevin M. Warsh. Voting against the policy was Thomas M. Hoenig, who judges that the economy is recovering modestly, as projected. Accordingly, he believed that continuing to express the expectation of exceptionally low levels of the federal funds rate for an extended period was no longer warranted and limits the Committee’s ability to adjust policy when needed.…

We can make several observations about this FOMC statement. First, the FOMC shared the general perception that the recovery was not very robust and was showing signs of slowing. Its response was to maintain the targeted federal funds rate at between 0 and 0.25 percent. The FOMC put the targeted rate into this range in December 2008; in August 2011, the Fed indicated that it would keep rates low for at least another two years. You can find the FOMC statements and minutes of the meetings from December 2008 onward at “Meeting Calendars, Statements, and Minutes (2006–2012),” Federal Reserve, accessed July 26, 2011, http://www.federalreserve.gov/monetarypolicy/fomccalendars.htm.

Second, the FOMC talks about “reinvesting principal payments from agency debt and agency mortgage-backed securities….” This somewhat complicated phrase refers to the fact that the Fed purchased various assets in the attempt to keep financial markets working during the financial crisis. Those programs are summarized at “Credit and Liquidity Programs and the Balance Sheet,” Federal Reserve, accessed July 26, 2011, http://www.federalreserve.gov/monetarypolicy/bst_crisisresponse.htm. As reported by the Fed, “[s]ince the beginning of the financial market turmoil in August 2007, the Federal Reserve’s balance sheet has grown in size and has changed in composition. Total assets of the Federal Reserve have increased significantly from $869 billion on August 8, 2007, to well over $2 trillion.” The Fed maintains an interactive web site that displays and explains its balance sheet items. “Credit and Liquidity Programs and the Balance Sheet,” Federal Reserve, accessed July 26, 2011, http://www.federalreserve.gov/monetarypolicy/bst_recenttrends.htm. Observers are waiting for the Fed to reduce its holdings of these assets. The policy statement indicated that the Fed was not yet ready to take those steps.

The final point concerns the position of Thomas Hoenig, the president of the Federal Reserve Bank of Kansas City. Over the course of the year, he took the view that monetary policy was too lax. As the economy recovered, there was, he believed, no longer any need to keep interest rates at such low levels. One of the implicit concerns here is that periods of low interest rates have tended to promote bubbles in assets, such as housing. The FOMC had to weigh this concern against the view that, with a slow economic recovery and no signs of inflation, expansionary monetary policy was still warranted. When the FOMC took the unusual decision in August 2011 to commit to low interest rates for two years, three members of the committee dissented from the decision.

Key Takeaways
1. Disruptions in the financial system led to reductions in consumption and investment, which led to a decrease in real GDP.
Learning Objectives

After you have read this section, you should be able to answer the following questions:

1. What are the ways the crisis spread from the United States to the rest of the world?
2. In what ways did the institutional structure of the European Union (EU) hamper Europe’s ability to cope with the crisis?

In Chapter 24 "Money: A User’s Guide", we spoke of the day when people in several European countries woke up to a new monetary regime that used different pieces of paper than were used previously. In that chapter we used that experience to help us understand why people hold money. When these countries adopted the euro, they were not expecting to wake up about a decade later to read something like this:

“On the eve of a confidence vote that may bring down Papandreou’s government, euro-area finance ministers pushed Greece to pass laws to cut the deficit and sell state assets. They left open whether the country will get the full 12 billion euros ($17.1 billion) promised for July” as part of last year’s 110 billion-euro lifeline. “We forcefully reminded the Greek government that by the end of this month they have to see to it that we are all convinced that all the commitments they made are fulfilled,” Luxembourg Prime Minister Jean-Claude Juncker told reporters early today after chairing a crisis meeting in Luxembourg. James G. Neuger and Stephanie Bodoni, “Bailout Bid for Greece Falters as Europe Insists Papandreou Cut Budget Gap,” June 20, 2011, accessed July 26, 2011, http://www.bloomberg.com/news/2011-06-20/europe-fails-to-agree-on-greek-aid-payout-pressing-papandreou-to-cut-debt.html; Karen Kissane, “EU puts brakes on loan to Greece,” Sydney Morning Herald, June 21, 2011, accessed July 26, 2011, http://www.smh.com.au/world/eu-puts-brakes-on-loan-to-greece-20110620-1gbxw.html.

The euro was established by the Maastricht Treaty, but the implications of that treaty went beyond the introduction of new pieces of paper. The nature of fiscal and monetary interactions across the countries within the Economic and Monetary Union (EMU) changed dramatically as well. On the monetary side, in addition to losing their national currencies, the countries that joined the euro effectively lost their central banks. The Central Bank of Italy, say, which formerly conducted monetary policy in that country, handed over that duty to the European Central Bank (ECB). The same thing happened in other countries. Most significantly, the German Bundesbank, which was one of the most important central banks in the world, also ceded its powers to the ECB. Further, the Maastricht Treaty—and the Stability and Growth Pact that followed a few years later—placed restrictions on fiscal policy by member countries. For a discussion of the history and content of the Stability and Growth Pact, see “Stability and Growth Pact,” European Commission Economic and Financial Affairs, accessed September 20, 2011, http://ec.europa.eu/economy_finance/sgp/index_en.htm. Prior to the introduction of the euro, member governments had complete discretion over their fiscal policy. Within the EMU, however, constraints on deficit spending were placed on member countries. Taken together, these two factors radically changed the conduct of monetary and fiscal policy in the countries of the EMU.
Some commentators questioned whether adequate tools for stabilization of aggregate economies were still available. Others wondered whether the constraints on fiscal policy would be violated by member countries, leading to the possibility of a debt crisis for a country within the euro area. In that event, how would the other member countries respond? The crisis of 2008 provided the first big tests of these questions. Debt problems—not only in Greece but also in Portugal and Ireland—revealed that these concerns were well placed. We start by discussing how the crisis spread from the United States to Europe and then turn to the policy actions within Europe.

Sources of Spillovers

In Chapter 20 "Globalization and Competitiveness", we explained how countries are linked through the flows across national borders of goods, services, labor, financial capital, and information. Countries do not exist in isolation, and these linkages imply that problems in one country can be transmitted to others. In the crisis of 2008, we can point to three broad channels of spillover from the United States to the rest of the world:

1. Spillovers within financial markets across borders (integrated financial markets)
2. Spillovers from financial markets into real markets in the United States, followed by real spillovers across countries
3. Contagion effects through market psychology

The first two linkages can be seen in the circular flow of income in Figure 30.4.1 "The Foreign Sector in the Circular Flow". In this version of the circular flow, we highlight the interactions between a single country and the rest of the world. These interactions operate through the flows of goods and services and financial assets. During good times, they are a key part of the workings of the world economy. But during bad times, such as a financial crisis, these same links create channels for the sharing of financial crises.

Figure 30.4.1 "The Foreign Sector in the Circular Flow": Households purchase goods from other countries; these are called imports. Citizens of other countries purchase our products; these are called exports. A trade deficit requires borrowing from the rest of the world.

There are three international flows in Figure 30.4.1:

1. Exports. Households, firms, and the government in the rest of the world purchase goods and services produced in the home country.
2. Imports. Households (and also firms and the government) in the home country purchase goods and services produced in the rest of the world.
3. Financial flows. Financial intermediaries in the home country buy financial assets from, and sell financial assets to, the rest of the world. The net flow can go in either direction; Figure 30.4.1 shows the case where there is a net flow of money into the home country.

International Spillovers in Financial Markets

One channel through which the crisis of 2008 spread was the holding of US financial assets by governments, financial institutions, and banks in other countries. Take, for example, mortgages that were marketed and issued in the United States. These mortgages were usually not ultimately held by the banks that issued them to homeowners. Instead they were bundled together with other mortgages and then resold. These “mortgage-backed securities” were marketed and sold all over the world, not just in the United States. This means that any risk associated with these assets was shared across investors in different countries.
The spread of this risk across world markets also provided a way for the crisis to propagate across countries. When it became clear that these assets were less valuable than investors had previously thought, the reduction in their price reduced the wealth of investors all over the globe. Moreover, the various financial institutions in the United States that were either bought out or went bankrupt were partly owned by investors in other countries. Thus financial links across the world economy provided one avenue for the spread of the crisis. A second channel operated through the financial flows across countries. Since the early 1970s, the United States has run current account deficits in almost every year. One consequence of this is that it has been borrowing from abroad to finance these deficits. In other words, foreigners hold substantial amounts of US assets. These assets include US government debt and, in many cases, large amounts of mortgage-backed securities. One way to see the extent of these financial interactions is to look at the behavior of stock markets around the globe. Figure 30.4.2 "Stock Markets around the World Crashed Together" shows the values for six indices around the world: the Dow Jones Industrial Average (United States), CAC (France), FTSE (United Kingdom), Hang Seng (Hong Kong), Nikkei (Japan), and Merval (Argentina). The figure shows that the last six months of 2008 were problematic for stock markets across many countries. Figure \(2\): Stock Markets around the World Crashed Together Spillovers through the Trade of Goods and Services Trade is another source of linkage across countries. Because countries sell goods and services to each other, a recession in one country will naturally spread to others. If the major trading partners of a country are in a recession, then there will be a reduced demand for the goods and services produced by that country. So, for example, if the United States enters into a recession as a consequence of financial market distress, then the demand for goods and services produced in other countries will decrease. This reduction in spending on other countries’ goods and services then leads to lower economic activity in those countries. Spillovers through Expectations The circular flow of income shows two of the three spillovers we have identified: financial flows and trade flows. The third spillover has to do with people’s perceptions and expectations about market outcomes. There are two parts to this linkage: (1) expectations matter, and (2) outcomes in one market can have effects in others. The second of these is termed a contagion effect. To the extent that part of the financial distress is due to pessimism, as suggested by the coordination game we discussed in 30.2 Section "The Financial Crisis in the United States", this too is likely to spread across countries. If, day after day, the news from the United States is that the prices of stocks and other assets are decreasing, investors in other countries may begin to share this pessimism. This will lead them to sell their assets, leading to decreases in the prices of the assets that they are selling. Decreasing asset prices can feed on themselves through pessimistic expectations. As an example, consider again the September 2008 bankruptcy of Lehman Bros. Landon Thomas Jr., “Examining the Ripple Effect of the Lehman Bankruptcy,” New York Times, September 15, 2008, accessed July 26, 2011, www.nytimes.com/2008/09/15/business/worldbusiness/15iht-lehman.4.16176487.html.
“Everybody is frozen here after Lehman,” said one senior executive from a major financial institution who was paying visits this week to all the major sovereign funds in Asia and the Middle East. His voice was worn from hours spent in conference rooms trying to explain to clients why Lehman failed and who might be next. “It’s just fear.” In 30.2 Section "The Financial Crisis in the United States", we gave an equation to explain the price of an asset—specifically, a house. A key part of that equation is that the value of a house today depends in part on the price of the house expected in the future. To emphasize this key point again: if you think that people will pay a lot for a house a year from now, you will be willing to pay a lot for it today. The logic applies to all other assets as well, so there is a link between prices expected for the future and prices today. Think about a stock that you might buy on the New York Stock Exchange. The stock yields a dividend and also has a future price. The higher the price you expect the stock can sell for in the future, the more you are willing to pay for the stock today. Expectations matter. But where do these expectations come from? During normal times, expectations are disciplined by the usual state of a market. If housing prices have been rising by, say, 3 percent a year for the past 20 years, most people will predict that over the next year, housing prices will again rise by 3 percent. Most of the time that prediction will be roughly right—but not all the time. Sometimes markets are subject to unpredictable movements in prices. When asset prices decline rapidly and unexpectedly, this is often referred to as a “bubble bursting.” All this discussion suggests that asset prices can be somewhat fragile—and this is where contagion effects can come into play. If you are trading houses in one location and the prices of houses in other locations are all decreasing quickly, you might get concerned that whatever is hurting housing values in those markets will affect yours as well. If so, you might be tempted to try to sell the houses that you own. Of course, others think the same way. As a consequence, the price of houses in your location decreases as well. This is the contagion effect: the behavior of prices in other markets influences expectations in your market and leads to a price reduction in your market. You and the other market participants who feared a decrease in prices are, in the end, correct. In that sense, contagion effects can be self-fulfilling prophecies.
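The pricing logic of this section can be summarized in a simple one-period relation (a sketch consistent with the discussion here; the exact equation appears in 30.2 Section "The Financial Crisis in the United States"): \[ price\ of\ asset\ today = \frac{flow\ benefit\ next\ year + expected\ price\ next\ year}{1 + interest\ rate}. \] If the expected future price falls—whether because of news in your own market or, through contagion, because prices elsewhere are falling—the price today falls immediately, even though nothing about the asset’s current flow benefit has changed.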
Sarkozy Calls for “Economic Government” for Eurozone French President Nicolas Sarkozy called Tuesday for “clearly identified economic government” for the eurozone, working alongside the European Central Bank. “It is not possible for the eurozone to continue without clearly identified economic government,” Sarkozy told the European Parliament in Strasbourg. The European Central Bank, currently the only joint institution overseeing the 15-nation eurozone, “must be independent,” but the Frankfurt-based monetary body “should be able to discuss with an economic government,” Sarkozy added. See “Sarkozy Calls for ‘Economic Government’ for Eurozone,” The Economic Times, October 21, 2008, accessed July 25, 2011, http://articles.economictimes.indiatimes.com/2008-10-21/news/28393734_1_eurozone-french-president-nicolas-sarkozy-economic-government. President Sarkozy’s concern was that there is no centralized entity in the EU that can play the same role as the Treasury in the United States. Member governments devise their own fiscal policies to deal with their own countries’ problems and do not take account of the effects of their actions on others in the European Union. This matters because the EU countries are so closely linked through trade and capital flows. Governments within the EU did indeed act unilaterally to preserve their individual banking systems. The French government agreed to a 360 billion euro package of support for the French banking system and made a statement that no banks would collapse. Other countries took similar measures to restore confidence in their banking systems. Such measures sound similar to those taken in the United States, but there is an important difference. For the United States, such spending could be financed by taxes, government borrowing, or monetary expansion. But for, say, France, the equation is different. If the rescue package is not financed by increased taxes, then the French government will have to issue more debt. France no longer controls its money supply, so it cannot print currency to finance these bailouts. Moreover, the Stability and Growth Pact, as we explained, places restrictions on the permissible magnitude of deficits by member governments. The reason for these restrictions is that, if many countries in the EMU were to run large deficits, there would be pressure on the ECB to finance some of this spending through additional money creation. In the aftermath of the crisis, many countries violated the fiscal restrictions, and how the monetary and fiscal authorities will ultimately respond to such pressure remains an open question. One part of the response has been the establishment of additional facilities within Europe to pool resources to provide assistance to member states. In effect, the countries of Europe have been fulfilling a role similar to that played by the International Monetary Fund (IMF). In particular, the crisis in Greece, and related debt problems in Ireland and Portugal, led to the creation in May 2010 of the European Financial Stability Facility (www.efsf.europa.eu/about/index.htm) to provide for the stabilization of countries undergoing financial and debt problems. A June 2011 press release discusses the provision of funds for Ireland and Portugal under this stabilization fund (www.efsf.europa.eu/mediacentre/news/2011/2011-006-eu-and-efsf-funding-plans-to-provide-financial-assistance-for-portugal-and-ireland.htm). The funds for Greece are coming from the EU member states directly.
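The financing options just described can be collected into a stylized government budget constraint (a sketch in our own notation, not an equation from the chapter): \[ government\ purchases + transfers + bailout\ costs = taxes + new\ borrowing + money\ creation. \] For a euro-area member such as France, the money-creation term is unavailable because monetary policy belongs to the ECB, and the Stability and Growth Pact limits new borrowing. A large bailout therefore presses directly on taxes or on the deficit limits, which is why rescue packages strained national budgets within the EMU.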
Within the ECB, the discussion by President Trichet (http://www.ecb.int/press/key/date/2009/html/sp090427.en.html) summarized the perspective and policy choices of the central bank, including the provision of liquidity. Given that the ECB maintains an inflation target, how is this provision of liquidity consistent with that goal? One answer often given is that without this liquidity, the European economies might have fallen into deeper recessions and thus opened up the possibility of deflationary periods, as witnessed in the Great Depression years in the United States and in Japan during the 1990s. Costs and Benefits of a Common Currency Sarkozy’s discussion of European economic government, and subsequent events in various countries, brought the debate over monetary integration back to the forefront in Europe. After the establishment of the European Monetary System, many European leaders thought the logical next step was a complete monetary union. This dream, embodied in the Maastricht Treaty, was finally realized in January 1999. A concise history of the steps to the Economic and Monetary Union is available at “Economic and Monetary Union (EMU),” European Central Bank, accessed July 20, 2011, http://www.ecb.int/ecb/history/emu/html/index.en.html. During the recent financial turmoil, however, the monetary ties that bind the European countries have been greatly strained. The costs of delegating monetary policy to a common central bank became very visible because individual countries were unable to respond to their own economic situations. In addition, the fiscal constraints included in the Maastricht Treaty hampered the ability of countries to conduct their desired fiscal policy. A report at the time highlighted these concerns. See “Crisis Puts European Unity to the Test,” MoneyWeek, October 10, 2008, accessed July 26, 2011, www.moneyweek.com/news-and-charts/economics/crisis-puts-european-unity-to-the-test-13811.aspx. Milton Friedman always said that the European Union would not survive a deep recession. Well, that theory is certainly being put to the test now. As the financial crisis radiated across the globe this week, the EU fell into disarray as an ugly bout of tit-for-tat policies helped fuel a rout of European banks. It began with Ireland’s decision on 30 September to guarantee the deposits of its six main banks. This was a chance for European leaders to shore up banking confidence across Europe, says Leo McKinstry in the Daily Express. But instead of rallying behind the decision, German Chancellor Angela Merkel condemned it. Yet that didn’t stop Greece from pledging to guarantee its own banks. The recent crisis has forced a reevaluation of the costs and benefits of the common currency. Most of the advantages of a common currency are self-apparent. As explained in Chapter 24 "Money: A User’s Guide", money acts as a medium of exchange, facilitating transactions among households and firms. A common currency obviates the need to exchange currencies when buying goods, services, and assets. Second, the monetary union removes the uncertainty associated with fluctuations in the exchange rate: within a monetary union, there are, of course, no exchange rate fluctuations at all. Further, unlike in a fixed exchange rate system, there is no need to buy and sell currencies to support the agreed-on exchange rates. Finally, because money also acts as a unit of account, a common currency makes it easier to compare prices across countries.
All of these factors encourage countries to benefit from more efficient flows of goods and capital across borders. There is another gain from a common currency that is more subtle. In some cases, individual countries are unable to do a good job of managing their own monetary policy. In Chapter 26 "Inflations Big and Small", we explained that governments that run large deficits may decide to pay for these deficits by printing money. We also observed that there are situations where the monetary authority might be tempted to try unexpectedly expansionary policies when inflation is low. Such choices, while tempting, are ultimately damaging to an economy. Yet countries all too frequently indulge in such short-sighted policies. The underlying difficulty is a commitment problem. Ahead of time, the monetary authority might like to keep inflation low, but there is pressure to print money; in the end, countries experience high inflation caused by excessive money growth. In the case of the EMU, this was not an especially pressing concern at first. The ECB was conducting conservative monetary policy. At the same time, the governments in the euro area ran reasonably sensible fiscal policies for the most part, so there was no pressure on the ECB to finance excessive spending. After the 2008 financial crisis, however, the deficit and debt pictures changed for many countries—particularly Greece, Ireland, and Portugal. The debt situation has now put enormous pressure on European institutions, including the ECB. So far, the ECB has remained on the sidelines by not being a direct contributor to bailout packages. Commitment problems have arisen often in the monetary affairs of other countries. Argentina adopted a currency board in the 1990s because its monetary authorities could not commit to low-inflation policies in the late 1980s and early 1990s. To combat this problem, Argentina effectively adopted the US dollar as its currency. This monetary system meant that the Central Bank of Argentina was not able to increase the money supply independently: in effect, it delegated monetary policy to the United States. The monetary authority in Argentina was thus able to commit not to print pesos in response to fiscal pressures. Some European countries, such as Denmark, elected not to adopt the euro as their currency but did adopt fixed exchange rates relative to the euro. Others, like the United Kingdom, kept their own currencies and also elected to have floating exchange rates. Given all the advantages of a common currency, why did some countries reject the idea (and, for that matter, why is there not a single world currency)? The answer is that there are also costs to adopting a common currency. As we have explained, the EMU entrusted monetary policy to a single central bank that decided monetary policy across a large number of countries. When these countries have different views about appropriate monetary policy, the delegation of monetary policy becomes problematic. Further, the fiscal restrictions imposed on the euro countries further reduced the ability of countries to respond to their own stabilization needs. In recent years, both Germany and France have violated the terms of the Stability and Growth Pact, and the future of these fiscal restrictions remains in doubt. Monetary Policy Chapter 25 "Understanding the Fed" describes in detail the manner in which a central bank can use tools of monetary policy to influence aggregate economic activity and the price level.
Monetary policy is a critical tool for stabilizing the macroeconomy. After the introduction of the euro, countries in the common currency area were no longer able to conduct independent monetary policy. The right to conduct monetary policy was ceded to the ECB. Suppose that all the countries in the EMU were similar in their macroeconomic fortunes, meaning that the state of the macroeconomy in Italy was roughly the same as that in France, Ireland, Portugal, Belgium, and so forth. For example, suppose that when France experiences a period of recession, all the other countries in the union are in recession as well. In this case, the monetary policy that each country would have pursued if it had its own currency would most likely be very similar to the policy pursued by a central bank representing the interests of all the countries together. Each country, acting individually, would choose to cut interest rates to stimulate economic activity. The ECB would have an incentive to stimulate the economies of EMU member countries exactly as those members would have done with their own monetary policies. If countries are similar, in other words, the delegation of monetary policy to a central monetary authority is not that costly. If countries are very different, it is more costly to move to a common currency. Suppose that Austria is undergoing a boom at the same time that Belgium is in recession. Belgium would like to cut interest rates. Austria would like to increase them. The ECB cannot satisfy both countries and may end up making them both unhappy. The crisis of 2008 did not have an even impact across all the countries in the euro area. Some countries saw major problems in their financial institutions, whereas others were less affected. As a result, different countries in the euro area had different desires in terms of the actions of the ECB. Monetary policy operates through exchange rates as well as interest rates. By adopting a common currency, countries also give up the ability to stimulate their economies through depreciation or devaluation of their currency. Greece, Portugal, and Ireland have been forced to enact severe austerity measures to bring their debt under control. As a consequence, these countries have seen major recessions. If, say, Portugal were still using the escudo rather than the euro, it could have stimulated its economy by decreasing the value of the currency, thus encouraging net exports. It no longer has this option. It is possible for the real exchange rate to decrease even if the nominal rate is fixed, but this may require deflation in the domestic economy. Fiscal Policy The adoption of the single currency in Europe did not directly affect fiscal policy. In principle, the euro could have been adopted without any reference to fiscal policy. In practice, however, the single currency was accompanied by the fiscal limitations enshrined in the Stability and Growth Pact. In particular, the Stability and Growth Pact said that member countries were not allowed to run a government deficit that exceeded 3 percent of gross domestic product (GDP). The idea was that member countries were permitted to run deficits in periods of low economic activity but were encouraged to avoid large and sustained budget deficits. As with the monetary agreement, there are costs and benefits to such fiscal restrictions.
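To illustrate the 3 percent limit with invented numbers: a member country with a GDP of 200 billion euros and a budget deficit of 7 billion euros would have \[ \frac{government\ deficit}{GDP} = \frac{7\ billion\ euros}{200\ billion\ euros} = 3.5\ percent, \] which exceeds the 3 percent ceiling, so the country would be out of compliance with the pact even if the deficit arose from a recession-driven fall in tax revenues.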
It is possible that a member country experiencing a period of low economic activity (a recession) would find itself unable to increase its government deficit, even if it wanted to stimulate economic activity. Chapter 29 "Balancing the Budget" explained that there are sometimes gains to running deficits. One cost of the Stability and Growth Pact is that it reduces the ability of countries to use deficits for macroeconomic stabilization. Fiscal restrictions are common within monetary unions. Within the United States, there are restrictions, largely imposed on the states by themselves, that limit budget deficits at the state level. The idea is that large deficits at the level of a European country or a US state might create an incentive for the central bank to print money and thus bail out the delinquent government. This would occur if the monetary authority lacked the ability to say “no” to a state or a country in financial distress. If deficits are limited in the first place, such bailouts need not occur. This is a gain for all the countries within the EMU. The Crisis in the United Kingdom So far we have looked at two large economies: the United States and the euro area. We now turn to the experience of some smaller economies, beginning with the United Kingdom. The United Kingdom is part of the EU, but it is not in the euro area. It retained its own currency (the pound sterling) rather than adopting the euro. This meant, of course, that it also retained its own central bank. The Bank of England is known as a very independent monetary authority and operates under very strict rules of inflation targeting. Yet it, too, responded to the crisis. The United Kingdom was one of the first countries to face serious implications of the financial crisis when, in September 2007, there was a run on a lending institution called Northern Rock. The Bank of England evidently could have taken action early in the crisis to avoid the run on Northern Rock but chose not to. Once the run commenced, however, the Bank of England injected liquidity into the system. In October 2008, the Bank of England was, along with other central banks, cutting interest rates. However, the cuts it enacted were modest relative to the action taken in the United States and other countries. More significantly, the United Kingdom partially nationalized some of its banks over this period under a 400-billion-pound bailout plan. Just as in the US plan, the aim was to provide liquidity directly to these banks and thus open up the market for loans among banks. But, according to contemporary reports, UK banks were still not making new commitments weeks after this bailout plan was enacted. The Crisis in Iceland Iceland is a relatively small, very open economy. It has close links to the EU but retains its own currency: the krona. It was particularly hard hit by the financial crisis, in part because Icelandic banks had been borrowing extensively from abroad in the years prior to the crisis. According to one estimate, banks held foreign assets and liabilities worth about 10 times Iceland’s entire GDP. This is partly based on a BBC article about Iceland: Jon Danielsson, “Why Raising Interest Rates Won’t Work,” BBC News, October 28, 2008, accessed July 26, 2011, http://news.bbc.co.uk/2/hi/business/7658908.stm. The sheer size of these asset holdings meant that if there was a substantial decrease in asset values, it was simply not possible for the Icelandic central bank or fiscal authorities to bail out domestic banks.
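The arithmetic behind this claim is stark. With bank foreign positions of roughly 10 times GDP, even a modest loss rate—say a hypothetical 10 percent, a number chosen purely for illustration—implies \[ losses \approx 10\ percent \times (10 \times GDP) = GDP, \] that is, losses on the order of one full year’s GDP, far beyond what any central bank or treasury could plausibly cover.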
Any attempt to bail out the banks would simply have bankrupted the government. You also might wonder why, as a last resort, Iceland could not simply print money to get itself out of trouble, financing a bailout through an inflation tax. We explained earlier that this would be a possibility in the United States, for example. The difference is that most of the liabilities of US financial institutions are denominated in US dollars, so inflation would reduce the real value of these liabilities. But much of the debt of Icelandic banks was not denominated in krona; it was denominated in euros, US dollars, or other currencies. Inflation in Iceland would simply lead to a depreciation of the currency and would not reduce the real value of the debt. According to estimates from the IMF, the financial and exchange rate problems of Iceland led to a contraction in real GDP of around 3 percent in 2009. In late October 2008, Iceland negotiated a \$2.1 billion loan from the IMF (http://www.imf.org/external/np/sec/pr/2008/pr08256.htm) for emergency funding to help stabilize its economy. To put this in perspective, Iceland’s GDP is only \$12 billion, and the loan was equivalent to almost \$7,000 per person. Meanwhile, there was a precipitous decline in the value of the krona: between January and October 2008, the krona lost nearly half of its value. Iceland’s banking system was effectively nationalized in 2008. The government took over three of the biggest banks. During late October, the government tried to peg the krona at about 131 per euro. The attempt failed, and the government was forced to allow the krona to decrease in response to market forces. There was a report of a trade at 340 krona per euro, far from the government’s attempted peg. See Bo Nielsen, “Iceland’s Krona Currency Trading Halts as Kaupthing Taken Over,” October 9, 2008, accessed July 26, 2011, http://www.bloomberg.com/apps/news?pid=20601085&refer=europe&sid=aiz5QIq94nrw. One way to think about the decline in the value of the krona is through the government budget constraint. Once the government took over the banks, what had been a private liability became a government liability to depositors. One way to meet this obligation is through higher taxes; another is through the creation of more currency. The rapid depreciation of the krona indicates that market participants were anticipating more inflation in Iceland, so the value of the currency decreased. Iceland was merely the first country that ran into considerable distress as a result of the crisis of 2008. It was followed a few days later by Ukraine, which agreed to a \$16.5 billion loan from the IMF. Countries such as Greece and Spain also faced problems as investors started to worry that their governments might default on their debt. The Crisis in China The financial crisis had an impact on China largely through trade linkages. China exports a lot of goods to Western economies. As the level of economic activity in these economies slowed, the demand for goods and services produced in China decreased as well. This led to lower real GDP in China. As shown in the circular flow of income (Figure 30.4.1 "The Foreign Sector in the Circular Flow"), the reduction in exports by China led to reduced output from Chinese firms, reduced income for Chinese households, and lower spending through the multiplier process. Even though China owned many US assets, most were not directly linked to mortgage-backed securities.
Instead, the Chinese were holding about \$900 billion of US Treasury securities. Data on foreign holdings of US government securities are available at “Major Foreign Holders of Treasury Securities,” US Department of the Treasury, September 16, 2011, accessed September 20, 2011, http://www.ustreas.gov/tic/mfh.txt. Although the value of these securities changed with the financial situation, this simply led to changes in the value of portfolios and did not lead to the bankruptcy of financial institutions. China differs from the United States and Europe because many of the banks operating in China are owned by the government. The top four state-owned banks had about 66 percent of China’s deposit market in 2007. So if the assets of those banks decrease in value, this loss is ultimately reflected in the budget of the government. Whereas the governments of the United Kingdom, the United States, and other countries attacked the crisis of 2008 by partial nationalization—that is, the purchase of bank shares by the government—this was unnecessary in China because the government already had a substantial ownership share in the banks. Deposit insurance is also rather different in China. In the case of publicly owned banks, the government directly guarantees deposits, so these banks will not go bankrupt. There is no explicit deposit insurance for private banks, but that does not mean the Chinese government would not bail out a private bank that was under attack. Article 64 of the Law of the People’s Republic of China on Commercial Banks reads as follows: When a commercial bank has suffered or will possibly suffer a credit crisis, thereby seriously affecting the interests of the depositors, the banking regulatory authority under the State Council may assume control over the bank. The purposes of assumption of control are, through taking such measures as are necessary in respect of the commercial bank over which control is assumed, to protect the interests of the depositors and to enable the commercial bank to resume normal business. The debtor-creditor relationship with regard to a commercial bank over which control is assumed shall not change as a result of the assumption of control. “Article 64,” Law of the People’s Republic of China on Commercial Banks, May 10, 1995, accessed July 26, 2011, http://www.china.org.cn/english/DAT/214824.htm. The Crisis in Argentina What about the experience in Latin America during the crisis of 2008? Many countries, notably Argentina, Brazil, and Mexico, experienced their own financial and currency crises in recent decades. Those crises were “homegrown” because they were largely caused by domestic economic policies. But the upheavals of recent years were not created in these countries. The linkages we explained earlier also caused these countries to be affected by the financial events that afflicted the United States and Europe. Figure 30.4.2 "Stock Markets around the World Crashed Together" shows that the stock market in Argentina had similar volatility and losses to those experienced in other countries. This volatility, along with other financial upheavals, created an interesting response within Argentina: the government announced the nationalization of private pension plans. What is the connection here? The government announced it was taking over private pensions to protect households who faced added financial risks. Instead of facing the risks of private asset markets, households were now shielded from that risk through a national pension system.
Skeptics have argued that this was simply an opportunity for the government of Argentina to obtain some additional resources. Promises of future compensation for the lost pensions were not viewed as credible. The Crisis in Australia Finally, not every country in the world was badly hit by the crisis of 2008. Australia, for example, saw a significant stock market decrease but otherwise went through the crisis years with little more than a minor slowdown in economic growth. There are several reasons for this. Australia, like other countries, used both monetary and fiscal policy to stimulate the economy. On the fiscal side, it cut taxes and increased government purchases; on the monetary side, the Reserve Bank of Australia decreased interest rates (although not by as much as many other countries). Australia has historically kept its government debt very close to zero, so there were no concerns about default on Australian debt. Australia also made a very well-publicized cash transfer of about \$1,000 to about half the population. Even though much of this money was probably saved rather than spent, the transfers are credited with helping to support confidence and limit contagion effects in Australia. Finally, Australia has benefited from a major resources boom, so net exports were a robust component of aggregate expenditures during the crisis period. Key Takeaways 1. The United States and the rest of the world are linked through many channels. Key channels that allowed the crisis to spread were financial links due to both holdings of assets across borders and the spread of pessimism across markets. In addition, links across countries due to trade flows meant that as income decreased in some countries, exports and thus real GDP decreased in others. 2. Within the EMU, individual countries were limited in their fiscal policy responses due to restrictions on outstanding debt. Further, the ECB follows an inflation target rule and thus is not able to directly intervene to stabilize European economies. In the end, countries did take fiscal actions, and the ECB ultimately provided the needed liquidity to Europe. But this experience highlighted some of the costs of a monetary union. Exercises 1. During the crisis of 2008, what happened to stock markets across the world? 2. To avoid spillovers from a financial crisis, what would a country have to do? 3. Why do other countries hold US government debt?
Learning Objectives After you have read this section, you should be able to answer the following questions: 1. What are the causes of a currency crisis? 2. How are currency crises and financial crises related? In some countries, the financial crisis of 2008 led to a currency crisis. A currency crisis is a sudden and unexpected rapid decrease in the value of a currency. Currency crises are particularly severe in the case of a fixed exchange rate because such crises typically force a monetary authority to abandon the fixed rate. In the case of flexible exchange rates, a currency crisis occurs when the value of the currency decreases substantially in a short period of time. Such rapid depreciation is not as disruptive as the collapse of a fixed exchange rate, but it can still cause significant turmoil in an economy. Exchange Rates in the Current Crisis If you look at exchange rate data for September and October 2008, you can see that the dollar appreciated relative to the euro at that time. In other words, the dollar price of a euro decreased. Over the 10 days ending October 24, 2008, the dollar price of a euro decreased from about \$1.35 to \$1.25. More generally, several currencies experienced rapid depreciations during the financial crisis. Though there were no runs on these currencies, they nonetheless lost considerable value. The dollar value of the British pound decreased to \$1.62, its lowest level in 5 years, after the October 21, 2008, announcement that the UK economy was on the brink of a recession. There was a drop in value of about 7 percent over the previous week alone. The pound also decreased against the euro. Currency Crises under Flexible Exchange Rates A currency crisis can arise from a change in expectations, in ways that are similar to some of our earlier examples. Remember, for instance, how the current value of a house decreases when people expect that its future value will decrease. If you think that the value of Argentine pesos will decline (so each peso will be worth less in dollars), you may respond by selling pesos that you currently own. If everyone in the market shares your beliefs, then everyone will sell, and the value of the peso will decrease now. Currency Crises under Fixed Exchange Rates If everyone believes that a monetary authority can and will maintain the exchange rate, then people are happy to hold onto a currency. But if people believe that the fixed exchange rate is not sustainable, then there will be a run on the currency. Consider, for example, Brazil trying to stabilize its currency—the real. The monetary authority sets a fixed exchange rate, meaning that it stands ready to exchange Brazilian real for US dollars at a set price. If a fixed exchange rate is set too high, then the Brazilian central bank can maintain this value for a while by buying real with its own stocks of dollars. But the central bank does not possess unlimited reserves of dollars. If the low demand for the real persists, then eventually the central bank will run out of reserves and thus no longer be able to support the currency. When that happens, the value of the real will have to decrease. A decrease in a fixed exchange rate is called a devaluation. In fact, the decrease in the value of the real would occur well before the central bank runs out of reserves, as the sketch below illustrates.
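To see the mechanics of reserve depletion, consider a minimal numerical sketch. All numbers here are hypothetical, chosen only to show how a central bank defending a peg exhausts its dollar reserves; speculators who anticipate this endpoint will attack the currency well before it arrives.

```python
# Sketch: a central bank defending a fixed exchange rate that is set too high.
# All numbers are hypothetical and chosen purely for illustration.

reserves = 10_000.0     # central bank's dollar reserves (millions of dollars)
excess_supply = 800.0   # real sold to the central bank each period (millions of real)
fixed_rate = 1.0        # dollars per real under the peg

period = 0
while reserves > 0:
    period += 1
    # To hold the peg, the bank must buy the excess real with its dollars.
    reserves -= excess_supply * fixed_rate
    print(f"period {period}: reserves = {max(reserves, 0.0):,.0f} million dollars")

# With these numbers the reserves are gone in period 13. Investors who foresee
# this will sell real earlier, shifting the supply curve outward and forcing
# the devaluation sooner -- the point made in the text.
print(f"reserves exhausted in period {period}; the peg must be abandoned")
```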
If you believe that the monetary authority will be forced to abandon the fixed rate, you will take your real and exchange them for dollars—and you will want to do this sooner rather than later to ensure you make the exchange before the real decreases in value. When lots of investors do this, the supply curve for real shifts outward. This makes the problem of maintaining the fixed exchange rate even more difficult for the central bank, so the devaluation of the currency will happen even sooner. If everyone does this, then the monetary authority will not have enough dollars on hand and will have to give up the fixed rate. The risk of such currency crises is the biggest potential problem with fixed exchange rates. History has given us many examples of such crises and shows that they are very disruptive for the economy—and sometimes even for the world as a whole. You may have noticed that a currency crisis looks a lot like a bank run. In both cases, pessimistic expectations of investors (about the future of a bank in one case and the future value of a currency in the other) lead them all to behave in a way that makes the pessimism self-fulfilling. In the case of a bank run, if all depositors are worried about their deposits and take their money out of the bank, then the bank fails and the depositors’ pessimism was warranted. Likewise, if investors believe the devaluation of a currency is likely, they will all want to sell their currency. This drives down the price and makes the devaluation much more likely to occur. A currency crisis, like a bank run, is an example of a coordination game. Key Takeaways 1. A currency crisis can occur for several reasons: it can be a consequence of a financial crisis or a fiscal crisis, or, in some cases, it can be driven purely by expectations, just like a bank run. 2. A financial crisis can lead to a currency crisis if depositors in one country, seeing the collapse of a financial system, rush to convert their home currency into foreign currencies. Exercises 1. What is the difference between a fixed exchange rate system and a flexible exchange rate system? 2. What is the difference between a currency crisis and a devaluation?
In Conclusion Five or six years ago, economists studied a period that they named “the Great Moderation.” In the period after World War II, and even more specifically from the mid-1980s to the mid-2000s, economic performance in the United States, Europe, and many other countries was relatively placid. These countries enjoyed respectable levels of long-run growth, experienced only mild recessions, and maintained low and stable inflation. Many observers felt that this performance was in large measure due to the fact that economists and policymakers had learned how to conduct effective monetary and fiscal policies. We learned from the mistakes of the Great Depression and knew how to prevent serious economic downturns. We also learned from the mistakes made in the 1970s and knew how to avoid inflationary policies. To be sure, other countries still experienced their share of economic problems. Many countries in Latin America experienced currency crises and debt crises in the 1980s. Many countries in Southeast Asia suffered through painful exchange rate crises in the 1990s. Japan suffered a protracted period of low growth. Some countries saw hyperinflation, while others experienced economic decline. Still, for the most part, mature and developed economies experienced very good economic performance. Macroeconomics was becoming less about diagnosing failure and more about explaining success. The last few years shook that worldview. The crisis of 2008 showed that a major economic catastrophe was not as unthinkable as economists and others hoped. The world experienced the most severe economic downturn since the Great Depression, and there was a period where it seemed possible that the crisis could even be on the same scale as the Great Depression. Countries like the United States and the United Kingdom faced protracted recessions. Countries such as Greece, Portugal, Ireland, and Iceland found themselves mired in debt crises. Spillovers and interconnections—real, financial, and psychological—meant that events like the bankruptcy of Lehman Brothers reverberated throughout the economies of the world. Because it resurrected old problems, the crisis of 2008 also resurrected old areas of study in macroeconomics. The events in Europe have prompted economists to review the debate over common currencies and the conduct of monetary policy. There has been increased investigation of the size of fiscal policy multipliers. At the same time, macroeconomists are devoting much attention to topics such as the connection between financial markets and the real economy. But this difficult period for the world economy has also been an exciting time for macroeconomists. The study of macroeconomics has become more vital than ever—more alive and more essential. Exercises 1. Consider the bank run game. If a government is supposed to provide deposit insurance but depositors doubt the word of the government, might there still be a bank run? 2. Comparing the Great Depression to the financial crisis starting in 2008, what were the differences in the response of fiscal and monetary policy between these two episodes? 3. We explained in 30.2 Section "The Financial Crisis in the United States" that an increase in the expected future price of houses leads to an increase in the current price. Draw a supply-and-demand diagram to illustrate this idea. 4. Consider the crisis from the perspective of China. United States imports from China are roughly \$300 billion each year.
Due to the recession in the United States, imports from China decreased about 10 percent. If the marginal propensity to spend is 0.5 in China, what is the change in Chinese output predicted by the aggregate expenditure model? How much must government spending increase to offset this reduction in exports? 5. In the CBO assessment of ARRA, the multiplier from government purchases was assumed to be larger than the multiplier from tax cuts. How would you explain the differences in these multipliers? 6. If countries within the EMU are supposed to limit their deficits, what must happen to government spending during a recession when tax revenues decrease? 7. (Advanced) We argued that the provision of deposit insurance prevents bank runs. Is there an analogous policy to prevent currency crises? Economics Detective 1. What has been the ECB’s role in the European Financial Stability Facility and in the bailout packages for Greece, Ireland, and Portugal? 2. Find the details of the recent rescue package for Greece. What were the different views of Germany and France about this bailout? How was the IMF involved? 3. What predictions were made about job creation under the Obama administration’s stimulus package? What happened to job creation rates in the 2008–10 period?
In this chapter, we present the key tools used in the macroeconomics and microeconomics parts of this textbook. This toolkit serves two main functions: 1. Because these tools appear in multiple chapters, the toolkit serves as a reference. When using a tool in one chapter, you can refer back to the toolkit to find a more concise description of the tool as well as links to other parts of the book where the tool is used. 2. You can use the toolkit as a study guide. Once you have worked through the material in the chapters, you can review the tools using this toolkit. The charts below show the main uses of each tool in green and the secondary uses in orange. 31.02: New Page Individual demand refers to the demand for a good or a service by an individual (or a household). Individual demand comes from the interaction of an individual’s desires with the quantities of goods and services that he or she is able to afford. By desires, we mean the likes and dislikes of an individual. We assume that the individual is able to compare two goods (or collections of goods) and say which is preferred. We assume two things: (1) an individual prefers more to less, and (2) likes and dislikes are consistent. An example is shown in part (a) of Figure 31.2.1 "Individual Demand". (This example is taken from Chapter 4 "Everyday Decisions".) In this example, there are two goods: music downloads (\$1 each) and chocolate bars (\$5 each). The individual has income of \$100. The budget line is the combination of goods and services that this person can afford if he spends all of his income. In this example, it is the solid line connecting 100 downloads and 20 chocolate bars. The horizontal intercept is the number of chocolate bars the individual could buy if all income were spent on chocolate bars; it is income divided by the price of a chocolate bar. The vertical intercept is the number of downloads the individual could buy if all income were spent on downloads; it is income divided by the price of a download. The budget set is all the combinations of goods and services that the individual can afford, given the prices he faces and his available income. In the diagram, the budget set is the triangle defined by the budget line and the horizontal and vertical axes. An individual’s preferred point is the combination of downloads and chocolate bars that is the best among all of those that are affordable. Because an individual prefers more to less, all income will be spent. This means the preferred point lies on the budget line. The most that an individual would be willing to pay for a certain quantity of a good (say, five chocolate bars) is his valuation for that quantity. The marginal valuation is the most he would be willing to pay to obtain one extra unit of the good. The decision rule of the individual is to buy an amount of each good such that marginal valuation = price. The individual demand curve for chocolate bars is shown in part (b) of Figure 31.2.1 "Individual Demand". On the horizontal axis is the quantity of chocolate bars. On the vertical axis is the price. The demand curve is downward sloping: this is the law of demand. As the price of a chocolate bar increases, the individual substitutes away from chocolate bars to other goods. Thus the quantity demanded decreases as the price increases. In some circumstances, the individual’s choice is a zero-one decision: either purchase a single unit of the good or purchase nothing. The unit demand curve shows us the price at which a buyer is willing to buy the good.
This price is the same as the buyer’s valuation of the good. At any price above the buyer’s valuation, the individual will not want to buy the good. At any price below the buyer’s valuation, the individual wants to buy the good. If this price is exactly equal to the buyer’s valuation, then the buyer is indifferent between purchasing the good and not purchasing it. In other words, the individual’s decision rule is to purchase the good if the valuation of the good exceeds its price. This is consistent with the earlier condition because the marginal valuation of the first unit is the same as the valuation of that unit. The difference between the valuation and the price is the buyer surplus. (See 31.11 Section "Buyer Surplus and Seller Surplus" for more discussion.) Key Insights • The individual demand for a good or a service comes from the interactions of desires with the budget set. • The individual purchases each good to the point where marginal valuation equals price. • As the price of a good or a service increases, the quantity demanded will decrease. More Formally Let \(p_d\) be the price of a download, \(p_c\) the price of a chocolate bar, and \(I\) the income of an individual. Think of prices and income in terms of dollars. Then the budget set is the combinations of downloads (d) and chocolate bars (c) such that $I \geq p_{d} \times d+p_{c} \times c$ The budget line is the combinations of d and c such that $I=p_{d} \times d+p_{c} \times c$ In the graph, with downloads on the vertical axis, the equation for the budget line is $d=\frac{I}{p_{d}}-\frac{p_{c}}{p_{d}} \times c.$ You can use this equation to understand how changes in income and prices change the position of the budget line. You can also use this equation to find the vertical and horizontal intercepts of the budget line, along with the slope of $−(p_{c}/p_{d})$. The individual purchases downloads up to the point where $MV_{d}=p_{d}$ (where MV represents marginal valuation) and purchases chocolate bars up to the point where $MV_{c}=p_{c}.$ Combining these expressions, we get $-\frac{MV_{c}}{MV_{d}}=-\frac{p_{c}}{p_{d}},$ which tells us that (minus) the ratio of marginal valuations equals the slope of the budget line. The ratio of marginal valuations is the rate at which an individual would like to trade one good for the other. The ratio of prices (the slope of the budget line) is the rate at which the market allows an individual to make these trades.
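The numbers from the example above (income of \$100, downloads at \$1, chocolate bars at \$5) can be checked with a few lines of code. This is a minimal sketch of the budget-set logic, not anything from the textbook itself:

```python
# Budget-set sketch using the section's example: income of $100,
# downloads at $1 each, chocolate bars at $5 each.

income = 100.0
p_download, p_chocolate = 1.0, 5.0

# Intercepts of the budget line: spend all income on a single good.
max_downloads = income / p_download    # vertical intercept: 100 downloads
max_chocolate = income / p_chocolate   # horizontal intercept: 20 chocolate bars

# Slope of the budget line with downloads on the vertical axis.
slope = -(p_chocolate / p_download)    # -5: each bar costs five downloads

def affordable(d: float, c: float) -> bool:
    """True if the bundle (d downloads, c chocolate bars) lies in the budget set."""
    return p_download * d + p_chocolate * c <= income

print(max_downloads, max_chocolate, slope)  # 100.0 20.0 -5.0
print(affordable(50, 10))   # True: 50*1 + 10*5 = 100, exactly on the budget line
print(affordable(60, 10))   # False: the bundle costs $110 > $100
```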
Elasticity measures the proportionate change in one variable relative to the change in another variable. Consider, for example, the response of the quantity demanded to a change in the price. The price elasticity of demand is the percentage change in the quantity demanded divided by the percentage change in the price: \[ elasticity\ of\ demand = \frac{percentage\ change\ in\ quantity\ demanded}{percentage\ change\ in\ price}. \] When the price increases (the percentage change in the price is positive), the quantity decreases, meaning that the percentage change in the quantity is negative. In other words, the law of demand tells us that the elasticity of demand is a negative number. For this reason we often use −(elasticity of demand) because we know this will always be a positive number. • If \(−(elasticity of demand) > 1\), demand is relatively elastic. • If \(−(elasticity of demand) < 1\), demand is relatively inelastic. We can use the idea of the elasticity of demand whether we are thinking about the demand curve faced by a firm or the market demand curve. The definition is the same in either case. If we are analyzing the demand curve faced by a firm, then we sometimes refer to the elasticity of demand as the own-price elasticity of demand. It tells us how much the quantity demanded changes when the firm changes its price. If we are analyzing a market demand curve, then the price elasticity of demand tells us how the quantity demanded in the market changes when the price changes. Similarly, the price elasticity of supply tells us how the quantity supplied in a market changes when the price changes. The price elasticity of supply is generally positive because the supply curve slopes upward. The income elasticity of demand is the percentage change in the quantity demanded divided by the percentage change in income. The income elasticity of demand for a good can be positive or negative. • If the income elasticity of demand is negative, it is an inferior good. • If the income elasticity of demand is positive, it is a normal good. • If the income elasticity of demand is greater than one, it is a luxury good. The cross-price elasticity of demand tells us how the quantity demanded of one good changes when the price of another good changes. • If the cross-price elasticity of demand is positive, the goods are substitutes. • If the cross-price elasticity of demand is negative, the goods are complements. In general, we can use elasticity whenever we want to show how one variable responds to changes in another variable. Key Insights • Elasticity measures the responsiveness of one variable to changes in another variable. • Elasticities are unitless: you can measure the underlying variables in any units (for example, dollars or thousands of dollars), and the elasticity will not change. • Elasticity is not the same as slope. For example, the price elasticity of demand depends on both the slope of the demand curve and the place on the demand curve where you are measuring the elasticity. 31.04: New Page The labor market is the market in which labor services are traded. Individual labor supply comes from the choices of individuals or households about how to allocate their time. As the real wage (the nominal wage divided by the price level) increases, households supply more hours to the market, and more households decide to participate in the labor market. Thus the quantity of labor supplied increases. The labor supply curve of a household is shifted by changes in wealth. A wealthier household supplies less labor at a given real wage. Labor demand comes from firms.
As the real wage increases, the marginal cost of hiring more labor increases, so each firm demands fewer hours of labor input—that is, a firm’s labor demand curve is downward sloping. The labor demand curve of a firm is shifted by changes in productivity. If labor becomes more productive, then the labor demand curve of a firm shifts rightward: the quantity of labor demanded is higher at a given real wage. The labor market equilibrium is shown in Figure 31.2.2 "Labor Market Equilibrium". The real wage and the equilibrium quantity of labor traded are determined by the intersection of labor supply and labor demand. At the equilibrium real wage, the quantity of labor supplied equals the quantity of labor demanded. Key Insights • Labor supply and labor demand depend on the real wage. • Labor supply is upward sloping: as the real wage increases, households supply more hours to the market. • Labor demand is downward sloping: as the real wage increases, firms demand fewer hours of work. • A market equilibrium is a real wage and a quantity of hours such that the quantity demanded equals the quantity supplied. 31.05: New Page Individuals make decisions that unfold over time. Because individuals choose how to spend income earned over many periods on consumption goods over many periods, they sometimes wish to save or borrow rather than spend exactly their income in every period. Figure 31.2.3 "Choices over Time" shows examples of these choices over a two-year horizon. The individual earns income this year and next. The combinations of consumption that are affordable and that exhaust all of an individual’s income are shown on the budget line, which in this case is called an intertemporal budget constraint. The magnitude of the slope of the budget line is equal to \((1 + real\ interest\ rate)\), the real interest factor. This is the amount of consumption that can be obtained next year by giving up a unit of consumption this year. The preferred point is also indicated; it is the combination of consumption this year and consumption next year that the individual prefers to all the points on the budget line. The individual in part (a) of Figure 31.2.3 "Choices over Time" is consuming less this year than she is earning: she is saving. Next year she can use her savings to consume more than her income. The individual in part (b) of Figure 31.2.3 "Choices over Time" is consuming more this year than he is earning: he is borrowing. Next year, his consumption will be less than his income because he must repay the amount borrowed this year. When the real interest rate increases, individuals will borrow less and (usually) save more (the effect of interest rate changes on saving is unclear as a matter of theory because income effects and substitution effects act in opposite directions). Thus individual loan supply slopes upward. Of course, individuals live for many periods and make frequent decisions on consumption and saving. The lifetime budget constraint is obtained using the idea of discounted present value: \[discounted\ present\ value\ of\ lifetime\ income = discounted\ present\ value\ of\ lifetime\ consumption.\] The left side is a measure of all the disposable income the individual will receive over his lifetime (disposable means after taking into account taxes paid to the government and transfers received from the government). The right side calculates the value of consumption of all goods and services over an individual’s lifetime.
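For the two-year example in the figure, the lifetime budget constraint can be written out explicitly (a sketch using the section’s own definitions): \[ consumption\ this\ year + \frac{consumption\ next\ year}{1 + real\ interest\ rate} = income\ this\ year + \frac{income\ next\ year}{1 + real\ interest\ rate}. \] Rearranging gives \(consumption\ next\ year = income\ next\ year + (1 + real\ interest\ rate) \times (income\ this\ year - consumption\ this\ year)\): every unit of income not consumed this year buys \(1 + real\ interest\ rate\) units of consumption next year, which is exactly the slope of the intertemporal budget constraint described above.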
Key Insights • Over a lifetime, an individual’s discounted present value of consumption will equal the discounted present value of income. • Individuals can borrow or lend to obtain their preferred consumption bundle over their lifetimes. • The price of borrowing is the real interest rate.
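To make the intertemporal budget constraint concrete, here is a minimal Python sketch of the two-period case. All of the numbers (income in each year, the real interest rate, and the consumption choices) are invented for illustration; the point is simply that once consumption this year is chosen, the budget line pins down consumption next year.

```python
# Two-period intertemporal budget constraint: consumption next year
# equals income next year plus (1 + r) times saving this year.
# Negative saving is borrowing. All numbers are illustrative.
income_this_year = 100.0
income_next_year = 100.0
real_interest_rate = 0.05

def consumption_next_year(consumption_this_year):
    saving = income_this_year - consumption_this_year
    return income_next_year + (1 + real_interest_rate) * saving

for c1 in (80.0, 120.0):  # a saver and a borrower
    role = "saving" if c1 < income_this_year else "borrowing"
    print(c1, round(consumption_next_year(c1), 2), role)
# 80.0 -> 121.0 (saving); 120.0 -> 79.0 (borrowing)
```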
Discounted present value is a technique used to add dollar amounts over time. We need this technique because a dollar today has a different value from a dollar in the future. The discounted present value this year of \$1.00 that you will receive next year is as follows: \[discounted\ present\ value\ of\ \$1.00 = \frac{\$1.00}{nominal\ interest\ factor} = \frac{\$1.00}{1 + nominal\ interest\ rate}.\] If the nominal interest rate is 10 percent, then the nominal interest factor is 1.1, so \$1 next year is worth \$1/1.1 = \$0.91 this year. As the interest rate increases, the discounted present value decreases. More generally, we can compute the value of an asset this year from the following formula: \[value\ of\ asset\ this\ year = \frac{flow\ benefit\ from\ asset + price\ of\ the\ asset\ next\ year}{nominal\ interest\ factor}.\] The flow benefit depends on the asset. For a bond, the flow benefit is a coupon payment. For a stock, the flow benefit is a dividend payment. For a fruit tree, the flow benefit is the yield of a crop. If an asset (such as a bond) yields a payment next year of \$10 and has a price next year of \$90, then the “flow benefit from asset + price of the asset next year” is \$100. The value of the asset this year is then \[value\ of\ asset\ this\ year = \frac{\$100}{nominal\ interest\ factor}.\] If the nominal interest rate is 20 percent, then the value of the asset is \$100/1.2 = \$83.33. We discount nominal flows using a nominal interest factor. We discount real flows (that is, flows already corrected for inflation) using a real interest factor, which is equal to (1 + real interest rate).
Key Insights
• If the interest rate is positive, then the discounted present value is less than the direct sum of flows.
• If the interest rate increases, the discounted present value will decrease.
More Formally
Denote the dividend on an asset in period t as \(D_{t}\). Define \(R_{t}\) as the cumulative effect of interest rates up to period t. For example, \(R_{2}=(1+r_{1})(1+r_{2})\). Then the value of an asset that yields \(D_{t}\) dollars in every year up to year T is given by \[value\ of\ asset = D_{1} + \frac{D_{2}}{R_{1}} + \frac{D_{3}}{R_{2}} + \cdots + \frac{D_{T}}{R_{T-1}}.\] If the interest rate is constant (equal to r), then the one period interest factor is \(R=1+r\), and \(R_{t}=R^{t}\). The discounted present value tool is illustrated in Table 31.2.1 "Discounted Present Value with Different Interest Rates". The number of years (T) is set equal to 5. The table gives the value of the dividends in each year and computes the discounted present values for two different interest rates. For this example, the annual interest rates are constant over time.

Year | Dividend (\$) | Discounted Present Value with R = 1.05 (\$) | Discounted Present Value with R = 1.10 (\$)
1 | 100 | 100 | 100
2 | 100 | 95.24 | 90.91
3 | 90 | 81.63 | 74.38
4 | 120 | 103.66 | 90.16
5 | 400 | 329.08 | 273.20
Discounted present value | | 709.61 | 628.65

Table \(1\): Discounted Present Value with Different Interest Rates
31.07: New Page
An individual’s choices over time determine how much he or she will borrow or lend. In particular, individual loan supply is upward sloping: when the real interest rate increases, a typical household will supply a greater quantity of funds to the credit market. Market loan supply is obtained by adding together the individual loan supplies of everyone in an economy. We use the terms “credit” and “loans” interchangeably. The demand for credit comes from households and firms that are borrowing. Market loan demand is obtained by adding together all the individual demands for loans. When real interest rates increase, borrowing is more expensive, so the quantity of loans demanded decreases. That is, loan demand obeys the law of demand. Borrowers and lenders interact in the credit market (or loan market), which is illustrated in Figure 31.2.4 "The Credit Market (or Loan Market)".
Credit market equilibrium occurs at the real interest rate where the quantity of loans supplied equals the quantity of loans demanded. At this equilibrium real interest rate, lenders lend as much as they wish, and borrowers can borrow as much as they wish. All gains from trade through loans are exhausted in equilibrium. Key Insight • As the real interest rate increases, more loans are supplied, and fewer loans are demanded. Figure \(4\): The Credit Market (or Loan Market)
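As a concrete check on the discounted present value tool described above, the following minimal Python sketch reproduces the totals in Table 31.2.1. It follows the table's convention that the year 1 dividend is received immediately, so with a constant interest factor the year t dividend is discounted by \(R^{t-1}\).

```python
# Discounted present value of the dividend stream in Table 31.2.1.
# With a constant interest factor R, the year t dividend is divided
# by R**(t - 1): the year 1 dividend is not discounted at all.
dividends = [100, 100, 90, 120, 400]

def discounted_present_value(dividends, R):
    return sum(d / R ** t for t, d in enumerate(dividends))

for R in (1.05, 1.10):
    print(R, round(discounted_present_value(dividends, R), 2))
# 1.05 -> 709.61; 1.10 -> 628.65
```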
Probability is the percentage chance that something will occur. For example, there is a 50 percent chance that a tossed coin will come up heads. We say that the probability of getting the outcome “heads” is 1/2. There are five things you need to know about probability:
1. The list of possible outcomes must be complete.
2. The list of possible outcomes must not overlap.
3. If an outcome is certain to occur, it has probability 1.
4. If an outcome is certain not to occur, it has probability 0.
5. If we add together the probabilities for all the possible outcomes, the total must equal 1.
The expected value of a situation with financial risk is a measure of how much you would expect to win (or lose) on average if the situation were to be replayed a large number of times. You can calculate expected value as follows:
• For each outcome, multiply the probability of that outcome by the amount you will receive.
• Add together these amounts over all the possible outcomes.
For example, suppose you are offered the following proposal. Roll a six-sided die. If it comes up with 1 or 2, you get \$90. If it comes up 3, 4, 5, or 6, you get \$30. The expected value is $(1/3) \times 90 + (2/3) \times 30 = 50.$ Most people dislike risk. They prefer a fixed sum of money to a gamble that has the same expected value. Risk aversion is a measure of how much people want to avoid risk. In the example we just gave, most people would prefer a sure \$50 to the uncertain proposal with the expected value of \$50. Suppose we present an individual with the following gamble:
• With 99 percent probability, you lose nothing.
• With 1 percent probability, you lose \$1,000.
The expected value of this gamble is −\$10. Now ask the individual how much she would pay to avoid this gamble. Someone who is risk-neutral would be willing to pay only \$10. Someone who is risk-averse would be willing to pay more than \$10. The more risk-averse an individual, the more the person would be willing to pay. The fact that risk-averse people will pay to shed risk is the basis of insurance. If people have different attitudes toward risky gambles, then the less risk-averse individual can provide insurance to the more risk-averse individual. There are gains from trade. Insurance is also based on diversification, which is the idea that people can share their risks so it is much less likely that any individual will face a large loss.
Key Insights
• Expected value is the sum of the probability of an event times the gain/loss if that event occurs.
• Risk-averse people will pay to avoid risk. This is the basis of insurance.
More Formally
Consider a gamble where there are three and only three possible outcomes (x, y, z) that occur with probabilities Pr(x), Pr(y), and Pr(z). Think of these outcomes as the number of dollars you get in each case. First, we know that $Pr(x) + Pr(y) + Pr(z) = 1.$ Second, the expected value of this gamble is $EV = Pr(x) \times x + Pr(y) \times y + Pr(z) \times z.$
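The expected value calculations in this section are easy to script. Here is a minimal Python sketch of the two gambles discussed in the text (the die proposal and the insurance example); each gamble is just a list of (probability, payoff) pairs.

```python
# Expected value: sum of probability times payoff over all outcomes.
def expected_value(gamble):
    return sum(p * x for p, x in gamble)

# Die proposal: $90 on a roll of 1 or 2, $30 on a roll of 3-6.
die_proposal = [(1/3, 90), (2/3, 30)]
print(round(expected_value(die_proposal), 2))  # 50.0

# Insurance example: lose nothing with probability 0.99,
# lose $1,000 with probability 0.01.
risky_loss = [(0.99, 0), (0.01, -1000)]
print(round(expected_value(risky_loss), 2))  # -10.0
```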
If you have some data expressed in nominal terms (for example, in dollars), and you want to convert them to real terms, you should use the following four steps.
1. Select your deflator. In most cases, the Consumer Price Index (CPI) is the best deflator to use. You can find data on the CPI (for the United States) at the Bureau of Labor Statistics website (http://www.bls.gov).
2. Select your base year. Find the value of the index in that base year.
3. For all years (including the base year), divide the value of the index in that year by the value in the base year. The value for the base year is 1.
4. For each year, divide the value in the nominal data series by the number you calculated in step 3. This gives you the value in “base year dollars.”
Table 31.2.2 "Correcting Nominal Sales for Inflation" shows an example. We have data on the CPI for three years, as listed in the second column. The price index is created using the year 2000 as a base year, following steps 1–3. Sales measured in millions of dollars are given in the fourth column. To correct for inflation, we divide sales in each year by the value of the price index for that year. The results are shown in the fifth column. Because there was inflation each year (the price index is increasing over time), real sales do not increase as rapidly as nominal sales.

Year | CPI | Price Index (2000 Base) | Sales (Millions) | Real Sales (Millions of Year 2000 Dollars)
2000 | 172.2 | 1.0 | 21.0 | 21.0
2001 | 177.1 | 1.03 | 22.3 | 21.7
2002 | 179.9 | 1.04 | 22.9 | 21.9

Table \(2\): Correcting Nominal Sales for Inflation
Source: Bureau of Labor Statistics for the Consumer Price Index
This calculation uses the CPI, which is an example of a price index. To see how a price index like the CPI is constructed, consider Table 31.2.3 "Constructing a Price Index", which shows a very simple economy with three goods: T-shirts, music downloads, and meals. The prices and quantities purchased in the economy in 2012 and 2013 are summarized in the table.

Year | T-shirts: Price (\$) | T-shirts: Quantity | Music Downloads: Price (\$) | Music Downloads: Quantity | Meals: Price (\$) | Meals: Quantity | Cost of 2013 Basket (\$) | Price Index
2012 | 20 | 10 | 1 | 50 | 25 | 6 | 425 | 1.00
2013 | 22 | 12 | 0.80 | 60 | 26 | 5 | 442 | 1.04

Table \(3\): Constructing a Price Index
To construct a price index, you must choose a fixed basket of goods. For example, we could use the goods purchased in 2013 (12 T-shirts, 60 downloads, and 5 meals). This fixed basket is then priced in different years. To construct the cost of the 2013 basket at 2013 prices, the product of the price and the quantity purchased for each good in 2013 is added together. The basket costs \$442. Then we calculate the cost of the 2013 basket at 2012 prices: that is, we use the prices of each good in 2012 and the quantities purchased in 2013. The sum is \$425. The price index is constructed using 2012 as a base year. The value of the price index for 2013 is the cost of the basket in 2013 divided by its cost in the base year (2012). When the price index is based on a bundle of goods that represents total output in an economy, it is called the price level. The CPI and gross domestic product (GDP) deflator are examples of measures of the price level (they differ in terms of exactly which goods are included in the bundle). The growth rate of the price level (its percentage change from one year to the next) is called the inflation rate. We also correct interest rates for inflation.
The interest rates you typically see quoted are in nominal terms: they tell you how many dollars you will have to repay for each dollar you borrow. This is called a nominal interest rate. The real interest rate tells you how much you will get next year, in terms of goods and services, if you give up a unit of goods and services this year. To correct interest rates for inflation, we use the Fisher equation: \[real\ interest\ rate ≈ nominal\ interest\ rate − inflation\ rate.\] For more details, see 31.3 Section "The Fisher Equation: Nominal and Real Interest Rates" on the Fisher equation. Key Insights • Divide nominal values by the price index to create real values. • Create the price index by calculating the cost of buying a fixed basket in different years.
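The four-step procedure for correcting nominal data is mechanical enough to code directly. The sketch below reproduces Table 31.2.2 in Python; the CPI and sales figures are the ones given in the table.

```python
# Steps 1-4: deflate nominal sales into year-2000 dollars using the CPI.
cpi = {2000: 172.2, 2001: 177.1, 2002: 179.9}
nominal_sales = {2000: 21.0, 2001: 22.3, 2002: 22.9}

base = cpi[2000]                         # steps 1 and 2: CPI, base year 2000
for year in sorted(cpi):
    index = cpi[year] / base             # step 3: index is 1 in the base year
    real = nominal_sales[year] / index   # step 4: "base year dollars"
    print(year, round(index, 2), round(real, 1))
# 2000: 1.0, 21.0 / 2001: 1.03, 21.7 / 2002: 1.04, 21.9
```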
The supply-and-demand framework is the most fundamental framework in economics. It explains both the price of a good or a service and the quantity produced and purchased. The market supply curve comes from adding together the individual supply curves of firms in a particular market. A competitive firm, taking prices as given, will produce at a level such that \[price = marginal\ cost.\] Marginal cost usually increases as a firm produces more output. Thus an increase in the price of a product creates an incentive for firms to produce more—that is, the supply curve of a firm is upward sloping. The market supply curve slopes upward as well: if the price increases, all firms in a market will produce more output, and some new firms may also enter the market. A firm’s supply curve shifts if there are changes in input prices or the state of technology. The market supply curve is shifted by changes in input prices and changes in technology that affect a significant number of the firms in a market. The market demand curve comes from adding together the individual demand curves of all households in a particular market. Households, taking the prices of all goods and services as given, distribute their income in a manner that makes them as well off as possible. This means that they choose a combination of goods and services preferred to any other combination of goods and services they can afford. They choose each good or service such that \[price = marginal\ valuation.\] Marginal valuation usually decreases as a household consumes more of a product. If the price of a good or a service decreases, a household will substitute away from other goods and services and toward the product that has become cheaper—that is, the demand curve of a household is downward sloping. The market demand curve slopes downward as well: if the price decreases, all households will demand more. The household demand curve shifts if there are changes in income, prices of other goods and services, or tastes. The market demand curve is shifted by changes in these factors that are common across a significant number of households. A market equilibrium is a price and a quantity such that the quantity supplied equals the quantity demanded at the equilibrium price ( Figure 31.2.5 "Market Equilibrium"). Because market supply is upward sloping and market demand is downward sloping, there is a unique equilibrium price. We say we have a competitive market if the following are true: • The product being sold is homogeneous. • There are many households, each taking the price as given. • There are many firms, each taking the price as given. A competitive market is typically characterized by an absence of barriers to entry, so new firms can readily enter the market if it is profitable, and existing firms can easily leave the market if it is not profitable. Key Insights • Market supply is upward sloping: as the price increases, all firms will supply more. • Market demand is downward sloping: as the price increases, all households will demand less. • A market equilibrium is a price and a quantity such that the quantity demanded equals the quantity supplied. Figure 31.2.5 "Market Equilibrium" shows equilibrium in the market for chocolate bars. The equilibrium price is determined at the intersection of the market supply and market demand curves. More Formally If we let p denote the price, qd the quantity demanded, and I the level of income, then the market demand curve is given by \[qd=a−bp+cI,\] where a, b, and c are constants. 
By the law of demand, \(b > 0\). For a normal good, the quantity demanded increases with income: \(c > 0\). If we let qs denote the quantity supplied and t the level of technology, the market supply curve is given by \[qs = d + ep + ft,\] where d, e, and f are constants. Because the supply curve slopes upward, \(e > 0\). Because the quantity supplied increases when technology improves, \(f > 0\). In equilibrium, the quantity supplied equals the quantity demanded. Set \(qs = qd = q^{*}\) and set \(p = p^{*}\) in both equations. The market clearing price (p*) and quantity (q*) are as follows: \[p^{*} = \frac{a + cI − d − ft}{b + e}\] and \[q^{*} = d + ep^{*} + ft.\]
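The algebra in this More Formally section translates directly into code. Here is a minimal Python sketch of the market clearing formulas; the parameter values are made up purely for illustration.

```python
# Linear market: demand qd = a - b*p + c*I, supply qs = d + e*p + f*t.
# Parameter values below are illustrative, not from the text.
a, b, c = 100.0, 2.0, 0.5   # demand constants
d, e, f = 10.0, 3.0, 1.0    # supply constants
I, t = 40.0, 10.0           # income and technology

p_star = (a + c * I - d - f * t) / (b + e)   # market clearing price
q_star = d + e * p_star + f * t              # market clearing quantity
print(p_star, q_star)  # 20.0 80.0

# Check: quantity demanded at p* equals quantity supplied.
assert a - b * p_star + c * I == q_star
```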
If you buy a good, then you obtain buyer surplus. If you did not expect to obtain any surplus, then you would not choose to buy the good. • Suppose you buy a single unit of the good. Your surplus is the difference between your valuation of the good and the price you pay. This is a measure of how much you gain from the exchange. • If you purchase many units of a good, then your surplus is the sum of the surplus you get from each unit. To calculate the surplus from each unit, you subtract the price paid from your marginal valuation of that unit. If you sell a good, then you obtain seller surplus. If you did not expect to obtain any surplus, you would not sell the good. • Suppose you sell a single unit of a good. Your surplus is equal to the difference between the price you receive from selling the good and your valuation of the good. This valuation may be a measure of how much you enjoy the good or what you think you could sell it for in some other market. • If you sell many units of a good, then the surplus you receive is the sum of the surplus for each unit you sell. To calculate the surplus from selling each unit, you take the difference between the price you get for each unit sold and your marginal valuation of that extra unit. Buyer surplus and seller surplus are created by trade in a competitive market ( Figure 31.2.6 "A Competitive Market"). The equilibrium price and the equilibrium quantity are determined by the intersection of the supply and demand curves. The area below the demand curve and above the price is the buyer surplus; the area above the supply curve and below the price is the seller surplus. The sum of the buyer surplus and the seller surplus is called total surplus or the gains from trade. Buyer surplus and seller surplus can also arise from individual bargaining ( Figure 31.2.7 "Individual Bargaining"). When a single unit is traded (the case of unit demand and unit supply), the total surplus is the difference between the buyer’s valuation and the seller’s valuation. Bargaining determines how they share the gains from trade. The quantity of trades, indicated on the horizontal axis, is either zero or one. The valuations of the buyer and the seller are shown on the vertical axis. In this case, the valuation of the buyer (\$3,000) exceeds the valuation of the seller (\$2,000), indicating that there are gains from trade equal to \$1,000. How these gains are shared between the buyer and seller depends on the price they agree on. In part (a) of Figure 31.2.7 "Individual Bargaining", the buyer gets most of the surplus; in part (b) of Figure 31.2.7 "Individual Bargaining", the seller gets most of the surplus. Key Insights • Buyer surplus and seller surplus are created by trade. • Buyer surplus is the difference between the marginal value of a good and the price paid. • Seller surplus is the difference between the price received and the marginal value of a good.
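Buyer surplus with many units can be illustrated with a short calculation, shown below as a minimal Python sketch. The marginal valuations and the price are made-up numbers; the buyer purchases every unit whose marginal valuation is at least the price and earns the difference on each.

```python
# Buyer surplus, unit by unit: surplus on each unit purchased is
# (marginal valuation - price). Valuations and price are illustrative.
marginal_valuations = [10, 8, 6, 4, 2]   # 1st, 2nd, ... unit
price = 5
surplus = sum(v - price for v in marginal_valuations if v >= price)
print(surplus)  # (10-5) + (8-5) + (6-5) = 9
```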
The outcome of a competitive market has a very important property. In equilibrium, all gains from trade are realized. This means that there is no additional surplus to obtain from further trades between buyers and sellers. In this situation, we say that the allocation of goods and services in the economy is efficient. However, markets sometimes fail to operate properly and not all gains from trade are exhausted. In this case, some buyer surplus, seller surplus, or both are lost. Economists call this a deadweight loss. The deadweight loss from a monopoly is illustrated in Figure 31.2.8 "Deadweight Loss". The monopolist produces a quantity such that marginal revenue equals marginal cost. The price is determined by the demand curve at this quantity. A monopoly makes a profit equal to total revenue minus total cost. When the total output is less than socially optimal, there is a deadweight loss, which is indicated by the red area in Figure 31.2.8 "Deadweight Loss". Deadweight loss arises in other situations, such as when there are quantity or price restrictions. It also arises when taxes or subsidies are imposed in a market. Tax incidence is the way in which the burden of a tax falls on buyers and sellers—that is, who suffers most of the deadweight loss. In general, the incidence of a tax depends on the elasticities of supply and demand. A tax creates a difference between the price paid by the buyer and the price received by the seller ( Figure 31.2.9 "Tax Burdens"). The burden of the tax and the deadweight loss are defined relative to the tax-free competitive equilibrium. The tax burden borne by the buyer is the difference between the price paid under the tax and the price paid in the competitive equilibrium. Similarly, the burden of the seller is the difference between the price in the competitive equilibrium and the price received under the equilibrium with taxes. The burden borne by the buyer is higher—all else being the same—if demand is less elastic. The burden borne by the seller is higher—all else being the same—if supply is less elastic. The deadweight loss from the tax measures the sum of the buyer’s lost surplus and the seller’s lost surplus in the equilibrium with the tax. The total amount of the deadweight loss therefore also depends on the elasticities of demand and supply. The smaller these elasticities, the closer the equilibrium quantity traded with a tax will be to the equilibrium quantity traded without a tax, and the smaller is the deadweight loss. Key Insights • In a competitive market, all the gains from trade are realized. • If sellers have market power, some gains from trade are lost because the quantity traded is below the competitive level. • Other market distortions, such as taxes, subsidies, price floors, or price ceilings, similarly cause the amount to be traded to differ from the competitive level and cause deadweight loss. Figure \(8\): Deadweight Loss Figure \(9\): Tax Burdens 31.13: Production Possibilities Frontier The production possibilities frontier shows the combinations of goods and services that an economy can produce if it is efficiently using every available input. A key component in understanding the production possibilities frontier is the term efficiently. If an economy is using its inputs in an efficient way, then it is not possible to produce more of one good without producing less of another. Figure 31.2.10 "The Production Possibilities Frontier" shows the production possibilities frontier for an economy producing web pages and meals. 
It is downward sloping: to produce more web pages, the production of meals must decrease. Combinations of web pages and meals given by points inside the production possibilities frontier are possible for the economy to produce but are not efficient: at points inside the production possibilities frontier, it is possible for the economy to produce more of both goods. Points outside the production possibilities frontier are not feasible given the current levels of inputs in the economy and current technology. The negative slope of the production possibilities frontier reflects opportunity cost. The opportunity cost of producing more meals is that fewer web pages can be created. Likewise, the opportunity cost of creating more web pages means that fewer meals can be produced. The production possibilities frontier shifts over time. If an economy accumulates more physical capital or has a larger workforce, then it will be able to produce more of all the goods in an economy. Further, it will be able to produce new goods. Another factor shifting the production possibilities frontier outward over time is technology. As an economy creates new ideas (or receives them from other countries) on how to produce goods more cheaply, then it can produce more goods. Key Insights • The production possibilities frontier shows the combinations of goods and services that can be produced efficiently in an economy at a point in time. • The production possibilities frontier is downward sloping: producing more of one good requires producing less of others. The production of a good has an opportunity cost. • As time passes, the production possibilities frontier shifts outward due to the accumulation of inputs and technological progress.
31.14: Comparative Advantage
Comparative advantage explains why individuals and countries trade with each other. Trade is at the heart of modern economies: individuals specialize in production and generalize in consumption. To consume many goods while producing relatively few, individuals must sell what they produce in exchange for the output of others. Countries likewise specialize in certain goods and services and import others. By so doing, they obtain gains from trade. Table 31.2.4 "Hours of Labor Required" shows the productivity of two different countries in the production of two different goods. It shows the number of labor hours required to produce two goods—tomatoes and beer—in two countries: Guatemala and Mexico. From these data, Mexico has an absolute advantage in the production of both goods. Workers in Mexico are more productive at producing both tomatoes and beer in comparison to workers in Guatemala.

Country | Tomatoes (1 Kilogram) | Beer (1 Liter)
Guatemala | 6 | 3
Mexico | 2 | 2

Table $4$: Hours of Labor Required
In Guatemala, the opportunity cost of 1 kilogram of tomatoes is 2 liters of beer. To produce an extra kilogram of tomatoes in Guatemala, 6 hours of labor time must be taken away from beer production; 6 hours of labor time is the equivalent of 2 liters of beer. In Mexico, the opportunity cost of 1 kilogram of tomatoes is 1 liter of beer. Thus the opportunity cost of producing tomatoes is lower in Mexico than in Guatemala. This means that Mexico has a comparative advantage in the production of tomatoes. By a similar logic, Guatemala has a comparative advantage in the production of beer. Guatemala and Mexico can have higher levels of consumption of both beer and tomatoes if they trade rather than produce in isolation; each country should specialize (either partially or completely) in the good in which it has a comparative advantage. It is never efficient to have both countries produce both goods.
Key Insights
• Comparative advantage helps predict the patterns of trade between individuals and/or countries.
• A country has a comparative advantage in the production of a good if the opportunity cost of producing that good is lower in that country.
• Even if one country has an absolute advantage in all goods, it will still gain from trading with another country.
• Although this example is cast in terms of countries, the same logic is also used to explain production patterns between two individuals.
31.15: Costs of Production
The costs of production for a firm are split into two categories. One type of cost, fixed costs, is independent of a firm’s output level. A second type of cost, variable costs, depends on a firm’s level of output. Total costs are the sum of the fixed costs and the variable costs. The change in costs as output changes by a small amount is called marginal cost. It is calculated as follows: \[marginal\ cost = \frac{change\ in\ total\ cost}{change\ in\ quantity}.\] Because fixed costs do not depend on the quantity, if we produce one more unit, then the change in total cost and the change in the variable cost are the same. Marginal cost is positive because variable costs increase with output. Marginal cost is usually increasing in the level of output, reflecting the diminishing marginal product of factors of production. For example, suppose that total costs are given by $total\ costs = 50 + 10 \times quantity.$ Here the fixed cost is 50, and the variable cost is 10 times the level of output. In this example, marginal cost equals 10. These costs are shown in Table 31.2.5.
Output | Fixed Cost | Variable Cost | Total Cost
0 | 50 | 0 | 50
10 | 50 | 100 | 150
20 | 50 | 200 | 250
50 | 50 | 500 | 550

Table $5$
We sometimes divide fixed costs into two components: entry costs, which are the one-time fixed costs required to open a new business or set up a new plant, and fixed operating costs, which are the fixed costs incurred regularly during the normal operation of a business. Some costs are sunk costs; once incurred, these costs cannot be recovered. Such costs should be ignored in forward-looking business decisions. Other costs are partially or fully recoverable costs. For example, if a firm purchases an asset that can be resold, then the cost of that asset is recoverable. Figure 31.2.11 "Cost Measures" shows these various measures of costs. It is drawn assuming a fixed cost of 50 and variable costs given by $variable\ costs = 10 \times quantity + 0.1 \times quantity^{2}.$ For this example, marginal cost is positive and increasing.
Key Insights
• Fixed costs are independent of the level of output, whereas variable costs depend on the output level of a firm.
• Pricing decisions depend on marginal costs.
• Decisions to enter and/or exit an industry depend on both fixed and variable costs.
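The cost measures in this section can be checked numerically. The minimal Python sketch below uses the cost function from Figure 31.2.11 (fixed cost 50 plus variable costs 10 × quantity + 0.1 × quantity²) and approximates marginal cost as the change in total cost from producing one more unit.

```python
# Total and (approximate) marginal cost for the example drawn in
# Figure 31.2.11 "Cost Measures".
FIXED_COST = 50.0

def total_cost(q):
    variable = 10 * q + 0.1 * q ** 2
    return FIXED_COST + variable

def marginal_cost(q):
    # extra cost of producing one more unit, starting from q
    return total_cost(q + 1) - total_cost(q)

for q in (0, 10, 20):
    print(q, total_cost(q), round(marginal_cost(q), 1))
# marginal cost rises with output: 10.1, 12.1, 14.1
```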
31.16: Pricing with Market Power
The goal of the managers of a firm is to maximize the firm’s profit. $profit = revenues − costs.$ We can think of a firm as choosing either the price to set or the quantity that it sells. Either way, the firm faces a demand curve and chooses a point on that curve that maximizes its profits. In reality, most firms choose the price of the good that they sell. However, it is often simpler to analyze a firm’s behavior by looking at the quantity that it chooses. Profits are maximized (Figure 31.2.12 "Markup Pricing") when the extra revenue from selling one more unit of output (marginal revenue) is equal to the extra cost of producing one more unit (marginal cost). The firm’s decision rule is to select a point on the demand curve such that $marginal\ revenue = marginal\ cost.$ We can rearrange this condition to obtain a firm’s pricing rule: $price = markup \times marginal\ cost.$ Figure 31.2.12 "Markup Pricing" illustrates this pricing decision. The markup depends on the price elasticity of demand. When demand is relatively inelastic, firms have a lot of market power and set a high markup. This is not a “plug-and-play” formula because both the markup and marginal cost depend, in general, on the price that a firm chooses. However, it does provide a useful description of a firm’s decision.
Key Insights
• When marginal cost is higher, a firm sets a higher price.
• When demand is more inelastic (so a firm has more market power), the markup is higher, so a firm sets a higher price.
• When demand is perfectly elastic, the markup is 1, and the firm sets its price equal to marginal cost. This is the case of a competitive market.
• Any price you see has two components: the marginal cost and the markup. When a price changes, one or both of these must have changed.
Figure $12$: Markup Pricing
More Formally
We can derive the markup pricing formula as follows, where π = profit, R = revenues, C = costs, MR = marginal revenue, MC = marginal cost, P = price, Q = output, ε = (ΔQ/Q)/(ΔP/P) = elasticity of demand, and µ = markup. First we note that \[MR = \frac{\Delta R}{\Delta Q} = P\left(1 + \frac{1}{\varepsilon}\right).\] The firm sets marginal revenue equal to marginal cost: \[P\left(1 + \frac{1}{\varepsilon}\right) = MC.\] Rearranging, we obtain $P = \mu \times MC,$ where the markup is given by \[\mu = \frac{1}{1 + \frac{1}{\varepsilon}}.\]
31.17: Comparative Statics
Comparative statics is a tool used to predict the effects of exogenous variables on market outcomes. Exogenous variables shift either the market demand curve (for example, news about the health effects of consuming a product) or the market supply curve (for example, weather effects on a crop). By market outcomes, we mean the equilibrium price and the equilibrium quantity in a market. Comparative statics is a comparison of the market equilibrium before and after a change in an exogenous variable. A comparative statics exercise consists of a sequence of five steps:
1. Begin at an equilibrium point where the quantity supplied equals the quantity demanded.
2. Based on a description of the event, determine whether the change in the exogenous variable shifts the market supply curve or the market demand curve.
3. Determine the direction of this shift.
4. After shifting the curve, find the new equilibrium point.
5. Compare the new and old equilibrium points to predict how the exogenous event affects the market.
Figure 31.2.13 "A Shift in the Demand Curve" and Figure 31.2.14 "A Shift in the Supply Curve" show comparative statics in action in the market for Curtis Granderson replica shirts and the market for beer. In Figure 31.2.13 "A Shift in the Demand Curve", the market demand curve has shifted leftward. The consequence is that the equilibrium price and the equilibrium quantity both decrease. The demand curve shifts along a fixed supply curve. In Figure 31.2.14 "A Shift in the Supply Curve", the market supply curve has shifted leftward. The consequence is that the equilibrium price increases and the equilibrium quantity decreases. The supply curve shifts along a fixed demand curve.
Key Insights
• Comparative statics is used to determine the market outcome when the market supply and demand curves are shifting.
• Comparative statics is a comparison of equilibrium points.
• If the market demand curve shifts, then the new and old equilibrium points lie on a fixed market supply curve.
• If the market supply curve shifts, then the new and old equilibrium points lie on a fixed market demand curve.
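The five comparative statics steps can be mimicked with the linear supply and demand curves from the supply-and-demand toolkit. In the minimal Python sketch below (all parameter values invented for illustration), a leftward shift of demand lowers both the equilibrium price and the equilibrium quantity, as in Figure 31.2.13.

```python
# Comparative statics with linear curves: demand qd = a - b*p and
# supply qs = d + e*p. Parameter values are illustrative.
def equilibrium(a, b, d, e):
    p = (a - d) / (b + e)   # price where quantity supplied = demanded
    q = d + e * p
    return p, q

before = equilibrium(a=100, b=2, d=10, e=3)
after = equilibrium(a=80, b=2, d=10, e=3)   # demand shifts leftward
print(before)  # (18.0, 64.0)
print(after)   # (14.0, 52.0): both price and quantity fall
```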
The production function characterizes the output of a firm given the inputs it uses. The link between inputs and output is shown in Figure 31.2.15 "The Production Function". The production function combines a firm’s physical capital stock, labor, raw materials (or intermediate inputs), and technology to produce output. Technology is the knowledge (the “blueprints”) that the firm possesses, together with managerial skills. Production functions generally have two important properties:
1. Positive marginal product of an input
2. Diminishing marginal product of an input
By input, we mean any of the factors of production, such as physical capital, labor, or raw materials. The marginal product of an input is the extra output obtained if extra input is used. In this conceptual exercise, all other inputs are held fixed so that we change only one input at a time. The first property asserts that additional output will be obtained from additional units of an input. Adding another machine, another worker, some more fuel, and so on, increases the output of a firm. A positive marginal product does not necessarily mean that the extra output is profitable: it might be that the cost of the extra input is high relative to the value of the additional output obtained. The second property explains how the marginal product of an input changes as we increase the amount of that input, keeping the quantities of other inputs fixed. An additional unit of an input will (usually) increase output more when there is a small (rather than a large) amount of that input being used. For example, the extra output obtained from adding the first machine is greater than the additional output obtained from adding the 50th machine. A simple production function relating output to labor input is shown in Figure 31.2.16 "Labor Input in the Production Function". This figure illustrates the two properties of positive and diminishing marginal product of labor. As more labor is added, output increases: there is a positive marginal product of labor (that is, the slope of the relationship is positive). But the extra output obtained from adding labor is greater when the labor input is low: there is diminishing marginal product of labor. From the graph, the slope of the production function (which is the marginal product of labor) is greater at low levels of the labor input.
Key Insights
• The production function shows the output produced by a firm given its inputs.
• The production function displays two important properties: positive marginal product and diminishing marginal product.
31.19: Nash Equilibrium
A Nash equilibrium is used to predict the outcome of a game. By a game, we mean the interaction of a few individuals, called players. Each player chooses an action and receives a payoff that depends on the actions chosen by everyone in the game. A Nash equilibrium is an action for each player that satisfies two conditions:
1. The action yields the highest payoff for that player given her predictions about the other players’ actions.
2. The player’s predictions of others’ actions are correct.
Thus a Nash equilibrium has two dimensions. Players make decisions that are in their own self-interests, and players make accurate predictions about the actions of others. Consider the games in Table 31.2.6 "Prisoners’ Dilemma", Table 31.2.7 "Dictator Game", Table 31.2.8 "Ultimatum Game", and Table 31.2.9 "Coordination Game".
The numbers in the tables give the payoff to each player from the actions that can be taken, with the payoff of the row player listed first.

| Left | Right
Up | 5, 5 | 0, 10
Down | 10, 0 | 2, 2

Table $6$: Prisoners’ Dilemma

Number of dollars (x) | 100 − x, x

Table $7$: Dictator Game

| Accept | Reject
Number of dollars (x) | 100 − x, x | 0, 0

Table $8$: Ultimatum Game

| Left | Right
Up | 5, 5 | 0, 1
Down | 1, 0 | 4, 4

Table $9$: Coordination Game

• Prisoners’ dilemma. The row player chooses between the action labeled Up and the one labeled Down. The column player chooses between the action labeled Left and the one labeled Right. For example, if row chooses Up and column chooses Right, then the row player has a payoff of 0, and the column player has a payoff of 10. If the row player predicts that the column player will choose Left, then the row player should choose Down (that is, Down for the row player is her best response to Left by the column player). From the column player’s perspective, if he predicts that the row player will choose Up, then the column player should choose Right. The Nash equilibrium occurs when the row player chooses Down and the column player chooses Right. Our two conditions for a Nash equilibrium of making optimal choices and predictions being right both hold.
• Social dilemma. This is a version of the prisoners’ dilemma in which there are a large number of players, all of whom face the same payoffs.
• Dictator game. The row player is called the dictator. She is given \$100 and is asked to choose how many dollars (x) to give to the column player. Then the game ends. Because the column player does not move in this game, the dictator game is simple to analyze: if the dictator is interested in maximizing her payoff, she should offer nothing (x = 0).
• Ultimatum game. This is like the dictator game except there is a second stage. In the first stage, the row player is given \$100 and told to choose how much to give to the column player. In the second stage, the column player accepts or rejects the offer. If the column player rejects the offer, neither player receives any money. The best choice of the row player is then to offer a penny (the smallest amount of money there is). The best choice of the column player is to accept. This is the Nash equilibrium.
• Coordination game. The coordination game has two Nash equilibria. If the column player plays Left, then the row player plays Up; if the row player plays Up, then the column player plays Left. This is an equilibrium. But Down/Right is also a Nash equilibrium. Both players prefer Up/Left, but it is possible to get stuck in a bad equilibrium.
Key Insights
• A Nash equilibrium is used to predict the outcome of games.
• In real life, payoffs may be more complicated than these games suggest. Players may be motivated by fairness or spite.
More Formally
We describe a game with three players (1, 2, 3), but the idea generalizes straightforwardly to situations with any number of players. Each player chooses a strategy (s1, s2, s3). Suppose σ1(s1, s2, s3) is the payoff to player 1 if (s1, s2, s3) is the list of strategies chosen by the players (and similarly for players 2 and 3). We put an asterisk (*) to denote the best strategy chosen by a player.
Then a list of strategies (s*1, s*2, s*3) is a Nash equilibrium if the following statements are true: $\sigma_{1}\left(s^{*}_{1}, s^{*}_{2}, s^{*}_{3}\right) \geq \sigma_{1}\left(s_{1}, s^{*}_{2}, s^{*}_{3}\right),$ $\sigma_{2}\left(s^{*}_{1}, s^{*}_{2}, s^{*}_{3}\right) \geq \sigma_{2}\left(s^{*}_{1}, s_{2}, s^{*}_{3}\right),$ and $\sigma_{3}\left(s^{*}_{1}, s^{*}_{2}, s^{*}_{3}\right) \geq \sigma_{3}\left(s^{*}_{1}, s^{*}_{2}, s_{3}\right).$ In words, the first condition says that, given that players 2 and 3 are choosing their best strategies (s*2, s*3), then player 1 can do no better than to choose strategy s*1. If a similar condition holds for every player, then we have a Nash equilibrium.
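The two Nash conditions (best responses plus correct predictions) reduce, for pure strategies in a small game, to a check that no player can gain by deviating alone. Here is a minimal Python sketch that applies that check to the prisoners' dilemma payoffs in Table 31.2.6.

```python
# Brute-force search for pure-strategy Nash equilibria of a 2x2 game.
# Payoffs are (row player, column player); prisoners' dilemma payoffs.
payoffs = {
    ("Up", "Left"): (5, 5),    ("Up", "Right"): (0, 10),
    ("Down", "Left"): (10, 0), ("Down", "Right"): (2, 2),
}
rows, cols = ("Up", "Down"), ("Left", "Right")

def is_nash(r, c):
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in rows)
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in cols)
    return row_ok and col_ok

print([(r, c) for r in rows for c in cols if is_nash(r, c)])
# [('Down', 'Right')] -- swapping in the coordination game payoffs
# from Table 31.2.9 yields two equilibria instead.
```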
Some economic transactions have effects on individuals not directly involved in that transaction. When this happens, we say there is an externality present. An externality is generated by a decision maker who disregards the effects of his actions on others. In the case of a positive externality, the individual’s actions increase the welfare of others (for example, research and development by firms). In the case of a negative externality, an individual’s actions decrease the welfare of others (for example, pollution). Economic outcomes are not efficient when externalities are present. So the government may be able to improve on the private outcome. The possible remedies are as follows: • Subsidies (in the case of positive externalities) and taxes (in the case of negative externalities) • The creation of markets by the government If people are altruistic, then they may instead take into account others’ welfare and may internalize some of the effects of their actions. We typically see externalities associated with nonexcludable goods (or resources)—goods for which it is impossible to selectively deny access. In other words, it is not possible to let some people consume the good while preventing others from consuming it. An excludable good (or resource) is one to which we can selectively allow or deny access. If a good is nonexcludable or partially excludable, there are positive externalities associated with its production and negative externalities associated with its consumption. We say that a good is a rival if one person’s consumption of the good prevents others from consuming the good. Most of the goods we deal with in economics are rival goods. A good is nonrival if one person can consume the good without preventing others from consuming the same good. Knowledge is a nonrival good. If a good is both nonexcludable and nonrival, it is a public good. Key Insights • When externalities are present, the outcome is inefficient. • The market will typically not provide public goods. 31.21: Foreign Exchange Market A foreign exchange market is where one currency is traded for another. There is a demand for each currency and a supply of each currency. In these markets, one currency is bought using another. The price of one currency in terms of another (for example, how many dollars it costs to buy one Mexican peso) is called the exchange rate. Foreign currencies are demanded by domestic households, firms, and governments that wish to purchase goods, services, or financial assets denominated in the currency of another economy. For example, if a US auto importer wants to buy a German car, the importer must buy euros. The law of demand holds: as the price of a foreign currency increases, the quantity of that currency demanded will decrease. Foreign currencies are supplied by foreign households, firms, and governments that wish to purchase goods, services, or financial assets denominated in the domestic currency. For example, if a Canadian bank wants to buy a US government bond, the bank must sell Canadian dollars. As the price of a foreign currency increases, the quantity supplied of that currency increases. Exchange rates are determined just like other prices—by the interaction of supply and demand. At the equilibrium exchange rate, the supply and demand for a currency are equal. Shifts in the supply or the demand for a currency lead to changes in the exchange rate. Because one currency is exchanged for another in a foreign exchange market, the demand for one currency entails the supply of another. 
Thus the dollar market for euros (where the price is dollars per euro and the quantity is euros) is the mirror image of the euro market for dollars (where the price is euros per dollar and the quantity is dollars). To be concrete, consider the demand for and the supply of euros. The supply of euros comes from the following: • European households and firms that wish to buy goods and services from countries that do not have the euro as their currency • European investors who wish to buy assets (government debt, stocks, bonds, etc.) that are denominated in currencies other than the euro The demand for euros comes from the following: • Households and firms in noneuro countries that wish to buy goods and services from Europe • Investors in noneuro countries that wish to buy assets (government debt, stocks, bonds, etc.) that are denominated in euros Figure 31.2.17 "The Foreign Exchange Market" shows the dollar market for euros. On the horizontal axis is the quantity of euros traded. On the vertical axis is the price in terms of dollars. The intersection of the supply and demand curves determines the equilibrium exchange rate. Figure \(17\): The Foreign Exchange Market The foreign exchange market can be used as a basis for comparative statics exercises. We can study how changes in an economy affect the exchange rate. For example, suppose there is an increase in the level of economic activity in the United States. This leads to an increase in the demand for European goods and services. To make these purchases, US households and firms will demand more euros. This causes an outward shift in the demand curve and an increase in the dollar price of euros. When the dollar price of a euro increases, we say that the dollar has depreciated relative to the euro. From the perspective of the euro, the depreciation of the dollar represents an appreciation of the euro. Key Insight • As the exchange rate increases (so a currency becomes more valuable), a greater quantity of the currency is supplied to the market and a smaller quantity is demanded.
If some variable x (for example, the number of gallons of gasoline sold in a week) changes from x1 to x2, then we can define the change in that variable as $\Delta x=x_{2}-x_{1}$. But there are difficulties with this simple definition. The number that we calculate will change, depending on the units in which we measure x. If we measure in millions of gallons, x will be a much smaller number than if we measure in gallons. If we measured x in liters rather than gallons (as it is measured in most countries), it would be a bigger number. So the number we calculate depends on the units we choose. To avoid these problems, we look at percentage changes and express the change as a fraction of the individual value. In what follows, we use the notation %Δx to mean the percentage change in x and define it as follows: $\% \Delta x=\left(x_{2}-x_{1}\right) / x_{1}$. A percentage change equal to 0.1 means that gasoline consumption increased by 10 percent. Why? Because 10 percent means 10 “per hundred,” so 10 percent = 10/100 = 0.1. Very often in economics, we are interested in changes that take place over time. Thus we might want to compare gross domestic product (GDP) between 2012 and 2013. Suppose we know that GDP in the United States in 2012 was \$14 trillion and that GDP in 2013 was \$14.7 trillion. Using the letter Y to denote GDP measured in trillions, we write \(Y_{2012} = 14.0\) and \(Y_{2013} = 14.7\). If we want to talk about GDP at different points in time without specifying a particular year, we use the notation \(Y_{t}\). We express the change in a variable over time in the form of a growth rate, which is just an example of a percentage change. Thus the growth rate of GDP in 2013 is calculated as follows: $\% \Delta Y_{2013}=\left(Y_{2013}-Y_{2012}\right) / Y_{2012}=(14.7-14) / 14=0.05$ The growth rate equals 5 percent. In general, we write $\% \Delta Y_{t+1}=\left(Y_{t+1}-Y_{t}\right) / Y_{t}$. Occasionally, we use the gross growth rate, which simply equals 1 + the growth rate. So, for example, the gross growth rate of GDP equals \(Y_{2013}/Y_{2012}\), or 1.05. There are some useful rules that describe the behavior of percentage changes and growth rates.
The Product Rule. Suppose we have three variables, x, y, and z, and suppose $x = yz.$ Then $\% \Delta x=\% \Delta y+\% \Delta z.$ In other words, the growth rate of a product of two variables equals the sum of the growth rates of the individual variables.
The Quotient Rule. Now suppose we rearrange our original equation by dividing both sides by z to obtain $y = x/z.$ If we take the product rule and subtract %Δz from both sides, we get the following: $\% \Delta y=\% \Delta x-\% \Delta z.$
The Power Rule. There is one more rule of growth rates that we make use of in some advanced topics, such as growth accounting. Suppose that $y=x^{a}$. Then $\% \Delta y=a(\% \Delta x)$. For example, if \(y = x^{2}\), then the growth rate of y is twice the growth rate of x. If \(y = x^{1/2}\) (that is, y is the square root of x), then the growth rate of y is half the growth rate of x (remembering that a square root is the same as a power of ½).
More Formally
Growth rates compound over time: if the growth rate of a variable is constant, then the change in the variable increases over time. For example, suppose GDP in 2020 is 20.0, and it grows at 10 percent per year. Then in 2021, GDP is 22.0 (an increase of 2.0), but in 2022, GDP is 24.2 (an increase of 2.2). If this compounding takes place every instant, then we say that we have exponential growth.
Formally, we write exponential growth using the number e = 2.71828.… If the value of Y at time 0 equals Y0 and if Y grows at the constant rate g (where g is an “annualized” or per year growth rate), then at time t (measured in years), $Y_{t}=e^{gt}Y_{0}$. A version of this formula can also be used to calculate the average growth rate of a variable if we know its value at two different times. We can write the formula as $e^{gt}=Y_{t}/Y_{0}$, which also means $gt=\ln(Y_{t}/Y_{0})$, where ln() is the natural logarithm. You do not need to know exactly what this means; you can simply calculate a logarithm using a scientific calculator or a spreadsheet. Dividing by t we get the average growth rate $g=\ln(Y_{t}/Y_{0})/t$. For example, suppose GDP in 2020 is 20.0 and GDP in 2030 is 28.0. Then \(Y_{2030}/Y_{2020}\) = 28/20 = 1.4. Using a calculator, we can find ln(1.4) = 0.3365. Dividing by 10 (since the two dates are 10 years apart), we get an average growth rate of 0.034, or 3.4 percent per year.
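The average growth rate formula at the end of this section is a one-liner in code. This minimal Python sketch redoes the 2020-to-2030 GDP example.

```python
# Average annual growth rate g = ln(Yt / Y0) / t for the GDP example.
import math

y_2020, y_2030 = 20.0, 28.0
g = math.log(y_2030 / y_2020) / 10   # the dates are 10 years apart
print(round(g, 3))  # 0.034, i.e., about 3.4 percent per year
```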
To start our presentation of descriptive statistics, we construct a data set using a spreadsheet program. The idea is to simulate the flipping of a two-sided coin. Although you might think it would be easier just to flip a coin, doing this on a spreadsheet gives you a full range of tools embedded in that program. To generate the data set, we drew 10 random numbers using the spreadsheet program. In the program we used, the function was called RAND and this generated the choice of a number between zero and one. Those choices are listed in the second column of Table 31.2.10. The third column creates the two events of heads and tails that we normally associate with a coin flip. To generate this last column, we adopted a rule: if the random number was less than 0.5, we termed this a “tail” and assigned a 0 to the draw; otherwise we termed it a “head” and assigned a 1 to the draw. The choice of 0.5 as the cutoff for heads reflects the fact that we are considering the flips of a fair coin in which each side has the same probability: 0.5.

Draw | Random Number | Heads (1) or Tails (0)
1 | 0.94 | 1
2 | 0.84 | 1
3 | 0.26 | 0
4 | 0.04 | 0
5 | 0.01 | 0
6 | 0.57 | 1
7 | 0.74 | 1
8 | 0.81 | 1
9 | 0.64 | 1
10 | 0.25 | 0

Table $10$
Keep in mind that the realization of the random number in draw i is independent of the realizations of the random numbers in both past and future draws. Whether a coin comes up heads or tails on any particular flip does not depend on other outcomes. There are many ways to summarize the information contained in a sample of data. Even before you start to compute some complicated statistics, having a way to present the data is important. One possibility is a bar graph in which the fraction of observations of each outcome is easily shown. Alternatively, a pie chart is often used to display this fraction. Both the pie chart and the bar diagram are commonly found in spreadsheet programs. Economists and statisticians often want to describe data in terms of numbers rather than figures. We use the data from the table to define and illustrate two statistics that are commonly used in economics discussions. The first is the mean (or average) and is a measure of central tendency. Before you read any further, ask, “What do you think the average ought to be from the coin flipping exercise?” It is natural to say 0.5, since half the time the outcome will be a tail and thus have a value of zero, whereas the remainder of the time the outcome will be a head and thus have a value of one. Whether or not that guess holds can be checked by looking at Table 31.2.10 and calculating the mean of the outcome. We let \(k_{i}\) be the outcome of draw i. For example, from the table, \(k_{1} = 1\) and \(k_{5} = 0\). Then the formula for the mean if there are N draws is $\mu=\Sigma_{i} k_{i} / N$. Here \(\Sigma_{i} k_{i}\) means the sum of the \(k_{i}\) outcomes. In words, the mean, denoted by μ, is calculated by adding together the draws and dividing by the number of draws (N). In the table, N = 10, and the sum of the draws of random numbers is about 5.1. Thus the mean of the 10 draws is about 0.51. We can also calculate the mean of the heads/tails column, which is 0.6 since heads came up 6 times in our experiment. This calculation of the mean differs from the mean of the draws since the numbers in the two columns differ, with the third column being a very discrete way to represent the information in the second column. A second commonly used statistic is a measure of dispersion of the data called the variance.
The variance, denoted $\sigma^2$, is calculated as $\sigma^{2}=\Sigma_{i}\left(k_{i}-\mu\right)^{2} /(N-1)$. From this formula, if all the draws were the same (and thus equal to the mean), the variance would be zero. As the draws spread out from the mean (both above and below), the variance increases. Since some observations are above the mean and others below, we square the difference between a single observation ($k_i$) and the mean (μ) when calculating the variance. This means that values above and below the mean both contribute a positive amount to the variance. Squaring also means that values a long way from the mean have a big effect on the variance. For the data given in the table, the mean of the 10 draws of random numbers was μ = 0.51. To calculate the variance, we subtract the mean from each draw, square the difference, add together the squared differences, and divide by N − 1. This yields a variance of 0.118 for these draws. A closely related concept is the standard deviation, which is the square root of the variance. For our example, the standard deviation is 0.34. Here the standard deviation is greater than the variance because the variance is less than 1.
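These calculations are easy to reproduce in a few lines of code. The following is a minimal sketch in Python (a spreadsheet would work equally well); it uses the ten draws from Table 31.2.10 and the sample-variance convention of dividing by N − 1, which is what the numbers in the text reflect.

```python
import statistics

draws = [0.94, 0.84, 0.26, 0.04, 0.01, 0.57, 0.74, 0.81, 0.64, 0.25]
heads = [1 if x >= 0.5 else 0 for x in draws]  # the heads/tails rule from the text

mu = statistics.mean(draws)       # (0.94 + ... + 0.25) / 10 = 0.51
var = statistics.variance(draws)  # sum of squared deviations / (N - 1), about 0.118
sd = statistics.stdev(draws)      # square root of the variance, about 0.34

print(round(mu, 2), round(var, 3), round(sd, 2))  # 0.51 0.118 0.34
print(statistics.mean(heads))                     # 0.6, the mean of the heads/tails column
```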
31.24: Correlation and Causality
Correlation is a statistical measure describing how two variables move together. In contrast, causality (or causation) goes deeper into the relationship between two variables by looking for cause and effect. Correlation is a statistical property that summarizes the way in which two variables move either over time or across people (firms, governments, etc.). The concept of correlation is quite natural to us, as we often take note of how two variables interrelate. If you think back to high school, you probably have a sense of how your classmates did in terms of two measures of performance: grade point average (GPA) and the results on a standardized college entrance exam (SAT or ACT). It is likely that classmates with high GPAs also had high scores on the SAT or ACT exam. In this instance, we would say that the GPA and SAT/ACT scores were positively correlated: looking across your classmates, when a person’s GPA is higher than average, that person’s SAT or ACT score is likely to be higher than average as well. As another example, consider the relationship between a household’s income and its expenditures on housing. If you conducted a survey across households, it is likely that you would find that richer households spend more on most goods and services, including housing. In this case, we would conclude that income and expenditures on housing are positively correlated. When economists look at data for a whole economy, they often focus on a measure of how much is produced, which we call real gross domestic product (real GDP), and the fraction of workers without jobs, called the unemployment rate. Over long periods of time, when GDP is above average (the economy is doing well), the unemployment rate is below average. In this case, GDP and the unemployment rate are negatively correlated, as they tend to move in opposite directions. The fact that one variable is correlated with another does not inform us about whether one variable causes the other. Imagine yourself on an airplane in a relaxed mood, reading or listening to music. Suddenly, the pilot comes on the public address system and requests that you buckle your seat belts. Usually, such a request is followed by turbulence. This is a correlation: the announcement by the pilot is positively correlated with air turbulence. The correlation is of course not perfect because sometimes you hit some bumps without warning, and sometimes the pilot’s announcement is not followed by turbulence. But—obviously—this does not mean that we could solve the turbulence problem by turning off the public address system. The pilot’s announcement does not cause the turbulence. The turbulence is there whether the pilot announces it or not. In fact, the causality runs the other way. The turbulence causes the pilot’s announcement. We noted earlier that real GDP and unemployment are negatively correlated. When real GDP is below average, as it is during a recession, the unemployment rate is typically above average. But what is the causality here? If unemployment caused recessions, we might be tempted to adopt a policy that makes unemployment illegal. For example, the government could fine firms if they lay off workers. This is not a good policy because we do not think that low unemployment causes high real GDP. Neither do we necessarily think that high real GDP causes low unemployment. Instead, based on economic theory, there are other influences that affect both real GDP and unemployment. 
More Formally

Suppose you have N observations of two variables, x and y, where $x_i$ and $y_i$ are the values of these variables in observation i = 1, 2, …, N. The mean of x, denoted $\mu_x$, is the sum of the values of x in the sample divided by N; the same applies for y:

$\mu_{x}=\frac{\Sigma_{i} x_{i}}{N}$ and $\mu_{y}=\frac{\Sigma_{i} y_{i}}{N}.$

We can also calculate the variance and standard deviations of x and y. The calculation for the variance of x, denoted $\sigma_{x}^{2}$, is as follows:

$\sigma_{x}^{2}=\frac{\Sigma_{i}\left(x_{i}-\mu_{x}\right)^{2}}{N}.$

The standard deviation of x is the square root of $\sigma_{x}^{2}$:

$\sigma_{x}=\sqrt{\sigma_{x}^{2}}.$

With these ingredients, the correlation of (x,y), denoted corr(x,y), is given by

$\operatorname{corr}(x, y)=\frac{\Sigma_{i}\left(x_{i}-\mu_{x}\right)\left(y_{i}-\mu_{y}\right)}{N \sigma_{x} \sigma_{y}}.$

(Whether we divide by N or by N − 1 does not matter for the correlation, as long as we do so consistently in the numerator and the denominator.)
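Here is a minimal Python sketch of these formulas. The GPA and SAT numbers are made up purely for illustration; the point is only that the statistic lands between −1 and +1 and is positive when the two variables move together.

```python
import math

# Hypothetical data: (GPA, SAT score) for five students -- illustrative only.
gpa = [2.8, 3.1, 3.3, 3.6, 3.9]
sat = [1050, 1120, 1250, 1300, 1410]

n = len(gpa)
mu_x = sum(gpa) / n
mu_y = sum(sat) / n

# Covariance and standard deviations, dividing by N throughout.
cov = sum((x - mu_x) * (y - mu_y) for x, y in zip(gpa, sat)) / n
sd_x = math.sqrt(sum((x - mu_x) ** 2 for x in gpa) / n)
sd_y = math.sqrt(sum((y - mu_y) ** 2 for y in sat) / n)

corr = cov / (sd_x * sd_y)
print(round(corr, 3))  # close to +1: high GPAs go with high SAT scores
```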
31.25: The Credit (Loan) Market (Macro)
Consider a simple example of a loan. Imagine you go to your bank to inquire about a loan of \$1,000, to be repaid in one year's time. A loan is a contract that specifies three things:

1. The amount being borrowed (in this example, \$1,000)
2. The date(s) at which repayment must be made (in this example, one year from now)
3. The amount that must be repaid

What determines the amount of the repayment? The lender—the bank—is a supplier of credit, and the borrower—you—is a demander of credit. We use the terms credit and loans interchangeably. The higher the repayment amount, the more attractive this loan contract will look to the bank. Conversely, the lower the repayment amount, the more attractive this contract is to you. If there are lots of banks that are willing to supply such loans, and lots of people like you who demand such loans, then we can draw supply and demand curves in the credit (loan) market. The equilibrium price of this loan is the interest rate at which supply equals demand.

In macroeconomics, we look not only at individual markets like this but also at the credit (loan) market for an entire economy. This market brings together suppliers of loans, such as households that are saving, and demanders of loans, such as businesses and households that need to borrow. The real interest rate is the "price" that brings demand and supply into balance.

The supply of loans in the domestic loans market comes from three different sources:

1. The private saving of households and firms
2. The saving of governments (in the case of a government surplus)
3. The saving of foreigners (when there is a flow of capital into the domestic economy)

Households will generally respond to an increase in the real interest rate by reducing current consumption relative to future consumption. Households that are saving will save more; households that are borrowing will borrow less. Higher interest rates also encourage foreigners to send funds to the domestic economy. Government saving or borrowing is little affected by interest rates.

The demand for loans comes from three different sources:

1. The borrowing of households and firms to finance purchases, such as housing, durable goods, and investment goods
2. The borrowing of governments (in the case of a government deficit)
3. The borrowing of foreigners (when there is a flow of capital from the domestic economy)

As the real interest rate increases, investment and durable goods spending decrease. For firms, a high interest rate represents a high cost of funding investment expenditures. This is an application of discounted present value and is evident if a firm borrows to purchase capital. It is also true if the firm uses internal funds (retained earnings) to finance investment, because it could always put those funds into an interest-bearing asset instead. For households, higher interest rates likewise make it more costly to borrow to purchase housing and durable goods. The demand for credit decreases as the interest rate rises: when it is expensive to borrow, households and firms will borrow less.

Equilibrium in the market for loans is shown in Figure 31.2.18 "The Credit Market". On the horizontal axis is the total quantity of loans in equilibrium. The demand curve for loans is downward sloping, whereas the supply curve has a positive slope. Loan market equilibrium occurs at the real interest rate where the quantity of loans supplied equals the quantity of loans demanded.
At this equilibrium real interest rate, lenders lend as much as they wish, and borrowers can borrow as much as they wish. Equilibrium in the aggregate credit market is what ensures the balance of flows into and out of the financial sector in the circular flow diagram.

Figure $18$: The Credit Market

Key Insights

• As the real interest rate increases, more loans are supplied, and fewer loans are demanded.
• Adjustment of the real interest rate ensures that, in the circular flow diagram, the flows into the financial sector equal the flows from the sector.
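To make the equilibrium concrete, here is a small Python sketch with hypothetical linear supply and demand curves for loans; the functional forms and numbers are assumptions for illustration, not taken from the text.

```python
# Hypothetical linear credit market (quantities in billions of dollars, r in percent).
def loans_supplied(r):
    return 100 + 30 * r   # savers supply more credit at higher real interest rates

def loans_demanded(r):
    return 400 - 45 * r   # borrowers demand less credit at higher real interest rates

# Equilibrium: 100 + 30r = 400 - 45r  =>  r* = 300 / 75 = 4 percent.
r_star = (400 - 100) / (30 + 45)
print(r_star, loans_supplied(r_star))  # 4.0 percent and 220.0 = equilibrium quantity of loans
```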
31.26: The Fisher Equation: Nominal and Real Interest Rates
When you borrow or lend, you normally do so in dollar terms. If you take out a loan, the loan is denominated in dollars, and your promised payments are denominated in dollars. These dollar flows must be corrected for inflation to calculate the repayment in real terms. A similar point holds if you are a lender: you need to calculate the interest you earn on saving by correcting for inflation.

The Fisher equation provides the link between nominal and real interest rates. To convert from nominal interest rates to real interest rates, we use the following formula:

$real\ interest\ rate ≈ nominal\ interest\ rate − inflation\ rate.$

To find the real interest rate, we take the nominal interest rate and subtract the inflation rate. For example, if a loan has a 12 percent interest rate and the inflation rate is 8 percent, then the real return on that loan is 4 percent.

In calculating the real interest rate, we used the actual inflation rate. This is appropriate when you wish to understand the real interest rate actually paid under a loan contract. But at the time a loan agreement is made, the inflation rate that will occur in the future is not known with certainty. Instead, the borrower and lender use their expectations of future inflation to determine the interest rate on a loan. From that perspective, we use the following formula:

$contracted\ nominal\ interest\ rate ≈ real\ interest\ rate + expected\ inflation\ rate.$

We use the term contracted nominal interest rate to make clear that this is the rate set at the time of a loan agreement, not the realized real interest rate.

Key Insight

• To correct a nominal interest rate for inflation, subtract the inflation rate from the nominal interest rate.

More Formally

Imagine two individuals write a loan contract to borrow P dollars at a nominal interest rate of i. This means that next year the amount to be repaid will be $P \times (1 + i)$. This is a standard loan contract with a nominal interest rate of i.

Now imagine that the individuals decide to write a loan contract that guarantees a constant real return (in terms of goods, not dollars), denoted r. The contract provides P this year in return for being repaid enough dollars next year to buy (1 + r) times the quantity of goods that P buys today. If the inflation rate is π, then the price level has risen by a factor of (1 + π), so the repayment in dollars for a loan of P dollars would be $P \times (1 + r) \times (1 + \pi)$. The inflation rate $\pi_{t+1}$ is defined—as usual—as the percentage change in the price level from period t to period t + 1:

$\pi_{t+1}=\left(P_{t+1}-P_{t}\right) / P_{t},$

where $P_t$ here denotes the price level in period t. If a period is one year, then the price level next year is equal to the price level this year multiplied by $(1 + \pi_{t+1})$:

$P_{t+1}=(1+\pi_{t+1}) \times P_{t}.$

The Fisher equation says that these two contracts should be equivalent:

$(1 + i) = (1 + r) \times (1 + \pi).$

As an approximation, this equation implies

$i ≈ r + \pi.$

To see this, multiply out the right-hand side and subtract 1 from each side to obtain

$i = r + \pi + r\pi.$

If r and π are small numbers, then $r\pi$ is a very small number and can safely be ignored. For example, if r = 0.02 and π = 0.03, then $r\pi = 0.0006$, and our approximation is about 99 percent accurate.
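The gap between the exact Fisher equation and the approximation is easy to check numerically; here is a quick Python sketch using the 12 percent/8 percent example from the text.

```python
i = 0.12   # nominal interest rate on the loan
pi = 0.08  # inflation rate over the life of the loan

# Exact real rate from (1 + i) = (1 + r)(1 + pi), solved for r.
r_exact = (1 + i) / (1 + pi) - 1
# Approximation: real rate is roughly the nominal rate minus inflation.
r_approx = i - pi

print(round(r_exact, 4))   # 0.037: about 3.7 percent
print(round(r_approx, 4))  # 0.04: the 4 percent quoted in the text
```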
31.27: The Aggregate Production Function
The aggregate production function describes how total real gross domestic product (real GDP) in an economy depends on available inputs. Aggregate output (real GDP) depends on the following:

• Physical capital—machines, production facilities, and so forth that are used in production
• Labor—the number of hours that are worked in the entire economy
• Human capital—skills and education embodied in the workforce of the economy
• Knowledge—basic scientific knowledge, and blueprints that describe the available production processes
• Social infrastructure—the general business, legal, and cultural environment
• The amount of natural resources available in an economy
• Anything else that we have not yet included

We group the inputs other than labor, physical capital, and human capital together, and call them technology.

The aggregate production function has several key properties. First, output increases when there are increases in physical capital, labor, and natural resources. In other words, the marginal products of these inputs are all positive. Second, the increase in output from adding more inputs is lower when we have more of a factor. This is called diminishing marginal product. That is,

• The more capital we have, the less additional output we obtain from additional capital.
• The more labor we have, the less additional output we obtain from additional labor.
• The more natural resources we have, the less additional output we obtain from additional resources.

In addition, increases in output can also come from increases in human capital, knowledge, and social infrastructure. In contrast to capital and labor, we do not assume that there are diminishing returns to human capital and technology. One reason is that we do not have a natural or obvious measure for human capital, knowledge, or social infrastructure, whereas we do for labor and capital (hours of work and hours of capital usage).

Figure 31.2.19 shows the relationship between output and capital, holding fixed the level of other inputs. This figure shows two properties of the aggregate production function. As capital input is increased, output increases as well. But the change in output obtained by increasing the capital stock is lower when the capital stock is higher: this is the diminishing marginal product of capital.

Figure $19$

In many applications, we want to understand how the aggregate production function responds to variations in the technology or other inputs. This is illustrated in Figure 31.2.20. An increase in, say, technology means that for a given level of the capital stock, more output is produced: the production function shifts upward as technology increases. Further, as technology increases, the production function becomes steeper: the increase in technology increases the marginal product of capital.

Figure $20$

Key Insight

• The aggregate production function allows us to determine the output of an economy given inputs of capital, labor, human capital, and technology.

Specific Forms for the Production Function

We can write the production function in mathematical form. We use Y to represent real GDP, K to represent the physical capital stock, L to represent labor, H to represent human capital, and A to represent technology (including natural resources). If we want to speak about production completely generally, then we can write $Y = F(K,L,H,A)$.
Here F() means “some function of.” A lot of the time, economists work with a production function that has a specific mathematical form, yet is still reasonably simple: $Y = A \times K^{a} \times (L \times H)^{(1-a)},$ where a is just a number. This is called a Cobb-Douglas production function. It turns out that this production function does a remarkably good job of summarizing aggregate production in the economy. In fact, we also know that we can describe production in actual economies quite well if we suppose that a = 1/3.
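A short Python sketch makes the two key properties concrete. The input values below are arbitrary; with a = 1/3, each extra block of capital raises output by less than the one before (diminishing marginal product), and raising A scales output up for any level of K.

```python
A = 1.0    # technology
L = 100.0  # labor hours
H = 1.0    # human capital
a = 1 / 3  # the exponent in the Cobb-Douglas function

def output(K, A=A):
    """Cobb-Douglas aggregate production function: Y = A * K^a * (L*H)^(1-a)."""
    return A * K**a * (L * H) ** (1 - a)

# Diminishing marginal product of capital: successive 10-unit increments
# of K add less and less to Y.
for K in (10, 20, 30, 40):
    print(K, round(output(K) - output(K - 10), 2))

# An increase in technology shifts the whole function up.
print(round(output(30, A=1.1) / output(30), 2))  # 1.1: output rises by 10 percent
```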
31.28: The Circular Flow of Income
The circular flow of income describes the flows of money among the five main sectors of an economy. As individuals and firms buy and sell goods and services, money flows among the different sectors of an economy. The circular flow of income describes these flows of dollars (pesos, euros, or whatever). From a simple version of the circular flow, we learn that—as a matter of accounting—

\[gross\ domestic\ product (GDP) = income = production = spending.\]

This relationship lies at the heart of macroeconomic analysis. There are two sides to every transaction. Corresponding to the flows of money in the circular flow, there are flows of goods and services among these sectors. For example, the wage income received by consumers is in return for labor services that flow from households to firms. The consumption spending of households is in return for the goods and services that flow from firms to households.

A complete version of the circular flow is presented in Figure 31.2.21. (Chapter 18 "The State of the Economy" contains a discussion of a simpler version of the circular flow with only two sectors: households and firms.) The complete circular flow has five sectors: a household sector, a firm sector, a government sector, a foreign sector, and a financial sector. Different chapters of the book emphasize different pieces of the circular flow, and Figure 31.2.21 shows us how everything fits together. In the following subsections, we look at the flows into and from each sector in turn. In each case, the balance of the flows into and from each sector underlies a useful economic relationship.

The Firm Sector

Figure 31.2.21 includes the component of the circular flow associated with the flows into and from the firm sector of an economy. We know that the total flow of dollars from the firm sector measures the total value of production in an economy. The total flow of dollars into the firm sector equals total expenditures on GDP. We therefore know that

\[production = consumption + investment + government\ purchases + net\ exports.\]

This equation is called the national income identity and is the most fundamental relationship in the national accounts. By consumption we mean total consumption expenditures by households on final goods and services. Investment refers to the purchase of goods and services that, in one way or another, help to produce more output in the future. Government purchases include all purchases of goods and services by the government. Net exports, which equal exports minus imports, measure the expenditure flows associated with the rest of the world.

The Household Sector

The household sector summarizes the behavior of private individuals in their roles as consumers/savers and suppliers of labor. The balance of flows into and from this sector is the basis of the household budget constraint. Households receive income from firms, in the form of wages and in the form of dividends resulting from their ownership of firms. The income that households have available to them after all taxes have been paid to the government and all transfers received is called disposable income. Households spend some of their disposable income and save the rest. In other words,

\[disposable\ income = consumption + household\ savings.\]

This is the household budget constraint. In Figure 31.2.21, this equation corresponds to the fact that the flows into and from the household sector must balance.

The Government Sector

The government sector summarizes the actions of all levels of government in an economy.
Governments tax their citizens, pay transfers to them, and purchase goods from the firm sector of the economy. Governments also borrow from or lend to the financial sector. The amount that the government collects in taxes need not equal the amount that it pays out for government purchases and transfers. If the government spends more than it gathers in taxes, then it must borrow from the financial markets to make up the shortfall. The circular flow figure shows two flows into the government sector and two flows out. Since the flows into and from the government sector must balance, we know that

\[government\ purchases + transfers = tax\ revenues + government\ borrowing.\]

Government borrowing is sometimes referred to as the government budget deficit. This equation is the government budget constraint. Some of the flows in the circular flow can go in either direction. When the government is running a deficit, there is a flow of dollars to the government sector from the financial markets. Alternatively, the government may run a surplus, meaning that its revenues from taxation are greater than its spending on purchases and transfers. In this case, the government is saving rather than borrowing, and there is a flow of dollars to the financial markets from the government sector.

The Foreign Sector

The circular flow includes a country's dealings with the rest of the world. These flows include exports, imports, and borrowing from other countries. Exports are goods and services produced in one country and purchased by households, firms, and governments of another country. Imports are goods and services purchased by households, firms, and governments in one country but produced in another country. Net exports are exports minus imports. When net exports are positive, a country is running a trade surplus: exports exceed imports. When net exports are negative, a country is running a trade deficit: imports exceed exports. The third flow between countries is borrowing and lending. Governments, individuals, and firms in one country may borrow from or lend to another country.

Net exports and borrowing are linked. If a country runs a trade deficit, it borrows from other countries to finance that deficit. If we look at the flows into and from the foreign sector, we see that

\[borrowing\ from\ other\ countries + exports = imports.\]

Subtracting exports from both sides, we obtain

\[borrowing\ from\ other\ countries = imports − exports = trade\ deficit.\]

Whenever our economy runs a trade deficit, we are borrowing from other countries. If our economy runs a trade surplus, then we are lending to other countries.

This analysis has omitted one detail. When we lend to other countries, we acquire their assets, so each year we get income from those assets. When we borrow from other countries, they acquire our assets, so we pay them income on those assets. Those income flows are added to the trade surplus/deficit to give the current account of the economy. It is the current account that must be matched by borrowing from or lending to other countries. A positive current account means that net exports plus net income flows from the rest of the world are positive. In this case, our economy is lending to the rest of the world and acquiring more assets.

The Financial Sector

The financial sector of an economy summarizes the behavior of banks and other financial institutions. The balance of flows into and from the financial sector tells us that investment is financed by national savings and borrowing from abroad.
The financial sector is at the heart of the circular flow. The figure shows four flows into and from the financial sector.

1. Households divide their after-tax income between consumption and savings. Thus any income that they receive today but wish to put aside for the future is sent to the financial markets. The household sector as a whole saves, so, on net, there is a flow of dollars from the household sector into the financial markets.
2. The flow of money from the financial sector into the firm sector provides the funds that are available to firms for investment purposes.
3. The flow of dollars between the financial sector and the government sector reflects the borrowing (or lending) of governments. The flow can go in either direction. When government expenditures exceed government revenues, the government must borrow from the private sector, and there is a flow of dollars from the financial sector to the government. This is the case of a government deficit. When the government's revenues are greater than its expenditures, by contrast, there is a government surplus and a flow of dollars into the financial sector.
4. The flow of dollars between the financial sector and the foreign sector can also go in either direction. An economy with positive net exports (a trade surplus) is lending to other countries: there is a flow of money out of the economy. An economy with negative net exports (a trade deficit) is borrowing from other countries.

The national savings of the economy is the savings carried out by the private and government sectors taken together. When the government is running a deficit, some of the savings of households and firms must be used to fund that deficit, so there is less left over to finance investment. National savings is then equal to private savings minus the government deficit—that is, private savings minus government borrowing:

\[national\ savings = private\ savings − government\ borrowing.\]

If the government is running a surplus, then

\[national\ savings = private\ savings + government\ surplus.\]

National savings is therefore the amount that an economy as a whole saves. It is equal to what is left over after we subtract consumption and government spending from GDP. To see this, notice that

\[private\ savings − government\ borrowing = income − taxes + transfers − consumption − (government\ purchases + transfers − taxes) = income − consumption − government\ purchases.\]

This is the domestic money that is available for investment. If we are borrowing from other countries, there is another source of funds for investment. The flows into and from the financial sector must balance, so

\[investment = national\ savings + borrowing\ from\ other\ countries.\]

Conversely, if we are lending to other countries, then our national savings is divided between investment and lending to other countries:

\[national\ savings = investment + lending\ to\ other\ countries.\]
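The accounting in this section can be verified with a few lines of arithmetic. The numbers below are hypothetical, chosen only so that the identities hold; the check at the end confirms that investment equals national savings plus borrowing from other countries.

```python
# Hypothetical national accounts (all in billions of dollars).
consumption = 60.0
investment = 25.0
government_purchases = 20.0
net_exports = -5.0  # a trade deficit
income = consumption + investment + government_purchases + net_exports  # GDP = 100

net_taxes = 18.0  # taxes minus transfers

private_savings = income - net_taxes - consumption         # 22
government_borrowing = government_purchases - net_taxes    # 2 (a deficit)
national_savings = private_savings - government_borrowing  # 20

borrowing_from_abroad = -net_exports                       # the trade deficit = 5

# The flows into and from the financial sector must balance:
assert investment == national_savings + borrowing_from_abroad
print(income, national_savings, borrowing_from_abroad)     # 100.0 20.0 5.0
```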
31.29: Growth Accounting
Growth accounting is a tool that tells us how changes in real gross domestic product (real GDP) in an economy are due to changes in available capital, labor, human capital, and technology. Economists have shown that, under reasonably general circumstances, the change in output in an economy can be written as follows:

$output\ growth\ rate = a \times capital\ stock\ growth\ rate + [(1 − a) \times labor\ hours\ growth\ rate] + [(1 − a) \times human\ capital\ growth\ rate] + technology\ growth\ rate.$

In this equation, a is just a number. For example, if a = 1/3, the growth in output is as follows:

$output\ growth\ rate = (1/3 \times capital\ stock\ growth\ rate) + (2/3 \times labor\ hours\ growth\ rate) + (2/3 \times human\ capital\ growth\ rate) + technology\ growth\ rate.$

Growth rates can be positive or negative, so we can use this equation to analyze decreases in GDP as well as increases. This expression for the growth rate of output, by the way, is obtained by applying the rules of growth rates to the Cobb-Douglas aggregate production function (discussed in the sections "Growth Rates" and "The Aggregate Production Function").

What can we measure in this expression? We can measure the growth in output, the growth in the capital stock, and the growth in labor hours. Human capital is more difficult to measure, but we can use information on schooling, literacy rates, and so forth. We cannot, however, measure the growth rate of technology. So we use the growth accounting equation to infer the growth in technology from the things we can measure. Rearranging the growth accounting equation,

$technology\ growth\ rate = output\ growth\ rate − (a \times capital\ stock\ growth\ rate) − [(1 − a) \times labor\ hours\ growth\ rate] − [(1 − a) \times human\ capital\ growth\ rate].$

So if we know the number a, we are done—we can use measures of the growth in output, labor, capital stock, and human capital to solve for the technology growth rate. In fact, we do have a way of measuring a. The technical details are not important here, but a good measure of (1 − a) is simply the total payments to labor in the economy (that is, the total of wages and other compensation) as a fraction of overall GDP. For most economies, a is in the range of about 1/3 to 1/2.

Key Insight

• The growth accounting tool allows us to determine the contributions of the various factors of economic growth.

31.30: The Solow Growth Model

The analysis in Chapter 21 "Global Prosperity and Global Poverty" is (implicitly) based on a theory of economic growth known as the Solow growth model. Here we present two formal versions of the mathematics of the model. The first takes as its focus the capital accumulation equation and explains how the capital stock evolves in the economy. This version ignores the role of human capital and ignores the long-run growth path of the economy. The second follows the exposition of the chapter and is based around the derivation of the balanced-growth path. They are, however, simply two different ways of approaching the same problem.

Presentation 1

There are three components of this presentation of the model: technology, capital accumulation, and saving. The first component of the Solow growth model is the specification of technology and comes from the aggregate production function. We express output per worker (y) as a function of capital per worker (k) and technology (A).
A mathematical expression of this relationship is

$y = Af(k),$

where f(k) means that output per worker depends on capital per worker. As in our presentation of production functions, output increases with technology. We assume that f() has the property that more capital leads to more output per capita at a diminishing rate. As an example, suppose

$y = Ak^{1/3}.$

In this case, the marginal product of capital is positive but diminishing.

The second component is capital accumulation. If we let $k_t$ be the amount of capital per capita at the start of year t, then we know that

$k_{t+1}=k_{t}(1-\delta)+i_{t}.$

This expression shows how the capital stock changes over time. Here δ is the rate of physical depreciation, so that between year t and year t + 1, $\delta k_{t}$ units of capital are lost from depreciation. But during year t, there is investment ($i_t$) that yields new capital in the following year.

The final component of the Solow growth model is saving. In a closed economy, saving is the same as investment. Thus we link $i_t$ in the accumulation equation to saving. Assume that saving per capita ($s_t$) is given by

$s_{t} = s \times y_{t}.$

Here s is a constant between zero and one, so only a fraction of total output is saved. Using the fact that saving equals investment, along with the per capita production function, we can relate investment to the level of capital:

$i_{t}=sAf(k_{t}).$

We can then write the equation for the evolution of the capital stock as follows:

$k_{t+1}=k_{t}(1-\delta)+s A f\left(k_{t}\right).$

Once we have specified the function f(), we can follow the evolution of the capital stock over time. Generally, the path of the capital stock over time has two important properties:

1. Steady state. There is a particular level of the capital stock such that if the economy accumulates that amount of capital, it stays at that level of capital. We call this the steady-state level of capital, denoted k*.
2. Stability. The economy will tend toward the per capita capital stock k*.

To be more specific, the steady-state level of capital solves the following equation:

$k^{*}=k^{*}(1-\delta)+s A f\left(k^{*}\right).$

At the steady state, the amount of capital lost to depreciation is exactly offset by saving. This means that at the steady state, net investment is exactly zero. The property of stability means that if the current capital stock is below k*, the economy will accumulate capital so that $k_{t+1}>k_{t}$, and if the current capital stock is above k*, the economy will decumulate capital so that $k_{t+1}<k_{t}$.

If two countries share the same technology (A) and the same production function [f(k)], then over time these two countries will eventually have the same stock of capital per worker. If there are differences in the technology or the production function, then there is no reason for the two countries to converge to the same level of capital stock per worker.

Presentation 2

In this presentation, we explain the balanced-growth path of the economy and prove some of the claims made in the text. The model takes as given (exogenous) the investment rate; the depreciation rate; and the growth rates of the workforce, human capital, and technology. The endogenous variables are output and the physical capital stock. The notation for the presentation is given in Table 31.2.11 "Notation in the Solow Growth Model". We use the notation $g_x$ to represent the growth rate of a variable x; that is, $g_{x}=\Delta x / x$, the change in the variable divided by its level. There are two key ingredients to the model: the aggregate production function and the equation for capital accumulation.
Variable | Symbol
Real gross domestic product | Y
Capital stock | K
Human capital | H
Workforce | L
Technology | A
Investment rate | i
Depreciation rate | δ

Table $11$: Notation in the Solow Growth Model

The Production Function

The production function we use is the Cobb-Douglas production function:

$Y = K^{a}(HL)^{1-a}A.$

Growth Accounting

If we apply the rules of growth rates to this production function, we get the following expression:

$g_{Y}=ag_{K}+(1-a)(g_{L}+g_{H})+g_{A}.$

Balanced Growth

The condition for balanced growth is that $g_{Y}=g_{K}$. When we impose this condition on our equation for the growth rate of output, we get

$g_{Y}^{BG}=ag_{Y}^{BG}+(1-a)\left(g_{L}+g_{H}\right)+g_{A},$

where the superscript "BG" indicates that we are considering the values of variables when the economy is on a balanced-growth path. This equation simplifies to

$g_{Y}^{BG}=g_{L}+g_{H}+\frac{g_{A}}{1-a}.$

The growth in output on a balanced-growth path depends on the growth rates of the workforce, human capital, and technology. Using this, we can rewrite the growth-accounting equation as follows:

$g_{Y}=ag_{K}+(1-a)g_{Y}^{BG}.$

The actual growth rate in output is a weighted average of the balanced-growth rate of output and the growth rate of the capital stock.

Capital Accumulation

The second piece of our model is the capital accumulation equation. The growth rate of the capital stock is given by

$g_{K}=\frac{I}{K}-\delta.$

Divide the numerator and denominator of the first term by Y, remembering that $i = I/Y$:

$g_{K}=\frac{i}{K/Y}-\delta.$

The growth rate of the capital stock depends positively on the investment rate and negatively on the depreciation rate. It also depends negatively on the current capital-output ratio.

The Balanced-Growth Capital-Output Ratio

Now rearrange this expression to give the ratio of capital to gross domestic product (GDP), given the depreciation rate, the investment rate, and the growth rate of the capital stock:

$\frac{K}{Y}=\frac{i}{g_{K}+\delta}.$

When the economy is on a balanced-growth path, $g_{K}=g_{Y}^{BG}$, so

$\left(\frac{K}{Y}\right)^{BG}=\frac{i}{g_{Y}^{BG}+\delta}.$

We can also substitute in our balanced-growth expression for $g_{Y}^{BG}$ to get an expression for the balanced-growth capital-output ratio in terms of exogenous variables:

$\left(\frac{K}{Y}\right)^{BG}=\frac{i}{g_{L}+g_{H}+g_{A}/(1-a)+\delta}.$

Convergence

The proof that economies will converge to the balanced-growth ratio of capital to GDP is relatively straightforward. We want to show that if $K/Y < (K/Y)^{BG}$, then capital grows faster than output. If capital is growing faster than output, $g_{K}-g_{Y}>0$. First, go back to $g_{Y}=ag_{K}+(1-a)g_{Y}^{BG}$ and subtract both sides from the growth rate of capital:

$g_{K}-g_{Y}=(1-a)\left(g_{K}-g_{Y}^{BG}\right).$

Now compare the general expression for the ratio of capital to GDP with its balanced-growth value:

$\frac{K}{Y}=\frac{i}{g_{K}+\delta}$ and $\left(\frac{K}{Y}\right)^{BG}=\frac{i}{g_{Y}^{BG}+\delta}.$

If $K/Y < (K/Y)^{BG}$, then it must be the case that $g_{K}>g_{Y}^{BG}$, which implies (from the previous equation) that $g_{K}>g_{Y}$.

Output per Worker Growth

If we want to examine the growth in output per worker rather than total output, we apply the rules of growth rates to the production function and subtract $ag_{Y}$ from each side to obtain

$(1-a)g_{Y}=a\left(g_{K}-g_{Y}\right)+(1-a)\left(g_{L}+g_{H}\right)+g_{A}.$

We then divide by (1 − a) to get

$g_{Y}=\frac{a}{1-a}\left(g_{K}-g_{Y}\right)+g_{L}+g_{H}+\frac{g_{A}}{1-a}$

and subtract $g_{L}$ from each side to obtain

$g_{Y}-g_{L}=\frac{a}{1-a}\left(g_{K}-g_{Y}\right)+g_{H}+\frac{g_{A}}{1-a}.$

Finally, we note that $g_{Y}-g_{L}=g_{Y/L}$:

$g_{Y/L}=\frac{a}{1-a}\left(g_{K}-g_{Y}\right)+g_{H}+\frac{g_{A}}{1-a}.$

With balanced growth, the first term is equal to zero, so

$g_{Y/L}^{BG}=g_{H}+\frac{g_{A}}{1-a}.$

Endogenous Investment Rate

In this analysis, we made the assumption from the Solow model that the investment rate is constant. The essential arguments that we have made still apply if the investment rate is higher when the marginal product of capital is higher. The argument for convergence becomes stronger because a low value of K/Y implies a higher marginal product of capital and thus a higher investment rate.
This increases the growth rate of capital and causes an economy to converge more quickly to its balanced-growth path.

Endogenous Growth

Take the production function $Y=K^{a}(HL)^{1-a}A.$ Now assume A is constant and so
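The stability claim in Presentation 1 is easy to see in a simulation. The following Python sketch uses the example production function $y = Ak^{1/3}$ with hypothetical values for s, δ, and A; starting below the steady state, capital per worker climbs toward k* and stops changing there.

```python
s, delta, A = 0.2, 0.1, 1.0  # saving rate, depreciation rate, technology (assumed values)

def next_k(k):
    # Capital accumulation: k(t+1) = k(t) * (1 - delta) + s * A * k(t)**(1/3)
    return k * (1 - delta) + s * A * k ** (1 / 3)

# Analytical steady state: delta * k = s * A * k**(1/3)  =>  k* = (s * A / delta)**1.5
k_star = (s * A / delta) ** 1.5
print(round(k_star, 3))  # about 2.828

k = 0.5  # start well below the steady state
for t in range(100):
    k = next_k(k)
print(round(k, 3))  # has converged to k*, about 2.828
```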
31.31: The Aggregate Expenditure Model
The aggregate expenditure model relates the components of spending (consumption, investment, government purchases, and net exports) to the level of economic activity. In the short run, taking the price level as fixed, the level of spending predicted by the aggregate expenditure model determines the level of economic activity in an economy.

An insight from the circular flow is that real gross domestic product (real GDP) measures three things: the production of firms, the income earned by households, and total spending on firms' output. The aggregate expenditure model focuses on the relationships between production (GDP) and planned spending:

$GDP = planned\ spending = consumption + investment + government\ purchases + net\ exports.$

Planned spending depends on the level of income/production in an economy, for the following reasons:

• If households have higher income, they will increase their spending. (This is captured by the consumption function.)
• Firms are likely to decide that higher levels of production—particularly if they are expected to persist—mean that they should build up their capital stock and should thus increase their investment.
• Higher income means that domestic consumers are likely to spend more on imported goods. Since net exports equal exports minus imports, higher imports means lower net exports.

The negative net export link is not large enough to overcome the other positive links, so we conclude that when income increases, so also does planned expenditure. We illustrate this in Figure 31.2.22 "Planned Spending in the Aggregate Expenditure Model", where we suppose for simplicity that there is a linear relationship between spending and GDP. The equation of the line is as follows:

$spending = autonomous\ spending + marginal\ propensity\ to\ spend \times real\ GDP.$

Figure $22$: Planned Spending in the Aggregate Expenditure Model

The intercept in Figure 31.2.22 "Planned Spending in the Aggregate Expenditure Model" is called autonomous spending. It represents the amount of spending that there would be in an economy if income (GDP) were zero. We expect that this will be positive for two reasons: (1) if a household finds its income is zero, it will still want to consume something, so it will either draw on its existing wealth (past savings) or borrow against future income; and (2) the government would spend money even if GDP were zero. The slope of the line in Figure 31.2.22 "Planned Spending in the Aggregate Expenditure Model" is given by the marginal propensity to spend. For the reasons that we have just explained, we expect that this is positive: increases in income lead to increased spending. However, we expect the marginal propensity to spend to be less than one.

The aggregate expenditure model is based on the two equations we have just discussed. We can solve the model either graphically or using algebra. The graphical approach relies on Figure 31.2.23. On the horizontal axis is the level of real GDP. On the vertical axis is the level of spending as well as the level of GDP. There are two lines shown. The first is the 45° line, which equates real GDP on the horizontal axis with real GDP on the vertical axis. The second line is the planned spending line. The intersection of the spending line with the 45° line gives the equilibrium level of output.

Figure $23$

More Formally

We can also solve the model algebraically. Let us use Y to denote the level of real GDP and E to denote planned expenditure. We represent the marginal propensity to spend by β.
The two equations of the model are as follows:

$Y = E$

and

$E=E_{0}+\beta \times Y.$

Here, $E_0$ is autonomous expenditure. We can solve the two equations to find the values of E and Y that are consistent with both equations. Substituting for E in the first equation, we find that

$Y=\frac{1}{1-\beta} \times E_{0}.$

The equilibrium level of output is the product of two terms. The first term—$(1/(1 − \beta))$—is called the multiplier. If, as seems reasonable, β lies between zero and one, the multiplier is greater than one. The second term is the autonomous level of spending.

Here is an example. Suppose that C = 100 + 0.6Y, I = 400, G = 300, and NX = 200 − 0.1Y, where C is consumption, I is investment, G is government purchases, and NX is net exports. First group the components of spending as follows:

$C + I + G + NX = (100 + 400 + 300 + 200) + (0.6Y − 0.1Y).$

Adding together the first group of terms, we find autonomous spending:

$E_{0} = 100 + 400 + 300 + 200 = 1,000.$

Adding the coefficients on the income terms, we find the marginal propensity to spend:

$\beta = 0.6 − 0.1 = 0.5.$

Using β = 0.5, we calculate the multiplier:

$\frac{1}{1-\beta}=\frac{1}{1-0.5}=2.$

We then calculate real GDP:

$Y = 2 \times 1,000 = 2,000.$
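The worked example translates directly into code. This Python sketch simply re-does the arithmetic from the text: it collects autonomous spending and the marginal propensity to spend, then applies the multiplier.

```python
# Components of planned spending from the example in the text:
#   C = 100 + 0.6Y, I = 400, G = 300, NX = 200 - 0.1Y
autonomous = 100 + 400 + 300 + 200  # E0 = 1,000
beta = 0.6 - 0.1                    # marginal propensity to spend = 0.5

multiplier = 1 / (1 - beta)         # 2.0
gdp = multiplier * autonomous       # equilibrium real GDP

print(multiplier, gdp)              # 2.0 2000.0

# Check: planned spending at Y = 2,000 equals output.
assert autonomous + beta * gdp == gdp
```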
31.32: Price Adjustment
The price adjustment equation summarizes, at the level of an entire economy, all the decisions about prices that are made by managers throughout the economy. The price adjustment equation is as follows:

$inflation\ rate = autonomous\ inflation − inflation\ sensitivity \times output\ gap.$

The equation tells us that there are two reasons for rising prices. The first is that the output gap is negative. The output gap is the difference between potential output and actual output:

$output\ gap = potential\ real\ gross\ domestic\ product (real\ GDP) − actual\ real\ GDP.$

A positive gap means that the economy is in recession—below potential output. If the economy is in a boom, then the output gap is negative. The second reason for rising prices is that autonomous inflation is positive. Autonomous inflation refers to the inflation rate that prevails in an economy when the economy is at potential output (so the output gap is zero).

Looking at the second term of the price adjustment equation, we see that when real GDP is greater than potential output, the output gap is negative, so there is upward pressure on prices in the economy. The inflation rate will exceed autonomous inflation. By contrast, when real GDP is less than potential output, the output gap is positive, so there is downward pressure on prices. The inflation rate will be below the autonomous inflation rate. The "inflation sensitivity" tells us how responsive the inflation rate is to the output gap.

The output gap matters because, as GDP increases relative to potential output, labor and other inputs become scarcer. Firms are likely to see rising costs and increase their prices as a consequence. Even leaving this aside—that is, even when an economy is at potential output—firms are likely to increase their prices somewhat. For example, firms may anticipate that their suppliers or their competitors are likely to increase prices in the future. A natural response is to increase prices, so autonomous inflation is positive. Figure 31.2.24 "Price Adjustment" shows the price adjustment equation graphically.

31.33: Consumption and Saving

The consumption function is a relationship between current disposable income and current consumption. It is intended as a simple description of household behavior that captures the idea of consumption smoothing. We typically suppose the consumption function is upward-sloping but has a slope less than one. So as disposable income increases, consumption also increases, but not by as much. More specifically, we frequently assume that consumption is related to disposable income through the following relationship:

$consumption = autonomous\ consumption + marginal\ propensity\ to\ consume \times disposable\ income.$

A consumption function of this form implies that individuals divide additional income between consumption and saving.

• We assume autonomous consumption is positive. Households consume something even if their income is zero. If a household has accumulated a lot of wealth in the past or if a household expects its future income to be larger, autonomous consumption will be larger. It captures both the past and the future.
• We assume that the marginal propensity to consume is positive. The marginal propensity to consume captures the present; it tells us how changes in current income lead to changes in current consumption. Consumption increases as current income increases, and the larger the marginal propensity to consume, the more sensitive current spending is to current disposable income.
The smaller the marginal propensity to consume, the stronger is the consumption-smoothing effect.

• We also assume that the marginal propensity to consume is less than one. This says that not all additional income is consumed. When a household receives more income, it consumes some and saves some.

Figure 31.2.25 "The Consumption Function" shows this relationship.

More Formally

In symbols, we write the consumption function as a relationship between consumption (C) and disposable income ($Y^d$):

$C = a + bY^{d},$

where a and b are constants. Here a represents autonomous consumption and b is the marginal propensity to consume. We assume three things about a and b:

1. a > 0
2. b > 0
3. b < 1

The first assumption means that even if disposable income is zero $(Y^{d}= 0)$, consumption will still be positive. The second assumption means that the marginal propensity to consume is positive. The third assumption means that the marginal propensity to consume is less than one. With 0 < b < 1, only part of an extra dollar of disposable income is spent.

What happens to the remainder of the increase in disposable income? Since consumption plus saving is equal to disposable income, the increase in disposable income not consumed is saved. More generally, this link between consumption and saving (S) means that our model of consumption implies a model of saving as well. Using $Y^{d}= C + S$ and $C = a + bY^{d}$, we can solve for S:

$S = Y^{d} − C = −a + (1 − b)Y^{d}.$

So −a is the level of autonomous saving and (1 − b) is the marginal propensity to save. We can also graph the savings function. The savings function has a negative intercept because when income is zero, the household will dissave. The savings function has a positive slope because the marginal propensity to save is positive.

Economists also often look at the average propensity to consume (APC), which measures how much income goes to consumption on average. It is calculated as follows:

$APC = C / Y^{d}.$

When disposable income increases, consumption also increases but by a smaller amount, which means that people consume a smaller fraction of their income: the average propensity to consume decreases. In our notation, dividing $C = a + bY^{d}$ by $Y^{d}$ gives

$APC = a/Y^{d} + b.$

An increase in disposable income reduces the first term, which therefore reduces the APC.
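A small Python sketch illustrates the consumption and saving functions and the falling APC. The parameter values a = 100 and b = 0.6 are illustrative (they match the consumption function used in the aggregate expenditure example).

```python
a, b = 100.0, 0.6  # autonomous consumption and marginal propensity to consume

def consumption(yd):
    return a + b * yd

def saving(yd):
    return -a + (1 - b) * yd  # equivalently yd - consumption(yd)

for yd in (500, 1000, 2000, 4000):
    apc = consumption(yd) / yd
    print(yd, consumption(yd), saving(yd), round(apc, 3))
# As disposable income rises, the APC falls toward b = 0.6:
# 500 -> 0.8, 1000 -> 0.7, 2000 -> 0.65, 4000 -> 0.625
```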
31.34: The Government Budget Constraint
Like households, governments are subject to budget constraints. These can be viewed in two ways, either within a single year or across many years.

The Single-Year Government Budget Constraint

In any given year, money flows into the government sector, primarily from the taxes that it imposes on individuals and corporations. We call these government revenues. Money flows out in the form of outlays: government purchases of goods and services and government transfers. The circular flow of income tells us that any difference between government outlays and government revenues represents a government deficit:

$government\ deficit = outlays − revenues = government\ purchases + transfers − tax\ revenues = government\ purchases − net\ taxes.$

Often, we find it useful to group taxes and transfers together as "net taxes" and separate out government purchases, as in the last line of our definition. When outflows are less than inflows, we say a government is running a surplus. In other words, a negative government deficit is the same as a positive government surplus, and a negative government surplus is the same as a positive government deficit:

$government\ surplus = −government\ deficit.$

When a government runs a deficit, it must borrow from the financial markets. When a government runs a surplus, these funds flow into the financial markets and are available for firms to borrow. A government surplus is sometimes called government saving.

Intertemporal Government Budget Constraint

Tax and spending decisions at different dates are linked. Although governments can borrow or lend in a given year, a government's total spending over time must be matched with revenues. When a government runs a deficit, it typically borrows to finance it. It borrows by issuing more government debt (government bonds).

To express the intertemporal budget constraint, we introduce a measure of the deficit called the primary deficit. The primary deficit is the difference between government outlays, excluding interest payments on the debt, and government revenues. The primary surplus is minus the primary deficit: it is the difference between government revenues and government outlays, excluding interest payments on the debt.

The intertemporal budget constraint says that if a government has some existing debt, it must run surpluses in the future so that it can ultimately pay off that debt. Specifically, it is the requirement that

$current\ debt\ outstanding = discounted\ present\ value\ of\ future\ primary\ surpluses.$

This condition means that the debt outstanding today must be offset by primary budget surpluses in the future. Because we are adding together flows in the future, we have to use the tool of discounted present value. If, for example, the current stock of debt is zero, then the intertemporal budget constraint says that the discounted present value of future primary surpluses must equal zero.

The stock of debt is linked directly to the government budget deficit. As we noted earlier, when a government runs a budget deficit, it finances the deficit by issuing new debt. The deficit is a flow that is matched by a change in the stock of government debt:

$change\ in\ government\ debt (in\ given\ year) = deficit (in\ given\ year).$

The stock of debt in a given year is equal to the deficit over the previous year plus the stock of debt from the start of the previous year. If there is a government surplus, then the change in the debt is a negative number, so the debt decreases. The total government debt is simply the accumulation of all the previous years' deficits. When a government borrows, it must pay interest on its debt.
These interest payments are counted as part of the deficit (they are included in transfers). If a government wants to balance the budget, then its spending excluding interest payments must actually be less than the amount it receives in the form of net taxes, by enough to cover the interest on the debt.

This presentation of the tool neglects one detail. There is another way in which a government can fund its deficit. As well as issuing government debt, it can print money. More precisely, then, every year,

$change\ in\ government\ debt = deficit − change\ in\ money\ supply.$

Written this way, the equation tells us that the part of the deficit that is not financed by printing money results in an increase in the government debt.

More Formally

We often denote government purchases of goods and services by G and net tax revenues (tax revenues minus transfers) by T. The equation for tax revenues is as follows:

$T = \tau \times Y,$

where τ is the tax rate on income and Y is real gross domestic product (real GDP). The deficit is given as follows:

$government\ deficit = G − T = G − \tau \times Y.$

From this equation, the deficit depends on the following:

• Fiscal policy, through the choices of G and τ
• The level of economic activity (Y)
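The link between annual deficits and the stock of debt is easy to trace in code. This Python sketch uses an assumed tax rate, an assumed level of purchases, and a hypothetical path for GDP; it accumulates each year's deficit into the debt.

```python
tau = 0.18  # tax rate on income (assumed)
G = 20.0    # government purchases each year (assumed)
gdp_path = [100.0, 95.0, 105.0, 110.0]  # hypothetical real GDP over four years

debt = 0.0  # initial stock of government debt
for y in gdp_path:
    deficit = G - tau * y  # deficit = G - T, with T = tau * Y
    debt += deficit        # the change in debt equals the deficit
    print(round(deficit, 1), round(debt, 1))
# In the recession year (GDP = 95), tax revenue falls, so the deficit widens.
```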
31.35: The Life-Cycle Model of Consumption
The life-cycle model of consumption looks at the lifetime consumption and saving decisions of an individual. The choices made about consumption and saving depend on income earned over an individual's entire lifetime. The model has two key components: the lifetime budget constraint and individual choice given that constraint.

Consider the consumption/saving decision of an individual who expects to work for a known number of years and be retired for a known number of years thereafter. Suppose his disposable income is the same in every working year, and he will also receive an annual retirement income—again the same in every year. According to the life-cycle model of consumption, the individual first calculates the discounted present value (DPV) of lifetime income:

$DPV\ of\ lifetime\ income = DPV\ of\ income\ from\ working + DPV\ of\ retirement\ income.$

(If the real interest rate is zero, then the DPV calculation simply involves adding income flows across years.) We assume the individual wants to consume at the same level in each period of life. This is called consumption smoothing. In the special case of a zero real interest rate, we have the following:

$annual\ consumption = \frac{lifetime\ income}{years\ of\ life}.$

More Formally

Suppose an individual expects to work for a total of N years and to be retired for R years. Suppose his disposable income is equal to $Y^d$ in every year, and he receives annual retirement income of Z. Then lifetime income, assuming a zero real interest rate, is given as follows:

$lifetime\ income = NY^{d} + RZ.$

If we suppose that he wants perfectly smooth consumption, equal to C in each year, then his total lifetime consumption will be $C \times (N + R)$. The lifetime budget constraint says that lifetime consumption equals lifetime income:

$C \times (N + R) = NY^{d} + RZ.$

To obtain his consumption, we simply divide this equation by the number of years he is going to live (N + R):

$C = \frac{NY^{d} + RZ}{N + R}.$

Provided that income during working years is greater than income in retirement years, the individual will save during his working years and dissave during retirement. If the real interest rate is not equal to zero, then the basic idea is the same—an individual smooths consumption based on a lifetime budget constraint—but the calculations are more complicated. Specifically, the lifetime budget constraint must be written in terms of the discounted present values of income and consumption.

31.36: Aggregate Supply and Aggregate Demand

The aggregate supply and aggregate demand (ASAD) model is presented here. To understand the ASAD model, we need to explain both aggregate demand and aggregate supply and then the determination of prices and output.

The aggregate demand curve tells us the level of expenditure in an economy for a given price level. It has a negative slope: the demand for real gross domestic product (real GDP) decreases when the price level increases. The downward-sloping aggregate demand curve does not follow from the microeconomic "law of demand." As the price level increases, all prices in an economy increase together. The substitution of cheap goods for expensive goods, which underlies the law of demand, does not occur in the aggregate economy. Instead, the downward-sloping demand curve comes from other forces. First, as prices rise, the real value of nominal wealth falls, and this leads to a fall in household spending. Second, as prices rise today relative to future prices, households are induced to postpone consumption. Finally, a higher price level can lead to a higher interest rate through the response of monetary policy.
All these factors together imply that higher prices lead to lower overall demand for real GDP.

Aggregate supply is equal to potential output at all prices. Potential output is determined by the available technology, physical capital, and labor force and is unaffected by the price level. Thus the aggregate supply curve is vertical. In contrast to the case of a firm’s supply curve, when the price level increases, all prices in an economy increase together, including the prices of inputs, such as labor, into the production process. Since no relative prices change when the price level increases, firms are not induced to change the quantity they supply. Thus aggregate supply is vertical.

The determination of prices and output depends on the horizon: the long run or the short run. In the long run, real GDP equals potential GDP, and real GDP also equals aggregate expenditure. This means that, in the long run, the price level must be at the point where aggregate demand and aggregate supply meet. This is shown in Figure 31.2.26 "Aggregate Supply and Aggregate Demand in the Long Run".

In the short run, output is determined by aggregate demand at the existing price level. Prices need not be at their long-run equilibrium levels. If they are not, then output will not equal potential output. This is shown in Figure 31.2.27 "Aggregate Supply and Aggregate Demand in the Short Run". The short-run price level is indicated on the vertical axis. The level of output is determined by aggregate demand at that price level. Because prices are above their long-run equilibrium level in the figure, output is below potential output. The price level adjusts over time to its long-run level, according to the price-adjustment equation.

The Main Uses of This Tool

We do not explicitly use this tool in our chapter presentations. However, the tool can be used to support the discussions in the following chapters.
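The interplay between the short run and the long run can be made concrete with a small numerical sketch. This is a minimal illustration, not anything from the text: the linear form of the aggregate demand curve and all parameter values below (the intercept a, the slope b, potential output, the adjustment speed, and the starting price) are assumptions chosen purely for readability.

```python
# A minimal ASAD sketch. Assumed linear aggregate demand: Y = a - b * P.
# Aggregate supply is vertical at potential output Y_pot.
a, b = 2000.0, 100.0   # assumed AD intercept and slope
Y_pot = 1500.0         # assumed potential output
speed = 0.5            # assumed speed of price adjustment per period

# Long-run equilibrium price: where AD crosses the vertical AS curve.
P_long_run = (a - Y_pot) / b   # = 5.0 with these numbers

P = 8.0                # start with prices above their long-run level
for period in range(6):
    Y = a - b * P      # short run: output is read off AD at the current price
    print(f"period {period}: P = {P:.2f}, Y = {Y:.0f} (potential = {Y_pot:.0f})")
    P = P + speed * (P_long_run - P)   # a simple price-adjustment equation
```

Run as written, output starts below potential because prices start above their long-run level; as prices adjust, output converges to potential, which is the long-run outcome described above.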
The IS-LM model provides another way of looking at the determination of the level of short-run real gross domestic product (real GDP) in the economy. Like the aggregate expenditure model, it takes the price level as fixed. But whereas that model takes the interest rate as exogenous—specifically, a change in the interest rate results in a change in autonomous spending—the IS-LM model treats the interest rate as an endogenous variable.

The basis of the IS-LM model is an analysis of the money market and an analysis of the goods market, which together determine the equilibrium levels of interest rates and output in the economy, given prices. The model finds combinations of interest rates and output (GDP) such that the money market is in equilibrium. This creates the LM curve. The model also finds combinations of interest rates and output such that the goods market is in equilibrium. This creates the IS curve. The equilibrium is the interest rate and output combination that is on both the IS and the LM curves.

LM Curve

The LM curve represents the combinations of the interest rate and income such that money supply and money demand are equal. The demand for money comes from households, firms, and governments that use money as a means of exchange and a store of value. The law of demand holds: as the interest rate increases, the quantity of money demanded decreases because the interest rate represents an opportunity cost of holding money. When interest rates are higher, in other words, money is less effective as a store of value. Money demand increases when output rises because money also serves as a medium of exchange. When output is larger, people have more income and so want to hold more money for their transactions.

The supply of money is chosen by the monetary authority and is independent of the interest rate. Thus it is drawn as a vertical line. The equilibrium in the money market is shown in Figure 31.2.28 "Money Market Equilibrium". When the money supply is chosen by the monetary authority, the interest rate is the price that brings the market into equilibrium. In some countries and at some times, central banks target the money supply; alternatively, central banks may choose to target the interest rate. (This was the case we considered in Chapter 25 "Understanding the Fed".) Figure 31.2.28 "Money Market Equilibrium" applies in either case: if the monetary authority targets the interest rate, then the money market tells us what the level of the money supply must be.

Figure $28$: Money Market Equilibrium

To trace out the LM curve, we look at what happens to the interest rate when the level of output in the economy changes and the supply of money is held fixed. Figure 31.2.29 "A Change in Income" shows the money market equilibrium at two different levels of real GDP. At the higher level of income, money demand is shifted to the right; the interest rate increases to ensure that money demand equals money supply. Thus the LM curve is upward sloping: higher real GDP is associated with higher interest rates. At each point along the LM curve, money supply equals money demand.

We have not yet been specific about whether we are talking about nominal interest rates or real interest rates. In fact, it is the nominal interest rate that represents the opportunity cost of holding money. When we draw the LM curve, however, we put the real interest rate on the axis, as shown in Figure 31.2.30 "The LM Curve".
The simplest way to think about this is to suppose that we are considering an economy where the inflation rate is zero. In this case, by the Fisher equation, the nominal and real interest rates are the same. In a more complete analysis, we can incorporate inflation by noting that changes in the inflation rate will shift the LM curve. Changes in the money supply also shift the LM curve.

Figure $29$: A Change in Income

IS Curve

The IS curve relates the level of real GDP and the real interest rate. It incorporates both the dependence of spending on the real interest rate and the fact that, in the short run, real GDP equals spending. The IS curve is shown in Figure 31.2.31 "The IS Curve". We label the horizontal axis “real GDP” since, in the short run, real GDP is determined by aggregate spending. The IS curve is downward sloping: as the real interest rate increases, the level of spending decreases. In fact, we derived the IS curve in Chapter 25 "Understanding the Fed".

The dependence of spending on real interest rates comes partly from investment. As the real interest rate increases, spending by firms on new capital and spending by households on new housing decreases. Consumption also depends on the real interest rate: spending by households on durable goods decreases as the real interest rate increases.

The connection between spending and real GDP comes from the aggregate expenditure model. Given a particular level of the interest rate, the aggregate expenditure model determines the level of real GDP. Now suppose the interest rate increases. This reduces those components of spending that depend on the interest rate. In the aggregate expenditure framework, this is a reduction in autonomous spending. The equilibrium level of output decreases. Thus the IS curve slopes downward: higher interest rates are associated with lower real GDP.

Equilibrium

Combining the discussion of the LM and the IS curves will generate equilibrium levels of interest rates and output. Note that both relationships are combinations of interest rates and output. Solving these two equations jointly determines the equilibrium. This is shown graphically in Figure 31.2.32 "Equilibrium in the IS-LM Model", which combines the LM curve from Figure 31.2.30 "The LM Curve" and the IS curve from Figure 31.2.31 "The IS Curve". The crossing of these two curves is the combination of the interest rate and real GDP, denoted (r*, Y*), such that both the money market and the goods market are in equilibrium.

Figure $32$: Equilibrium in the IS-LM Model

Comparative Statics

Comparative statics results for this model illustrate how changes in exogenous factors influence the equilibrium levels of interest rates and output. For this model, there are two key exogenous factors: the level of autonomous spending (excluding any spending affected by interest rates) and the real money supply. We can study how changes in these factors influence the equilibrium levels of output and interest rates both graphically and algebraically.

Variations in the level of autonomous spending will lead to a shift in the IS curve, as shown in Figure 31.2.33 "A Shift in the IS Curve". If autonomous spending increases, then the IS curve shifts out. The output level of the economy will increase. Interest rates rise as we move along the LM curve, ensuring money market equilibrium. One source of variations in autonomous spending is fiscal policy. Autonomous spending includes government spending (G).
Thus an increase in G leads to an increase in output and interest rates, as shown in Figure 31.2.33 "A Shift in the IS Curve".

Variations in the real money supply shift the LM curve, as shown in Figure 31.2.34 "A Shift in the LM Curve". If the money supply decreases, then the LM curve shifts in. This leads to a higher real interest rate and lower output as the LM curve shifts along the fixed IS curve.

Figure $34$: A Shift in the LM Curve

More Formally

We can represent the LM and IS curves algebraically.

LM Curve

Let L(Y,r) represent real money demand at a level of real GDP of Y and a real interest rate of r. (When we say “real” money demand, we mean that, as usual, we have deflated by the price level.) For simplicity, suppose that the inflation rate is zero, so the real interest rate is the opportunity cost of holding money. (If we wanted to include inflation in our analysis, we could write the real demand for money as L(Y, r + π), where π is the inflation rate.) Assume that real money demand takes a particular form: $L(Y,r) = L_{0} + L_{1}Y - L_{2}r.$ In this equation, L0, L1, and L2 are all positive constants. Real money demand is increasing in income and decreasing in the interest rate. Letting M/P be the real stock of money in the economy, money market equilibrium requires $M/P = L_{0} + L_{1}Y - L_{2}r.$ Given a level of real GDP and the real stock of money, this equation can be used to solve for the interest rate such that money supply and money demand are equal: $r = (1/L_{2})[L_{0} + L_{1}Y - M/P].$ From this equation we learn that an increase in the real stock of money lowers the interest rate, given the level of real GDP. Further, an increase in the level of real GDP increases the interest rate, given the stock of money. This is another way of saying that the LM curve is upward sloping.

IS Curve

Recall the two equations from the aggregate expenditure model: $Y = E$ and $E = E_{0}(r) + \beta Y.$ Here we have shown explicitly that the level of autonomous spending depends on the real interest rate r. We can solve the two equations to find the values of E and Y that are consistent with both equations. We find $Y = \frac{E_{0}(r)}{1 - \beta}.$ Given a level of the real interest rate, we solve for the level of autonomous spending (using the dependence of consumption and investment on the real interest rate) and then use this equation to find the level of output.

Here is an example. Suppose that C = 100 + 0.6Y, I = 400 − 5r, G = 300, and NX = 200 − 0.1Y, where C is consumption, I is investment, G is government purchases, and NX is net exports. First group the components of spending as follows: $C + I + G + NX = (100 + 400 - 5r + 300 + 200) + (0.6Y - 0.1Y).$ Adding together the first group of terms, we find autonomous spending: $E_{0} = 100 + 400 + 300 + 200 - 5r = 1000 - 5r.$ Adding the coefficients on the income terms, we find the marginal propensity to spend: $\beta = 0.6 - 0.1 = 0.5.$ Using β = 0.5, we calculate the multiplier: $\frac{1}{1 - \beta} = \frac{1}{1 - 0.5} = 2.$ We then calculate real GDP, given the real interest rate: $Y = 2 \times (1000 - 5r) = 2000 - 10r.$

Equilibrium

Combining the discussion of the LM and the IS curves will generate equilibrium levels of interest rates and output. Note that both relationships are combinations of interest rates and output. Solving these two equations jointly determines the equilibrium.
Algebraically, we have an equation for the LM curve: $r = \left(1/L_{2}\right)\left[L_{0} + L_{1}Y - M/P\right],$ and we have an equation for the IS curve: $Y = mE_{0}(r),$ where we let $m = 1/(1 - \beta)$ denote the multiplier. If we assume that the dependence of spending on the interest rate is linear, so that $E_{0}(r) = e_{0} - e_{1}r$, then the equation for the IS curve is $Y = m\left(e_{0} - e_{1}r\right).$

To solve the IS and LM curves simultaneously, we substitute Y from the IS curve into the LM curve to get $r = \left(1/L_{2}\right)\left[L_{0} + L_{1}m\left(e_{0} - e_{1}r\right) - M/P\right].$ Solving this for r, we get $r = A_{r} - B_{r}M/P,$ where both $A_{r}$ and $B_{r}$ are constants, with $A_{r} = \left(L_{0} + L_{1}me_{0}\right)/\left(L_{1}me_{1} + L_{2}\right)$ and $B_{r} = 1/\left(L_{1}me_{1} + L_{2}\right).$ This equation gives us the equilibrium level of the real interest rate given the level of autonomous spending, summarized by e0, and the real stock of money, summarized by M/P.

To find the equilibrium level of output, we substitute this equation for r back into the equation for the IS curve. This gives us $Y = A_{y} + B_{y}(M/P),$ where both $A_{y}$ and $B_{y}$ are constants, with $A_{y} = m\left(e_{0} - e_{1}A_{r}\right)$ and $B_{y} = me_{1}B_{r}.$ This equation gives us the equilibrium level of output given the level of autonomous spending, summarized by e0, and the real stock of money, summarized by M/P.

Algebraically, we can use the equations to determine the magnitude of the responses of interest rates and output to exogenous changes. An increase in autonomous spending, e0, will increase both Ar and Ay, implying that both the interest rate and output increase. (To see that Ay increases with e0 requires a bit more algebra.) An increase in the real money stock will reduce interest rates by Br and increase output by By. A key part of monetary policy is the sensitivity of spending to the interest rate, given by e1. The more sensitive spending is to the interest rate, the larger is e1 and therefore the larger is By.

The Main Uses of This Tool

We do not explicitly use this tool in our chapter presentations. However, the tool can be used to support the discussions in the following chapters.
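To tie the algebra together, here is a minimal Python sketch that solves the IS and LM curves jointly. The IS curve is the one from the worked example above (Y = 2000 − 10r); the money-market parameters L0, L1, and L2 and the real money stock M/P are illustrative assumptions, not values from the text.

```python
import numpy as np

# IS curve from the worked example: Y = 2000 - 10r, i.e., Y + 10r = 2000.
# LM curve: M/P = L0 + L1*Y - L2*r, i.e., L1*Y - L2*r = M/P - L0.
# The money-market parameters below are assumed for illustration only.
L0, L1, L2 = 0.0, 0.2, 20.0
real_money = 290.0  # M/P, assumed

# Stack the two curves as a linear system A @ [Y, r] = b and solve it.
A = np.array([[1.0, 10.0],    # IS
              [L1, -L2]])     # LM
b = np.array([2000.0, real_money - L0])

Y_star, r_star = np.linalg.solve(A, b)
print(f"Y* = {Y_star:.0f}, r* = {r_star:.1f}")  # prints Y* = 1950, r* = 5.0
```

With these assumed numbers the solution is Y* = 1950 and r* = 5. Increasing real_money and re-solving reproduces the comparative statics result above: the LM curve shifts out, the interest rate falls by Br per unit of M/P, and output rises by By.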
Insights from Authentic Happiness on Careers

Everyone wants to find their dream job, but it might take you a few tries before you actually get it. A number of successful people in business, philanthropy, and the arts have told me they had to have a few different gigs before they landed their dream job. So what is a “dream job”? Simply put, it is a job you love. And, as the saying goes, “Choose a job you love, and you will never have to work a day in your life.” But even if you do not get it on the first try, you can still find a job you at least enjoy. The key is to match your character strengths (what makes you who you are) with the job you want.

Dr. Martin Seligman from the University of Pennsylvania emphasizes how important it is to identify your top character strengths. In his book Authentic Happiness, he states that if you find a career that utilizes your strengths, you will have higher job satisfaction. Dr. Seligman has made it easy to figure out those strengths by providing the VIA Survey of Character Strengths. I strongly recommend you also take the Authentic Happiness Inventory and the Grit Survey. Click on the Questionnaire Item on the menu at the top of the page. Take the questionnaire titled “VIA Survey of Character Strengths (Measures 24 Character Strengths).” This questionnaire takes about 50 minutes, so do not rush through it. You will be asked to create a username and password, but do not worry: they will not contact you for any other purpose. The account simply identifies you as a unique subject for their research, and it also allows you to return to the site and take other questionnaires.

Once you find out your top character strengths, discuss them with your family, friends, and advisors. Ask them what they think of the findings (you will be surprised how much they agree with the results), and then ask them to help you think of careers that would utilize these strengths. For example, if you like science and you are good at working with others, you might be happy with a career in the medical field. Or if you prefer solitude and are interested in computers, you could look for a career in information technology instead. If you do not figure out your top character strengths before you search for a career, you will have no real criteria for choosing the kind of job you want. When you chose your major, you already took some steps in defining your career interests. However, your major is not always a reliable indicator of where you will find job satisfaction. Many students change majors, and many graduates end up in fields unrelated to what they studied in college.

Wisdom from What Color Is Your Parachute?

Now that you know your top character strengths, it is time to start looking for a job. In What Color Is Your Parachute? for Teens, Carol Christen lays out four basic steps to finding your dream job:

1. Conduct informational interviews.
2. Cultivate contacts and create networks.
3. Research organizations of interest.
4. Begin a campaign to get the job you want.

Conduct Informational Interviews

Informational interviews might be one of the most fun things you can do. Basically, you are having a conversation with someone in a career you are interested in. Since you are both interested in this line of work, you will have plenty to talk about. Ask how they chose their career. What skills are required to get a job in the field? What do they like and not like about it? What are the salary ranges? What are the challenges in this particular industry?
Finally, can they introduce you to two or three other people in the field that you can talk to? Your job is to get them to talk and then to just listen. You might not know anyone with your dream job, but it is easy to find someone in the field who will talk to you. Most people are willing to help a young person with career advice, just like someone helped them in their own careers. Search online for organizations in your field and reach out to the person who is best placed to talk to you. Or you might even know someone who knows someone who knows someone else.

Cultivate Contacts and Create Networks

People you interview become part of your network. Keep in touch and ask them to let you know when there are any openings in your field. Friends and family can also be a part of your network, and they might be willing to help you in your job search. LinkedIn is another good tool to develop your network. Start an account while you are still in college and add students who will graduate before you. Also, almost all colleges have an alumni/ae network you can use to meet graduates in your field. Try visiting the alumni/ae office for advice on how to connect with them. Most importantly, stay in touch with your contacts, not only during college, but after you get your first job. On average, employees only stay about three years at a job, so you will want to keep those contacts for your next search. In addition, you should pay it forward by helping other young job seekers. Keep in touch with your alumni/ae group for those kinds of opportunities.

Research Organizations of Interest

There are two times to research organizations in your field. The first is to find companies you would like to work for, and the second is when you have been invited for an interview. A simple Google search can help you find companies in your field, and online recruitment sites will give you a sense of how well a company treats its employees and potential salaries it might offer. Also, you can often find articles that list the top companies to work for in your city. When you are researching a potential employer, you should try to find answers to these questions:

• Is the company financially stable?
• Does it treat its employees well?
• Is the company ethical?
• Does it have significant competition?
• Do employees have well-defined career paths?
• How are the employee benefits?

The Inverted Pyramid of Hiring

Eric Schlesinger, the former Senior Director of Human Resources at the World Bank (and my good friend), says that most new hires are found in a way that is completely opposite to what most people would think. He calls it the “Inverted Pyramid of Hiring.” Given this hiring structure, Schlesinger has this advice for job seekers:

1. 80% of vacancies are not advertised, so the best way to find job openings and get an interview is to network. The best people to network with are the Four F’s: Friends, Family, Faculty, and Former employers.
2. Do not focus on job listings. You can apply to these, but they represent only 20% of current vacancies.
3. Once you get an interview, remember that the employer is not really interested in what they can do for you; rather, they want to know what you can do for them. You should be able to say, “I see you have a problem. I can solve it.”
4. While you are in college, get as many internships as you can. Career counselors say that 80% of internships lead to a full-time job offer.

Internships

If you have had retail or service jobs, they likely will not impress a potential employer nearly as much as an internship.
Of course, a lot of students have to take part-time jobs, but you should first try to find a paid internship. A professional development or career center at your college can help you find internships and might offer stipends to help with unpaid opportunities. Mount Holyoke College recently introduced a program to give students stipends so that they could take one or more internships (Townsley et al., 2017). The college found that, along with GPA, the graduates with more internships had higher odds of being employed six months after graduation. A recent Gallup-Purdue survey also found that employers valued work experience more than a student’s GPA when hiring.

Resumes

In What Color Is Your Parachute? for Teens, Carol Christen says, “Resumes are not a very effective job-search tool for adults. They are even less effective for younger workers. Usually younger workers lack experience in the jobs or fields in which they most want to work” (2015). Instead, she says it is better for job seekers to create a website. My experience is that websites are the expected vehicle for people in creative fields, such as copywriters, cinematographers, and artists, but companies will still want to see resumes. For graduating job-seekers, the classic “skills-focused” resume highlights your strengths and your work experience. Your school’s career center can help you refine your resume. Temple University’s Career Center has a number of sample resumes from students from a variety of majors.

Currently, there is debate about whether resumes should be one page or two pages long. The existing wisdom is that two pages are perfectly acceptable. However, Eric Schlesinger cites research showing that one-page resumes are read more often, while two-page resumes are more often ignored. Whatever the length of your resume, it should be well designed and contain no errors or fabrications. Any errors will show sloppy (or unethical) character and get you rejected fast. Even if you do not get caught fabricating anything at first, you could be fired when it comes to light.

The purpose of the resume is not to sell yourself but to get an interview. You want to make a good impression; eight seconds is the average time an employer spends initially looking at your resume. After that, each resume goes into one of two piles: “Fuggetaboutit” and “Maybe.” You might feel angry that it only takes eight seconds to be rejected or accepted, but there are a lot of candidates out there similar to you. As much as your parents and teachers praised your “uniqueness,” others out there are just as “unique” as you. In order to make your resume stand out, you should think of the qualities that make you a better candidate than other applicants. Most candidates have multiple resumes to emphasize different skills they have. For example, let’s say you are good at engineering and at making sales. You want to emphasize your strength in sales for a sales job and your engineering ability for an engineering job. In other words, your resume should be custom tailored to each job opening.

Cover Letters

Always send a cover letter. The letter should show you have knowledge of the organization and be addressed to the appropriate person. As with your resume, it should be one page and customized to emphasize your skills. Temple’s Career Services has some sample cover letters that you can check out.
The Job Interview

When you are invited to interview, the employer has decided you likely have the minimum skills required for the job, and now they want to find out more about you. Think about what they might want to know and what questions they could ask. You will need to come up with your elevator speech. Imagine you happen to get into an elevator with the CEO of a company you want to work for. You have about one minute to get them interested in you. Practice this elevator speech to prepare for your interview. Your college’s career center likely gives practice interviews, and some even have alumni/ae conduct the interviews themselves. Make sure your answers are no shorter than 20 seconds (or you will appear to lack communication skills) and no longer than 2 minutes (or you will appear too self-involved). Talk half the time and listen for the other half. Ask for more details about the job and company. Remember: the employer is not interested in what they can do for you; they want to know what you can do for them.

As I said before, this is when you should do some research into the company. Find out what they are trying to accomplish and what challenges they face. It could be that they want to make more sales, or it could be that they lack organization. Whatever the case may be, you can use your research in the interview. Show a detailed understanding of the company and let them know how your skills fit their needs. Essentially, to manage a company is to confront a series of problems every day. If you show your potential employer you can help solve their problems, you will be golden.

Job interviews are for both you and the employer to see if you are a good fit with the organization. However, the probability of each of you figuring this out in one or two interviews is very low. There are even researchers who say that face-to-face job interviews are useless in gaining any information that will tell the potential employer whether you will be a good employee or not. That is why you need to do a lot of research about the organization (and possibly even about the person who will be interviewing you) ahead of time.

Show Them You Will Be an Engaged Employee

But that’s not all! Here’s the secret sauce that will almost certainly get you a job offer: signal that you will be an engaged employee. Every employer wants engaged employees, workers who are committed to the goals and values of the organization. The Gallup Organization has built a large consulting practice around measuring employee engagement. In a 2015 telephone survey of 80,000 workers at American organizations, Gallup found that:

1. The percentage of U.S. workers in 2015 who Gallup considered “engaged” in their jobs averaged 32%.
2. The majority (50.8%) of employees were “not engaged.”
3. Another 17.2% were “actively disengaged.”
4. The 2015 averages are largely on par with the 2014 averages and reflect little improvement in employee engagement over the past year.
5. The percentage of engaged employees has been essentially flat since Gallup began taking the survey in the year 2000.

The biennial Gallup Employee Engagement Survey in 2017 showed the number of engaged employees essentially constant at 33% vs. 32% in the 2015 survey. What employer would want to hire another “not engaged” employee? Yet American organizations are stuck with the vast majority of their workers being “not engaged” or “actively disengaged.” Do your research and show the potential employer that you are familiar with the company’s values.
Almost every organization has a mission statement and a code of ethics on its website. Prepare a short (but sincere sounding) speech that shows you understand and identify with the company’s goals and values. Recent evidence shows that signaling you will be a committed employee will give you a good chance of getting hired over a somewhat more qualified competitor for the job. In a recent article in The Wall Street Journal, “Afraid You’re ‘Too’ Qualified for a Job? Here’s What You Can Do,” Heidi Mitchell (2019) reports on the work of Oliver Hahl, assistant professor of organizational theory and strategy at the Tepper School of Business at Carnegie Mellon University.

In Hahl’s study, hiring managers were given the resumes of both highly qualified and just sufficiently qualified candidates. They were told that the candidates’ commitment level to the organization had been assessed and that each candidate had been rated either “neutral” or “committed.” In the case of two “neutral” candidates, where there was no mention of loyalty to the company, the less qualified candidate was more likely to be hired. When asked for a justification, Hahl reports that hiring managers stated their belief that the more qualified “neutral” candidate might not stay and would be difficult to manage. When both candidates were deemed “committed,” the highly qualified applicant was more likely to be hired. When one candidate was “neutral” and the other “committed,” the “committed” candidate had a more than 50% chance of being hired even over a more qualified rival. And when given two equally qualified candidates, the “committed” one was more likely to be hired. Hahl concludes, “Managers are concerned with selecting not just the highest-ability candidate but the one who is both capable and committed” (Mitchell, 2019).

Always Send a Thank-You Note

Job hunters send out lots of resumes but often receive no response, which is rude on the part of employers. You, the applicant, should respond to each rejection letter or email and ask if the employer knows of any other job openings or employers that could use your talents. Also, after every interview, always send a thank-you note. For job seekers, it is not just common courtesy but an important competitive advantage. It will help you stand out, and it gives you the opportunity to emphasize two or three things you want them to remember about you. At the same time, if you misstated something or did not represent yourself well, the thank-you note gives you a chance to clear things up. Even if it is obvious from the interview that you are not going to be hired, you can use the note to ask if they know of any other organizations that could use your skills. I guarantee your courtesy will be rewarded.

Do Not Take the First Offer (Maybe)

Do not ask about salary too early in the interview, as that can make you seem only interested in yourself. However, it is perfectly fine to ask for information about benefits, though you would be better off waiting until you are near the end of the interview. Read the interviewer and decide for yourself if it seems like you are a good fit for the job. Obviously, if it is clear you and the organization are not a match, asking about benefits is a waste of time for both of you. When you do feel like there is potential, though, asking about benefits is an easy way to lead into asking about salary without appearing greedy.
Benefits vary widely from company to company and should be considered when making a decision about a job offer. Base your choice on the total compensation package you are offered, not just the base salary. For example, in 2019, the average annual health insurance premiums were about \$8,000 for an individual and about \$20,000 for a family of four. Most companies ask employees to pay 25-50% of the premium as a co-pay, and that co-pay reduces your salary dollar for dollar. You should also factor retirement benefits into your decision about a position. Most organizations offer a 401(k) retirement plan. Under a 401(k), employees authorize a payroll deduction (typically 3-5% of gross salary), and the employer matches some or all of that amount. A 401(k) is valuable because those deductions are tax deferred; you pay no taxes on your contributions or investment gains until retirement. Besides retirement and health, benefits like vacation, childcare, and tuition are also important parts of your compensation.

If you do feel like it is okay to ask about salary, do so gently. Try saying something like, “Can you give me a sense of the salary range for this position?” As with everything else, you should research the salary ahead of time. Websites like Glassdoor can give you some sense of a company’s salaries, and many industry associations take annual salary surveys you can find online. Getting the employer to quote the salary first lets you make a counteroffer, one that is based on your research. Emphasize that you want to work there but need a more competitive salary. When thinking about your salary, you should also take the cost of living into account. Living in San Francisco or Manhattan can cost twice as much as living in Kansas City. There are many websites that can help you compare the cost of living in various parts of the United States.

I say maybe you should not take the first offer from a company because it often depends on the size of the company you are negotiating with. A medium or large company (over 50 employees) will have established a competitive salary range, and your offer should be in that range. If you do some research and find that the offer is competitive, you might want to accept it without trying to negotiate. However, do not forget: benefits are extremely important to the total compensation package, and you should try to negotiate these if they are not in line with similar positions. If you receive an offer from an organization of under 50 employees, the company might try to offer the lowest salary possible in order to keep its costs down. If that happens, you should point out that the offer is not competitive and try to negotiate a better total compensation package. Since the company has decided they want you, they should be willing to increase the offer. If, on the other hand, they are not willing to increase their offer, this will show you what type of managers they are.

How to Ask for More Money

Negotiating a salary is different than negotiating a one-time transaction. When you buy a car or a house, you will likely never see the person again. The seller’s incentive is to get as high a price as possible, while the buyer wants to pay as little as possible. If you pay too much, well, you can get angry, but that is the end of the transaction. However, once you are employed, you are expected to work for the good of the organization. In any case, if you are ever in doubt, ask for help!
A career counselor will have access to all kinds of job resources, and your family and friends might have some wisdom to offer you. I also recommend reading What Color Is Your Parachute? by Richard Bolles for some helpful rules for salary negotiations.

Drug Tests

Generally, companies have zero tolerance for drug use. In certain fields, you will almost always be required to take a drug test as a condition of employment. These include jobs in public safety, childcare, the federal government, and medicine.

The Last (and Definitive) Word on Job Searches

Searching for a job is more art than science. I have already said that networking is the best method to find job openings and hopefully get an interview. However, the most successful job hunters are those who have the same conviction as Winston Churchill: “This is the lesson: never give in, never give in, never, never, never, never – in nothing, great or small, large or petty – never give in except to convictions of honour or good sense.”
How to Behave the First Week on the Job

Since this is a book about financial literacy, this is likely your first or second job, and you will need to learn some of the fundamentals. First, you should remember that a job is immersed in a social setting. You have to get along with people—especially your boss and co-workers! Unfortunately, there is no manual for how to do this. Instead, you have to use your people skills. Listen to your supervisor and do what they direct you to do. Do not question a supervisor’s orders; you will need to earn their trust before you can do that. With your co-workers, be willing to listen and to not be so vocal with your opinions. In the beginning, you are there to learn about the organization and build trust. A know-it-all will not be trusted.

Gossip at Work

R.I. Dunbar states that “Analyses of freely forming conversations indicate that approximately two thirds of conversation time is devoted to social topics, most of which can be given the generic label gossip” (2004). This gossip is what you need to get connected to in your workplace, and you can find it by making friends at your job. Take co-workers out to lunch or, better yet, go for drinks after work. When you are with your co-workers, you should listen more than you talk. Your co-workers can tell you things like which bosses are mean, which co-workers will stab you in the back, and which men are sexual harassers. This gossip will also tell you who has power in the organization. For example, often a personal assistant controls access to the President, so it is important to be kind to them. More than just providing gossip, friends at work also increase your well-being. Remember that work is not just about doing your job but getting along with your co-workers. Modern organizations are built around teamwork, but more importantly, people who report that they consider a co-worker their best friend are much more likely to also report that they love their job.

Insights from Adam Grant’s Give and Take

You not only need friends, though; you also need networks, both inside and outside of work. Adam Grant, a professor of Industrial Psychology at the University of Pennsylvania, talks about the importance of networks in his book Give and Take:

By developing a strong network, people can gain invaluable access to knowledge, expertise, and influence. Extensive research demonstrates that people with rich networks achieve higher performance ratings, get promoted faster, and earn more money. (2013)

Interacting in networks (or teams) involves giving and taking, and Grant states that there are three different styles of reciprocity: giving, taking, and matching. Each of these has a different type of network. A “taker” likes to get more than they give to a network or relationship. A “giver” (admittedly a rare breed in the workplace) prefers to give more than they get, and a “matcher” strives to preserve an equal balance of giving and getting. Our personality is 50% the result of nature (evolution, that is, genetics) and 50% the result of nurture (the interaction of our genetics with our environment). However, where Grant takes these types as a given, Kurzban and Houser used experiments to establish that evolution has created a relatively stable mix of these three reciprocity styles (2005). According to Kurzban and Houser, this is the breakdown:
Table 2.1. Kurzban and Houser Reciprocity Styles

Kurzban Type Name        Percent   Grant Type Name
Cooperators              17%       Givers
Reciprocators            63%       Matchers
Cheaters (Free Riders)   20%       Takers
Not Classified           3%        n/a

I do not think I can stress enough how important Kurzban’s and Houser’s work is to how we can understand and develop professional networks. For example, if you have to work on a randomly assembled team, you will encounter a mix of cooperators, reciprocators, and cheaters. Grant reports that each of these reciprocity types deals with their networks in different ways:

Givers give a lot more than they receive. This is a key point: takers and matchers also give in the context of networks, but they tend to give strategically, with an expected personal return that exceeds or equals their contributions. When takers and matchers network, they tend to focus on who can help them in the near future, and this dictates what, where, and how they give. Their actions tend to exploit a common practice in nearly all societies around the world, in which people typically subscribe to a norm of reciprocity: you scratch my back, I’ll scratch yours. (2013)

However, Grant reports that even though takers and matchers get ahead, givers end up creating the widest network and become the most successful (as long as they do not end up as doormats for takers). If you are a giver, gossip once again comes in handy; matchers and other givers do not appreciate takers and will share this information widely among their co-workers. Unfortunately, takers are fakers, and that can make them hard to identify. Everyone talks like they are a good member of the team, so be sure to watch closely and remember that being agreeable is not the same as contributing.

Of course, sometimes you cannot avoid working with a taker. To help you understand what strategy you should use, we need to acquaint you with some economic game theory, specifically a strategy called tit for tat. This is usually a matcher strategy, as it requires you to match what the other player does. It will maximize your gains if you are dealing with a giver or matcher and minimize your losses if you are dealing with a taker. Thus, it is a max/min strategy. To understand how this strategy works, we can talk in terms of cooperating with or not cooperating with your teammate. The strategy works like this:

1. In the first round you presume good will and cooperate with your teammate.
2. You see if your teammate reciprocates by cooperating in the first round.
3. If your teammate does not cooperate in the first round, you stop cooperating until the teammate cooperates.
4. Then you return to cooperating.
5. If the teammate again does not cooperate in any round, you then do not cooperate in the next round.

Another way to look at this is that you (assuming you are a giver or matcher) begin by cooperating and then copy your teammate(s)’ strategy from the previous round. Here is how it might look in a series of rounds:

Table 2.2. Tit for Tat Strategy

Round   YOU             YOUR TEAMMATE
1       Cooperate       Not cooperate
2       Not cooperate   Not cooperate
3       Not cooperate   Not cooperate
4       Not cooperate   Cooperate
5       Cooperate       Cooperate
6       Cooperate       Not cooperate
7       Not cooperate   Cooperate

How does this translate to the real world of work? Well, imagine that a fellow worker comes to you to ask for advice or help with a project. You are a giver or a matcher and you help them. Then, you need help yourself and that person has excuses or does not answer your emails. Obviously, your natural tendency is to not help them the next time they ask.
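For readers who like to see a strategy spelled out mechanically, here is a minimal Python sketch of tit for tat, plus the generous variant discussed a little further on. The move encoding (“C” for cooperate, “N” for not cooperate) is just a convention, and the teammate’s sequence of moves simply replays Table 2.2.

```python
import random

def tit_for_tat(their_history):
    # Cooperate in round 1; afterward, copy the teammate's previous move.
    if not their_history:
        return "C"
    return their_history[-1]

def generous_tit_for_tat(their_history, forgiveness=1/3):
    # Like tit for tat, but forgive a defection about one time in three,
    # as in the "generous" variant described in the passage below.
    if their_history and their_history[-1] == "N" and random.random() < forgiveness:
        return "C"
    return tit_for_tat(their_history)

# Replay Table 2.2: "C" = cooperate, "N" = not cooperate.
teammate = ["N", "N", "N", "C", "C", "N", "C"]
my_moves = [tit_for_tat(teammate[:i]) for i in range(len(teammate))]
print(my_moves)  # ['C', 'N', 'N', 'N', 'C', 'C', 'N'] -- the YOU column
```

Because generous_tit_for_tat is randomized, repeated runs will sometimes keep cooperating after a defection rather than retaliate; that occasional forgiveness is exactly what distinguishes the generous variant, as the passage below explains.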
We can also humanize the strategy to make it feel more familiar:

1. Tit for tat is generous in that it starts out cooperating in the first round.
2. Tit for tat has a strong sense of fairness in that it punishes the teammate by not cooperating in a subsequent round if the teammate does not return favors.
3. Tit for tat is forgiving because if the teammate starts to cooperate, you will return to cooperating.
4. Tit for tat is non-envious because by cooperating, both of you are gaining and you are not competing and striving to get ahead of your teammate.

Are these not characteristics you want your children to have? There is also a good chance that many of us already use this in our personal interactions.

However, tit for tat is not the only strategy. Grant reminds us that givers are the most successful people in the workplace, since they develop the widest and strongest networks. Citing Martin Nowak’s book Super Cooperators, Grant says that the best strategy for givers (or wannabe givers) is generous tit for tat. This is because Nowak found that it is more advantageous to alternate between giving and matching in personal interactions. As with regular tit for tat, you begin by cooperating, assuming good will on the part of your teammates. If your partner competes rather than cooperates, however, you do not always retaliate: for every three times your teammate competes, you compete twice in response and cooperate once. In other words, instead of competing every time the other player competes, you compete only two-thirds of the time. According to Nowak, “Generous tit for tat can easily wipe out tit for tat and defend itself against being exploited by defectors” (2011). It achieves the desired goal of encouraging givers and punishing takers, but it is not too punitive. It can also be called a “trust but verify” strategy. According to Grant:

Generous tit for tat achieves a powerful balance of rewarding giving and discouraging taking, without being overly punitive. It comes with a risk: generous tit for tat encourages most people to act like givers, which opens the door for takers to ‘rise up again’ by competing when everyone else is cooperating. But in a world where relationships and reputations are visible, it’s increasingly difficult for takers to take advantage of givers (2013).

Promotions

Promotions are the way to get salary increases beyond a simple cost-of-living raise. The concept in human resources is that a more complex job deserves a higher salary. The best way to get a promotion is to do an excellent job in your current position. Also, volunteer for extra work if your boss asks. You are proving that you are a team player and an engaged employee, and your boss will trust you with increasing responsibilities.

Women at Work

In 2020 women made up 47% of the US labor force. However, analyses of women’s compensation and place in the organizational hierarchy reveal ongoing imbalances when compared with their male counterparts. At the highest level, in 2021 only 41 CEOs of the Fortune 500, or 8.2%, were women. The gender wage gap is a recognized phenomenon that has been widely studied. In 2019 women were making 82.3 cents for every dollar of men’s earnings. Although it has narrowed significantly since 1979, when it was 62.3 cents, the gender wage gap remained relatively stable through the 2010s.
This is true when examined across racial/ethnic and occupational categories (highlights of women’s earnings in 2020). Among the many causal factors, researchers have identified a motherhood wage penalty that is variously attributed to productivity differences and discrimination (De Linde Leonard, 2020; Gallen, 2018; Correll, 2007).

For women in the early phase of their careers, there are several considerations that may lead to more positive salary and promotional outcomes. According to Adam Grant (2013), women are not as willing as men to advocate for more money during salary negotiations. Understanding this tendency may help to resist the urge to accept the first offer. It is often prudent to step back, take an objective perspective, and get some advice from a trusted colleague before responding to a salary offer.

Early career promotions to management are a second factor that women should pay attention to. In a Wall Street Journal article largely based on the influential study Women in the Workplace, reporter Vanessa Fuhrmans explains that it is “…early in women’s careers, not later, when they fall dramatically behind men in promotions…Though women and men enter the workforce in roughly equal numbers, men outnumber women nearly 2 to 1 when they reach that first step up—the manager jobs that are the bridge to more senior leadership roles.” Although companies advance women already in management positions, there is not a similar effort to promote women to that first management position. As Haig Nalbantian, a labor economist at the global human resource consulting firm Mercer, explains, companies need to “position women and minorities to succeed in the roles that are likely to lead to higher-level positions” (Fuhrmans, 2019). Senior partner at McKinsey & Co. and contributor to Women in the Workplace, Lareina Yee comments that “[f]ew efforts are likely to remedy the problem as much as tackling the gender imbalance in initial promotions to management” (Fuhrmans, 2019). In early career employment searches, women can choose to seek out companies with a positive record of advancing women. Once employed, they should be proactive in understanding the expectations for promotion into management.

Employment Discrimination

Contemporary employment discrimination law developed mainly out of Title VII of the Civil Rights Act of 1964, which prohibits employment discrimination based on race, religion, national origin, and sex. Subsequent legislation and legal interpretations have extended employment discrimination protections to cover age, disability, pregnancy, and other categories. On June 15, 2020, the Supreme Court ruled that Title VII also extends protection to LGBT employees. The Equal Employment Opportunity Commission (EEOC) is the agency responsible for the enforcement of federal employment discrimination laws. These laws also protect those who report discrimination from retaliation. States have laws prohibiting employment discrimination that are similar to and sometimes more expansive than federal laws (Bennett-Alexander, 2018); the state agencies that enforce them are referred to as fair employment practice agencies (FEPAs).

Sexual harassment is a form of employment discrimination that can occur in a variety of ways.

• The victim, as well as the harasser, may be a woman or a man.
• The victim may be of the same or opposite sex.
• The harasser may be the victim’s supervisor, an agent of the employer, a supervisor in another area, a co-worker, or a nonemployee, such as a vendor or customer.
• The victim does not have to be the person harassed but can be anyone affected by the offensive conduct.
• The harasser’s conduct must be unwelcome.

If you report sexual harassment, whether to your supervisor or someone in HR, legally they must investigate the complaint and, if it is found credible, take prompt action to try to stop the behavior. Ideally, it is best to understand your rights and responsibilities regarding employment discrimination before encountering a problem. Familiarize yourself with your company’s policies on discrimination and the procedures of the EEOC and relevant state FEPAs. If you feel you have been the subject of workplace discrimination, the EEOC website is a good place to begin figuring out the most appropriate way to respond. There you can find the relevant laws, the role of enforcement agencies in interpreting and administering the laws, and how to file charges. It is important to note that, with the exception of violations of the Equal Pay Act (EPA), you can only file a job discrimination lawsuit under federal law after you have filed charges with the EEOC.

Resources

There are a number of resources that will help you identify companies that are committed to diversity and inclusion, as well as resources to consult if you have been the subject of harassment or discrimination in the workplace. Several are listed here.

• How to identify companies that value diversity, equity, and inclusion
• How to identify bias in a job listing
• Knowing your rights as an employee
• What to do if you have been discriminated against
• How to recognize and respond to workplace stressors or microaggressions

Well-Being

In 2010, Ben Bernanke gave a commencement address in which he described “the ultimate purpose of economics”:

[It] is to understand and promote the enhancement of well-being. Economic measurement accordingly must encompass measures of well-being and its determinants…. Interestingly, income and wealth do contribute to self-reported happiness, but the relationship is more complex and context-dependent than standard utility theory would suggest. Other important contributors to individuals’ life satisfaction are a strong sense of support from belonging to a family or core group and a broader community, a sense of control over one’s life, a feeling of confidence or optimism about the future, and an ability to adapt to changing circumstances…. Psychological wellness, the level of education, physical health and safety, community vitality and the strength of family and social ties, and time spent in leisure activities.

That is a pretty large list of things that determine your well-being. You will recognize some from traditional economics, while others are from this new view on economics.
We can put them in a list to make them a little clearer:

Determinants of Well-Being

• Gross domestic product per capita
• Personal consumption expenditures
• Household income
• Household wealth
• Changes in the distribution of income, wealth, or consumption
• Degree of upward mobility in material measures of well-being
• Indications of job security and confidence about future employment prospects
• Households’ liquidity buffers or other measures of their ability to absorb financial shocks
• A strong sense of support from belonging to a family or core group and a broader community
• A sense of control over one’s life
• A feeling of confidence or optimism about the future
• An ability to adapt to changing circumstances
• Psychological wellness
• The level of education
• Physical health and safety
• Community vitality and the strength of family and social ties
• Time spent in leisure activities

Additionally, the Organisation for Economic Co-operation and Development (an association composed of developed nations) has created a Better Life Index composed of elements that increase well-being.

OECD Better Life Index

• Housing
• Income
• Jobs
• Community
• Education
• Environment
• Civic Engagement
• Health
• Life Satisfaction
• Safety
• Work-Life Balance

What makes you happy? (Maslow’s Hierarchy of Needs)

People often conflate “happiness” with “life satisfaction.” According to Bernanke, researchers define happiness as a transitory emotion that is influenced significantly by your current circumstances, including the weather and even the time of day (2010). On the other hand, they use life satisfaction to refer to a long-term state of contentment and well-being. Psychologist Abraham Maslow captures the conditions that give humans life satisfaction in his Hierarchy of Needs (1943). These are usually portrayed as a pyramid (though Maslow did not initially present it this way) in order to represent Maslow’s contention that each level must be achieved before progressing to the next level. For example, if a person does not have their physiological needs met, they will be focused on those before pursuing safety needs, which is the next step on the pyramid. While most of the needs might seem pretty obvious, I want to point out that the self-actualization need can include things like partner acquisition, parenting, pursuing goals, and utilizing and developing talents and abilities. It should also be noted that Maslow would later revise and expand his Hierarchy of Needs to include:

• Cognitive needs
• Aesthetic needs
• Transcendence

For Maslow, transcendence included the need to help others and to seek spiritual transcendence. Other researchers have added two significant ideas to Maslow’s Hierarchy. The first is that the steps are fluid, and the pursuit of various goals can overlap at different stages in our lives. Second, you can achieve high levels of self-actualization, but if the lower needs have not been met, you will be forever trying to fulfill them. The news is filled with stories of stars with wealth and fame who missed the needs lower on the pyramid and had tragic ends to their lives.

What makes you happy? (Ben Bernanke)

I find it pretty amazing that Ben Bernanke, Chair of the Federal Reserve—the bastion of capitalism—would give a commencement address on the economics of happiness. In fact, Bernanke did just that at the University of South Carolina on May 8, 2010 (Bernanke 2010). This is exactly the kind of advice that you are looking for in this book!
Bernanke opened his address by saying,

As you might guess, when thinking about the sources of psychological well-being, economists have tended to focus on the material things of life…. This traditional economist’s perspective on happiness is not as narrow and Scrooge-y as you might think at first. There is now a field of study, complete with doctoral dissertations and professorships, called ‘the economics of happiness.’ The idea is that by measuring the self-reported happiness of people around the world, and then correlating those results with economic, social, and personal characteristics and behavior, we can learn directly what factors contribute to happiness (2010).

What gives people life satisfaction is not just material wealth. In fact, although rich people in developed nations self-report that they are somewhat happier than poor people in those nations, people in poor nations report that they are pretty much just as satisfied with their lives as those in rich nations. As a matter of fact, in the United States, real per capita income almost tripled between 1946 and 1991, but average happiness did not change at all. This finding is called the Easterlin Paradox, named after the researcher who discovered it. Similarly, Easterlin also found that as countries around the world get richer (economists measure this as Gross Domestic Product per capita), people do not report being happier. And in comparing rich countries to poor countries, Easterlin also found that once you get above a certain amount of income that satisfies basic material needs, people in rich countries do not report being much happier than people in poor countries (Easterlin, 1974). Additional research on the Easterlin Paradox has shown that even though people in rich countries may be more satisfied than people in poor countries, the increase in happiness due to greater wealth is moderate (Bernanke 2010). Do not forget that rich countries have more leisure time, better health care, often less corruption, and other benefits.

The explanation for the Easterlin Paradox, according to Bernanke, is that relative wealth is much more important than absolute wealth. A behavioral economic phenomenon called hedonic adaptation is also at work here (Frederick and Loewenstein, 1999). Humans are adaptable, and, like lottery winners who seem to return to their base level of happiness within six months, adaptation to any additional income causes us to return to our base level of life satisfaction.

Finally, Bernanke relates what the economics of happiness tells us will give us life satisfaction. Here are some of the highlights from his 2010 address:

• “Happy people tend to spend time with friends and family and put emphasis on social and community relationships.”
• “Another factor in happiness, perhaps less obvious, is based on the concept of ‘flow.’ When you are working, studying, or pursuing a hobby, do you sometimes become so engrossed in what you are doing that you totally lose track of time? That feeling is called flow.”
• “Another finding is that happy people feel in control of their own lives. A sense of control can be obtained by actively setting goals that are both challenging and achievable.”
• “Finally–and this is one of the most intriguing findings–happiness can be promoted by fighting the natural human tendency to become entirely adapted to your circumstances.
One interesting practical suggestion is to keep a 'gratitude journal,' in which you routinely list experiences and circumstances for which you are grateful."

You will no doubt see some parallels to Maslow's Hierarchy of Needs. Maslow spent a lot of time studying exceptional people such as Albert Einstein to see what made them feel fulfilled. His Hierarchy of Needs is a prescription for what will make humans happy—not just an academic study in developmental psychology.

What makes you happy? (Authentic Happiness)

When Dr. Martin Seligman was elected President of the American Psychological Association in 1998, he promoted an initiative in psychology to study what makes people happy along with the traditional subject of what makes people sick. This field of study became known as Positive Psychology. In his book, Flourish, Seligman presents his theory of what makes people happy (2011). It consists of five practices that are memorialized by the mnemonic "PERMA."

Positive Emotions

Cultivate positive emotions. Pursue activities that bring you happiness and life satisfaction. (We've talked about these activities above in the sections on Maslow and Bernanke.) It also means doing gratitude exercises, such as listing three things you are grateful for each day before you go to sleep.

Engagement

Deep engagement in an activity is known as flow. In his 2011 book, Seligman provides a series of questions so that you can determine if you were in flow:
• Did time stop for you?
• Were you completely absorbed by the task?
• Did you lose self-consciousness?

According to the principles of Positive Psychology, the way to become engaged in your work is to find a career that uses your signature character strengths (see Chapter 1). You can find your Signature Character Strengths by taking the VIA Survey of Character Strengths questionnaire on the Authentic Happiness website.

Relationships

Specifically, we should focus on positive relationships. As Bernanke said, friends and family give us the greatest life satisfaction, and you can practice daily reminders that you are grateful for friends and family. However, it is not just about what your friends and family can do to make you happy. Maslow's revised hierarchy places transcendence at the top. Transcendence is achieved by activities that focus on helping others reach life satisfaction and on pursuing your spiritual virtues. According to Seligman, positive relationships require both the capacity to love and the capacity to be loved.

Meaning

Meaning is belonging to and serving something bigger than yourself (Seligman 2011, p. 17). It can be devotion to your family, to a cause, or to a spiritual belief. Meaning has both a subjective motivation (the feeling we get) and an objective motivation (that caring for others is an important virtue). Psychologist Viktor E. Frankl, in his book, Man's Search for Meaning, claims that our quest for meaning in our lives is one of the fundamental human aspirations (2006). According to Frankl, having meaning in our lives is so important that it can make the difference between life and death. During the Holocaust, Frankl was interned in a concentration camp. He reports in his book that among seemingly equally healthy prisoners in the camp, those who had expressed a belief in a meaning to life or a belief in a higher power survived, while those who did not see any meaning in life disproportionately perished (2006).

Accomplishment

People, according to Seligman, pursue accomplishment, achievement, success, and mastery for its own sake (2011).
Some only care about winning, measured by the number of defeated opponents or the amount of money in their bank account. Some, on the other hand, pursue accomplishment to feel competent or to achieve their full potential. In Maslow's Hierarchy, this is reflected in the Need for Esteem and the Need for Self-Actualization. Hopefully you see by now the common threads between Maslow, Bernanke, and Seligman as to what gives us happiness and well-being. Due to hedonic adaptation, we know that it certainly is not about having more money. Instead, the most important factor for achieving well-being is developing positive relationships and spending time with family and friends.

Bad Habits and How to Change Them

A fundamental tenet of economics is that individuals seek rewards. People take actions and seek to acquire goods and services that give them "utility." English jurist, philosopher, and social reformer Jeremy Bentham (1748-1832), who invented the philosophy of Utilitarianism, defined "utility" as "satisfaction." According to Bentham, the fundamental axiom of Utilitarianism is that "it is the greatest happiness of the greatest number that is the measure of right and wrong." More contemporary economists define utility as "well-being." The new field of neuro-economics contends that individuals perform actions and seek to acquire goods and services because these activities give individuals a reward of dopamine in the area of the brain known as the ventral tegmental area (Wargo et al., 2010). In both mice and humans, habits form by repetition of a certain activity. We do the activity because the dopamine neurons release dopamine, a neurotransmitter, thereby giving us a reward and encouraging us to repeat the action either at the time or later. As we (or the mouse in the maze) repeat the action, it becomes less and less mediated by the ventral tegmental area and more controlled by the basal ganglia, the most primitive part of the brain. Eventually, the basal ganglia take over the action, and it is no longer mediated by the dopamine reward system. In essence, this is why it is so hard to change a habit, whether it is good or bad. Once an action becomes a habit, it is more closely related to an instinct in an animal than a conscious choice. So, how do we change or extinguish a bad habit? Cassie Shortsleeve, in Time Magazine, reviews the actions that scientists recommend to eliminate a bad habit:
1. Replace a bad habit with a good one. You must keep repeating the good habit, since a scientific study found that it takes an average of 66 days to form a new habit.
2. Reduce your stress levels. A lot of bad habits (smoking, sugary drinks) are used to alleviate stress because they give a dopamine feel-good high. Do other things to alleviate stress, like meditation or a walk.
3. Know the cues that trigger the habitual response, as in having a cigarette after every meal. Try to interrupt the cues.
4. Create for yourself a better reason for quitting the habit. This means creating intrinsic motivation for yourself, such as reminding yourself that you will be healthier without smoking or overeating.
5. Set better goals than just reacting to triggers. If you eat a cookie every time you walk in the kitchen, avoid the kitchen in between meals. Like an alcoholic, you want to throw out the liquor and avoid triggers that will remind you of the habit (2018).
We should note, though, that "addiction" is neurologically different from "habit." All addictive drugs hijack the dopamine system directly, not the basal ganglia, and it is harder to alleviate an addiction than it is to break a bad habit. Unfortunately, we do not have the space here to discuss addiction in detail.

Savings is One Key to Well-Being

In the Atlantic, Neal Gabler cites an annual Federal Reserve survey designed to "monitor the financial and economic status of American consumers" (2016). One survey question asked respondents how they would pay for an emergency expense of \$400. Almost half (47%) of those taking the survey said that either they would cover the expense by borrowing or selling something, or they would not be able to come up with the \$400 at all. Many experts believe this is due to the credit card debt that Americans have taken on. According to the most recent data from the Survey of Consumer Finances by the U.S. Federal Reserve, the average credit card debt of U.S. households is approximately \$5,700. At the same time, people's dependence on credit card debt has been pushed by banks. Last year over two billion credit card solicitations were mailed out by banks and financial agencies. Chapter 11 talks about savings in great detail, but it's important to know that having emergency savings can significantly add to your personal well-being. Personal finance experts recommend that you have a goal of accumulating six months of expenditures (mortgage, food, etc.) in a secure bank account. In good economic times, 90% of those who are laid off find a new job within six months. Further, regular unemployment compensation typically lasts only 26 weeks, and the weekly benefit, depending on which state you live in, ranges from \$235 (Mississippi) to \$650 (Connecticut). This payment will likely not cover the mortgage, the utilities, and food for most of us. A six-month nest egg will dramatically reduce our stress while looking for a new job.
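To make the six-month rule of thumb concrete, here is a minimal sketch in Python. Every figure in it is a hypothetical placeholder for your own budget, not a recommendation:

```python
# A rough sketch of the six-month emergency fund rule of thumb.
# All dollar figures are hypothetical placeholders for your own budget.

monthly_expenses = {
    "rent_or_mortgage": 1400,
    "food": 450,
    "utilities": 250,
    "transportation": 300,
    "insurance": 200,
    "other": 300,
}

monthly_total = sum(monthly_expenses.values())  # 2,900 in this example
target = 6 * monthly_total                      # six months of expenses

monthly_saving = 350                            # hypothetical monthly set-aside
months_to_goal = target / monthly_saving

print(f"Monthly expenses:       ${monthly_total:,}")
print(f"Six-month fund target:  ${target:,}")
print(f"Months to reach target: {months_to_goal:.0f}")
```

With these made-up numbers, the target is \$17,400 and takes about 50 months to reach at \$350 a month, which is exactly why the experts say to start the habit early.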
Introduction

Behavioral Economics uses psychology, neuroscience, and economics to examine how humans make economic decisions. It includes the study of the biases, rules of thumb, inaccurate or incomplete information, propaganda, and other influences that interfere with our making optimal decisions. It also presents prescriptions for countering these irrational influences in order to make better decisions as employees, citizens, and family members. To begin to understand this field, we should look at one of the most pervasive and consequential biases that has affected our entire economy: the oft-repeated belief that a company's purpose is to "maximize its profits" and to look out only for the owners' and shareholders' interests. This idea has roots in the writing of economist and Nobel Prize Laureate Milton Friedman. In an article in The New York Times Magazine, Friedman stated,

…there is one and only one social responsibility of business—to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud (1970).

The University of Chicago Department of Economics, where Friedman taught, was (and still is) the center of conservative free market economics in the United States. His general argument was that any employee of a business or corporation is an agent of the owner or its stockholders and has no right to spend their money in any way other than to increase profit. The owners and stockholders can then do anything they want with their profits, including spending it on some "social purposes." Whether because of simple greed or true capitalist philosophy, this became a battle cry of capitalists and was widely quoted and used in corporate mission statements. It also was used in management and finance textbooks as a fundamental guiding principle and turned up in corporate annual reports as a mission statement to "maximize shareholder value." The problem with Friedman's entire theory is that it is simply wrong. Since this is a text about financial literacy, I will not spend a long time discussing Friedman's errors. However, I will make three points:
1. Corporations and partnerships must apply to a state to receive permission to incorporate. This is not a right but a privilege. For example, according to Pennsylvania state laws, the state grants the right to incorporate for the "good of the Commonwealth."
2. Friedman's view was not the majority view when he voiced it. In the mid-twentieth century, firms were an integral part of their communities, and the prevailing view was that firms had a responsibility to their shareholders, employees, and communities (Wells, 2020).
3. The "free market competition" that Friedman envisioned in his theory does not exist in most markets. It is a fiction made up by economists.

Perhaps the most telling repudiation of Friedman's theories is that recently the CEOs of most of the largest corporations stated definitively, in what can only be called a manifesto, that the purpose of a corporation is not what Friedman said it was, but that corporations do have a social responsibility. The press release for this new manifesto was promulgated by the 181 CEO members of The Business Roundtable on August 20, 2019, which stated,

Since 1978, Business Roundtable has periodically issued Principles of Corporate Governance.
Each version of the document issued since 1997 has endorsed principles of shareholder primacy – that corporations exist principally to serve shareholders. With today's announcement, the new Statement supersedes previous statements and outlines a modern standard for corporate responsibility…we share a fundamental commitment to all of our stakeholders…. Each of our stakeholders is essential. We commit to deliver value to all of them, for the future success of our companies, our communities and our country (2019).

It is my sincere hope that this new philosophy will become best practice in business. However, as of this writing, this pledge is over three years old, and according to the empirical evidence, this has not been the case (Colvin, 2021). Dennis A. Muilenburg, Chairman, President, and CEO of The Boeing Company, signed the pledge. A recent Congressional investigation into two crashes of the Boeing 737 MAX (a model the Federal Aviation Administration later grounded due to safety issues) casts doubt on his commitment. John Cassidy, an economics reporter for The New Yorker, summarized some of the report's key findings:

It illustrates how Boeing's management prioritized the company's profitability and stock price over everything else, including passenger safety. Perhaps even more alarmingly, the report shows how the F.A.A., which once had a sterling reputation for independence and integrity, acted as a virtual agent for the company it was supposed to be overseeing (Cassidy, 2020).

This is known as "regulatory capture"—when a company dominates the regulator that is supposed to be overseeing it. In 2020, KKR Advisors and TCP published a more comprehensive analysis of corporate responsibility, reviewing all 500 companies in the S&P 500 and all 300 companies in the European FTSEurofirst Index. They were able to compile extensive data on 619 of these 800 companies and used a machine-learning lab to analyze millions of data points. The report reached these key conclusions:
1. Business Roundtable's ("BRT") Signatories' "Purpose-Washing" Unmasked: Since the pandemic's inception, BRT Signatories did not outperform their S&P 500 or European company counterparts on this test of corporate purpose.
2. Powered by Purpose: Companies with long track records of strong performance outperformed more than expected, while laggards' underperformance became more pronounced, demonstrating how resilient companies were further fortified during this corporate purpose stress test.
3. Speed matters: Proactive, substantive responses to the pandemic and inequality crises had a discernible positive impact. Slow responders underperformed.
4. Global challenges: U.S. and European companies performed roughly the same on this test of corporate purpose.
5. Shareholder capitalism is no longer fit for purpose: TCP highlights the business case for ushering in a new form of stakeholder-aligned capitalism (Cassidy, 2020).

There are some positive changes underway in the corporate world.
1. The Pandemic Recession and the resultant labor shortage have increased wages, benefits, and working conditions for workers in the U.S.
2. Shareholder and popular activism has prompted corporations to promote their "green" efforts.

These changes, while welcome, do not seem to be a result of the BRT's new manifesto of the Principles of Corporate Governance. Rather, they seem to be the result of market forces and political pressure.
Decision Biases and Information Literacy

An important area of economics is the topic of making decisions based on imperfect information, or decision making under uncertainty. While decisions under risk are defined quite narrowly in economics as decisions where we know each outcome's probability, decisions under uncertainty are decisions where we do not know all the outcomes or their probabilities. When trying to make a decision under uncertainty, typically the first step is to search for more information, as you want to reduce a decision under uncertainty to a decision under risk. Thus information literacy is a critical component of decision making. At the university where I teach, information literacy is a required component of the writing-intensive courses that every major requires. In this "Post-Truth Era," as some have called it, finding factual information has become extremely complicated. Nowhere is this more evident than in the media, where misinformation and conspiracy theories try to bias our opinions about everything from vaccinations, to mask-wearing, to the last presidential election. This is why information literacy is such a crucial part of decision making. The chart from Ad Fontes Media gives an excellent analysis of the bias of media in America; a continually updated version is available on the Ad Fontes Media website. For financial literacy, you can read serious mainstream publications like the Wall Street Journal, the New York Times, and the Economist via library access or student subscriptions.

Traditional Economic Assumptions About People's Behavior

Professor Eugene Fama of the University of Chicago originated the efficient market hypothesis. In its simplest form, the hypothesis states that today's stock prices already have all available information factored into them, and that past price performance has no relationship with the future. This means it is impossible to use technical analysis of past price performance to achieve exceptional returns. The assumptions behind this hypothesis are that investors are rational and that markets are perfectly competitive. However, we often see bubbles and busts in the stock market, a telling criticism of the efficient market hypothesis. Behavioral economics, on the other hand, studies how people actually make financial decisions, as opposed to how they should make decisions. It finds that people are often irrational: they have biases that skew their decisions, they use heuristics to make choices, and they are impatient for instant rewards. These tendencies can cause people to make sub-optimal choices. However, behavioral economics is still a rigorous science. People do not have to be rational in order to develop a model of their behavior; they only have to be predictable—that is, predictably irrational. The standard economic model of consumer choice has rigid assumptions that behavioral economists believe are either inaccurate or severely limited. Below, I have listed some of these assumptions along with my thoughts on them.

Assumption: Economic agents are rational.
• While it's true that people sometimes behave rationally, most of the time their actions are motivated unconsciously and emotionally.

Assumption: Economic agents are motivated by expected utility maximization.
• People are often motivated by material rewards, as this assumption states. However, it is important to remember people are motivated by a whole host of non-material rewards as well.
Assumption: An agent's utility is governed by purely selfish concerns, without taking others' utilities into consideration.
• The falsity of this assumption should be evident to you without much explanation. People care very much about others' happiness.

Assumption: Agents are Bayesian probability operators.
• This is accurate. Bayesian logic differs from classical scientific logic. Scientific logic looks at data with no preconceptions, while Bayesian logic holds that we have preconceived notions about most phenomena. Those preconceived notions are tested, and if we get an error message, we revise our preconceived notion.

Assumption: Agents have consistent time preferences according to the Discounted Utility Model (DUM).
• People have a very strong present bias, so future rewards are not anywhere near as salient as present rewards.

Assumption: All income and assets are completely fungible. Fungible means that they are completely substitutable.
• Assets are equivalent to income in that they generate an income stream; that is, an asset times a rate of return equals income (e.g., at a 10% return, an asset worth \$10,000 pays \$1,000 per year: \$10,000 × 0.10 = \$1,000). However, people use mental accounting. We consider money we earn different from money we get as a gift, and money in a savings account different from money in a checking account. We then treat this money differently (Wilkinson, 2008).

Irrational (biases, heuristics, etc.)

Nobel Prize winner Herbert Simon referred to the sometimes irrational cognitive processes that humans use to process information and make decisions as "bounded rationality." However, present-day behavioral economists are more focused on irrationality than bounded rationality. There are plenty of examples of people letting their biases and heuristics wrongly influence their decisions and opinions. For starters, we merely need to recall the partisan interpretation of everything each of the presidential candidates said during the 2020 election cycle. Some other anomalies that violate the assumptions of the standard economic model:
• If you take a weekend trip to New York City and eat at a restaurant you will likely never visit again, why do you leave a tip?
• Why is the average return on stocks so much higher than the average return on bonds?
• Why are people willing to drive across town to save \$5 on a \$15 calculator but not willing to drive to save \$5 on a \$125 jacket?
• Why do people forever make resolutions to stop smoking, to join a gym, or to go on a diet, only to abandon them after about three weeks?
• Why are people willing to pay \$8 for a hot dog at a sports stadium but not from a street vendor?
• Why do people buy a new TV on credit when they have plenty of cash in their savings accounts to afford the TV?
• When people go to an event intending to purchase a \$30 ticket and find that \$50 is missing from their wallet, 88% say they will still buy a ticket. However, if they had already bought the ticket and find the ticket missing from their wallet, only 46% say they would buy another ticket (Tversky and Kahneman, 1981).

Homo sapiens have been around for roughly 300,000 years. Over that time, evolution has endowed us with deep impulses that guide our decisions. The following are a few examples of these impulses:
1. Humans use heuristics to make decisions, not rational thought.
2. Humans approach and are impatient for expected rewards.
3. Humans avoid expected losses.
4. Humans place more importance on relative income than absolute income.
5. Humans feel an actual loss twice as much as an equivalent gain.
6. Humans hate uncertainty.
7. Humans are a super-cooperative species.
8. Humans have a profound sense of fairness.
9. Humans are willing to punish third-party cheaters.
10. Humans have inertia; they resist change.
11. Humans act into a new way of thinking, rather than think into a new way of acting.

People's financial decisions are a good example of this kind of irrational behavior. Nowhere is this more evident than in the stock market, especially among amateur investors, though even professionals are not immune. Meir Statman, a finance professor at Santa Clara University, catalogues some of the erroneous beliefs that biased individuals hold about the stock market in a Wall Street Journal article (2020). After publishing an earlier article debunking five myths that amateur investors believe, Statman received feedback from amateur investors who still believed they could "beat the market" (that is, beat the performance of market indices such as the Dow Jones Industrial Average, the S&P 500 Index, or the NASDAQ Composite Index). He aggregated these contrarian comments into six main categories, and I have summarized his response to each.

Average is for losers. By definition, diversified investors earn average returns. If they choose an index fund or an exchange-traded fund ("ETF"), they earn the returns of the market. Some stocks deliver low returns and others deliver high returns; these average out to the market return. Undiversified investors attempt to pick good stocks and shun bad ones, hoping to earn higher-than-market returns. In reality, though, they only think they earn higher returns.

OK, but I know what I am doing. One reader contended that a person who has run a business will have acquired skills such as reading a financial statement, knowing what makes a company successful, or other types of business acumen that will help them pick good stocks. Statman retorts that playing the stock market game as an amateur is like playing tennis against a top-seeded professional.

Reward requires risk. Another reader says that to reap higher returns you have to take higher risks, and that this is just the nature of the market. Statman says that in order to reap higher returns with an undiversified portfolio, you need luck, not skill. A diversified portfolio will gain when the market gains. An undiversified portfolio may take a dive even when the market is gaining and decimate your wealth.

Time itself is a diversification. Another reader advocates holding stocks for five years or more, implying that the risk of any portfolio declines over time. This is not true, says Statman. Even a diversified portfolio can lose value over time, as some companies go out of business. But a single stock or a small portfolio is exposed to many more things over a longer horizon that can decrease its value. Competition could rise, or an innovation could disrupt its market. Even Tesla, a current high-flying stock, is about to get a deluge of electric vehicle competitors from 2021 to 2025.

Dollar-cost averaging is another form of diversification. One reader suggested investing the same amount every month in an S&P 500 mutual fund. This is known as "dollar-cost averaging." Statman argues that the only value of dollar-cost averaging is reducing your regret should you change your mind over the course of a year.
For example, if you invest 10% of your cash on the first of every month, you will still have 100% of your money invested at the end of 10 months. If you invest in a diversified fund, you have the lower risk of a diversified fund. If you invest in an undiversified portfolio, you still bear the higher risk of that undiversified portfolio.

Just pick stocks of good companies. Is Tesla a better company than General Motors? Maybe, by environmental criteria. On the other hand, is General Motors stock a better buy than Tesla stock? Definitely yes, by any standard fundamental stock market analysis. Tesla is way overvalued in relation to its earnings; GM is not. Whether a company is "good" is certainly important, but whether or not a stock is a good buy is much more important to investing. That is, if you want to make money on your investment. Statman reports that a study of Fortune magazine's "America's Most Admired Companies" found that these companies' stocks had lower returns on average than stocks of spurned companies. Further, there was a correlation between increases in admiration over time and lower returns. (It should be noted, though, that this could be caused by investors bidding up the price when the firms get publicity and therefore reducing the returns.)

Biases

Behavioral economics has studied many biases that influence our decisions. The following are some of the more common ones (Table 3.1).

Table 3.1. Bias Types

Anchoring Bias
Definition: A cognitive bias that causes us to rely too much on the first piece of information on a topic. Subsequent information is interpreted based on the reference point of our "anchor."
Example: The first price a dealer gives us for a car sets our interpretation of the price of the car. If they lower the price, the price seems more reasonable, regardless of whether the price is too high.

Confirmation Bias
Definition: A cognitive bias in which we seek out information and notice it if it confirms an existing point of view. We tend to ignore or reject conflicting information that does not fit with our view.
Example: Picking a specific news or media source can limit what an individual is exposed to. People who have pre-existing party biases may tend to watch only networks that support their party and views, while rejecting or ignoring opposing channels.

Actor-Observer Bias
Definition: A bias where we attribute our own actions to external causes or situational factors while attributing others' actions to internal causes, such as personality traits or motives.
Example: If you are scheduled to interview someone, but they show up 30 minutes late, you may attribute their lateness to their personality. However, if the roles were switched and you were the one running late, you might not blame yourself but rather attribute it to traffic or other situational factors.

Correlation/Causation Bias
Definition: A bias where someone inaccurately perceives a cause-and-effect relationship based on an assumed association or correlation between two events.
Example: As ice cream sales increase, crime rates increase. However, ice cream sales do not "cause" crime. Rather, there is another variable likely affecting each, such as the summer heat.

Rhyme as Reason
Definition: A cognitive bias where rhyming statements are more easily remembered, repeated, and believed.
Example: O.J. Simpson's lawyer, Johnnie Cochran, used the phrase "If it doesn't fit, you must acquit." Cochran was referencing the gloves that were left at the murder scene. This phrase was considered a vital part of his defense.
Loss Aversion
Definition: Choosing to avoid a loss, even when there is potential to make an equal or even greater gain.
Example: Investors may hold their stock even while they are taking a loss so they can "at least break even" and sell for the price they paid. If they sell below the price they paid, they experience a loss.

Herd Instinct
Definition: The tendency for individuals to think and behave like the people around them.
Example: During the first few months of the COVID-19 pandemic, Robinhood and other trading platforms experienced a marked increase in new accounts, primarily new investors with little to no experience.

Information Bias
Definition: Using extra information to increase your confidence in a choice, even if the information is irrelevant.
Example: Believing that the more information that can be acquired to make a decision, the better, even if that extra information is not related to the decision.

Status Quo Bias
Definition: Preferring to keep things the same as they are.
Example: Rejecting new ideas just because they are new.

Halo Effect
Definition: Extending positive attributes of a person or brand to the things they promote or the opinions they hold.
Example: Any celebrity or athlete endorsement that creates goodwill for a brand.

In-group Bias
Definition: Preferring people who are part of your "tribe" and acting in ways that confirm membership in the group.
Example: Always trusting the views of your political party and voting accordingly.

Bizarreness Bias
Definition: Remembering material more easily if it is unusual or out of the ordinary.
Example: Remembering facts about dinosaurs more readily than those about more academic topics.

Google Effect
Definition: Not bothering to try to remember information that can be found online.
Example: Not caring about historical events because, "I can always look them up!"

Picture Superiority Effect
Definition: Learning and recalling concepts more easily when they are presented as a picture rather than as words.
Example: Advertising uses this to great effect; products are shown in the midst of very happy people. However, if an ad contained the words, "Beer makes you happy," many people would disagree.

Humor Effect
Definition: Remembering things more easily if they are presented in a funny or entertaining way.
Example: Believing that political cartoons are true and unbiased because they are funny.

Peak-end Rule
Definition: Judging an experience by its peaks (highs and/or lows) and how it ended.
Example: "All's well that ends well!"

Sources: Coglode Ltd. (2021); Kendra Cherry (2020); The Decision Lab (2022); Shahram Heshmat (2015); Connie Mathers (2020); Gretchen Hendricks (2021); Daniel R. Stalder (2018); Anthony Figueroa (2019); Itamar Shatz (2022); Craig Shrives (2022).

As the field of Behavioral Economics expands, researchers identify more and more biases that humans have. Thus far, researchers have catalogued 188 cognitive biases.

Nudges in Behavioral Economics

The co-author of Nudge, Cass Sunstein, describes nudges as "choice-preserving approaches that steer people in a particular direction, but that allow them to go their own way" (Thaler and Sunstein, 2008). They are not mandates but important "gentle pushes" that help people make good decisions. Nudges are very important in motivating people to take action on behaviors that are good for them. As one classic example of a nudge, a company automatically enrolls its employees in a retirement plan but allows them to opt out. In one experiment, this increased employee participation in contributing the maximum amount matched by the company from 40% to over 90% (Thaler and Sunstein, 2008). The following are ten kinds of nudges:
1. Default rules (e.g., providing automatic enrollment in programs, including education, health, and savings).
2. Simplification of current requirements (in part to promote take-up of existing programs).
3. Reminders (e.g., emailing or text messaging for overdue bills and coming obligations).
4. Eliciting implementation intentions (e.g., asking "do you plan to vote?").
5. Uses of social norms (e.g., saying "most people plan to vote," "most people pay their taxes on time," or "most people are eating healthy these days").
6. Increases in ease and convenience (e.g., making low-cost options or healthy foods visible).
7. Disclosure (e.g., sharing the costs associated with energy use, or as in the case of data.gov and the Open Government Partnership).
8. Warnings, graphic or otherwise (e.g., putting warning labels on cigarettes).
9. Precommitment strategies (e.g., having people commit to a certain course of action).
10. Information on the consequences of past choices (e.g., the use of "smart disclosure" in the US or the "midata project" in the UK).

Nudging Yourself

Humans are creatures of habit. When left to our own devices, we will more often than not fall back into our old habits, no matter how good our intentions. Setting up external nudges for ourselves can help us gain new and better habits. For example, you can set up a savings account connected to your checking account; the checking account can then "pay" your savings account a certain amount every month, as sketched below. This builds up your savings, which can then be used as an emergency fund or to invest in the stock market. Another way you might nudge yourself is to make to-do lists, both a daily list and a separate list for long-term projects. These lists should be numbered by priority, but keep in mind that they will only work if you look at them regularly. Usually, when we try to consider what task to do next, we are subject to availability bias: instead of focusing on the highest priority, we work on the first thing that comes to mind, which is often the easiest. Unfortunately, we have to do the hard things in life. Setting an alarm with a snooze function or, even better, two alarms fifteen minutes apart recognizes that we'd all prefer to turn off the alarm and sleep another hour in the morning. It will nudge us to eventually get up around the right time.
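Here is a minimal sketch of how that automatic-transfer nudge compounds. The transfer amount, horizon, and interest rate are all hypothetical assumptions, not advice:

```python
# A sketch of the "pay your savings account first" nudge described above.
# The transfer amount, horizon, and interest rate are assumptions, not advice.

transfer = 200        # automatic monthly transfer from checking to savings
months = 24           # two years of the nudge
annual_rate = 0.04    # assumed savings account APY

balance = 0.0
for _ in range(months):
    balance += transfer               # the automatic transfer lands
    balance *= 1 + annual_rate / 12   # interest credited monthly

print(f"Savings after {months} months: ${balance:,.2f}")
```

The point of the nudge is in the loop: the deposit happens every month without any decision on your part, so status quo bias works for you instead of against you.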
The Definition of Money

Money is most often defined as "a medium of exchange with no intrinsic value." This essentially means that whatever people accept as money can be used as money. If you go back in history, you will see that people have used a number of different things as money, some that had intrinsic value (such as gold and silver), and many that had no intrinsic value of their own (such as seashells and cocoa beans). Currently, all countries around the world use money that is known as fiat money. From Latin, this term means, "Let it be so." Essentially, this means that each country prints money on paper (or, in some cases, plastic), and that currency is not backed by anything of intrinsic value except the full faith and credit of the country's Central Bank. In the past, money was backed by silver (the silver standard) or gold (the gold standard). However, that came with its own set of problems. It meant that you had to have silver or gold equivalent in value to the total amount of money you had in circulation. This made it difficult to increase the supply of money in your economy, since you had to acquire enough silver or gold to back up the additional money you wanted to circulate. So the Central Banks of the world went off the "metal standard" for their currencies. The U.S. abandoned the gold standard in 1933 but allowed holders of dollar currency to convert them to gold at the fixed price of \$35 per ounce, an arrangement that was eliminated in 1971. The U.S. abandoned the silver standard in 1935. So, in a way, all paper money is fake! It is, of course, backed by "the full faith and credit" of the country that issued it, but that's the only thing backing it. That means that unstable countries might end up with currency that cannot be used as payment for oil or food. Even if it is accepted, it is only at a greatly depreciated value. The world's currencies fluctuate relative to each other according to the rules of demand and supply. For example, if you are trying to understand the exchange rate between the U.S. dollar and the Euro, consider how many U.S. dollars it would take to buy one Euro. If a lot of people who own dollars want to buy Euros, but fewer people who own Euros want to buy dollars, the Euro will appreciate relative to the dollar.

The Barter System

Some economies in the past did not use money; instead, they used the barter system. It is a simple system. Let's say that I have two extra bushels of corn, and I need some wheat. I will swap you my two bushels of corn for two bushels of wheat. The problem is that the barter system depends on what is called a coincidence of wants. Now let's say that I have three extra pigs, and I ask my neighbor to trade them for a cow. However, he does not have any cows he wants to trade, and he does not want any more pigs. That means I have to go searching for someone who wants to trade a cow for my pigs. Money solves this problem, because cows and pigs (and everything that is for sale) can be valued in terms of money. Instead of bartering, I can sell my pigs in the local marketplace and then use that money to buy a cow.

How Money Is Used

Money is used in several ways:
1. It is a medium of exchange. A medium of exchange is something that can be traded for goods and services. As we showed above, it solves the problem of the coincidence of wants.
2. It is a store of value. Money's function as a store of value allows you to hold on to money and buy something in the future, and the money is still accepted.
If you are going to hold onto money, you should, of course, not hide it under your pillow, but put it in a savings account and earn some interest on it. When we save money for our future retirement, it is functioning as a store of value, and we must have confidence in the money still being valuable when we retire.
3. It is a unit of account. Money functions as a universal yardstick that expresses the value of goods and services in a single measure. For example, your labor might be valued at \$15 per hour, and you can then take the money you earn and buy a dozen eggs at \$1.98 per dozen.

The Amount of Money in the U.S. (M2)

The money supply (M2) in 2019 was \$14,941,700,000,000. This is a lot of money. According to the St. Louis Federal Reserve Bank, the types of money that are counted in M2 are:
1. Savings deposits (which include money market deposit accounts)
2. Small-denomination time deposits (such as certificates of deposit under \$100,000)
3. Retail money market mutual funds
4. Currency and checkable deposits (the components of M1)

The Federal Reserve Bank

The Federal Reserve Bank of the United States is the Central Bank of the United States. Virtually all countries have a Central Bank. The main exception to this is the European Union, which created a common currency, the Euro, in 1999. The EU has 27 members, most of which currently use the Euro as their official currency. As a result, the EU created the European Central Bank, which functions as the Central Bank for countries using the Euro. The key functions of these Central Banks are threefold:
1. It monitors the banks and other financial institutions in the country to make sure they are following its rules and are acting in a financially responsible manner. The Central Bank has great power in this area and can shut down banks, either on its own or (in the United States) through the Federal Deposit Insurance Corporation, which guarantees the deposits at U.S. banks.
2. It controls key interest rates, such as rates for bank borrowings and, indirectly through the prime rate, commercial lines of credit for companies. It also indirectly influences longer-term rates such as car loans and mortgages.
3. It also controls the money supply.

These activities all together are called monetary policy. The Federal Reserve Bank (or "the Fed") is made up of three key entities:
1. The Federal Reserve Board of Governors. The seven governors are appointed by the President of the United States and serve for fourteen years each. Their terms are staggered so that one governor's term expires every two years. This arrangement prevents one President from controlling the Fed through their appointments. The Chair of the Federal Reserve Board of Governors is also appointed every four years by the President.
2. The Federal Reserve Banks. There are twelve Federal Reserve Banks in the United States, and these are effectively local offices of the Fed. The United States is divided into twelve Federal Reserve Districts, with a Federal Reserve Bank monitoring the commercial banks in each district, and each Federal Reserve Bank is headed by a President.
3. The Open Market Committee. The Open Market Committee dictates monetary policy. It has twelve members and is composed of the seven members of the Board of Governors, the President of the New York District Federal Reserve Bank, and four additional Presidents of the District Federal Reserve Banks, each of whom serves on a rotating basis for one year. The Open Market Committee meets every six weeks to decide on monetary policy.

In addition to the function and structure of the Fed, we also need to understand the mandate of the Fed.
According to the various laws creating and underpinning the Federal Reserve Bank, it has a dual mandate:
• To maintain low and predictable rates of inflation
• To maintain maximum levels of employment that are sustainable

The Fed meets these mandates by controlling the amount of money in the economy. This indirectly influences the amount of goods and services bought in the economy. The total amount of goods and services made and purchased in any economy in a specific time period (usually a year) is called the Gross Domestic Product (GDP) of the economy. If we look back over the last forty years of the U.S. economy, the empirical evidence tells us that the ratio of the GDP purchased each year to M2 is pretty constant. Specifically, it is a ratio of approximately 2 to 1. The technical term for this ratio is the Velocity of Circulation. The relatively constant Velocity of Circulation has three important implications for monetary policy. First, this constant 2 to 1 ratio means that every dollar of money in the economy buys two dollars of GDP over the course of a year. Second, it also means that if the Fed wants to influence the growth of GDP, it needs to create \$1 of money for every \$2 of GDP it wants to stimulate. Third, the growth rate of M2 needs to equal the growth rate of GDP, or the lack of money will slow down the growth of GDP. The relatively constant ratio of GDP to M2 is an important assumption of the Quantity Theory of Money, as espoused by the Monetarist economists. Monetarism is a school of thought in monetary economics that emphasizes the role of governments in controlling the amount of money in circulation. Monetarist theory asserts that variations in the money supply have major influences on national output in the short run and on price levels over longer periods (Wikipedia). The standard bearer of Monetarism was Nobel Laureate Milton Friedman of the University of Chicago. Unfortunately, although the ratio of GDP to M2 was fairly constant in the 1960s and 1970s when Friedman was doing his Nobel Prize winning research, it is no longer constant (see graph below). This discrepancy now calls into question the validity of Monetarism. Firms need employees to make things and provide services, and we can get pretty specific about how many people will be employed based on additional GDP purchases. In 2018, if we take the total U.S. GDP of roughly \$20.5 trillion and divide it by the roughly 156 million employed people, we get about \$130,000 of GDP per employed person.

How We Get Addicted to Money

In The Protestant Ethic and the Spirit of Capitalism, sociologist Max Weber points out that there has been a predisposition to amassing material things since this country's founding (1930). The Puritans, the original colonizers, were Calvinists, and as such, they believed in predestination. In this tradition, God already knows who is going to end up in heaven or hell; however, for Puritans, this also meant that those destined for heaven would be blessed with material prosperity in this life. The Puritans then worked hard to attain material wealth but also led ascetic lifestyles—no drinking, no dancing, and no enjoyment of their wealth. As Weber points out, all of this was so that these forefathers of the American Dream could assure themselves that they were truly one of the chosen.
American materialism still exists in our society's materialistic value orientation (MVO), as defined by Kasser and Kanner:

From our perspective, an MVO involves the belief that it is important to pursue the culturally sanctioned goals of attaining financial success, having nice possessions, having the right image (produced, in large part, through consumer goods), and having a high status (defined mostly by the size of one's pocketbook and the scope of one's possessions) (2004).

Further, Kasser and Kanner focus on two questions:
1. What causes people to care about and to accept materialistic values and to "buy into" high-consumption behavior? An MVO develops in individuals through two pathways:
• From personal experiences and environments that deny peoples' basic psychological needs of safety, relatedness and love, and competence and autonomy
• From exposure to social models that encourage materialistic values – parents who are excessively materialistic, heavy exposure to the advertisements and influences of our materialistic culture, or schooling (Kasser & Kanner, 2004)
2. What are the personal, social, and ecological consequences of an individual's or a society's having a strong MVO? According to Kasser and Kanner, personal well-being declines as materialism becomes more central in someone's value system. Further, they show that an MVO encourages behaviors that damage interpersonal and community relations and destroy the ecological health of the planet.

Many psychologists, economists, and neuroscientists have presented research that shows how easily money can become addictive (Layard, 2005; Peterson, 2007). The human brain constantly engages in what is called "hedonic adaptation." When we reach a higher level of income, we initially derive satisfaction from it. However, very soon, we adapt mentally and emotionally to that higher level and need even more money to achieve the same level of happiness. Through the same mechanisms by which we can succumb to drugs, alcohol, or gambling, people can become addicted to money. Current psychological theories characterize money as both a tool (a function of money as what it can be exchanged for) and as a drug (a maladaptive function of money as an interest in the money itself) (Lea and Webley 2006). Essentially, this posits that people not only value money for its instrumentality—that is, how it enables people to achieve goals—but for itself—that is, for the totally false sense of control, security, and power that it gives (Vohs et al. 2006). Conversely, Price et al. (2002) have shown that physical and mental illness after financial strain due to job loss is triggered by reduced feelings of personal control. Unfortunately, even with enormous amounts of money, the wealthy are no happier than the less wealthy. In fact, they are actually more prone to depression and psychopathology (Kasser & Kanner, 2004, p. 129). Adults who engage in conspicuous consumption are largely trying to compensate for our unique human awareness of mortality and the pursuit of self-worth and meaning that this engenders or, simply put, existential anxiety, the fear of going out of existence (Kasser & Kanner, 2004, p. 128). National and time-series studies attest to the fact that large amounts of wealth have little or no effect on happiness. Real purchasing power has more than doubled in the United States, France, and Japan over the last fifty years, but life satisfaction has not changed at all (Seligman, 2002, p. 153; Layard, 2005).
I believe that people with an MVO are at risk for anxiety, psychological problems, family dysfunction, health problems, and personal financial problems. The evidence for this is voluminous (Kasser and Kanner 2004). These attitudes cause real damage and are major contributors to social problems that undermine the fabric of our society. MVO even contributes to world discord, as the exportation of American materialism emphasizes the gulf between the haves and the have-nots around the globe.

Cryptocurrencies

For economists, Bitcoin and other cryptocurrencies are an interesting experiment, but they are not yet ready to be adopted by banks and financial institutions as a way of doing business. The extreme volatility of Bitcoin (see chart below) and other cryptocurrencies makes them an extremely poor store of value (one of the main functions of money), though they might be an adequate medium of exchange. Further, if you look at the history of Bitcoin, you will see that there have been a number of scandals and thefts. Bitcoin's defenders say these problems are just the "growing pains" of a whole new type of currency and system. To this I say, that is fine, but let me know when it is grown up and adopted by (and guaranteed by) major financial institutions in the United States. My advice is to stay away from these cryptocurrencies for now. The graph below certainly looks enticing. If you had bought one Bitcoin on January 8, 2015, at a price of \$288.99, you would have seen your investment grow to \$19,650 by December 16, 2017, a return of 67 times your original investment, or roughly 3,350% per year, in simple (non-compounded) terms, for each of the two years you held it. But how could you have known that? At the same time, if you had bought one share of Amazon stock on January 1, 2015, at \$320, you would have seen your investment grow to \$3,225 on August 6, 2020, a return of 10 times your original investment, equivalent to a simple 200% annual return for each of the five years you held it. The fundamental difference here is that Amazon makes something. It provides goods and services to customers, it has a cash flow, and it has revenue and net income on which you can calculate Return on Investment (the universal way we value companies and the price of their stocks). Buying Bitcoin is almost like buying collectibles, like an A-Rod rookie baseball card or a pair of original Air Jordans. Will these collectibles increase in value? Maybe yes or maybe no. Do you remember the Beanie Baby collecting craze? Did those increase in value? "Reward requires risk" is an immutable law of Wall Street; if you are seeking higher-than-average returns, you must go after riskier investments. You might have been lucky enough to invest in Bitcoin in 2015, but you might have bought it in 2017, at the height of its speculative run. You also could have bought shares in an S&P 500 mutual fund at the Vanguard Mutual Fund Company, and your return from January 2, 2015, to August 6, 2020, would have been +63% over five and a half years, for a simple annual return of 11.5%, with much less risk than Bitcoin (and the start-up, Amazon).

A Cashless Society

Unquestionably, we are moving more and more toward a cashless society. In a cashless society (in the U.S.), it's possible only drug dealers and firms paying their employees "under the table" will be using cash. Think about all the transactions you use debit or credit cards for each month. Like me, you might also be paying your bills electronically through your financial institution.
And, as I mentioned earlier, only about 10% of M2 is actually currency; the circulation of that currency through the worldwide banking system creates the rest of M2. Debit cards could easily replace this currency.

Hyperinflation in Zimbabwe

As I said before, the Quantity Theory of Money states that the growth rate of the money supply will equal the growth rate in the prices of goods and services in the economy (the inflation rate) plus the growth rate of real Gross Domestic Product. Rearranging this equation, we have:

inflation rate = growth rate of the money supply − growth rate of real GDP

This is called the inflation equation. As we said above, although this relation does not hold for every year, it is accurate over the long run, a fact supported by the empirical evidence. To paraphrase the Nobel Prize winning economist Milton Friedman, inflation is always and everywhere a money supply problem (1970). As a thought experiment, imagine an economy with a certain (fixed) money supply. You need money to buy the goods and services created within that economy (the GDP). Now let's imagine that over time the money supply grows by 10% (rate of growth = 10%) but the goods and services do not change at all (rate of growth = 0%). Therefore, the prices paid for the fixed amount of goods and services will be bid up by 10% (rate of inflation = 10%). For example, consider Robert Mugabe, the strongman dictator who ruled Zimbabwe for over 30 years. From 2007 to 2010, he created seismic shifts in his country's monetary policy. Since he needed more money to run the government, to pay the military, and to buy imports, he simply ordered the Central Bank of Zimbabwe to print more money. Unfortunately, he printed so much money compared to the supply of goods and services that the rate of inflation in 2008 was over one billion percent (1,000,000,000%). As things became more expensive, the Central Bank had to print currency notes in larger and larger denominations so the residents would not have to carry money around in wheelbarrows to pay for food at the market. In addition, Mugabe instituted poorly executed land reforms that did not help, but the root cause of the hyperinflation was printing too much money. In 2009, so many people had lost confidence in the Zimbabwe dollar that the government had to allow the U.S. dollar and other foreign currencies to be used for payments. It also stopped printing the Zimbabwe dollar, which caused the inflation rate to drop precipitously. Even so, in 2010, it still cost 100 billion Zimbabwe dollars to buy lunch. Eventually, the inflation rate in Zimbabwe was tamed (relatively, at least), and as of June 2019, the official inflation rate was 97.8% annually.
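The inflation equation lends itself to a quick sketch. The function below simply restates the rearranged equation under the constant-velocity assumption; the growth rates fed to it are illustrations, not data:

```python
# A sketch of the inflation equation above. With velocity roughly constant,
# money growth ≈ inflation + real GDP growth, so rearranging:
#     inflation ≈ money growth - real GDP growth
# The growth rates below are made-up illustrations, not data.

def implied_inflation(money_growth: float, real_gdp_growth: float) -> float:
    """Inflation rate implied by the quantity theory, assuming constant velocity."""
    return money_growth - real_gdp_growth

# The thought experiment from the text: money grows 10%, output is unchanged.
print(implied_inflation(0.10, 0.00))    # 0.10 -> 10% inflation

# A Zimbabwe-style year: the printing press swamps a shrinking economy.
print(implied_inflation(12.00, -0.05))  # 12.05 -> 1,205% inflation
```

Keep in mind the chapter's own caveat: velocity has not been constant in recent decades, so this holds only as a long-run approximation.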
Your Parents' Advice

Some students may have received advice from their parents or other adults ranging from how to ride a bike to which fork to use at a formal dinner. However, many parents are reluctant to talk to their children about financial management. And if you did get financial advice, it might be wrong, as so much has changed since your parents had to make the important financial decisions you currently face. Julia Carpenter, in the Wall Street Journal article "Your Parents' Financial Advice is (Kind of) Wrong," points out what is right and wrong about your parents' advice:

The rules have changed. Americans entering the workforce in the decade since the financial crisis face a starkly different landscape than their parents did at the same age. They often have far higher student loan debt. Housing eats up a bigger chunk of each paycheck. And young households have lower incomes and fewer assets than previous generations did at the same ages (2019).

Given these new conditions, Carpenter feels we need new rules. Below, I have listed these rules along with my commentary.

Educational Debt is Not Necessarily Good Debt

In 2018, the average starting salary of a college graduate was about \$60,000, while the average salary for a high school graduate was \$28,000. On average, students complete their undergraduate degrees in five years; however, at more than a third of U.S. colleges, only half of the students will earn their degree in eight years. Those who do not finish end up with debt but not with the higher income they were hoping for. Further, four in ten college graduates are in jobs that do not require a college degree (New York Federal Reserve Bank, 2018). If you plan to go to graduate school, remember that if your starting salary after the degree equals the debt you incur, then it is probably a good investment. You want to be able to pay your living expenses and still have enough left over to pay off your loans in about ten years. If you think about it, "buying" an education after high school is really an investment, and you should think about the kind of return you will be getting on that investment.

Do Not Assume You Should Buy a Home

Owning a house is still part of the American Dream, but it might not make financial sense for you. For example, you might work in a city with a hot housing market. You might not be able to afford a down payment, or you could wind up depleting your entire savings. On top of that, if you do not expect to stay in a city for more than three years, you will likely not get back all the transaction costs (fees, title insurance, etc.) of purchasing a house. Do not buy a house just because you think you should.

The Best Place for Financial Growth

You should compare your salary (or potential salary, based on the average for your field) to a city's cost of living. Many people think it would be cool to live in New York City or San Francisco, but the cost of living is so high that your salary has to be proportionate. Otherwise, you can find yourself commuting an hour or more from the only affordable living accommodations in the area. Some cities like Chicago, Philadelphia, Austin, and Portland, although costly, have more affordable housing than San Francisco and good starting salaries. If, for example, you compare the salaries of high-tech workers and the cost of living in San Francisco to those in these cities, you will find you are financially better off living in the city with the lower cost of living. Cities around the U.S.
are trying to attract tech companies and, although San Francisco had a higher percentage of high-tech jobs as a proportion of overall jobs, there are good high-tech jobs in many cities. As a result of the Pandemic, remote work has increased substantially. The U.S. Census Bureau recently released its annual 2021 American Community Survey, a survey of household behavior (September 2022). According to the Census Bureau, between 2019 and 2021, the share of people primarily working from home tripled from 5.7% (roughly 9 million people) to 17.9% (27.6 million people). However, remote work is not evenly distributed around the country. In metropolitan areas, 19% of employees worked from home (with Washington, D.C. at 48% remote workers and Silicon Valley at 35% remote workers as outliers). Outside metro areas, only 9% of employees were working from home in 2021. The opportunity for remote work is a factor to consider when seeking a position. It has its advantages, including working flexible hours and saving on commuting time. It also has its disadvantages, including the loss of the camaraderie of office work and not being visible to your superiors, which costs you chances for bonding and advice.

Not All the Old Rules Are Dead

Your parents might have followed this old rule: be frugal until you save up enough for the down payment on a house. Unfortunately, with student debt and the higher cost of housing, simple economies like packing your own lunch or holding off on a vacation are no longer enough. It is part of the American Dream that couples rent for a while, save up for a house, and then, when they are ready to have children, buy a house in a good school district. If they cannot do this, they might feel a sense of disappointment or failure. However, that should not cause you to throw up your hands and stop saving for your future. There are still important goals for you to save for. First, although young people tend to live in cities, there are almost always suburbs that are more affordable. Under the gravity model of real estate, the center of gravity is downtown, where there are a lot of jobs. Then, unless there are physical constraints such as mountains or a coastline, housing construction proceeds over the years in concentric rings around the center city. In general, the closer the housing is to the center, the more expensive it is. Housing that is farther out is cheaper, but it could entail a longer commute. However, in many cities, young people are creating a new trend of moving into affordable suburban housing, and others have started looking for jobs in smaller cities with good salaries and a reasonable cost of living. Outside of housing, you will need to save for a number of things. You should have an emergency fund of, ideally, at least six months’ salary in case you lose your job, and you should begin contributing to your retirement as early as possible. If you intend to have children and expect them to attend college, you should begin putting aside money for their college expenses. Put these savings into an account where the money will compound to a significant amount by the time you need it. Having these savings will reduce your financial anxiety and improve your well-being.

Ten Rules for Financial Freedom

In 2019, Susan Hube wrote for the financial journal Barron’s: The true measure of financial success isn’t how much money you make—it’s how much you keep. That’s a function of how well you’re able to save money, protect it, and invest it over the long term.
Sadly, most Americans are lousy at this (Hube, 2019).

Two-thirds of Americans would have trouble coming up with \$1,000 cash (not credit) to pay for an unexpected medical bill or emergency. Even more disturbing, seventy-five percent are not saving enough or investing correctly for their future retirement requirements. While there are a number of external factors that exacerbate this problem—stagnant salaries, expensive healthcare and education, and rising housing costs—there is a deeper issue: a lack of financial literacy. Parents are reluctant to talk to their children about money, and high schools and colleges lack financial literacy courses. Individuals are increasingly left on their own to decide how much to save and where to invest their savings. To help, Hube laid out ten rules for financial freedom that I present here, along with my commentary.

Set goals. The first step is to set goals: short-term, medium-term, and long-term. For example, a short-term goal could be to save up a six-months’-salary emergency fund. A medium-term goal might be to save up for a down payment on a house. Finally, a long-term goal would be to save for retirement. The sooner you set your goals, the sooner you will begin trying to achieve them. Goals motivate us, and when you have your goals to think about, you will likely squirrel away the extra cash.

Know what you have got and what you need. Always keep this question in mind: “Do I need this thing, or do I just want it?” It is hard to resist something you really want, like a new pair of shoes or a new kind of tool. However, it is not a good idea to buy something just because it gives you a jolt of pleasure. For example, my neighbor had a garage sale recently. Since my wife helped organize the sale, we got a preview of what was being sold. I saw three electric guitars, and I really wanted one. Luckily, my wife said, “You already have a guitar. You don’t need another one!” I must admit, it was hard to distract myself from that guitar, but the next day, I knew she was right. Look at your monthly after-tax income (disposable income), and add up all your expenses for the month. If your expenses exceed your income or if you are not saving any money monthly, you have to cut expenses. Finally, if you are paying for things that you do not need or do not use (such as a gym membership or a particular streaming service), drop them and bank the money.

Save systematically. Pay into your savings the same way you pay your electric bill: monthly and automatically. Assuming you have joined a credit union for your banking needs (see Chapter 10, Banks and Financial Institutions), arrange for automatic bill payment and have a specific amount transferred into your savings account every month. Ideally, you will be saving 10% of your disposable income each month. However, this is nearly impossible to do in your first or second job. Start out with 5% of your take-home pay and slowly ramp it up to 10%. Begin saving early to take advantage of the compounding of interest. In simple terms, this means that if you put \$1,000 in a savings account and in year one you earn 10% interest, you will have \$1,100 at the end of the first year. If you leave the \$1,100 in the account and continue to earn 10% interest, you will not only earn interest on the original principal of \$1,000 but also on your year-one interest. Thus, at the end of year two, you will have \$1,210 in your account.
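Here is a minimal Python sketch of the compounding arithmetic just described, using the same \$1,000 and 10% figures from the example above:

```python
# $1,000 compounding at 10% per year: $1,100 after year one and $1,210
# after year two, because year-two interest is earned on year-one interest.
def compound(principal: float, rate: float, years: int) -> float:
    """Balance after compounding `principal` at `rate` per year."""
    return principal * (1 + rate) ** years

for year in (1, 2, 3):
    print(f"Year {year}: ${compound(1_000, 0.10, year):,.2f}")
# Year 1: $1,100.00
# Year 2: $1,210.00
# Year 3: $1,331.00
```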
There are websites such as www.bankrate.com that offer compound interest calculators to estimate the value of your principal over time, but the Rule of Seventy can also approximate your money’s growth: divide 70 by the annual interest rate you earn. Assuming compounding of interest, the result is the number of years it will take for your money to double. Using our example above, if you are earning 10% per year, your money will double in about seven years. (A 10% return is not an unrealistic goal. As you will see in Chapter 15, The Vanguard Group has shown that, going back to 1926, a mutual fund containing a very broad portfolio of U.S. stocks, i.e., the stocks in the S&P 500 Index, has earned 10% per year on average.)

Invest in your retirement plan. If your employer provides a retirement plan, for example, a 401(k), and matches your contribution to it, always contribute the maximum your employer will match. If you really think about this, you are earning a 100% return on your money; the employer is doubling the money you contribute. You should even give up your lunch or other non-essential expenditures to contribute the maximum. Employers often have 401(k) plans where they will match your contribution up to 4% of your gross salary. These plans have taken the place of traditional guaranteed pensions and have shifted the burden of managing each worker’s retirement fund from the employer to the worker. However, if you put your retirement contributions in a mutual fund of all stocks with a good manager such as Vanguard, you can earn the 10% annually. The value of a 401(k) plan is that the money you contribute and the money your employer matches are tax-deferred (but not tax-free). That is, you are not taxed on these contributions nor on the annual return (presumably 10% annually) until you withdraw money for your retirement. If your employer does not offer a retirement plan or if you are self-employed, you can create either an Individual Retirement Account (IRA) or a Roth Individual Retirement Account (Roth IRA) and still earn the same returns. As of 2019, you can contribute up to a maximum of \$19,000 into a 401(k) and \$6,000 (\$7,000 if you are 50 or older) into a Roth IRA.

Invest for growth. Until you are within a couple of years of retirement, you should invest your retirement funds and other extra income for growth, which means investing in stocks. Although stock prices are more volatile than bond prices, stocks have returned roughly double what bonds have over time. You will need to be able to stomach that volatility in order to get the higher return. In Wall Street terms, a bull market is a market in which prices are going higher, while a bear market is a market in which prices are going lower. This archaic language comes from the fact that a bear hits downward with its paws while a bull gores upward with its horns. See Chapters 14 and 15 for more detail on the joys and risks of investing in the stock market.

Avoid bad debt. Debt used for investing in something, such as a house, your education, or your car, is good debt. Credit card debt is bad debt—debt for consumption purposes. If you buy something with a credit card, pay it off at the end of the month. Credit cards charge anywhere from 9% to 25% per year, depending on your credit score. If you pay only the minimum each month, you can end up paying double the original amount you borrowed.
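To see how minimum payments can double your cost, here is a rough sketch under stated assumptions: a \$1,000 balance, a 24% APR (within the 9% to 25% range cited above), and a flat \$25 monthly payment. These numbers are illustrative, not drawn from any particular card agreement.

```python
# Simulate paying a flat $25/month on a $1,000 balance at 24% APR.
balance, monthly_rate, payment = 1_000.0, 0.24 / 12, 25.0
total_paid, months = 0.0, 0

while balance > 0:
    balance *= 1 + monthly_rate      # interest accrues first
    pay = min(payment, balance)      # the final payment may be smaller
    balance -= pay
    total_paid += pay
    months += 1

print(f"Paid off in {months} months, total paid ${total_paid:,.2f}")
# Roughly 7 years and about $2,000 -- double the original $1,000 borrowed.
```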
If you really do need to carry a balance for something, such as car repairs or a new computer for school or work, put it on the credit card that charges the lowest interest rate, and use a second credit card, which you pay off every month, for your other purchases.

Do not overpay for anything. Do not overpay on fees or commissions for investing in stocks, and do not overpay on your taxes. I talk about each of these issues in the chapters on investing and on taxes. Financial advisers usually charge 1% of your assets annually to tell you what stocks to buy, yet none of the actively managed portfolios or mutual funds has consistently exceeded the return on the S&P 500. Index funds or Exchange Traded Funds will hold all the stocks in the S&P 500 or similar indexes, will charge less than ¼% of your assets annually, and will still return 10% per year on average. Furthermore, do not overpay on other big purchases, such as televisions and appliances. Shop around.

Protect yourself. The ideal goal is to build up a fund worth six months’ expenses. This is a very difficult goal to accomplish, so when you are young, try to save at least one month’s rent for starters. Why do all the experts pick six months as the ideal amount for this emergency fund? Because it takes about six months to find a new job if you are laid off, and with the fund you can go on your job search without falling apart emotionally. Also, buy renters insurance (it is very cheap) and, if you own a house, a decent homeowners policy with a reasonably low deductible. You may have an accident and be unable to work. Many employers pay for a minimum amount of disability insurance that will pay you 60% of your salary if you are disabled long term. If you own a home and have a family, you should consider buying some long-term disability insurance as a supplement to your employer’s; it is pretty inexpensive. As for life insurance, do not buy a whole life policy. Whole life insurance is a rip-off. If your life is financially complicated, with a house and family, buy term life insurance. It is much cheaper than whole life insurance.

Keep your investing simple. As I said above, keep your investments simple. Do not chase fads, such as cryptocurrency or 3D printing companies. Invest in an index fund that holds the S&P 500 stocks, and you will pay low fees and earn on average 10% per year. A mutual fund holding the S&P 500 will include Google, Apple, Facebook, and any other significant stock worth owning, so you can still ride the high-tech wave with your mutual fund.

Seek unbiased advice. I strongly advise you to go to Vanguard and invest in one of their index funds. Vanguard is owned by the customers who invest in its mutual funds, so it is essentially a non-profit. There are no stockholders, so it can keep its fees low. Jack Bogle, the founder of Vanguard, invented index funds (passively managed funds that track a stock market index like the S&P 500) because he saw that actively managed mutual funds were not beating the S&P 500’s returns while charging a fee of 1% of assets. Finally, do not buy individual stocks from a stockbroker, and do not buy mutual funds that charge you a commission to get into them. You cannot consistently pick winning stocks, and brokers who charge a commission to get into a fund often sell you the investment that earns them the highest commission, not the investment that fits your needs.
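To see why the fee difference matters so much, here is a minimal sketch under stated assumptions: a 10% gross annual return (the long-run S&P 500 figure cited above), a hypothetical \$10,000 starting balance, a 30-year horizon, and fees deducted from the return each year.

```python
# Compare a 1% adviser fee with a 0.25% index-fund expense ratio.
# All figures are illustrative assumptions, not actual fund data.
def grow(balance: float, gross_return: float, annual_fee: float, years: int) -> float:
    """Compound `balance` at the net return (gross return minus fee)."""
    for _ in range(years):
        balance *= 1 + gross_return - annual_fee
    return balance

start = 10_000
print(f"1.00% fee: ${grow(start, 0.10, 0.0100, 30):,.0f}")  # about $132,700
print(f"0.25% fee: ${grow(start, 0.10, 0.0025, 30):,.0f}")  # about $163,000
```

Under these assumptions, the three-quarter-point fee difference costs roughly \$30,000 on a \$10,000 investment over 30 years, which is the arithmetic behind the advice to keep fees low.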
Savings Among Americans

Personal and household savings are important, both for you and for the economy. For you, savings creates a buffer for unexpected expenses and can also be used to finance a down payment on a house or to help pay for college. You can deposit your savings in a financial institution or buy a mutual fund that invests in the stock and bond markets. In other words, your savings becomes an investment; that is, it is money that you put into a financial institution or instrument for which you receive a return in the form of interest or dividends. For the economy as a whole, these savings create economic growth. Firms borrow money (your deposits) from financial institutions or sell shares to your mutual funds and then use that money to expand their businesses. Before we go further, though, we should break down some of these terms. First of all, it is important to understand that personal savings equals income minus personal outlays (or consumption) and taxes: $personal\ savings = disposable\ personal\ income - personal\ outlays.$ The Personal Savings Rate for any economy is then defined as this ratio: $personal\ savings\ rate = personal\ savings / disposable\ personal\ income.$ Disposable personal income (DPI), mentioned above, is your take-home pay: $disposable\ personal\ income = personal\ income - personal\ taxes.$ From another perspective, savings can be viewed as the portion of personal income that is used either to provide funds to capital markets or to invest in real assets such as residences.

What happened to the U.S. savings rate in the recent Pandemic Recession is quite unusual, to say the least. From 2000 to 2020, the Personal Savings Rate averaged about 5% to 7% of Disposable Income. However, as the Pandemic Recession began (in February 2020), the Personal Savings Rate skyrocketed. • In February 2020, the U.S. Personal Savings Rate was 8.3%. • In March 2020, the U.S. Personal Savings Rate was 12.8%. • In April 2020, the U.S. Personal Savings Rate was 33.5%. • In May 2020, the U.S. Personal Savings Rate was 22.4%. • In June 2020, the U.S. Personal Savings Rate was 19.0%. In past recessions, the Personal Savings Rate did increase somewhat. This is due to consumer sentiment or, as John Maynard Keynes called it, “animal spirits”; during a recession, the dominant sentiment is fear. Even so, the magnitude of the Personal Savings Rate during the Pandemic Recession is unprecedented. As stated before, the absolute amount of personal savings is the difference between income minus taxes and spending. From April to July 2020, personal income jumped dramatically because of the \$600 supplemental unemployment compensation and other relief payments provided by the CARES Act. However, because of fear, consumers decreased their spending. On top of this, in many states, restaurants, bars, hotels, and all non-essential retail stores were shuttered in April, curtailing consumer spending and further increasing pessimistic sentiment. Even though personal income increased in 2020, spending still decreased, highlighting just how important consumer sentiment is to the economy. If consumers are worried about the economic future, they will put off their expenditures to whatever extent they can. During the Pandemic Recession, restaurants, bars, vacation venues, and services took the biggest hit. In comparison, in the 2008 Great Recession, large durable goods expenditures (appliances, automobiles, clothing) decreased by 8%, but spending on restaurants and services stayed relatively the same.
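Before moving on, here is a minimal Python sketch of the definitions above. The monthly figures are hypothetical, chosen only so that the resulting rate matches the April 2020 spike of 33.5%.

```python
# Hypothetical monthly household figures to illustrate the definitions:
#   DPI            = personal income - personal taxes
#   savings        = DPI - personal outlays
#   savings rate   = savings / DPI
personal_income = 5_000.0
personal_taxes = 1_000.0
personal_outlays = 2_660.0

dpi = personal_income - personal_taxes        # disposable personal income
personal_savings = dpi - personal_outlays     # what is left after spending
savings_rate = personal_savings / dpi * 100   # expressed as a percent of DPI

print(f"Personal savings rate: {savings_rate:.1f}%")  # 33.5%
```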
According to a recent Gallup Poll, consumer satisfaction in the U.S. has fallen, and this could curtail future spending. However, it is worth noting that consumer satisfaction is currently not as low as it was during the Great Recession.

Savings Rates in Select Countries

Table 6.1. Household Savings Rates as a Percent of Disposable Income (in %)

| Country | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 | 2019* |
|---|---|---|---|---|---|---|---|---|---|---|
| Austria | 9.1 | 7.4 | 9 | 7.1 | 10 | 7.3 | 6.8 | 7.8 | 7.7 | 7.7 |
| Belgium | 10.1 | 8.8 | 7.5 | 4.9 | 8.3 | 5.1 | 4.3 | 3.9 | 4.8 | 5.1 |
| Canada | 4.8 | 3.5 | 5.2 | 4.8 | 3.5 | 3.6 | 4.6 | 3.4 | 1.4 | 1.6 |
| Czech Republic | 6.8 | 4.9 | 6.2 | 5.6 | 6.6 | 6.6 | 6.8 | 6.5 | 6.0 | 6.7 |
| Denmark | -1 | -0.6 | -1.2 | 2.3 | -1 | -2.9 | 4.3 | 4.6 | 6.6 | 3.9 |
| Finland | 3.3 | 1.1 | 0.6 | 1.3 | 1.3 | -0.4 | -0.7 | -1.8 | -1.2 | -0.4 |
| France* | 15.9 | 16.2 | 14.9 | 14 | 14.9 | 14.2 | 13.8 | 13.7 | 13.8 | 14.7 |
| Germany | 10.9 | 10.4 | 9.4 | 9 | 9.9 | 9.5 | 9.7 | 9.8 | 11.0 | 11.0 |
| Hungary | 5.4 | 5.1 | 4.8 | 7.1 | 5.9 | 8 | 6.2 | 8.1 | 6.9 | 6.6 |
| Ireland | 7 | 5.4 | 5.2 | 4.9 | 3.9 | 3.6 | 4.2 | 3.8 | 5.8 | 5.8 |
| Italy | 5.3 | 4.3 | 3.1 | 3.6 | 6.3 | 3.9 | 3.3 | 3.2 | 2.5 | 4.3 |
| Japan | 2.1 | 2.9 | 1.3 | 0.3 | 2.3 | -0.4 | 0.8 | 2.6 | 4.3 | 4.5 |
| Netherlands | 3.4 | 5 | 6.5 | 7.3 | 6.1 | 9.9 | 9.6 | 10 | 8.4 | 7.8 |
| Norway | 6.1 | 7.8 | 8.3 | 7.6 | 5 | 8.2 | 10.3 | 7.3 | 6.5 | 6.7 |
| Poland | 5.9 | 3.5 | 2.6 | 0 | 1.5 | -0.4 | -0.4 | 1.5 | 0.3 | 1.4 |
| Portugal* | 10.2 | 10 | 9.5 | 7.8 | 8.4 | 5.2 | 5.3 | 5 | 6.5 | 7.0 |
| Slovakia | 5.7 | 4.8 | 1.9 | 0.2 | 1.1 | 1.5 | 3 | 3 | 2.6 | 4.0 |
| Spain | 13.1 | 11 | 4.4 | 3.8 | 3.2 | 3.5 | 2.9 | 1.8 | 1.5 | 2.3 |
| Sweden | 8.3 | 10 | 15.1 | 15 | 10.4 | 16.4 | 15 | 16 | 15.4 | 17.1 |
| Switzerland | 11.3 | 12.7 | 17.5 | 17.5 | 16 | 18.9 | 18.2 | 18.7 | 17.3 | 17.6 |
| United Kingdom* | 6.6 | 6 | 7.3 | 8.7 | 8.5 | 8.6 | 9.4 | 6.7 | 6.1 | 6.4 |
| United States | 5.1 | 4.2 | 7.2 | 5 | 5.3 | 7.4 | 7.6 | 6.7 | 7.7 | 8.1 |

Source: Organisation for Economic Co-operation and Development (OECD); European Federation of Building Societies, Annual Report 2019. *Estimate

China’s Savings Rate

China’s economy is the second largest in the world. Its economic growth rates have been extremely high, many years climbing into the double digits. It also has high rates of government, corporate, and household savings. In a working paper from the International Monetary Fund, Zhang et al. identify three phases that they contend influenced the savings rate of Chinese households. 1. The first phase was in the 1980s, following the introduction of the one-child policy and the de-collectivization of agriculture in rural areas. Beginning in the late 1970s, the one-child policy freed up disposable income, and since children traditionally took care of their parents in old age, it also incentivized older Chinese to save more. The savings rate rose from 5 to 20 percent of disposable income (albeit with a temporary dip in the late 1980s, possibly due to a GDP growth slowdown). 2. The second stage was in the 1990s, after Deng’s southern tour reaffirmed China’s reform and opening-up policy. In addition, the massive layoffs resulting from the state-owned enterprise (SOE) reform in the late 1990s put downward pressure on wage growth. SOE reform took center stage in this period and was accompanied by the transformation of the social safety net and job security, leading to savings rising to 25 percent of disposable income. 3. The third stage came after China entered the World Trade Organization in 2001. Savings rose to 30 percent of disposable income during an export-driven boom. Notably, since 2012, household savings have plateaued and gradually begun to decline (2018). China’s saving rate was also affected by its conversion from a centrally planned economy to a market economy. This resulted in massive layoffs; 27 million people lost their jobs between 1997 and 2002.
Along with these reforms, the social safety net was dismantled, and as a result, Chinese people paid an increasingly larger share of their healthcare costs (from 20% in 1978 to 60% in 2002, although the share has declined since then). The layoffs and unexpected healthcare costs further incentivized the Chinese people to save. The Chinese economy shows us some of the reasons savings rates can fluctuate from country to country, largely in response to demographic and economic changes. Let’s now take an overview of the reasons individuals and households save.

Influences on the Rate of Savings

In general, people save for the following reasons: • Emergency/unforeseen expenditures (especially unexpected medical expenses) • Down payment on a house (although often a 5% down payment is enough) • Down payment on a car (although less and less is required these days) • Retirement income • Education for yourself or your children

The savings rates as a percentage of disposable income vary from country to country (in some places, quite significantly). The dominant influences on these differing savings rates are explained below: 1. The social safety net varies significantly from country to country. Is there a national healthcare system (e.g., Canada)? Are there generous retirement pension plans (e.g., Finland)? 2. Certain countries have a cultural disposition to saving (e.g., France and Germany). This is likely due to the trauma of World War II. 3. A national tragedy or recent disaster can cause an increase in the savings rate. For example, China was invaded by Japan in the 1930s and occupied through World War II. After World War II, a civil war erupted between Mao Tse-tung and Chiang Kai-shek, with Mao winning and turning China Communist in 1949. After the collectivization of all farms, Mao led the Great Leap Forward, a campaign whose resulting famine led to the deaths of thirty million people. Interestingly, Megan McArdle states that some of the things people used to save for, such as vacations or holiday gifts, are now just put on our credit cards (2018). This means we are buying what we want without having the money for it, and we then have to pay the credit card bill every month. This increases our monthly expenses and correspondingly decreases our monthly savings.

Global Consequences of a Lack of Savings

Previously, I stated that household savings are channeled through financial institutions and markets into investment. The graph below shows this correlation in the United States from 1970 to 2019. (I did not include 2020 in the graph due to the extraordinary temporary jump in the savings rate during the Pandemic Recession.) This correlation of investment to savings also holds in the rest of the world. From this, we see that one of the things a country can do to stimulate investment (and economic growth) is to encourage higher rates of savings among its citizens. The consequences of low rates of savings can be seen best in Sub-Saharan African countries. Many citizens in these countries are subsistence farmers and have almost no savings. As a result, there is not a large supply of loanable funds, which are essentially deposits in local banks. Since the supply is low, interest rates on loans are high, which curtails investment. Low ratios of capital equipment to labor result in low productivity of workers. Low productivity results in workers being paid low wages. Low wages mean workers have low or non-existent savings. You can see how this is a vicious circle.

How Much You Should Save

You should begin saving now, even if you can only set aside \$100.00 per month.
Your goal should be to ramp this up to 10% to 15% of your disposable income, though that is nearly impossible when you are just beginning your career. As we saw in the OECD chart above, the savings rate in the United States from 2010 to 2019 ranged from a low of 4.2% of disposable income to a high of 8.1%, for an average savings rate of about 6.4% over the decade. However, one of the drawbacks of using the average rate is the increasing income inequality in the U.S. Lower-income households have a much lower savings rate as a percentage of disposable income than high-income households. Therefore, we should look at savings rates for income quintiles or deciles before we decide on a reasonable expectation for a savings rate. I have my retirement fund at the nonprofit mutual fund company TIAA. The TIAA website contained an article by personal finance journalist Paula Pant, who has been featured on MSN Money, Bankrate, Marketplace Money, AARP Bulletin, and more. Her website, “Afford Anything,” draws 30,000 visitors each month. Paula recommends saving 10% to 15% of your disposable income. However, she also recommends the 20/50/30 Rule for personal budgets (Pant, 2020): • 20% of your disposable income goes to savings • 50% of your disposable income goes to necessities • 30% of your disposable income goes to discretionary expenditures, such as entertainment

The 20/50/30 rule seems like an impossible goal. Perhaps more realistically, Vanguard, one of the largest mutual fund companies in the world, advises the following: 1. Save at least enough to get the full match offered by your employer retirement plan, if you have one. 2. Work your way up to 12%–15% of your pay, including any employer match. This goal seems more reasonable, although when you are starting your career, it may be very difficult to save anything. The important thing, however, is to begin the habit of saving something every month. As you see your savings grow, you will appreciate the feeling of security and will want to save even more.

Your Budget

Keep in mind that the purpose of budgeting is to get to savings. You do not need a complicated budget; instead, focus on keeping track of your spending. Then just subtract that from your disposable income to get your cash flow. You can easily track your spending with a simple spreadsheet. For a young person, a budget like the example below is all you should need (until you make your first million, that is).

Table 6.2. Personal Cash Flow Statement

| | Budget Month #1 | Actual Month #1 | Budget Month #2 | Actual Month #2 |
|---|---|---|---|---|
| Income | | | | |
| Disposable (after-tax) income | | | | |
| Interest on Bank Account | | | | |
| Dividend payments | | | | |
| Total Cash Income | | | | |
| Expenditures | | | | |
| Rent | | | | |
| Electricity and Water | | | | |
| Cable and Internet | | | | |
| Mobile Phone | | | | |
| Groceries | | | | |
| Health Insurance | | | | |
| Clothing | | | | |
| Car Payment | | | | |
| Car Expenses | | | | |
| Entertainment | | | | |
| Other Expense (Credit Card) | | | | |
| Total Expenditures | | | | |
| Net Cash Flow | | | | |

Create a budget like the one above to start keeping track of your spending, then track your spending for the month and enter it in the actual column. Next, create a revised budget for month number two based on your actual experience. If you see that your net cash flow is zero or negative, look at the actual spending for month number one and decide where you can cut back. Entertainment is the easiest place to cut spending. For a real-world perspective, I asked a student of mine to create the monthly budget below for when he is at college.
Table 6.3. Personal Cash Flow Statement Example (only the Month #1 column was filled in)

| Income | Month #1 |
|---|---|
| Disposable (after-tax) income | 0 |
| Interest on Bank Account | 0 |
| Dividend payments | 0 |
| Income from Summer Work | 1200 |
| Total Cash Income | 1200 |

| Expenditures | Month #1 |
|---|---|
| Rent (loan from last year) | 650 |
| Electricity and Water (loan) | 165 |
| Cable and Internet (loan) | 60 |
| Mobile Phone (parents) | 45 |
| Groceries (loan and personal) | 200 |
| Health Insurance (parents) | 500 |
| Clothing | 0 |
| Car Payment | 0 |
| Car Expenses | 0 |
| Entertainment | 75 |
| Other Expense | 50 |
| Net Cash Flow | 0 |

| Summary of Expenditures | |
|---|---|
| Total Expenditures | 1745 |
| Parents’ Expenditures | 545 |
| Loan Expenditures | 1075 |
| Personal Expenditures | 125 |

Why Budgets Do Not Work (most of the time)

Budgeting is all about savings. Otherwise, you could just spend your paycheck until there is nothing left (and maybe that is exactly what you do), and then what do you do with all the leftover bills? Unfortunately, budgets are much like diets, and neither diets nor budgets work most of the time. Each is complicated, and both take time, whether you are adding up calories or expenditures. Neither is any fun at all. Despite all our good intentions, diets and budgets usually go the way of many of our New Year’s resolutions; that is, they do not last. David Bach, co-founder of AE Wealth Management, recently told CNBC his key to getting to savings: If you want to save more money and build wealth, you do not necessarily have to create a detailed budget that allocates money for categories like clothes, coffee and bars. Instead, simply commit to paying yourself first…Whenever you earn money, set aside a portion for your future self (2019). In her article “Why a Budget Is Like a Diet—Ineffective,” Tara Siegel Bernard provides advice from experts (including herself) and concludes that budgeting does not work. Despite this, she still had this to say: But there are plenty of mental tricks and strategies that can make your budgeting more sustainable now. In fact, the best strategy is not to think about it as budgeting at all. Instead, set up broad goals and automate all savings and other priorities where you can (2019).

How to Use Behavioral Economics to Create a Workable Budget

The way to stay committed to your budget is to establish some external controls on yourself. Behavioral economists call these “nudges.” The best way to keep on track is to use your accounts at your financial institution to automatically stay on budget. I use the term “financial institution” purposely, because there are certain truths you should know about financial institutions that are not easily evident: • Commercial banks are not your friends. • If you have your main checking account at a commercial bank, do not set up automatic bill paying there. You will be stuck! Switch to a credit union first. • Credit unions are your friends. • Most online stock trading companies, such as Robinhood, TD Ameritrade, and E*TRADE, are not your friends. • Non-profit mutual fund companies, such as Vanguard and TIAA, are your friends. • Almost all stockbrokers and mutual fund companies currently allow you to trade stocks for free—that is, with no stockbroker commissions on stock trades. If you want to trade stocks, I recommend Charles Schwab as the best broker with which to set up an account. With these facts in mind, we can now talk about how to use your financial institution to nudge you to stay on budget. First, keep your checking and savings account at a credit union, not a commercial bank.
Commercial banks such as Wells Fargo, Bank of America, JPMorgan Chase, and Citicorp are in business to make a profit. They have to generate enough profit to pay dividends to their shareholders. The upshot is that they charge higher interest rates on their loans and pay lower interest rates on their deposits than credit unions do. Commercial banks, savings banks, and credit unions are called financial intermediaries. This means they take money in from depositors (to whom they pay interest) and lend it out to borrowers (who pay the bank interest). In order to cover their overhead (salaries, rent, advertising), financial intermediaries charge higher rates to their borrowers than they pay to their depositors. In addition, commercial banks also borrow money in the short-term money markets (also known as the Commercial Paper Market) at a low rate and lend it out to borrowers at a higher rate. Since commercial banks must pay interest both to their depositors and to the Commercial Paper Market lenders, they are essentially borrowing all of their money. Additionally, commercial banks must pay dividends to their owners and stockholders, adding to their expenditures. Thus, commercial banks must do the following: 1. Pay interest to their depositors. 2. Pay interest to their lenders in the short-term Commercial Paper Market. 3. Cover their overhead (salaries, buildings, utilities, advertising, rent). 4. Pay dividends to their owners and stockholders. In general, commercial banks add a markup of between 3% and 4% on their cost of funds. That is, if a bank is paying its depositors 1% on their savings accounts (and 1% on the Commercial Paper it borrows, since all short-term interest rates move in synchronization), this 1% is its average cost of funds, and it will then charge, on average, 4% on its portfolio of loans. The difference between what a financial institution charges its borrowers and what it pays its depositors (and lenders) is called the interest rate spread or the net interest margin. Below is the historical data on the interest rate spread at all U.S. banks. If you look closely, you will see two big spikes in the interest rate spread, in 1994 and 2010. In these two years, the Federal Reserve Bank had reduced the Federal Funds Rate dramatically as part of monetary policy to help the economy recover following recessions. Since all short-term interest rates (including deposit interest rates and Commercial Paper rates) move in lockstep with the Fed Funds Rate, this effectively reduced the cost of funds to banks, allowing them to make larger profits. Credit unions, on the other hand, are all non-profit mutual institutions, entirely owned by their depositors. Therefore, these are the only expenses they have to cover: 1. Pay interest to their depositors. 2. Cover their overhead (salaries, buildings, utilities, advertising, rent). This is why credit unions can pay higher interest rates on savings accounts and charge lower interest rates on all their loans: they need a slightly lower interest rate spread to cover their expenses. A recent average interest rate spread for credit unions was 3.15%. Unfortunately, the basic business model for all financial intermediaries is inherently unstable. They are all subject to what is known as disintermediation, which occurs when depositors demand their money back but the bank does not have it on hand. This can occur because financial intermediaries borrow short term and lend long term.
Banks and credit unions borrow their money from depositors (or from the Commercial Paper Markets, in the case of commercial banks), and the depositors can demand its return at any time. However, the financial intermediaries have lent the depositors’ money out in loans that are paid back over time—auto loans, mortgages, credit card loans, etc. When depositors demand more of their money back than the bank has on hand, this is known as a run on the bank. During the Great Depression (1929 to 1940) there were several runs on banks; many banks went bankrupt, and numerous depositors lost their money. As a result, the Federal Deposit Insurance Corporation (FDIC) was created in 1933 by the federal government to insure depositors’ money. The FDIC currently insures up to \$250,000.00 per account in commercial banks against the bank’s insolvency. In 1970, in response to the explosive growth of credit union membership, the National Credit Union Share Insurance Fund (NCUSIF) was created by the federal government to fulfill a parallel function to the FDIC, but for credit unions. The NCUSIF also currently insures up to \$250,000.00 per account in credit unions against the credit union’s insolvency. Credit unions were initially set up to benefit employees of the same company or organization, such as the Pentagon Federal Credit Union, the General Motors Employees Credit Union, or the AFL-CIO Credit Union. In the expansion of credit union membership after 1970, many credit unions relaxed their membership regulations, and now anyone can join almost any credit union. Usually, to join a credit union today, you merely need to deposit a minimum of \$5.00 in a savings account. Choose a credit union that has an office convenient to you (although that may not even be necessary, as you can do all your banking with credit unions electronically).

Credit Union Accounts to Facilitate Budgeting

In order to use your credit union to facilitate your budgeting (and savings), you need to set up the following accounts: 1. Checking Account #1 for expenses. 2. A Savings Account connected to your checking account. 3. An overdraft Line of Credit connected to Checking Account #1, so that if you overdraw your account, the Line of Credit will automatically deposit money into the Checking Account to cover the overdraft. This will save you a lot of overdraft fees. 4. Checking Account #2 for your monthly entertainment. 5. Debit cards for both checking accounts. This arrangement is analogous to having different envelopes in your drawer with allocations of your cash for expenses, entertainment, and savings, but it accomplishes the same thing electronically. Once these accounts and facilities are set up, take the following actions: 1. Have your paycheck electronically deposited into Checking Account #1. 2. Have a specific savings amount automatically transferred from Checking Account #1 to the associated Savings Account. 3. Have a monthly entertainment amount automatically transferred to Checking Account #2, and use that account’s debit card to pay for your monthly entertainment. When the account is depleted, stop spending and wait for your next paycheck. 4. Use the credit union’s electronic bill pay for all of your bills. Between this and your debit card, you will have a full accounting of your expenses at the end of each month. Most credit unions will allow you to categorize each payee and will aggregate the payments for each budget category. For example, say your monthly disposable income is \$3,500.
Your budget includes \$2,900 for monthly expenses, \$500 for entertainment, and \$100 for savings. To manage this, you would do the following: 1. Have your paycheck deposited directly into Checking Account #1. Most likely you will be paid on the last day of the month. 2. Set up an automatic bill pay to transfer \$500 into your Checking Account #2 and \$100 into your Savings Account each payday. 3. Use your debit card for Checking Account #1 or automatic bill pay to cover monthly expenses. 4. Use your debit card for Checking Account #2 to pay monthly entertainment expenses. When this account is empty, stop spending until you put more money in the entertainment account. 5. Do not touch your savings account unless you are ready to make a purchase you were saving for, for example, a down payment on a car or some other long-term goal. As I mentioned before, do not set up automatic bill pay at a commercial bank. Studies have shown that 95% of customers who set up automatic bill pay do not leave their financial institution; the customer views it as too much work to set up all the accounts again somewhere else. Move to a credit union before you set up automatic bill pay.

Establishing Financial Goals

All animals are goal-directed: find food, find a burrow, find a mate. The human animal is no exception. Use these innate tendencies to help your budget. The basic necessities of life (rent or mortgage payment, food, transportation) scream at us to be paid every month, so it does not take much to keep them at the forefront of our minds. Getting to savings is the hard part. To do this, we have to set (and write down) financial goals, utilizing one of the key techniques of behavioral economics: making a commitment. You can write your goals down anywhere, but I recommend you write them at the bottom of your budget, ensuring that you will see them regularly. The first priority for your savings account is to keep a stash of money for unforeseen expenses, like car repairs or medical expenses. Try to save six months of your basic expenses, not including entertainment. Six months of basic expenses helps protect against job layoffs: in normal economic times, 90% of workers find a new job within six months (though this gets skewed during recessions). The goal is to give yourself a safety net in addition to unemployment compensation, because unemployment compensation varies from state to state and pays an average of a little over \$300.00 per week for an average of 26 weeks. Six months of base expenses is an extremely difficult savings goal at the beginning of your career, but it is a goal you need to work towards. Having this savings will give you great peace of mind. A second financial goal is to save for future purchases, such as a new car, a down payment on a house, or even just new furniture for your apartment. For example, a house down payment typically equals 5% of the purchase price. Since the median sales price of houses in the United States in 2020 was \$320,000.00, a 5% down payment would be \$16,000.00. Do not be discouraged, though; in a lot of cases, banks will accept a 3% down payment on a house, especially for a first-time home buyer. The third goal is to save for what are typically the three big purchases in your life: • The down payment on a house • College tuition for your children • Your retirement We discuss buying a house and saving for retirement in upcoming chapters, but as to education, we can look at the 2021–2022 average cost of tuition to gain perspective.
Among national colleges and universities, the College Board (2022) reported the following average published (sticker) tuition and fees for full-time students in the 2021–2022 school year: • Public four-year in-state: \$10,740 (\$170 higher than in 2020–21; +1.6% before adjusting for inflation) • Public four-year out-of-state: \$27,560 (\$410 higher than in 2020–21; +1.5% before adjusting for inflation) • Public two-year in-district: \$3,800 (\$50 higher than in 2020–21; +1.3% before adjusting for inflation) • Private nonprofit four-year: \$38,070 (\$800 higher than in 2020–21; +2.1% before adjusting for inflation) Add to this anywhere from \$5,000 to \$10,000 per year for room and board, and a state resident at a four-year public college could pay up to roughly \$85,000 over four years. The good news is that with financial aid, very few students pay the full cost of tuition. However, according to the College Board, the average amount borrowed by 2017–2018 bachelor’s degree recipients was \$29,000 (\$26,900 for public colleges and \$32,600 for private colleges).

Reviewing Your Budget

In the beginning, review your budget in the middle and at the end of the first month. Reviewing it in the middle of the month gives you some time to correct your behavior; reviewing at the end will help you revise for the next month. Once your budget is running comfortably, compiling your actual expenses and revising can be done once per month.
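As a closing illustration, here is a minimal Python sketch that tallies the student’s Month #1 figures from Table 6.3 and checks the net cash flow. The dictionary layout is just one possible way to organize the statement.

```python
# Month #1 actuals from Table 6.3. Items marked "(parents)" are paid by
# the student's parents, so they are not an out-of-pocket cash outflow.
income = {"Income from Summer Work": 1200}
expenses = {
    "Rent (loan from last year)": 650,
    "Electricity and Water (loan)": 165,
    "Cable and Internet (loan)": 60,
    "Mobile Phone (parents)": 45,
    "Groceries (loan and personal)": 200,
    "Health Insurance (parents)": 500,
    "Entertainment": 75,
    "Other Expense": 50,
}

total_income = sum(income.values())
total_expenses = sum(expenses.values())                                  # 1745
parents_share = sum(v for k, v in expenses.items() if "(parents)" in k)  # 545
out_of_pocket = total_expenses - parents_share                           # 1200

print(f"Total expenditures: ${total_expenses}")
print(f"Net cash flow: ${total_income - out_of_pocket}")  # 0, matching the table
```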
How Much Have Americans Borrowed?

Prior to the Pandemic (and the subsequent recession), household or consumer debt outstanding was at an all-time high: the total amount of consumer credit outstanding at the end of the first quarter of 2020 was \$14.3 trillion. (Note that the Pandemic Recession began in February 2020, but the President’s order to shut down restaurants, hotels, bars, etc. came on March 16, 2020.)

What Determines Interest Rates

As a practical matter, we need to divide interest rates into short-term interest rates—those on loans where the principal must be repaid in one year or less—and long-term interest rates—those on loans where the principal must be repaid over a period in excess of one year. Short-term rates include those on credit cards, Treasury obligations with maturities of less than one year, business or personal lines of credit, and commercial paper loans. Long-term rates include those on automobile loans, home mortgages, student loans, and home equity lines of credit. Interest rates, both short- and long-term, are ultimately determined like the price of any good or service: by the laws of demand and supply. The equilibrium interest rate and the equilibrium quantity of loans borrowed are determined by the intersection of the demand for loans and the supply of loans. The graph below calls the good we are examining financial capital; it is often also called the demand and supply of loanable funds, or the demand for and supply of loans. We can see who creates the demand for loans and the supply of loans by using a simple model known as the circular flow of the economy. Households supply labor to firms and receive wages in return. Firms produce goods and services by using labor along with the plants and equipment they own (physical capital), as well as natural resources and raw materials (sometimes called “land”). Firms then sell these goods and services to households (consumption spending). Households spend some of their disposable income and save some of it: $disposable\ income = consumption + household\ savings.$ Households put their savings into banks or stocks or bonds; the savings that households deposit in banks are the Supply of Loanable Funds. Households supply Loanable Funds to banks through deposits. How much Loanable Funds households supply is determined by the price they will be paid for their savings (the interest rate) and other factors, such as how much income they make. Firms, households, and the government demand Loanable Funds. The price of Loanable Funds and other factors, such as the state of the economy, determine how large the demand is. Banks are the intermediaries who collect the deposits and lend them out to the borrowers, adding a markup, of course, to cover their overhead and to create a profit for their stockholders. In general, financial intermediaries are in business to make a profit. While this is not true of credit unions, they still have to pay interest to their depositors, cover their workers’ wages, and fund overhead; they just do not have to make money above their expenses to pay to stockholders. In any case, financial intermediaries supply banking services, and we can characterize their production like that of any other firm. For any firm, the definition of profit is: $profit = total\ revenue - total\ cost.$ For financial intermediaries, total revenue is the interest they earn on the loans they make plus some investment returns (usually on Treasury Bonds).
Their costs are the interest they pay their depositors, interest on Commercial Paper, physical capital expenses, and employee wages. Thus, for a financial intermediary: $profit = interest\ and\ investment\ revenue - interest\ paid\ to\ depositors\ and\ lenders - overhead.$ To cover all its expenses, the financial intermediary must decide what breakeven interest rate it must charge on its loans. In order to understand this, we can think of interest rates as having three components: 1. A Risk Premium 2. Expected Inflation 3. The Time Value of Money

Let’s imagine you are going to throw a party for all your friends. You have saved \$1,000 and have exactly enough money to buy 20 kegs at \$50.00 apiece. A couple weeks before the party, your best friend says his car broke down, and he really needs it for work. It will cost \$1,000 to fix, and he asks you to lend him the \$1,000.00, promising to pay you back within one year with interest. Tough call, right? It is your best friend, of course, so you lend him the money for one year. But what interest rate should you charge? Let’s examine the components. First, you are giving up the use of your \$1,000 for the party (consumption), and you deserve some interest payment for waiting. This is known as the time value of money. The time value of money over the long term has historically been 2 to 3% (a rate we have seen on long-term loans when there is no inflation). Second, when you get the \$1,000 back, you want to still be able to buy 20 kegs of beer. If the cost of the kegs has inflated, you want the principal amount you lent to still be worth \$1,000, so you want the future or expected inflation rate to be applied to the principal. Let’s say this is 2%. Finally, there is a risk premium on top of all this. Let’s say you expect your friend to pay back only 95% of the principal. You want to be made whole, so you charge this risk premium of 5% on top of the other two components. This part of the analogy does not work as well, but in real-world banking, if you have \$1,000,000 in loans outstanding and historically 5% of the loans default, you have to get that 5% back first before you can start earning on your money. If we add these components together, you would charge your friend 9% for a one-year loan of \$1,000: 1. A Risk Premium: 5% 2. Expected Inflation: 2% 3. The Time Value of Money: 2%

Figuring this all out can be mentally exhausting, so financial intermediaries use a shortcut. U.S. Treasury Bonds are considered the safest investment in the world, so the U.S. government is charged an interest rate that includes only the time value of money plus expected inflation. For example, let’s say a ten-year U.S. Treasury Bond pays annual interest of 4%. Since we know the time value of money is 2%, these must be the components of that 4% interest rate: 1. A Risk Premium: 0% 2. Expected Inflation: 2% 3. The Time Value of Money: 2%

As a shortcut, financial intermediaries look at the market interest rate on the U.S. Treasury obligation whose term matches the loan they are making and add a risk premium. Let’s look at the current rates for Treasury Bills, Treasury Notes, and Treasury Bonds. The maturity of a Treasury obligation is its term; that is, when the principal amount will be paid back in full. • Treasury Bills mature in one year or less. • Treasury Notes mature in two to ten years. • Treasury Bonds mature in longer than ten years.
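Here is a minimal Python sketch of the three-component decomposition above, along with the banker’s shortcut run in reverse using the rates from Table 7.2 below:

```python
# An interest rate as the sum of its three components, per the keg-loan example.
def loan_rate(risk_premium: float, expected_inflation: float, time_value: float) -> float:
    return risk_premium + expected_inflation + time_value

print(loan_rate(5, 2, 2))  # 9% for the friend's one-year loan
print(loan_rate(0, 2, 2))  # 4% for a riskless ten-year Treasury

# The shortcut in reverse: risk premium = loan rate minus the Treasury
# rate of matching maturity (figures from Table 7.2 below).
print(round(4.27 - 0.23, 2))  # 4.04% premium on a 48-month auto loan
print(round(3.08 - 0.57, 2))  # 2.51% premium on a 30-year mortgage
```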
Table 7.1. Daily Treasury Yield Curve Rates (Treasury Bills, Notes, and Bonds)

| Date | 1 Mo | 2 Mo | 3 Mo | 6 Mo | 1 Yr | 2 Yr | 3 Yr | 5 Yr | 7 Yr | 10 Yr | 20 Yr | 30 Yr |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 8/7/20 | 0.08% | 0.09% | 0.10% | 0.12% | 0.14% | 0.13% | 0.14% | 0.23% | 0.41% | 0.57% | 1.01% | 1.23% |

Source: U.S. Treasury

These yields can be graphed into what is known as a yield curve. The yield curve will shift as the various rates change, so there will be a new yield curve every day. Note that the longer the maturity of the Treasury Notes and Bonds, the higher the interest rate. To put it simply, the longer the maturity, the greater the expected cumulative inflation, so the expected-inflation component increases. Note also that during the Pandemic Recession, the Federal Reserve Bank reduced short-term interest rates to effectively zero and reduced long-term interest rates to historical lows by buying Treasury Notes. For an example, see the historical rates on the bond market bellwether: the Ten-Year Treasury Note. (A bellwether is a leader or a leading indicator of a trend; the lead sheep of a flock wears a bell around its neck and is called the bellwether.) Looking back at setting interest rates, we can examine auto loans and mortgages to get a better idea of how this works. For an auto loan of 48 months, banks will take the 5-year Treasury Note rate and add a risk premium. For a 30-year mortgage, banks will take the 10-year Treasury Note rate and add a risk premium. By subtracting the corresponding Treasury Note rate from the auto loan or mortgage rate, we can calculate the risk premium. For example, this is how these rates looked as of August 7, 2020.

Table 7.2. Auto Loan and Mortgage Rates

| Loan | Loan Rate | Treasury Note Rate | Risk Premium |
|---|---|---|---|
| Auto Loan (48 months) | 4.27% | 5 yr Note = 0.23% | 4.04% |
| Home Mortgage (30 years) | 3.08% | 10 yr Note = 0.57% | 2.51% |

The risk premium added to the Treasury rate of similar term often tracks the default rate on that type of loan. This is because if, for example, 3% of your automobile loans are not paid back, you have to recover that 3% before you can earn any interest on the portfolio. (Historical delinquency rates on various loan types, measured at 90 days overdue, illustrate this pattern.)

Your Credit Score

The Fair Isaac credit score (FICO) is the most popular credit score used by financial institutions and other firms interested in your financial stability. Its scale ranges from 300 to 850, and since most Americans have a score of 700 or above, people with that magic 700 (or higher) are considered prime credit risks. Your FICO score is made up of a weighted mix of your financial transaction history. Here are the weights (and their explanations).

Payment History (35%)

The first thing any lender wants to know is whether you have paid past credit accounts on time. This helps a lender figure out the amount of risk it will take on when extending credit. This is the most important factor in a FICO Score. Be sure to keep your accounts in good standing to build a healthy history.

Amounts Owed (30%)

Having credit accounts and owing money on them does not necessarily mean you are a high-risk borrower with a low FICO Score. However, if you are using a lot of your available credit, this may indicate that you are overextended. Banks might interpret this to mean you are at a higher risk of defaulting.

Length Of Credit History (15%)

In general, a longer credit history will increase your FICO Scores. However, even people who have not been using credit for long may have high FICO Scores, depending on how the rest of their credit report looks.
Your FICO Score will look at how long your credit accounts have been established, including the age of your oldest account, the age of your newest account, and the average age of all your accounts. It will also factor in how long specific credit accounts have been established and how long it has been since you used certain accounts.

Credit Mix (10%)

FICO Scores will consider your mix of credit cards, retail accounts, installment loans, finance company accounts, and mortgage loans. However, it is not necessary to have one of each.

New Credit (10%)

Research shows that opening several credit accounts in a short amount of time represents a greater risk, especially for people who do not have a long credit history. If you can avoid it, try not to open too many accounts too rapidly. FICO is the leading credit scoring model. In 2006, the three major credit bureaus—TransUnion, Equifax, and Experian—joined forces to create VantageScore in order to compete with FICO. The VantageScore 3.0 is used mainly by the credit card and auto sectors, while the FICO score is used by the mortgage sector. The weights used by VantageScore 3.0 are similar to the weights in your FICO score. Here are a few facts about credit scores: • Average FICO Score: 706 • Average VantageScore: 685 • Average U.S. Household Credit Card Balance: \$8,602 • Average Annual Percentage Rate on Credit Cards: 17% • Amount of Time Adverse Info Stays on Your Credit Report: 7 years (Source: FICO, VantageScore, Federal Reserve Bank, 2019)

How to Get and Maintain a Good Credit Score

Your parents and acquaintances likely have a lot of advice on how to get and maintain a good credit score. Some of this advice is correct, but some of it is myth. In her 2019 article, “9 Myths About Credit Scores,” Demetria Gallegos presents a comprehensive overview of the do’s and don’ts of credit scores. Gallegos points out that with the near-universal use of credit scores today by banks, landlords, employers, rental agencies, and others, your credit score represents more than the financial aspects of your life: it can be the key to a better standard of living. Gallegos debunks the common myths around credit scores; I have listed these below and included my commentary (Gallegos, 2019).

Myth: Checking My Credit Score Hurts My Credit Score.

There is a difference between a hard inquiry and a soft inquiry. A hard inquiry is when a bank checks your credit in order to evaluate whether it will extend a loan to you. A soft inquiry is an employer checking your credit as part of a background check or a utility company checking your FICO score to set up a new account. Each hard inquiry will drop your FICO score by a few points; almost all soft inquiries will not. If you are simply checking on your own credit score, there will be no loss of points. You can check your credit score for free on a number of websites, like Discover Credit Score, Credit Karma, or Mint. Discover Credit Score is best in terms of data privacy and freedom from solicitation: if you just want to check your credit score, they do not share your info with any other credit card company or commercial enterprise. Credit Karma has the most comprehensive information available, providing a look at all of your outstanding credit and the information reported to two of the three credit agencies. It also allows you to dispute a late report or other inaccurate information directly from their website. However, they do sell your info to credit card companies, and you will likely receive credit card solicitations.
Mint is owned by the accounting and financial software company Intuit and is primarily a free personal budgeting site. You will need to sign up for the personal budget offering before you can enter the site.
Myth: If I Pay My Bills on Time, That Is All I Need to Worry About.
All you have to do is look at the credit score components above to realize that paying your bills is not enough on its own. Pay attention to how much credit you have available and how much of your total credit is outstanding. As a rule of thumb, you should only have about 30% of your total credit limit outstanding. Try spreading your purchases among two or more credit cards. Call your credit card companies and ask for your credit limits to be increased. If you have good credit, the credit card companies will oblige you 80% or more of the time. This will immediately reduce the percentage of your outstanding credit.
Myth: Carrying a Balance on My Credit Card Helps Boost My Credit Score.
Carrying a balance will not help your credit score. In fact, if the balance is above 30% of your limit, it will hurt your credit score. In addition, carrying a balance when you can afford to pay it off just costs you interest payments.
Myth: Closing an Old Credit Card with a High Interest Rate Will Help My Score.
Since the amount of outstanding credit in part determines your credit score, it is best to pay off high-interest credit cards and leave them open. Do not cancel them unless they charge you an annual fee. If there is a fee involved, call the credit card company, ask them to substitute a card without a fee, and ask to keep the same credit card number. Remember, the length of the credit extended helps your score. FICO ignores the closed account status and continues averaging the age of the closed account with your open accounts. Vantage, however, removes closed accounts (and your payment record) from its calculation, so you lose the value of positive payment on a past account. The best policy is to keep high-interest credit cards open and use lower-interest credit cards for purchases.
Myth: Opening a New Retail Credit Card Is Good for My Credit Score.
Retailers entice you with 0% interest and other incentives to open new credit cards. When you do, the average age of your credit gets younger, and you lose a few points from the inquiry. In addition, the interest rate from the retailer after the initial period is generally higher than the average interest rate on your other credit cards.
Myth: It Hurts My Credit Score to Comparison Shop for a Mortgage, Auto, or Student Loan.
The credit rating models take comparison shopping into account. If the credit rating agencies see multiple hard inquiries around the same time, they will assume you are shopping around. However, there is a time limit on this. VantageScore bundles similar inquiries within 14 days into one hard inquiry. FICO has shopping periods of 14 to 45 days, depending on the type of credit. In any event, a good tip if you are buying a house is to wait until after closing to take on any new credit for furniture or appliances. This will ensure the highest credit score as you go into closing.
Myth: The Older My Unpaid Debt, the More It Hurts Me.
Late payments, collections, foreclosures, and Chapter 13 bankruptcies remain on your credit report and hurt your credit score for seven years. However, the older the credit problem, the less it affects your credit. So if you have an unfortunate event like a bankruptcy or foreclosure, stay current with any new or existing credit you are not delinquent on.
As to collections, credit card companies aggressively pursue delinquent accounts for about two years. After that, they often sell the delinquent debt to collection agencies and take the debt off their books. If a legitimate collection company contacts you, you should try to make a deal to pay only part of the debt. Collection companies usually buy delinquent debt for 20% of its full value, so anything they collect over that is profit. The Consumer Financial Protection Bureau (CFPB) has established rights for you when dealing with collection companies. They cannot threaten or harass you. If they do, contact the CFPB. If you have gone through a bad financial period, a good way to re-establish credit is to get a secured credit card. With this type of card, you deposit money into your financial institution and spend up to that preset limit. If you pay off the charges each month, your credit score will improve, and in about a year (maybe less), you can likely get a regular credit card again.
Myth: Selecting "Credit" While Using Your Debit Card for a Purchase Is Good for My Credit Score.
There is no effect at all on your credit score if you select "credit" when using a debit card. However, you should be sure that your financial institution does not charge any fees for debit transactions.
Myth: Credit Reports Are Accurate.
Credit reporting firms make mistakes. An incorrect score could come from something as simple as someone who shares your name being put on your report; it could also be the result of a criminal stealing your identity and taking out credit cards in your name. Experts advise each of us to check our credit reports every four months. The most effective way to do this is to take advantage of the free credit reports to which every consumer is entitled. You are entitled to one free credit report each year from each of the three credit-reporting companies (TransUnion, Experian, and Equifax). Order a credit report every four months, but order the report from a different one of the three credit-reporting companies each time. That will give you three free reports each year, spaced out every four months. You can also monitor your credit through Credit Karma. It is free and alerts you if there is a significant change in your credit score or if there is a hard inquiry.
Pay for Deletion
Finally, if you are seriously delinquent on a credit card, you can try a negotiation called pay for deletion. Since the financial institution will have to sell the debt for 20% of its face value to a collection company once they write off the debt, the collection specialist at the financial institution (before it gets sold to a collection company) will be willing to make a deal. Offer them 30% or 40% of the outstanding balance with the agreement that they will delete the negative reporting from the credit agencies' reports.
Credit Rating Agencies
The three major credit rating bureaus in the United States are Experian, Equifax, and TransUnion. These agencies pay financial institutions to send them your credit data every month, including credit limits, the amount of utilized credit, and your payment history. The credit agencies use this to calculate your credit score and sell these reports to banks, credit unions, landlords, auto finance companies, and even potential employers. Unfortunately, these credit scores have become the be-all and end-all of your ability to get a loan or a credit card, not to mention the interest rate you will pay for that loan or credit card.
As was stated earlier, a FICO score of 700 or higher is golden. In 2019, 67% of Americans had a FICO score of 670 or higher, so the majority of Americans have a FICO score of good or better. Banks often see a FICO score of 700 or better as the "sweet spot" for them to extend credit at a reasonable interest rate. This does not mean that you cannot get a credit card or an auto loan if you have a score less than 700, but you will pay a higher interest rate, so a 700+ score is worth aiming for. Your FICO score will improve if you use only 30% or less of your credit limits, so having more credit cards but not using them improves your score. That means you should get credit cards but not use them. The financial institutions used to report on your financial activity to the credit scoring agencies at the end of each month, but now they seem to be reporting weekly or even daily, so check Credit Karma at least every two weeks. It will give you a good sense of how credit scores fluctuate based on your activity. Most importantly, you should immediately report any errors. You can do this for free on the Credit Karma website.
Good and Bad Debt
Certain assets are worth borrowing money for. We can call these investments. Borrowing to go to college, to purchase a house, or to buy an automobile are all investments; these are good debt. A house is an investment because it will appreciate in value and will save you rent, while education is an investment because it will lead to a better job and higher income. An automobile is an investment because you will likely need one to commute to your job. Bad debt is borrowing for consumption. Do not borrow on a credit card unless you can pay it off at the end of the month. You do not really need that 55-inch TV; you can buy it if you have the money to buy it, but do not finance it with a credit card. Of course, if you are unemployed and need to use your credit card to buy food, that is another matter. In that case, the hopeful outcome will be that you will find a new job and the credit card debt will just be temporary.
Credit Cards Are Addictive
The nature and structure of the human brain makes it difficult not to run up credit card debt. Our brain almost automatically compares cost to benefit when we are considering a purchase; however, benefits are evaluated in a different part of the brain than costs. The reward center of the brain, the ventral striatum, activates in response to the item we want. The prospect of getting that item feels good. On the other hand, the insula, the area of the brain that evaluates pain and expected loss, reacts to actually having to pay for the product. Using a credit card to purchase something, whether we need it or not, gives us a sudden rush of instant gratification. However, we do not feel the pain of having to pay for it until the credit card bill arrives. Credit cards are addictive because they hijack the ventral striatum (part of the dopamine system), which gives us the pleasure of buying something we want. On top of this, at least eight percent of men and women are addicted to shopping, further strengthening the potent addiction mechanism of credit cards.
How to Use Credit Cards Wisely
We all need credit cards. We need them to pay for airplane tickets, hotels, and things we order online. Also, having a large credit limit but using very little of it will increase your FICO Score. However, here is my best advice. Only buy something with a credit card that you can pay off at the end of the month when your credit card bill arrives.
It is as simple as that!
Credit Card Providers and the Games They Play
Credit card providers begin their games with enormous marketing efforts. Credit card providers either email or snail mail over two billion new offers for credit cards per year in the United States. Given that there are 159,000,000 individuals employed in the U.S. (and presumably able to pay a credit card bill), this corresponds to roughly a dozen new credit card offers each year for each employed person. Second, the fees for late payment or exceeding your credit limit are exorbitant, ranging from $30 to $41. According to the Consumer Financial Protection Bureau, credit card companies raked in $12 billion in late fees in 2020, when millions of workers were laid off. Consumers with subprime credit cards and private-label store cards are particularly susceptible, especially in relation to their credit limits. The report also highlights that consumers living in low-income and majority-Black communities are disproportionately impacted by credit card late fees. Third, the offers of 0% "introductory" interest for a period of time are not really 0% interest. The credit card companies charge you a 3% to 5% "processing fee," which covers their cost of funds, and then the rate jumps to 15% to 25% when the period is up. Finally, Visa and Mastercard are a virtual duopoly in their marketplace. A duopoly is a market that has only two competitors in it. These credit cards have the overwhelming majority of market share, and their above-normal profits are evidence of monopolistic behavior.
Auto Loans and Leases
Taking out a loan to buy an automobile is good debt. If you live in America's suburban sprawl, you typically need a car to travel to work. Purchasing an automobile is a big event in most people's lives, so try to get advice from a parent or friend who has experience in that area. An automobile is, in economists' jargon, a durable good: a good that lasts over three years. The price to consider when purchasing a durable good is the user cost. The user cost of a car is the total monthly (or annual) cost of financing and operating the vehicle. Specifically, these are the costs you need to investigate:
• The annual finance payment
• The annual fuel cost
• The annual maintenance cost
• The annual insurance costs
• The annual replacement costs of tires, etc. (most important when a car is over 3 years old)
• The trade-in value
These costs can vary significantly among various makes and models of cars. The largest component of your user cost is the financing. Interest rates for automobile purchases will vary with the market interest rate and generally track the 5-year U.S. Treasury Note, plus a risk premium. According to The Wall Street Journal, as of August 2021, the average rate on a 48-month new car loan nationwide was 4.06%. Based on this, we can determine the annual user cost of a $30,000 car:
Table 7.3. Annual User Cost of a $30,000 Car
Annual Finance Costs: $8,148 ($679 per month)
Annual Insurance: $1,134
Annual Fuel Costs: $2,392 (16,000 miles per year at $2.99/gallon)
First Year Maintenance: $500 (oil change and tire rotation)
TOTAL: $12,174
The financing rate varies significantly with market interest rates, and often the auto manufacturer will give lower rates in order to sell specific models. Be sure to ask for a dealer quote on financing your car. You can use an auto loan calculator to figure out your monthly finance costs. A common saying in the auto industry is that your new car is worth 25% less the minute you drive it off the dealer's lot.
In actuality, your car's value decreases around 20% to 30% by the end of the first year. From years two to six, depreciation ranges from 15% to 18% per year, according to recent data from Kelley Blue Book, which tracks new and used-car pricing. As a rule of thumb, in five years, cars lose 60% or more of their initial value. However, this can vary widely among makes and models, so it is worthwhile to investigate to what extent your chosen vehicle keeps its value. Remember that you will never recoup the cost of premium customization you may buy on your new car. Special models, expensive wheels, or deluxe sound systems will not increase the trade-in value of your car. Essentially, you are throwing this money away. Unfortunately, 2021 was a bad year to buy a new or used vehicle. As we exited the Pandemic Recession, the demand for new automobiles increased while at the same time there were serious supply shortages of the computer chips that run everything in today's sophisticated cars. In addition, the prices of used cars increased 40% over the year 2020. However, this inflation in auto prices should be temporary, so here are some ways to minimize your user cost when buying a car in years like 2021.
1. Finance your purchase through a credit union. For example, I have seen rates between 1% and 2% on new auto loans at Pentagon Federal Credit Union.
2. Finance your loan over 60 months in order to bring down your monthly payments.
3. Do not load your car up with customizations.
4. Buy a used car with a warranty instead of a new car.
You should first establish a monthly budget, keeping in mind the user costs. Then make a list of the few cars that will fit that budget. Drive the three cars that fit your budget and choose the one your gut tells you you like the most. That way you will be happy with the purchase. As an economist, I recommend leasing your car instead of purchasing it. Leasing is just another method of financing your car purchase, with a number of added benefits. Leasing significantly reduces your monthly payment, helping your cash flow. When you purchase a vehicle, you pay interest on the amount you borrow, and you have to pay off (or amortize) the entire cost of the vehicle over the term of the loan (typically 4 to 7 years). When you lease a vehicle, you pay interest on the amount you borrow, but you only have to amortize the difference between the purchase price and the vehicle's residual value. Here is an example of a purchase vs. lease monthly payment (a sketch of the payment arithmetic follows at the end of this section):
Purchase
• Price: $32,000
• Loan: $30,000
• Interest rate: 4%
• Term: 48 months
• Monthly Payment: $677.00
Lease
• Price: $32,000
• Loan: $30,000
• Interest rate: 4%
• Term: 36 months (almost all leases are for 36 or 39 months)
• Monthly Payment: $535.00
When you purchase a car, you must pay sales tax up front. Not all states have sales taxes, but in Pennsylvania, for example, where the sales tax is 6%, this would be $1,920. For a lease, you only pay sales tax on the lease payment every month. You can purchase the car or truck at the end of the lease for the residual value, or you can just turn the vehicle in and lease another new vehicle.
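The purchase and lease payments above follow from standard amortization arithmetic. Here is a minimal Python sketch: the purchase payment reproduces the $677 figure, while the lease is modeled as amortizing the loan down to a residual value at lease end. The 45% residual is my assumption for illustration (the text does not give one); it lands close to the $535 lease payment quoted above.

```python
# Monthly payment on an amortizing loan:
# payment = P * r / (1 - (1 + r)**-n), with r the monthly rate.
def monthly_payment(principal, annual_rate, months, balloon=0.0):
    """Payment that amortizes `principal` down to `balloon` over `months`."""
    r = annual_rate / 12
    pv_balloon = balloon / (1 + r) ** months   # balloon discounted to today
    return (principal - pv_balloon) * r / (1 - (1 + r) ** -months)

# Purchase: amortize the full $30,000 over 48 months at 4% -> ~$677/mo.
print(f"purchase: ${monthly_payment(30_000, 0.04, 48):,.2f}/mo")

# Lease: amortize down to the car's residual value at lease end.
# The 45% residual is an assumption, not a figure from the text;
# it gives ~$532/mo, near the $535 quoted above.
print(f"lease: ${monthly_payment(30_000, 0.04, 36, balloon=0.45 * 30_000):,.2f}/mo")
```

This also makes the source of the lease's lower payment obvious: you are only paying off the value the car loses during the lease, not the whole car.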
What to Do if You Fall Behind
Communicate with your lender if you are having any difficulty in making your vehicle payment, whether you are purchasing or leasing. Often, there are programs to assist you. For example, during the pandemic, Citibank allowed auto loan customers to skip up to three payments. You may also get a cheaper rate. The important thing is to call your lender the first time you are going to miss a payment, before you go into default. Default happens when you are 90 days delinquent on a loan payment. (Chart: delinquency rates on credit card debt compared to other types of debt.) Note that student loans have the highest delinquency rate of all types of debt.
Personal Loans
It is always a good idea to establish a line of credit, a type of personal loan, with your lender. A line of credit that covers you if you overdraw your checking account will save you from any overdraft fees. Revisit this line of credit at least once per year. If your payments have been on time, ask to increase the line of credit. Of course, do not borrow money unless you really need to, but it is good to have a line of credit available for emergencies. Also, your FICO score is based in part on your credit limit and how much of that limit you are using. By increasing your line of credit but not using it, you can improve your FICO score. Personal loans without collateral to secure them will have a higher interest rate associated with them. However, personal loans are much cheaper than borrowing on your credit card, which is another reason that a line of credit is valuable.
Pay off the Debt With the Highest Interest First
Benjamin Franklin said, "A penny saved is a penny earned." This is also true of debt. While you might try your best to avoid it, you still can end up with credit card or personal loan debt. Your personal debt will most likely have an interest rate of 9% or above. This is unsecured debt: debt with no asset like a car or house that can be repossessed. Secured debt, like an auto loan or mortgage, and government-backed debt, like a student loan, will typically have an interest rate under 9%. Be sure to cover your monthly payments so you can maintain your credit rating, but if you have some money left over, make payments on your credit cards and personal loans first.
Identity Theft
There are plenty of criminals out there trying to steal your identity and use it to commit fraud. The internet has made it both much easier to do so and much harder to catch these criminals. Given this serious risk, here are some of the things you should not do:
• Never give out your internet password. Not even your internet provider will ask for it.
• Never give out your social security number. Even your bank or credit union will only ask for the last four digits to use for account access.
• Never give out personal information to someone calling you. If it is someone you do not know, ask for a phone number and say you will call them back.
If you have been the victim of identity theft, the Federal Trade Commission says this is what you should do:
1. Call the companies where you know fraud occurred and speak with their fraud department.
2. Place a fraud alert and get your credit reports. Place a free, one-year fraud alert by contacting one of the three credit bureaus.
3. Report identity theft to the FTC.
After this, you will need to try to recover from the identity theft.
1. Close any new accounts opened with your stolen identity.
2. Call your accounts and get them to remove any bogus charges.
3. Call the credit bureaus and correct your reports.
4. Consider a freeze on all your accounts and credit cards. Open new ones.
5. Check your credit reports each month.
The Last Resort: Bankruptcy
If you cannot get an accommodation from your lenders or your debt is just too high to work out from under, the last resort is bankruptcy. Keep in mind, however, that it will not discharge your student debt.
Ask someone you know who is well informed to recommend a good bankruptcy attorney, or look up legal aid. Do not think of bankruptcy as a stigma. Plenty of people have declared bankruptcy, recovered, and become successful.
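Before leaving this chapter, it is worth seeing in numbers why carrying a credit card balance is so costly. Here is a minimal Python sketch using the average balance ($8,602) and average APR (17%) quoted earlier in this chapter; the $250 fixed monthly payment is my assumption for illustration.

```python
# Months and total interest to pay off a credit card balance with a
# fixed monthly payment. Balance and APR are the U.S. averages quoted
# earlier in the chapter; the payment amount is an assumption.
balance = 8_602.00      # average U.S. household credit card balance
apr = 0.17              # average annual percentage rate
payment = 250.00        # assumed fixed monthly payment

monthly_rate = apr / 12
months = 0
total_interest = 0.0
while balance > 0:
    interest = balance * monthly_rate   # interest accrued this month
    total_interest += interest
    balance = balance + interest - payment
    months += 1                         # (final month's payment is smaller)

print(f"Paid off in {months} months ({months / 12:.1f} years), "
      f"total interest ${total_interest:,.2f}")
```

Under these assumptions, the payoff takes roughly four years and costs several thousand dollars in interest, which is exactly why paying the highest-interest debt first, and paying cards in full each month, matters.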
Student Loans Are Good Debt
The current amount of outstanding student loans is approximately $1.7 trillion; meanwhile, the total outstanding credit card debt stands at $922 billion. Student debt has risen continuously over the past decade and a half, continuing a trend of the previous four decades. Borrowing money for a college education is an investment. In 2020, the average annual salary of a high school graduate in the United States was $37,000, while the average annual salary of a college graduate was $61,000. Those with a college degree will earn at least 60% more money over their lifetimes. In addition, a college degree is more likely to lead to career advancement. Considering a cost/benefit analysis of college degrees, we can calculate a return on investment (ROI) for a college degree. In 2019, the College Board reported that a moderate college budget for a four-year in-state public college averaged $26,590 per year, while a moderate budget at a private college averaged $53,980 per year. Thus, the return on investment is the extra annual earnings divided by the total cost of the degree: $24,000 / (4 × $26,590) = $24,000 / $106,360, or roughly 23% per year. The ROI for a private college can be calculated the same way: $24,000 / (4 × $53,980) = $24,000 / $215,920, or roughly 11% per year. In comparison, the stock market has an average annual return of 10%. This calculation also does not include any cost-of-living increases or other raises. However, it assumes that you graduate from college, as the higher salaries above are for college graduates. Student debt has been driven higher by the relentless cost of a college education, having grown 145% since 1971.
Student Debt Abuse by Educational Organizations
For-profit colleges (like University of Phoenix, Corinthian Colleges, and Strayer University) and for-profit training schools (like ITT Technical Institute and Education Management Corporation) are some of the biggest culprits of student debt abuse. These organizations accounted for about 40% of all student loan defaults while only representing about 11% of all loans. According to a 2014 report by The Institute for College Access and Success, a student is three times as likely to default at a for-profit school as at a 4-year public or non-profit college; further, they are almost four times as likely to default as at a community college (see reports on ticas.org). One-third of college students drop out entirely. More than half of the students enrolled in college take more than 6 years to graduate. For-profit colleges have abysmal graduation rates. Sixty-seven percent of students at not-for-profits have graduated after six years, while the same is true for only 23% of students at for-profit schools. Dropouts are then saddled with student debt but still stuck at the same salary level as before going to college. Because these schools are motivated by profit, they admit less qualified students and offer less support. Beginning in the 1980s, government student loans led to a massive expansion of for-profit educational institutions. However, the Obama administration cracked down on for-profit schools with the worst graduation rates, denying them the ability to qualify for federal student loans. As a result, their revenue declined precipitously. For example, the University of Phoenix's revenue declined 70%, and Corinthian Colleges declared bankruptcy.
The Rules of Student Debt
While you are in school, the government will pay the interest on federal loans if you qualify based on income. When you stop going to school, you must start paying back the loan. There are four types of student loans from the U.S.
Department of Education:
• Direct Subsidized Loans are loans made to eligible undergraduate students who demonstrate financial need to help cover the costs of higher education at a college or career school. (Maximum loan is $12,500 per year of schooling)
• Direct Unsubsidized Loans are loans made to eligible undergraduate, graduate, and professional students, but eligibility is not based on financial need. (Maximum loan is $12,500 per year of schooling)
• Direct PLUS Loans are loans made to graduate or professional students and to parents of dependent undergraduate students to help pay for education expenses not covered by other financial aid.
• Direct Consolidation Loans allow you to combine all of your eligible federal student loans into a single loan with a single loan servicer.
What Majors Are Worth Taking on Student Debt
When you are deciding what you want to major in, keep in mind the kind of salary you will potentially earn, as well as the demand for employees in your field. Dozens of websites can give you this data so you can make an informed decision, such as the Federal Reserve Bank of New York. The Department of Labor also has extensive employment projections of what fields will be in demand over the next ten years. You should choose a field you will enjoy working in; however, it is worth taking a close look at the employment potential and salary in your chosen field before deciding.
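As a rough check of the ROI arithmetic earlier in this chapter, here is a minimal Python sketch. All salary and cost figures are the ones quoted in the text; the comparison to the stock market's 10% average return is the text's benchmark.

```python
# Simple annual ROI of a college degree: extra yearly earnings
# divided by the total cost of the four-year degree.
hs_salary = 37_000          # average high school graduate salary (2020)
college_salary = 61_000     # average college graduate salary (2020)
public_cost = 4 * 26_590    # four years at an in-state public college
private_cost = 4 * 53_980   # four years at a private college

extra_earnings = college_salary - hs_salary   # $24,000 per year

for label, cost in [("public", public_cost), ("private", private_cost)]:
    roi = extra_earnings / cost
    print(f"{label}: total cost ${cost:,}, annual ROI {roi:.1%}")
# public: ~22.6% per year; private: ~11.1% per year -- both at or
# above the stock market's ~10% average annual return.
```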
No one understands the miracle of compound interest better than Warren Buffett, a multi-billionaire and Chair of Berkshire Hathaway, best known as the Sage of Omaha [Nebraska]. An article in The Wall Street Journal by Jason Zweig (8/28/20) details Buffett's thinking about time and the value of money. Patience and endurance are the "investing superpowers" that helped him achieve his $82 billion of personal wealth: From the earliest age, Mr. Buffett has understood that building wealth depends not only on how much your money grows, but also on how long it grows. Around the age of 10, he read a book about how to make $1,000 and intuitively grasped the importance of time. In five years, $1,000 earning 10% would be worth more than $1,600; 10 years of 10% growth would turn it into nearly $2,600; in 25 years, it would amount to more than $10,800; in 50 years, it would compound to almost $117,400 (2020). Because we will be discussing the time value of money, we will inevitably be discussing math in this chapter. However, it is not advanced math, so you should find it easy to understand.
The Time Value of Money
Receiving a payment today is better than receiving one a year from now, in part because the general rate of inflation (e.g., two percent) makes the money worth less next year due to decreased purchasing power (that is, two percent less). It is also better if you are going to save or invest that money: putting it into a savings account at 3% interest (for example) means that in a year you will have earned an additional 3% on top of the original amount. We can illustrate this with the equation
$P_{year\ 2} = P_{year\ 1} \times (1 + r),$
where $P_{year\ 1}$ is the principal amount you received at the beginning of year one, $P_{year\ 2}$ is the principal amount you will have at the beginning of year two (including the interest you earned), and r is the interest rate (here, 3%). However, interest or dividend money in savings or an investment is compounded. That is, if you leave the interest or dividends you earned over year one in savings for year two, you will again receive 3% interest on the principal plus 3% interest on the interest you already earned in year one. We can represent this mathematically as follows:
$P_{year\ 3} = P_{year\ 1} \times (1 + r) \times (1 + r) = P_{year\ 1} \times (1 + r)^2.$
Let's say that you put $1,000 in a savings account at 3% interest and leave it to compound. Here are the amounts you will have at the beginning of each year: $1,000.00 in year one, $1,030.00 in year two, $1,060.90 in year three, $1,092.73 in year four, and $1,125.51 in year five. The compounding of the interest may not seem like a lot here, but it makes a huge difference when you are saving for retirement. For another example, let's say you start working at 21 and retire at 68, spending 47 years in the labor force. As we will discuss in more detail later, one investment in a mutual fund with a widely diversified portfolio saw a return of an average of 10.1% per year for ninety-four years. If you were to invest $1,000 in this diversified portfolio and did not touch it for 47 years, you would have a retirement nest egg that looked like this:
Original Amount: $1,000.00 at beginning of year 1
Interest Rate: 10.1% compounded
Time Period: 47 years
Amount at end of 47 years: $92,045.80
Furthermore, you will most likely deposit more into your retirement account each year, rather than just $1,000 once at the beginning of your career.
If you invested $1,000 per year in this diversified stock portfolio, at the end of your career your nest egg would look like this:
Principal Amount: $1,000.00 invested at the beginning of each year
Interest Rate: 10.1% compounded
Time Period: 47 years
Amount at end of 47 years: $1,084,535.20
Most of the time, if you work for a good employer, they will sponsor a 401(k) retirement plan and match your contributions. The most common plan is that you contribute 3% of your salary, and your employer matches it. Let's say that together you contribute $4,000 per year for 47 years and put it all in a diversified stock portfolio. In that case, here is your retirement nest egg:
Principal Amount: $4,000.00 invested at the beginning of each year
Interest Rate: 10.1% compounded
Time Period: 47 years
Amount at end of 47 years: $4,338,140.81
This money is also tax-deferred: you pay no tax until you retire and withdraw money to live on.
Intertemporal Consumption and Savings
Saving and borrowing allow intertemporal consumption. Basically, you move your consumption from one time period to another. If you do not spend all your income in year one, your savings can increase your consumption in later years. On the other hand, if you spend more than your income in year one (by using credit cards or taking out a personal loan), you must consume less than your income in subsequent years to pay back your debt. As the prime example of this, saving money for retirement each year means you are consuming less currently in order to have money for retirement. However, you are also earning interest or dividends that will allow you to consume even more than the original amount when you reach retirement. Dr. Franco Modigliani, a Nobel Prize winner in economics, explains our consumption and saving decisions over a lifetime with the life cycle hypothesis. Until college graduation, when we have student loans, we are dissaving; that is, we are consuming more than our income and financing it with student loans. After we begin our career, we consume less than we earn because we are saving for retirement. Finally, when we retire, we are once again dissaving by spending the retirement savings we have built up.
The Future Value of Dollars Received Today
The mathematical formula for the amount of principal at the end of n periods is
$P_n = P_0 \times (1 + r)^n,$
where $P_0$ is the starting principal and r is the interest rate per period. However, you do not need to work out the future dollar value by hand. The internet offers lots of calculators that will do this for you.
The Present Value of Dollars Received in the Future
The present value of future dollars is called the Net Present Value (NPV), and it involves the economic principle of Opportunity Cost. The opportunity cost is the next best use for your money instead of your current purchase, or it can be the next best use of your time instead of what you are using it for now. For example, the opportunity cost of paying college tuition could be giving up on buying a new car. The opportunity cost of going to class could be getting a few more hours of sleep. The opportunity cost of receiving a specific amount of money next year instead of this year is the interest or dividend you could have earned by investing it. This concept is important in business because the principal way a business can value an investment is the stream of income the investment throws off, discounted to the present. This is called the Net Present Value of Discounted Cash Flow. What interest rate (or discount rate) should you use to discount future streams of income?
As a student, your opportunity cost would most likely be the 2% interest you would earn in a savings account. For business, the discount rate used is most often 8% or 10% per year because this is the return they would get by investing in their business if they had the money now instead of later. Suppose a company buys an investment that generates $100,000 per year for ten years. The discounted cash flow or net present value of this cash flow stream is:
Amount Per Year: $100,000.00
Discount Rate: 8%
Time Period: 10 years
Discounted Cash Flow: $724,688.78
The mathematical formula for Net Present Value is
$NPV = \frac{CF_1}{(1+r)^1} + \frac{CF_2}{(1+r)^2} + \cdots + \frac{CF_n}{(1+r)^n},$
where $CF_t$ is the cash flow in period t and r is the discount rate. Note that we are dividing the cash flow or income from each period by 1 plus the discount rate, so each cash flow is reduced by the discount rate, compounded over time. (The dollar figure above treats each year's $100,000 as arriving at the beginning of the year rather than the end.) There are many online calculators that you can use to calculate the Net Present Value of future cash flows.
The Present Value or Future Value of an Annuity
An annuity is simply a stream of money paid periodically. It could be interest from a savings account or dividends from a stock investment. The present or future value of an annuity can be calculated using the present or future value calculators presented above.
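The chapter's compound interest and NPV examples are easy to reproduce. Here is a minimal Python sketch that recomputes the $1,000 lump-sum retirement example and the $100,000-per-year cash flow stream; the beginning-of-year payment timing is an assumption chosen to match the figures in the text.

```python
# Future value of a lump sum: P_n = P_0 * (1 + r)**n
p0, r, n = 1_000.00, 0.101, 47
fv = p0 * (1 + r) ** n
print(f"FV of $1,000 at 10.1% for 47 years: ${fv:,.2f}")   # ~$92,045.80

# Net present value of $100,000 per year for 10 years at 8%, with each
# payment assumed to arrive at the start of the year (t = 0..9), which
# reproduces the ~$724,688.78 figure quoted above.
cash_flow, rate, years = 100_000.00, 0.08, 10
npv = sum(cash_flow / (1 + rate) ** t for t in range(years))
print(f"NPV: ${npv:,.2f}")
```

If the payments instead arrive at the end of each year (t = 1..10), the NPV drops to about $671,008, so the timing assumption matters more than it might first appear.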
Different Types of Financial Institutions
Financial innovation describes the changes in the types of institutions or services offered in the financial marketplace. Here are some financial innovations that have occurred recently:
• The expansion of insurance companies into banking (e.g., Travelers Insurance merged with Citibank's parent, Citicorp, to form Citigroup)
• The expansion of automated teller machines
• The invention of online payment systems (e.g., PayPal, Apple Pay, etc.)
• The expansion of investment banks into commercial banking (e.g., Goldman Sachs now offers checking accounts and other services)
• The creation of completely online banks (e.g., SoFi) and completely online insurance companies (e.g., bestow.com)
As we discussed before, depository institutions are known as financial intermediaries. They accept deposits on which they pay interest and make loans on which they charge higher interest, making a profit on the difference. The loans they make include credit cards, mortgage loans, personal loans, and business loans. All are set at different interest rates. The difference between the average aggregate rate financial intermediaries pay on their total deposits and the average aggregate rate they charge on their total portfolio of loans is called the net interest margin. This must be enough to pay for the overhead plus make a profit for their stockholders. The net interest margin can vary. (Chart: net interest margins going all the way back to the 1980s.) Note that there is a lower limit to the net interest margin, and this gives us an insight into the banking business model. If the net interest margin for a bank gets significantly below 3%, the bank will likely be unable to meet its overhead costs, putting it into serious financial trouble. Similarly, according to the National Credit Union Administration, the net interest margin at credit unions has also been running about 3% for the last decade. The prime rate, or the rate that banks give to their most creditworthy customers, is always exactly 3% above the Federal Funds Rate. Of course, most commercial bank customers do not get the prime rate on their loans, but it is the benchmark against which commercial loans are priced. Most customers pay 1% to 2% above prime on their short-term loans.
Commercial Banks
Commercial banks accept deposits into checking and savings accounts. They use these deposits to make business, personal, and auto loans, as well as issue credit cards and mortgages. These banks also borrow money in the Commercial Paper Market and lend this out at higher rates. Commercial paper consists of short-term, unsecured promissory notes (essentially I.O.U.s), with terms typically of 30 to 180 days. There is a huge market for borrowing via commercial paper from banks. The current outstanding amount of commercial paper in the U.S. is about $1.1 trillion. With an average term length of 30 days, banks must reborrow the money every 30 days. Commercial banks receive a charter that gives them permission to operate. However, they must follow the rules of the Fed and remain solvent. The Fed audits commercial banks regularly, and a charter can be revoked if a bank is insolvent or engages in prohibited behavior. All deposits in commercial banks are insured by the Federal Deposit Insurance Corporation (FDIC) up to $250,000 per account. The FDIC is a government-sponsored insurance company that charges premiums to commercial banks.
If a bank becomes insolvent, the FDIC will usually sweep in on a Friday after close of business, seize the bank, fire the officers, and immediately call the customers to let them know their deposits are insured and therefore safe. Usually, the FDIC will then sell the assets to a solvent bank.
Savings Institutions
Savings institutions or savings banks accept deposits and provide personal and auto loans, as well as issue credit cards and mortgages. They tend to focus less on commercial loans than commercial banks do. As with commercial banks, deposits are insured by the FDIC up to $250,000 per account. Prior to 1980, savings institutions were legally limited to only offering checking and savings accounts, and their lending was restricted to mortgages. Following World War II, they paid 3% on deposits and lent mortgages at 6%. Then in 1979, Paul Volcker, chair of the Federal Reserve Bank, raised short-term interest rates to 17% to control excessively high inflation. This rate stayed high for years, going up to 19% in 1980 and 1981. This caused disintermediation at the savings institutions, forcing them to raise the rate on savings accounts to 8% in order to stay competitive. However, most of their money was already lent out at 6% for thirty-year mortgages. This was a recipe for bankruptcy. The U.S. Government had to bail out the industry, costing taxpayers about $100 billion (though this now seems like a bargain compared to the massive bailout during the Great Recession). In 1980, there were more than 4,500 savings institutions insured through federal or state government programs. As of December 2017, FDIC data reveals that only 752 remained.
Credit Unions
Credit unions are non-profit institutions, and as a depositor, you are a part owner. Like a commercial bank, credit unions offer checking and savings accounts and certificates of deposit. They also offer auto and personal loans, and they issue credit cards and mortgages. Instead of the FDIC, your deposits are insured up to $250,000 per account by a similar organization, the National Credit Union Administration (NCUA). The NCUA is also responsible for issuing charters to credit unions. As mentioned before, credit unions tend to have lower fees and better interest rates on savings accounts and loans since they do not have to generate profits. Most people use their local credit union for car purchases because the rate is normally lower than what is offered by dealers and commercial banks. Credit unions are also an excellent place to apply for a mortgage. Despite all of this, it is worth noting that commercial banks' mobile apps and online technology tend to be more advanced. According to the NCUA, as of 2019, there were 5,335 federally insured credit unions with 117.3 million members. At the same time, there were 5,177 commercial banks and savings institutions. So, the number of credit unions and the number of commercial banks in the U.S. are approximately equal. Almost half of all U.S. adults are members of a credit union.
Finance Companies
Finance companies are non-depository financial institutions that provide personal loans and financing, as well as issue credit cards. These companies lend to individuals who have trouble borrowing from sources such as banks and credit unions; thus, they charge higher interest rates and are often ruthless in foreclosing on a defaulted loan. Because of this, you should avoid finance companies.
Securities Firms
Securities firms, such as Goldman Sachs, do Wall Street work.
They sell new issues of stocks and bonds for companies that want to raise money. They also advise companies on mergers and acquisitions. For this work, they earn millions of dollars in fees. Securities firms also provide stock brokerage services to individuals. In order to buy and sell stocks, you must hold a membership on the stock exchanges, so individuals need to go through brokers. Since there is so much competition for customers, securities firms have reduced the cost of trading stocks to zero, leading to an explosion of amateur stock pickers. We will discuss investing at length in a later chapter.
Insurance Companies
Traditionally, insurance companies have sold automobile, homeowners, and health insurance, as well as annuities. However, about a decade ago, insurance companies entered personal wealth management, charging fees typically equal to 1% of the assets under management. Many insurance companies like Lincoln Financial and Prudential have aggressively sought this business since it is risk free and quite lucrative. We will discuss insurance in more depth in a later chapter.
Investment Companies
Investment companies, such as Vanguard and Fidelity Wealth Management, invest other people's money in mutual funds. We will discuss this more later, but Vanguard's invention of low-cost index mutual funds has brought fees down dramatically. Historically, investment advisors charged fees of 1% of the value of your assets to manage your investments. Now, the average mutual fund fee at Vanguard (and others) is one-tenth of 1%.
Financial Conglomerates
Many financial institutions combine some or all of the services listed above. For example, Citigroup was created by the merger of Travelers Insurance and Citicorp, so its activities include almost all of the above. Also, Goldman Sachs, a securities firm, is now entering retail banking.
Payday Lenders
Avoid payday lenders at all costs. Their main function is to advance money to people waiting for a paycheck. The fees they charge are exorbitant, and they usually prey on low-income people.
Banks Are Not Your Friends
Banks have shareholders and are motivated by profit. They run advertisements that implicitly say they will be your best friend and help you achieve your financial goals. However, this is just not true. They are interested in maximizing their profits, and this can come into conflict with your goals. Banks charge higher fees, pay lower interest on savings deposits, and charge higher interest rates on loans. Also, one of the biggest sources of income for banks is what they term in their financial statements non-interest income. This income includes a number of charges, like ATM fees, overdraft fees, and late fees. ATM fees, for example, average $2.97 per transaction in the U.S. On top of that, if you go to an ATM not operated by your bank, you can be charged an additional fee, averaging $1.72 nationally. Typically, overdraft fees are $35 or higher. In 2017, commercial banks charged $34 billion in overdraft fees. These fees came from only 9% of their customers, almost exclusively low-income. Additionally, if the overdraft is not corrected right away, the bank will continue to charge fees until the account balance runs down to zero; they will then close the account. Since the bank is already earning profits from the interest it charges on loans, the overdraft fees are pure profit. Many commercial banks sell their mortgages to Fannie Mae and Freddie Mac, so they must conform exactly to the rules of these institutions.
Your mortgage could end up being owned by anybody. A credit union might be a better choice. They will keep all or most of their mortgages, so they are more flexible on their requirements. If you do not have perfect credit, a credit union is more likely to give you a mortgage than a commercial bank.
Financial Services Offered by Banks and Credit Unions
Checking Accounts
Financial intermediaries all offer checking accounts. They typically do not pay interest on checking accounts, but some commercial banks charge a fee if the account does not hold a minimum amount of money or has no activity. Some commercial banks charge you $2.00 or more if you request a paper account summary each month. Commercial banks might also offer fee-free checking accounts for students, but as soon as you graduate, they put the standard fee structure in place. As a rule, credit unions do not charge you fees on checking accounts. Ideally, you only need to keep money in your checking account to pay bills. Any extra money should be in a savings account. Arrange a "sweep" of your checking account at a certain time each month. A "sweep" is a banking term that means your financial institution will transfer any excess money from your checking account into your savings account, where it will earn interest (a short code sketch of this logic follows below). Some securities firms, like Charles Schwab and Goldman Sachs, also offer checking accounts, insured by the FDIC for up to $250,000 per account. They often will pay interest on checking because they will invest your balance in money market funds. The idea is to have one-stop shopping for banking and stock or mutual fund investing.
Savings Accounts
Savings accounts are where you should transfer any money that you do not need to cover daily expenses. Savings accounts pay interest, but the interest paid is very close to the federal funds rate. The federal funds rate is now 0% to 0.25%, so savings accounts pay about 0.5%. This is better than nothing. When you join a credit union, you automatically get a checking and a savings account. I have explained above how to use these to create a budgeting vehicle to nudge you to save each month.
Credit Cards
All financial intermediaries offer credit cards. They will be lending you their own funds but will contract with Visa or Mastercard to do the billing and collecting. I have an entire chapter on credit cards, so I refer you to that. However, allow me to repeat the cardinal rule for credit cards: only use credit for a purchase if you can pay it off completely at the end of each month.
Safety Deposit Boxes
Commercial banks and credit unions offer safety deposit boxes for rent at their branches. These offer security for important papers, like auto titles and house deeds, and for valuable jewelry. They are completely confidential.
ATMs
Commercial banks and credit unions have automated teller machines at their branches for cash withdrawals. You need to be aware of what fees these charge. Commercial banks will charge a fee for withdrawing cash, but credit unions usually do not. In addition, if you withdraw money at an ATM at a convenience store, you will pay an additional fee on top of the bank's fee. This could add up to $4.00 or more to withdraw cash. However, certain retailers like grocery stores will allow you to withdraw cash without fees.
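Returning to the checking-account "sweep" described above, the logic is simple enough to express in a few lines of Python. The balances and the $1,500 checking floor here are made-up numbers for illustration only.

```python
# A monthly "sweep": move anything above a checking-account floor
# into savings, where it earns interest. Floor and balances are
# hypothetical illustration values.
def sweep(checking: float, savings: float, floor: float = 1_500.00):
    """Transfer the excess above `floor` from checking to savings."""
    excess = max(0.0, checking - floor)
    return checking - excess, savings + excess

checking, savings = 2_350.00, 10_000.00
checking, savings = sweep(checking, savings)
print(f"checking ${checking:,.2f}, savings ${savings:,.2f}")
# -> checking $1,500.00, savings $10,850.00
```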
Cashier's Checks
Certain legal transactions, such as your payments at closing on a house, require a cashier's check, also called a "bank check." A cashier's check is a guarantee to the receiver of the check that your account will have the money to cash it. When you ask the financial intermediary to issue a cashier's check for a certain amount (assuming you have the money in your account), the intermediary will put a hold on the corresponding amount and issue a check under the bank's name.
Why Banks Want You to Sign Up for Electronic Bill Payment
About a decade ago, there was a huge push by commercial banks for all their customers to sign up for electronic bill pay. A study done by the Banking Trade Association found that if a customer signed up for electronic bill pay, it was so difficult to change all the data that 95% never left the bank. Thus, the bank could continue to charge higher fees, and the customers would not leave. If you are currently at a commercial bank, do not sign up for electronic bill paying. Switch to a credit union right away. If or when you are at a credit union, it is a very good idea to sign up for electronic bill pay, since it is so convenient.
Who Regulates Banks and Credit Unions
The Federal Reserve Banks supervise and regulate commercial banks, and the FDIC insures their deposits. Some banks are instead chartered and regulated by the Comptroller of the Currency, but with all banks insured by the FDIC, the same regulatory rules apply. The NCUA regulates and insures credit unions, ensuring that all credit unions have to abide by the same rules.
How Interest Rates on Deposits and Loans Are Determined
The federal funds rate is the rate that banks regulated by the Federal Reserve charge each other for overnight loans. The federal funds rate is set by the Fed as its principal tool of Monetary Policy, and it becomes the "wholesale cost of money" for commercial banks. In 2020, due to the Pandemic Recession, the Federal Reserve reduced the funds rate to 0% to 0.25%. This essentially means that commercial banks can borrow in the short-term money markets at 0% to 0.25%, which causes the savings rates offered by commercial banks to be about the same: banks will pay their depositors roughly the same interest rate that other banks would charge them to borrow money.
Supply and Demand of Funds
The familiar law of supply and demand also applies to money and credit. If there is a lot of demand for money or credit relative to supply, interest rates rise, and vice versa. However, the Federal Reserve Bank creates all the money, and it is its job to maintain moderate interest rates so economic actors can easily borrow money and keep the economy moving. In times of recession or credit liquidity squeezes (not enough money supply to satisfy demand), the Fed injects money into the banking system to bring down interest rates. As I said above, in 2020, the Fed injected enough money to essentially bring interest rates down to 0%.
Bank Runs and Financial Crises
In economics, moral hazard can exist when a party to a contract can take risks without having to suffer the consequences. It can also be characterized as cleaning up another's mistakes so they do not have to live with the negative consequences of their actions and so will make the same mistake over and over. As a perfect example, in the Great Recession, every major bank in the U.S. (with the exception of J.P. Morgan) became insolvent. The Federal Reserve Bank bailed them all out.
Since that bailout, the major banks know that they are "too big to fail," so they will continue to take big risks in the future. This is a prime example of moral hazard. In the Great Depression, thousands of banks went bankrupt, and people lost their deposits. There were runs on the banks, but the money was gone. That is the reason the FDIC was established: to stop runs on the banks. It guarantees deposits up to $250,000 per account. Unfortunately, financial crises are cyclical, and with the Fed bailouts essentially encouraging moral hazard, bank failures will be cyclical also. When there is a financial crisis, a higher number of borrowers default on loans, banks become insolvent, and the FDIC or the NCUA has to take them over and make the depositors whole.
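To tie this chapter's net interest margin discussion to numbers, here is a minimal Python sketch built on a toy balance sheet. The balances are invented for illustration; the loan and deposit rates reuse figures quoted earlier in the book (4.27% auto loans, 3.08% mortgages, 0.5% savings).

```python
# Net interest margin: average rate earned on the loan portfolio
# minus average rate paid on deposits. All balances are hypothetical.
loans = [(400_000, 0.0427), (600_000, 0.0308)]   # (balance, rate): auto, mortgage
deposits = [(900_000, 0.005)]                    # savings deposits at 0.5%

interest_earned = sum(bal * rate for bal, rate in loans)
interest_paid = sum(bal * rate for bal, rate in deposits)
earning_assets = sum(bal for bal, _ in loans)

nim = (interest_earned - interest_paid) / earning_assets
print(f"net interest margin: {nim:.2%}")   # ~3.1%, near the 3% floor
```

Even this toy bank sits barely above the roughly 3% margin needed to cover overhead, which illustrates why a margin significantly below 3% spells serious trouble.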
Reasons to Rent or Own a Home
Owning a home has always been the American Dream. In fact, the rate of American home ownership has always been greater than in most European countries. Historically, the homeownership rate in the U.S. has ranged from 63% to 65%, going back to 1970. It rose to 69% just before the Great Recession, but then in 2015, as homes were foreclosed, it dropped all the way back down to 63%. Still, it remains a dream of most people to own their own home. There are two main factors to consider when trying to decide whether to buy or rent: how long you will be in a location and what your current financial situation is. You need to own a house at least three years to recover your transaction costs, and you should consider whether you can afford the down payment and the monthly mortgage costs. Also, be sure to check if you would be eligible for tax benefits. You can deduct the annual interest on your mortgage plus local real estate taxes. Generally, young people rent while in college, then rent in a downtown area after they get their first job, and then buy a home after they become a couple (especially if they have children). People under 25 tend to rent, as they are not yet locationally stable and because the mortgage interest tax deduction does not help them much. Between the ages of 25 and 55, people tend to buy. At any age, however, low-income people tend to rent due to economic barriers.
You Are Buying a Location, Not Just a Physical Structure
When buying a home, you are buying into a school district, a government, and a neighborhood. Depending on the district, school quality (and associated taxes) can vary significantly and can be a major expense in owning a home. The government in your city or township may or may not be interested in actively maintaining the infrastructure of the municipality, which can have an effect on road conditions and municipal taxes. Finally, you want a friendly neighborhood, and if you buy a house when you have children, you want other children for your kids to play with. When you have narrowed down your property choices, it pays to knock on a few doors to introduce yourself and ask about the neighborhood. When either renting or owning, you consume housing services. This is easy to understand with renting but might be a little harder to grasp when it comes to owning. A house is a durable good, or a good that lasts more than three years. A house provides you with housing services that you then pay for. The correct price for a durable good is not its purchase price but what is called its annual user cost: your annual out-of-pocket expenses. For a home, the user cost includes:
• Mortgage payments
• Real estate and other taxes
• Home insurance
• Utilities (electricity, gas or oil, water, sewage)
• Trash collection
• Home and yard maintenance
This probably sounds like it would be more expensive than renting, but really, it is not. Your landlord incurred these same expenses to own the property, and your rent has these expenses taken into account; instead of paying various municipalities, people, and companies, you pay your landlord.
Calculate What You Can Afford
There are constraints established by financial institutions on the size of the mortgage you are allowed to take out. The size of the mortgage, of course, will dictate the price of the house you can afford. The Consumer Financial Protection Bureau is a government agency whose job it is to make sure that financial institutions treat consumers fairly.
According to the CFPB, your debt payments can be no more than 43% of your gross income: "The 43 percent debt-to-income ratio is important because, in most cases, that is the highest ratio a borrower can have and still get a Qualified Mortgage. Evidence from studies of mortgage loans suggest that borrowers with a higher debt-to-income ratio are more likely to run into trouble making monthly payments." Here is what makes a qualified mortgage: "A Qualified Mortgage is a loan a borrower should be able to repay. Beginning on January 10, 2014, lenders making virtually any residential mortgage loan will have to assess a borrower's ability to repay the loan. A Qualified Mortgage is presumed to meet this requirement. A Qualified Mortgage is a loan that avoids risky features and meets other requirements. In general, the borrower also must have a total monthly debt-to-income ratio, including mortgage payments, of 43% or less." Your debt-to-income ratio is all your monthly debt payments divided by your gross monthly income. This ratio is one way that lenders measure your ability to manage your monthly loan payments. For example, let's say your monthly debt payments look like this:
• $1,500 for your mortgage
• $100 for an auto loan
• $400 for the rest of your debts
This means your monthly debt payments total $2,000. If your gross monthly income is $6,000, then your debt-to-income ratio is 33 percent ($2,000 is 33% of $6,000). Remember that your student loans must be included in this calculation, so student loans can be a drag on buying your first home. These regulations are the direct result of the housing and mortgage crisis that led to the Great Recession. Immediately prior to the housing bust, mortgage companies were committing vast fraud in the initiation and documentation of mortgages and then selling these mortgages to investors. Investors then lost their money in these fraudulent mortgages, and many people lost their houses because they could not make the mortgage payments. But where did the magic number 43% come from? As I will detail below, the government-sponsored (and now government-owned) companies Fannie Mae and Freddie Mac either own or guarantee about 60% of the residential mortgages in the United States. From an analysis of the mortgages they hold (including defaulted mortgages), Fannie Mae and Freddie Mac have determined that 43% is a safe ratio for a household.
Difficulties of First-Time Home Buyers
The biggest difficulty for first-time home buyers is saving up the down payment. As we said above, you can usually put only 10% down on a house and often only 5% down. For the 2019 median house price in the U.S., a 10% down payment would be $23,000. Saving this amount is difficult, so you might do what many young people do: go to your relatives for help. Remember, though, that you cannot borrow the down payment, but anyone can give you a gift of some or all of the cost. The second biggest difficulty for first-time home buyers is getting a mortgage. As I mentioned earlier, in order to get a qualified mortgage, your debt-payments-to-gross-income ratio cannot be more than 43%. This includes auto loan, credit card, and student loan payments. Recently, student loans have surpassed the $1.5 trillion mark and exceeded the amount of credit card debt owed by all the households in America ($970 billion in 2018). The amount of student debt that young people have is negatively affecting the economy, slowing down consumer spending and home purchases.
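The 43% test itself is simple arithmetic. Here is a minimal Python sketch using the figures from the CFPB example above:

```python
# Debt-to-income (DTI) ratio: all monthly debt payments divided by
# gross monthly income. Numbers are from the CFPB example above.
monthly_debts = {"mortgage": 1_500, "auto loan": 100, "other debts": 400}
gross_monthly_income = 6_000

dti = sum(monthly_debts.values()) / gross_monthly_income
print(f"DTI: {dti:.0%}")                        # 33%
print("qualified mortgage OK:", dti <= 0.43)    # must be 43% or less
```

Note that adding a $700 monthly student loan payment to this example would push the ratio to 45% and put a qualified mortgage out of reach, which is the student-debt drag described above.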
Another difficulty for first-time buyers is that no one is really building a lot of "starter homes" for first-time buyers. This is a vicious cycle: many first-time buyers cannot qualify for homes at today's prices, so builders are not building as many, so the choices for those first-time buyers who do qualify are limited. Finally, after a big rush to the cities, the price of housing in hip neighborhoods is getting too expensive, leading millennials to flock to the suburbs. In a 2019 article in The Wall Street Journal entitled "American Suburbs Swell Again as a New Generation Escapes the City," Valerie Bauerlein discusses this phenomenon:

Millennials, the generation now ages 23 to 38, are no longer as rooted as they were after the economic downturn. Many are belatedly getting married and heading to the suburbs, just as their parents and grandparents did.

Millennials are trying to find small towns that give the feel of a community, instead of a big sprawling suburb with big houses. This means a longer commute to work but better schools and environment for the whole family.

House as Nest or House as Investment

No doubt you have been told that a house is a great investment, and this is generally true. For most people nearing retirement, their work retirement fund and their home equity are the only assets they have to depend on. However, I will point out that a house is both a nest and an investment. As a nest, you consume housing services from the physical structure you own. You therefore want to be conservative in the type of mortgage you select, and I recommend a simple 30-year fixed rate mortgage. As an investment, a house is an asset that pays you a return either in a dividend or in the appreciation of its value. More than that, owning a house also offers some income tax relief.

Rate of Return on Houses as Investments

Prior to the Great Recession, houses were a great investment. From 1964 to 2009, the average growth rate of housing prices was 5.4% (Freddie Mac). In order to fully understand the rate of return on home ownership, we need to analyze how leverage (that is, borrowing part of your investment capital) impacts your investment. Let's say you buy a home at $230,000, the median national price (Zillow, 2019). You then put down 20% of this, or $46,000. The return must be calculated on your actual cash investment of $46,000 and not the total house price. If the house goes up in value by 10%, it is now worth $253,000, for an increase of $23,000. If you had not taken out a mortgage but paid all cash for the home, your return on investment would have been 10%. However, since you only invested $46,000 cash into the home, your return on investment is $23,000 / $46,000 = 50%. Note, however, that I did not take into account the transaction costs of buying and selling the home and all the costs while owning it.
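Here is a minimal Python sketch of the leverage arithmetic above; the prices are the hypothetical ones from the example, and it ignores transaction and carrying costs just as the example does.

```python
# Leverage and return on investment: the example above in code.
price = 230_000                 # purchase price
down_payment = 0.20 * price     # cash actually invested: $46,000
appreciation = 0.10             # house rises 10% in value

gain = price * appreciation           # $23,000 increase in value
roi_all_cash = gain / price           # 10% if you paid all cash
roi_leveraged = gain / down_payment   # 50% on your $46,000 of cash

print(f"Gain: ${gain:,.0f}")
print(f"All-cash ROI:  {roi_all_cash:.0%}")   # -> 10%
print(f"Leveraged ROI: {roi_leveraged:.0%}")  # -> 50%
# Caution: leverage magnifies losses the same way. A 10% price drop
# would be a -50% return on the same down payment.
```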
Finding a Real Estate Agent

Just like finding a contractor to work on your home, finding a good real estate agent requires research. Since the seller pays the commission to the realtors, it costs you nothing to hire your own real estate agent. Here are some tips for finding a good agent. First, since housing is a local market, you want to find an agent who has a lot of local experience. A local agent will know the home prices in the market and, through their contacts with other agents, know about homes that will be coming on the market soon. This latter information is valuable in a hot real estate market. Second, it is best to get a referral from friends or acquaintances. A good agent will be loyal and focused on your needs, not just looking to make a commission. A good agent will also know competent home inspection services and mortgage and title insurance companies to help with your purchase. Finally, you should sit down and interview the referred agent. Ask what they know about the school districts and municipalities in the areas in which you are interested. Ask them to show you recent comparable sales in those areas, and ask them what they think you will need to pay. Talk to them about pre-qualifying for a mortgage so you know if you can afford the houses in that area. You need to spend some time talking to the agent about the school districts and the types of houses you like. A good agent will be patient with you to make sure you get a house you can afford and love.

Home Prices

The price of housing, like almost everything else, is determined by supply and demand. The more buyers and the fewer sellers in a local market, the faster housing prices rise. Conversely, with more sellers and fewer buyers, the slower housing prices rise. Prior to the Great Recession, however, people actually believed that housing prices never went down. Sadly, that was not to be the case. The National Association of Realtors' general rule is that if there is a six months' supply of houses, the market is in equilibrium; that is, housing prices move neither up nor down. However, if there is less than a six months' supply, house prices tend to rise, and vice versa. The first thing your real estate agent will do before they meet with you is to look up comparables. Comparables are houses that were sold or are for sale in your neighborhood or in the neighborhood you are considering. In order to be comparable, a house should have approximately the same square footage as your house or the house you can afford, have the same number of bedrooms and bathrooms, and share other traits in common. Zillow and Redfin are two good sources for house prices. They can give you fairly accurate estimates of average home prices in the area you are considering. When you look at prices, only look at sales prices, not the listing price. Only a sold house will give you the correct price that a willing seller and a willing buyer agreed upon.

The Down Payment and Private Mortgage Insurance

Conventional wisdom says that you need 20% as a down payment on a house. However, home buyers can usually put 5% or even 3.5% down if they arrange a U.S. Federal Housing Administration (FHA) loan on a 30-year fixed-rate home mortgage. Note that 3.5% FHA down payments are usually capped at $417,000 for home mortgage loans, although there are exceptions to that rule depending on location. Many banks will also often approve loans up to $417,000 with 5% down. If the loan is larger than that, lenders will usually ask for another 5% down.

Regular 30-Year Fixed Mortgage

Traditional mortgages, like a 30-year fixed rate mortgage, usually require at least a 5% down payment. For example, if you are buying a home for $200,000, you will need $10,000 to secure a home loan.

FHA Mortgage

For a government-backed mortgage like an FHA loan, the minimum down payment is 3.5%. For a home that costs $200,000, you will need to save $7,000 to get a loan.

VA Loans

A U.S. Department of Veterans Affairs (VA) loan offers military members and veterans home loans with zero money down.
The U.S. Department of Agriculture (USDA) also has a zero-down loan guarantee program for specific rural areas. In 2016, the average home down payment was 11%, according to the National Association of Realtors. Home buyers age 35 and under on average put down 8% in the same time period. When you are figuring out how much to save for a down payment, know that, while you are not allowed to borrow the money for the down payment, it is perfectly acceptable to use any cash gifts from friends, family, or business partners. Setting aside any workplace bonuses or financial windfalls (like an inheritance) can also curb the impact of having to save. Many young people (including myself) got help with the down payment on their first house from their parents, grandparents, or other relatives; there is no need to be prideful about it. Accept with gratitude any help you get, and be sure to send them a thank-you letter. If you take out a traditional mortgage and do not make a 20% down payment, your financial institution will likely make you purchase Private Mortgage Insurance (PMI). PMI is arranged by the lender and provided by private insurance companies to insure the financial institution against loss of money if it forecloses on your house and sells it. A buyer usually would be required to put down 20%, and the financial institution would provide a mortgage of 80% of the purchase price. If the buyer defaults and the lender forecloses on the home, the lender only has to sell the house for 80% of what you paid for it to be made whole on its mortgage. The 20% down payment gives the lender a cushion to recover its loan, even if home values have declined since the borrower bought the house.

Pre-Qualifying for a Mortgage

If you are ready to buy a house, it is important to pre-qualify for a mortgage. You do this by going to your financial institution and submitting all the paperwork they require before you make an offer on a house. You can do this while you are still house hunting. The lender will give you a letter saying you qualify for a mortgage of a certain amount, addressed either to you or to your real estate agent. A lender can easily determine the maximum mortgage that you qualify for. As we said before, your total monthly debt payments plus the mortgage payment cannot exceed 43% of your gross monthly income. If your credit score is acceptable, the lender will give you a letter testifying to the maximum mortgage you qualify for. A credit score of 700 or above is ideal. A credit score from 600 to 700 may affect the interest rate you will be charged on the mortgage and may affect the maximum amount you can borrow. However, this usually will still allow you to get a mortgage close to the 43% maximum mortgage guideline. A credit score under 600 will be a problem in securing a mortgage, but not an insurmountable one. If you have a credit score under 600, you should first try your credit union or an online mortgage broker like Rocket Mortgage or Ditech.com. Pre-qualifying for a mortgage is an important competitive edge in winning a bid on a house, especially if several people are interested in the same house as you. The sales contract that you will sign will have a contingency clause which states that your offer is dependent on securing a mortgage. If you already have a letter from your lender saying they have pre-approved you for a mortgage, then the seller can feel comfortable that you will be able to close the deal.
It is common practice to get pre-approved for a mortgage now, so if you do not, you will be at a competitive disadvantage. This is especially true when there is a seller's market (more demand than supply of houses) as opposed to a buyer's market.

Types of Mortgages

There are many different flavors of mortgages in the marketplace. These are the three most common:

• A fixed rate 30-year mortgage
• A fixed rate 15-year mortgage
• A three-year adjustable rate mortgage

The thirty-year fixed rate mortgage is by far the most common type, and I recommend this for your principal residence. In this case, be conservative. Take out a conventional or FHA fixed-rate thirty-year mortgage loan when you buy your house. A thirty-year fixed rate mortgage has a consistent monthly payment. This gives you a specific amount you need to budget each month. Also, the longer the term of the loan, the lower the amount of principal that must be paid back (or amortized) every month; that means a smaller monthly payment. As with any loan, the interest you pay is on the outstanding principal. But if the outstanding principal changes every month (along with the interest) as you pay down the loan, how do you end up with a consistent monthly payment? Simply put, the paydown of the principal of the loan changes every month. Here is the typical relationship of interest to principal each month in a thirty-year fixed rate constant payment mortgage: in the early years, most of each payment goes to interest; in the later years, most goes to principal. A fifteen-year fixed rate mortgage is similar to a thirty-year fixed rate mortgage, but you repay the principal over fifteen years instead of thirty. The only major advantage of a fifteen-year loan is that you pay off the principal sooner, which, in addition to being satisfying, saves a lot of interest. However, the monthly payment is larger. Below, you can see an example of how much interest you can save. The loan amount in each case is $200,000, and the interest payments are shown in orange on the chart. Now let's compare payments between a thirty-year and a fifteen-year fixed rate mortgage. Here is a 95% Loan-to-Value loan for a thirty-year and a fifteen-year fixed rate mortgage on the median home price in the United States:

• Median home price: $227,000
• Down payment (5%): $11,350
• Closing costs (2%-3%): $4,900
• Mortgage amount: $222,450

For a $222,450 mortgage, here is what your monthly payment would be. Since rates and home prices vary, you can use an online calculator to calculate a mortgage. Generally, younger people buying their first or second house cannot afford the higher payment on a fifteen-year mortgage, so they choose the thirty-year instead. My advice is to go with the thirty-year mortgage with the lower monthly payment. This will help your cash flow.
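The monthly payment on a fixed rate mortgage comes from the standard amortization formula: payment = P · r / (1 − (1 + r)^−n), where P is the loan amount, r the monthly interest rate, and n the number of monthly payments. Here is a short Python sketch comparing thirty- and fifteen-year payments on the $222,450 mortgage above; the interest rates are hypothetical, so treat the outputs as illustrative only.

```python
# Fixed rate mortgage payment via the standard amortization formula.
def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

loan = 222_450
# Hypothetical rates; 15-year loans typically carry a slightly lower rate.
for years, rate in [(30, 0.0397), (15, 0.0340)]:
    pay = monthly_payment(loan, rate, years)
    total_interest = pay * years * 12 - loan
    print(f"{years}-year at {rate:.2%}: ${pay:,.0f}/month, "
          f"${total_interest:,.0f} total interest")
```

Running this shows the trade-off in the text: the fifteen-year payment is several hundred dollars a month higher, but total interest over the life of the loan is far lower.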
A three-year adjustable rate mortgage (ARM) has an interest rate that, after three years, is adjusted upward (or downward) based on a certain designated financial index or to a predetermined rate. The principal payment is usually based on a thirty-year amortization. The advantage of this loan is that the interest rate is lower at the beginning. For example, on September 17, 2019, the rate on a five-year ARM ranged from 3.00% to 3.25%, while the rate on a thirty-year fixed rate mortgage was 3.97%. However, when the interest rate is adjusted, the payment is often higher, and this can create a cash flow problem for the borrower. There are many different types of adjustable-rate mortgages, but they all have common elements. For example, if the mortgage is a five-year ARM, it will be tied to some index of interest rates, such as the five-year U.S. Treasury Note. Then, after the first five years, the interest rate will be changed once a year in accordance with any changes in the five-year Treasury Note. This also means that the monthly payment will change (up or down) as the interest rate of the index changes. The lender will also specify how much above the index interest rate your mortgage interest rate will be. This is called the margin or markup. As an individual, you cannot borrow money at the same rate as the U.S. government, as you represent a higher risk for the lender. The higher risk is reflected in the higher rate. If we look at the rates in the previous paragraph, we see that the five-year ARM mortgage has a margin or risk premium of 1.4% over the five-year U.S. Treasury Note (3.00% 5yr ARM – 1.6% 5yr Treasury = 1.4% risk premium). The likelihood is that once the first five years are over, the rate will increase and, as a result, your monthly payment will increase. The assumption here is that five years from now your salary will have increased, and you can afford a higher monthly payment. However, the ARM interest rate will have what are known as caps on it. Caps are limits on how much the ARM interest rate can rise in any one year or over the life of the loan. Here is a hypothetical example:

• 5-year adjustable rate mortgage: 5.25%, which will not adjust more than +/- 0.5% in the first 5 years, then adjusts to the market rate at the time
• Constant monthly payment: $1,190
• Principal amortization: based on a 30-year schedule
• Rate for the first five years: 5.25%; after five years, the rate adjusts every year on the anniversary of the loan to a rate that is 2.00% above the rate of the five-year U.S. Treasury Note
• Annual cap: upon adjustment, the rate will not go up (or down) more than 0.25% each year it is adjusted, or go up more than 1% total over the life of the loan

With an adjustable rate mortgage, you may end up doing a partial amortization or a negative amortization of your principal. Partial amortization or zero amortization in a mortgage will occur if you take out an interest-only mortgage. If you are paying only the interest on the loan, it reduces the monthly payment. However, the downside is that the principal does not decrease and must be paid off if you sell the house or refinance the loan. If you are paying only the interest on your home mortgage plus a little bit of the principal (in order to reduce the monthly payment), the outstanding balance will not decrease as rapidly as it would with a thirty-year or fifteen-year home loan. If you take out an adjustable rate mortgage, you may also end up with negative amortization of the principal of your home loan. A negative amortization loan is one in which you are not even paying the full market interest on your home loan; any unpaid interest is added to the balance of unpaid principal. Negative amortization can be offered with certain types of mortgage products. Although negative amortization can help provide more flexibility to borrowers by reducing the monthly payment, it can also increase their exposure to interest rate risk, and it actually increases the amount they owe.
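To see how caps constrain an ARM, here is a small Python sketch that applies the hypothetical terms above (index plus a 2.00% margin, a 0.25% annual cap, and a 1% lifetime cap); the future Treasury yields are made up purely for illustration.

```python
# Hypothetical ARM rate resets with an annual cap and a lifetime cap.
start_rate = 0.0525   # rate for the first five years
margin = 0.0200       # markup over the 5-year Treasury index
annual_cap = 0.0025   # rate can move at most 0.25% per adjustment
lifetime_cap = 0.01   # and at most 1% total over the life of the loan

rate = start_rate
# Made-up 5-year Treasury yields at each annual reset after year five:
for year, index in enumerate([0.040, 0.045, 0.050], start=6):
    target = index + margin                                   # uncapped reset rate
    step = max(-annual_cap, min(annual_cap, target - rate))   # apply the annual cap
    new_rate = rate + step
    # Apply the lifetime cap relative to the starting rate:
    new_rate = min(new_rate, start_rate + lifetime_cap)
    new_rate = max(new_rate, start_rate - lifetime_cap)
    rate = new_rate
    print(f"Year {year}: index {index:.2%} -> capped rate {rate:.2%}")
```

Even when the index would push the rate to 7%, the caps hold each reset to a quarter point, so the payment rises gradually rather than all at once.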
Fannie Mae and Freddie Mac

Fannie Mae, or FNMA, is shorthand for the Federal National Mortgage Association. Freddie Mac, or FHLMC, refers to the Federal Home Loan Mortgage Corporation. The main difference between Fannie and Freddie comes down to whom they buy mortgages from. Fannie Mae mostly buys mortgage loans from commercial banks, while Freddie Mac mostly buys them from smaller banks that are often called thrift banks. Fannie Mae and Freddie Mac were created by Congress to perform an important role in the nation's housing finance system: to provide liquidity, stability, and affordability to the mortgage market. They provide liquidity (ready access to funds on reasonable terms) to the thousands of banks, savings and loans, and mortgage companies that make loans to finance housing. It may not seem like it, but the banking business model, especially in mortgage lending, is a very unstable business model. Banks borrow short (that is, borrow money from depositors or from 90-day Commercial Paper lenders) and lend it long through multi-year credit cards, one-year lines of credit, three-year auto loans and, in the case of the mortgage market, three- to thirty-year mortgages. Depositors can demand their money back at any time, and the 90-day Commercial Paper loans must be renewed every 90 days. If a large portion of the depositors demanded their money back at once, or if the banks were not able to roll over the Commercial Paper, the bank would be illiquid and would likely have to close. Fannie Mae and Freddie Mac buy the three- to thirty-year mortgages and give the banks a profit for originating them. The banks get their money back and can lend it out again. As to affordability, Fannie Mae and Freddie Mac bundle the mortgages and attach a guarantee to the bonds they sell. Since the market considers this a "quasi-guarantee" by the U.S. government, the interest rate that FNMA and FHLMC must pay on these bonds approaches the low interest rates on U.S. Treasury bonds. This translates to low interest rates on mortgages that qualify for purchase by these two institutions. They are very powerful in the mortgage market, owning or having guaranteed over 60% of all U.S. mortgages. Fannie Mae and Freddie Mac buy mortgages from lenders and either hold these mortgages in their portfolios or package the loans into mortgage-backed securities (MBS) that may be sold. Lenders use the cash raised by selling mortgages to the enterprises to engage in further lending. The enterprises' purchases help ensure that individuals and families that buy homes, and investors that purchase apartment buildings and other multifamily dwellings, have a continuous, stable supply of mortgage money. These institutions also set the rates and the conditions for the mortgages they will buy (called prime mortgages). Fannie Mae and Freddie Mac also help stabilize mortgage markets and protect housing during extraordinary periods of stress in the broader financial system. Fannie Mae was first chartered by the U.S. government in 1938 and was a company whose stock was sold to the public, and Freddie Mac was chartered by Congress in 1970 as a private company, whose stock was also sold to the public. During the Great Recession, both Fannie Mae and Freddie Mac became insolvent and were taken over by the U.S. Treasury Department. They are still in what is called conservatorship and pay their profits to the U.S. Treasury.

The Home Mortgage Crisis of 2006 to 2009 and the Great Recession

The following graph shows median home prices in the U.S. before, during, and after the mortgage crisis of 2006 to 2009 (Case/Shiller, 2009). The Case/Shiller Index is one of the most respected indices of home prices.
Note that home prices for the 10 largest U.S. cities (the Composite 10) and the 20 largest U.S. cities (the Composite 20), along with the National Index, began dropping in early 2006 and continued to drop until sometime in the year 2009. As Akerlof and Shiller (both Nobel Prize laureates in Economics) contend in their book Animal Spirits, most recessions begin with a financial crisis (2009). The Great Recession was no exception. Fannie Mae and Freddie Mac were not owned by the government but were instead private organizations that offered stock to the public. Fannie Mae and Freddie Mac were very profitable and became the envy of the Wall Street banks. In fact, as of now, they own or guarantee 60% of the mortgages in the United States! They became so profitable by purchasing mortgages from banks and mortgage brokers (called the originators) and assembling them into bonds they then sold. Fannie Mae's and Freddie Mac's guarantees were seen by investors as being equivalent to an implicit guarantee by the U.S. government. Therefore, Fannie Mae and Freddie Mac were able to pay very low interest rates on the bonds, allowing them to make large profits on the difference between the interest rates they were receiving on the mortgages they purchased and the low rates at which they were borrowing money by selling bonds. Wall Street banks wanted to get in on the action. The problem was that even the biggest banks could not match the implicit government guarantee that backed the Fannie Mae and Freddie Mac bonds. Instead, they came up with the idea of paying for default insurance on the bonds they wanted to issue to buy mortgages. The banks went to AIG, the largest insurance company in the world, and convinced it to issue default insurance. With this AIG guarantee, the banks were able to get the highest credit rating on their bonds and to borrow money almost as cheaply as Fannie Mae and Freddie Mac. Like all screwy schemes, things went well (and profitably) for a while (from 2000 to 2003), but then the banks got greedy. As they started to run out of very credit-worthy mortgages to buy (the prime mortgages), the Wall Street banks bought less credit-worthy mortgages (known as subprime mortgages). These subprime mortgages were structured in a dizzying array of new types of loans, including loans where the income and assets of the home buyer were self-reported and not verified (called liar loans). These subprime mortgages were all bundled into bonds with some prime mortgages, and the AIG guarantee gave the bonds the highest credit rating. In 2006, subprime mortgage holders began to default—not just a few, but millions. This caused a total halt to the bond market for Wall Street banks. According to the Generally Accepted Accounting Principles (GAAP) rules, if there is no market for an asset you own, you must write its value down to zero in your financial statements. The Wall Street banks had to write down the mortgage-backed bonds they held to zero, and as a result every major bank in the United States became insolvent (except for J.P. Morgan, which did not participate as much in this bond party). They all had to be bailed out in 2008 by the Federal Reserve. The dominoes began to fall. The availability of cheap and easy money to buy houses had caused a spike in housing prices from 2000 to 2005. The mortgage defaults and the resulting disappearance of this easy money then caused housing prices to drop precipitously. Millions had bought homes at elevated prices and borrowed mortgages on those elevated prices.
When home prices fell, the value of their homes was less than the mortgage amount they owed on their home (called being underwater). Ultimately, three million people lost their homes to foreclosure in 2008, and it is estimated that as many as ten million people lost their homes to foreclosure in the Great Recession. The financial crisis and the ensuing drop in home prices and foreclosures were one of the major causes, if not the major cause, of the Great Recession. Eight and a half million people lost their jobs in this recession. The value of stocks in the U.S. stock market dropped 38% to 40% during the recession. Since then, housing prices have recovered in the United States (and internationally). Even further, some cities can now be classified as unaffordable for middle class people. Here is data on the most unaffordable cities, comparing home prices to income:

Transaction Costs of Purchasing a Home

Here is an estimate of the closing costs on a 95% Loan-to-Value loan and a thirty-year fixed rate mortgage on the median home price in the United States:

• Median home price: $227,000
• Down payment (5%): $11,350
• Closing costs (2%-3%): $4,900
• Mortgage amount: $222,450

The closing costs will be quite substantial, and these will likely include the following:

Table 11.1. Closing Costs

• Mortgage Points (1% of mortgage): $2,200
• Origination Fee: $700
• Appraisal Fee: $300
• Application Fee: $200
• Attorney Fee (Deed Preparation): $500
• Inspection Fee (Termites or Radon): $300
• Title Insurance: $500
• Other Fees: $200
• TOTAL: $4,900

You may also be asked to pay for some other items at closing:

• Reimbursement of Oil: If your purchased home has oil heat, you will likely be asked to reimburse the seller for oil left in the oil tank.
• Prepayment of Insurance: The bank giving you the mortgage may ask you to pay at settlement the first six months of homeowner's insurance, so they know, at least initially, that the home is insured.
• Reimbursement of Real Estate Taxes: If the seller has already paid all the real estate taxes for the year and there are, e.g., six months left in the tax year, you will have to reimburse the seller for six months of real estate taxes.
• State Transfer Tax: Some states have real estate transfer taxes, which are charged on home sales. As an example, the state of Pennsylvania has a 2% transfer tax on all home sales. One percent of this is paid by the buyer, and 1% is paid by the seller.

You are entitled to a full good faith estimate of the closing costs at least a few days prior to closing on the house and closing on the mortgage. If you do not get one, ask for it.

How to Calculate the Monthly Payment

Start with the approximate sales prices of recently sold houses in the neighborhood. Next, figure out what amount of money you have for the down payment. This will most likely need to be a 5% down payment. If you do not have 5%, often you can put only 3% down. Next, realize your closing costs will be 2% to 3% of the purchase price (depending on any real estate transfer tax in your state). Then calculate the mortgage you will need by taking the home price, deducting the 5% down payment, and adding the closing costs. Finally, use a mortgage calculator online.
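As a stand-in for an online calculator, here is a minimal Python sketch of the procedure just described; the house price, interest rate, and cost percentages are hypothetical round numbers.

```python
# From house price to monthly payment, following the steps above.
def monthly_payment(principal, annual_rate, years):
    r, n = annual_rate / 12, years * 12
    return principal * r / (1 - (1 + r) ** -n)

price = 227_000                      # approximate neighborhood sales price
down = 0.05 * price                  # 5% down payment
closing = 0.03 * price               # closing costs of 2%-3%; assume 3% here
mortgage = price - down + closing    # closing costs rolled into the loan

print(f"Down payment:  ${down:,.0f}")
print(f"Closing costs: ${closing:,.0f}")
print(f"Mortgage:      ${mortgage:,.0f}")
print(f"Payment at an assumed 4%, 30 years: "
      f"${monthly_payment(mortgage, 0.04, 30):,.0f}/month")
```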
Common Mistakes in Taking Out a Mortgage

The biggest mistake people make in buying a home is to buy a more expensive house than they can afford. This, of course, means that they will take out a bigger mortgage than they can afford. The mortgage payment on the house is the gauge of how expensive a house you can qualify for. A qualified loan is one where the total debt payments-to-total income ratio is no more than 43%. However, just because you qualify for a certain loan size does not mean you should buy the most expensive house you can. There are maintenance expenses on the house and other expenses you need to consider. Seriously review your household budget and include the mortgage payment and expenses. Then decide what monthly mortgage payment you are comfortable with. Do not forget to consider the tax savings on the mortgage interest in your budget.

When to Refinance

Historically, common wisdom said that you should refinance if you can reduce your mortgage interest rate by 2%. However, many people refinance if they can lower their interest rate by 1%. You should definitely refinance an adjustable rate mortgage to a fixed rate thirty-year or fifteen-year mortgage to protect yourself against interest rate increases. Essentially, the decision to refinance should be based on a cost/benefit analysis. What will it cost you to refinance, and how much will you save per month? Calculate how many months it will take you to get back the fees you paid to refinance from the savings. Bankrate.com has a refinancing calculator to show you how much you can save and how long it will take you to get your fees back. The fees to refinance are similar to the fees to take out the original mortgage:

• Origination fee
• Appraisal fee
• Application fee
• Attorney fee (deed preparation)
• Inspection fee (termites or radon)
• Title insurance
• Other fees (PMI insurance)

These could total up to 2% of the new financed amount. Always ask the bank early on what the fees for refinancing add up to. This will enable you to do an informed cost/benefit analysis.

Table 11.2. Refinancing Example

1. Your current monthly mortgage payment: $1,199
2. Subtract your new monthly payment: –$1,073
3. This equals your monthly savings: $126
4. Subtract your tax rate from 1 (e.g., 1 – 0.28 = 0.72): 0.72
5. Multiply your monthly savings (#3) by your after-tax rate (#4): $126 × 0.72
6. This equals your after-tax savings: $91
7. Total of your new loan's fees and closing costs: $2,500
8. Divide the total costs by your monthly after-tax savings (from #6): $2,500 / $91
9. This is the number of months it will take you to recover your financing costs: 27 months

You can work through the same steps with your own numbers.
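Here is the same break-even arithmetic as a short Python sketch; the payment, tax, and fee figures are the example's, and your own will differ.

```python
# Refinancing break-even, following the steps of Table 11.2.
old_payment = 1199.0
new_payment = 1073.0
tax_rate = 0.28    # marginal income tax rate
fees = 2500.0      # fees and closing costs on the new loan

monthly_savings = old_payment - new_payment             # $126
# Savings are reduced after tax because a smaller interest bill
# also means a smaller mortgage-interest deduction:
after_tax_savings = monthly_savings * (1 - tax_rate)    # about $91
months_to_break_even = fees / after_tax_savings

print(f"After-tax savings: ${after_tax_savings:,.2f}/month")
print(f"Break-even: {months_to_break_even:.1f} months")  # roughly 27-28 months
```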
Life's Problems Will Happen to You

Sorry to tell you, but life is hard. You will have difficulties, and there will be accidents and events that affect your property or your health. To put it another way, life inherently contains risk. You and your family are at risk that you may be in an automobile accident. You and your family are at risk that one or several of you may become ill and require a hospital stay. You and your family are at risk that a tree may fall on your house. Of course, not all these risks have the same probability of occurring, and not all of these risks will incur the same amount of financial loss.

The Concept of Insuring Against Risk

Insurance companies take on your risk for a fee, known as a premium. By buying insurance, you are protecting your wealth and income against unexpected events that can take away your wealth or income. There are many types of insurance you can buy to protect yourself, your family, and your property:

• Automobile insurance
• Homeowner's insurance
• Health insurance
• Disability insurance
• Life insurance

There are also specialty insurance companies for special risks, such as insuring professional football or basketball players against a career-ending injury. Then there are also a few insurance programs administered by federal and state governments that people often forget:

• Social Security insurance
• Medicare insurance
• Unemployment insurance (federal and state)

Insurance companies have experience with the costs of fire, accidents, and casualties, so they price their premiums on anticipated claims against the insurance policies they sell. As a consumer, you should do a cost/benefit analysis. Always get at least three quotes for your insurance, and if you have been with a company for a few years, you should get quotes to make sure you are being charged a competitive premium. However, there is one rule that you should always follow: if a potential casualty loss is so large that it will really hurt you financially, you must insure against it. For example, if your house burns down, even though this is an unlikely event, you should insure against the loss because of its financial consequences.

The Business Model of Insurance Companies

Commercial insurance companies are in the business of making a profit. They sell policies to customers and charge an annual premium for the protection. An insurance policy is a legally binding contract, and it obligates the company to pay for any of the claims that are covered in the policy. For example, when you buy automobile insurance, you agree to pay the annual premium, and the insurance company agrees to pay for damages from an accident. Insurance companies have teams of actuaries who estimate the probable amount of claims in each category of things they insure, based on historical averages. The insurance company charges enough in premiums to pay for the claims, while still having a profit left over to pay dividends to its shareholders. In most years, if fate behaves, claims replicate their historical averages. However, when unexpected events happen, such as Hurricane Katrina or the West Coast wildfires, insurance companies lose money. Since claims occur over the course of a year, insurance companies invest the premiums they collect in short-term investments (e.g., in the case of auto or homeowners' policies) or in long-term investments (e.g., in the case of whole life insurance). This gives them another source of profit for their shareholders. There are insurance companies that are mutual insurance companies.
These are analogous to credit unions in the banking sector. In mutual companies, the policyholders are the owners of the company. That means that while a mutual company must cover all its claims, it does not have to generate a profit for shareholders. If it has extra cash left over at the end of the year, it can send refund checks to its policyholders or reduce premiums the next year. Mutual insurance companies are very competitive in their rates. When you are new to the insurance market, it is a good idea to use an insurance broker who can advise you on the ins and outs of the market. The broker is paid a commission by the insurance company (normally in the range of 8% of the premium) to place your insurance with them. Once you become confident, try companies that sell by phone or over the internet. This usually saves money because they do not pay commissions to brokers. Make sure you get at least three quotes. Here are some legitimate companies that sell by phone or internet:

• GEICO
• Liberty Mutual
• Progressive
• USAA (for veterans or families of veterans)

I personally switched a number of years ago from a commercial insurance company to Liberty Mutual for both auto and homeowners' insurance, and my premium went down by about 25%.

Health Insurance

Health insurance can be expensive compared to auto or homeowners' insurance. In 2020, the average national cost for health insurance was $5,500 per year for an individual and $13,800 per year for a family. However, costs vary among plans. There is a wide selection of plans and programs for health insurance, such as opting for a Health Maintenance Organization or agreeing to use only the doctors approved by the insurance company. In addition, most companies offer health insurance to their employees, and by joining a group insurance plan of mostly healthy people, premiums are reduced. Further, companies that provide health insurance for their employees usually pay for a good portion of the premium. As a historical note, the United States is the only developed country (other than China) that does not have a national insurance program. The expansion of private health insurance in the U.S. goes back to World War II. Six million men and women were sent to fight in Europe and the Pacific. Thus, the military-industrial complex on the home front had a shortage of workers just as it was producing war material at full speed. One consequence, of course, was the recruitment of women to staff the manufacturing lines. Another consequence was the competition for employees. Companies were not allowed to give raises during the war in order to avoid inflation, so to compete for employees, companies started giving benefits, like health insurance. From after the war until today, pharmaceutical companies, hospitals, doctors, and insurance companies have lobbied against the U.S. having a national health insurance plan. They are afraid (and rightly so) that a government insurance program would use its buying power to control their fees. The most common types of health insurance programs are fee for service and managed care plans. Both cover doctors' visits, hospital outpatient services, medical procedures, and hospitalization. Fee for service (sometimes called Personal Choice) allows you to choose your own doctors and specialists. Your doctor submits a bill to the insurance company and is paid all or part of their fee.
If the insurance company thinks the fee from your doctor is higher than the prevailing rate in that area, the insurance company will only pay the prevailing rate, and you must pay the rest; this is a drawback of the fee for service policy. Managed care plans require the policyholder to go to a specific group of doctors and hospitals identified by the insurance company. These are called in-network doctors and hospitals. The value of this program is that the doctors recommended will not charge you above the fees agreed with your insurer, and you will not be billed for any difference. Health maintenance organizations (HMOs) are insurers that usually establish a set annual fee per patient with doctors. You must go first to a primary care doctor, whom you choose from their list. Any referral for further treatment by a specialist must first be approved by your primary care physician. The HMO is paying the primary care physician to keep you well and is controlling any unnecessary trips to specialists. The premiums for fee for service plans are higher than those for managed care policies, which in turn are higher than those for HMO policies. Your choice of plan affects your premium payment. Being part of a health insurance group (such as your employer-sponsored health insurance program) significantly reduces the total premiums. Of course, your employer decides how much of the health insurance premiums they are willing to subsidize. If you are self-employed or between jobs without health insurance, there are often ad hoc groups in your area that sponsor group programs. For example, you can join a local Chamber of Commerce that sponsors a plan that is cheaper than buying as an individual. Finally, under the Affordable Care Act (ACA), federal law states that an insurance company cannot refuse you coverage due to a pre-existing condition. If you lose your healthcare because you lose your job, you should at least buy a low-premium policy in case of an unexpected hospitalization. Hospital stays are expensive, and you should at least protect yourself against those expenses. If you are out of work, you should be able to enroll in a very reasonably priced ACA policy. There are also government subsidies for those with low income. You would likely qualify for these if you are single and your only source of income is unemployment compensation.

The Main Reason People Go Bankrupt

A 2019 Harvard study found that 66.5% of all bankruptcies were tied to medical issues, due to high costs of care and time out of work. This includes an estimated 530,000 families. The study also shows that 78% of bankruptcy filers had some form of health insurance, but not enough to cover their medical costs (CNBC). Even if you are young and healthy, you should at least have health insurance for what are called catastrophic illnesses. This includes things like accidents that put you in the hospital, cancer, and COVID-19. Because you are young and healthy, the probability of these events is low, so the premium is low. Recent support for Medicare for all stems not just from the plight of uninsured people but also from middle class people who have catastrophic illnesses that are not covered by insurance. With this model, the government can achieve substantial savings by negotiating fees and costs.

Affordable Care Act

The Affordable Care Act (ACA) was passed in 2010, in the second year of President Obama's first term. The Republicans immediately took to calling it Obamacare and have been trying to repeal it as unconstitutional ever since it was passed.
A Republican court case even made it to the U.S. Supreme Court, which upheld the constitutionality of the ACA by a five to four vote. Republicans continue to challenge the ACA, so expect it to continue to be in the news. The ACA has three main goals:

1. Reduce the price of health insurance in order to make it available to millions of people. The government subsidizes the cost of health insurance for households whose incomes are 100% to 400% of the federal poverty level.
2. Expand Medicaid. Medicaid is free health insurance for households whose incomes are substantially below the federal poverty level. States are responsible for administering Medicaid, and many states with Republican governors or legislatures refused to accept this expanded federal aid.
3. Find and support medical care delivery systems that lower the cost of providing healthcare.

The ACA has been successful by any measure. In 2020, 23 million people are covered by the ACA. In addition, 31 states accepted the expanded Medicaid program. In 2020, a total of 73 million people are now insured under Medicaid or under the Children's Health Insurance Program (CHIP). In addition, children are now able to remain on their parents' healthcare policy until they turn 26, and no insurance company can refuse healthcare coverage because of a pre-existing condition. Another benefit of the ACA is that it modernized the health insurance search, establishing an online marketplace for health insurance. When you access the marketplace, you will be asked some relevant questions and receive an estimate of the subsidy you may qualify for. After this, you will have access to the private insurance companies offered through the marketplace. All participating companies must offer at least these ten essential benefits in their plans:

1. Ambulatory patient services (outpatient care)
2. Emergency services
3. Hospitalization (like surgery and overnight stays)
4. Pregnancy, maternity, and newborn care (both before and after birth)
5. Mental health and substance use disorder services, including behavioral health treatment (this includes counseling and psychotherapy)
6. Prescription drugs
7. Rehabilitative and habilitative services and devices (services and devices to help people with injuries, disabilities, or chronic conditions gain or recover mental and physical skills)
8. Laboratory services
9. Preventive and wellness services and chronic disease management
10. Pediatric services, including oral and vision care (but adult dental and vision coverage are not essential health benefits)

In addition to these, birth control coverage and breastfeeding coverage must also be offered. Finally, health insurance plans on the marketplace can offer additional coverage for services like dental and vision care, but these plans will cost you more. As I said before, if you lose your healthcare coverage for any reason, you should at least buy a minimum policy that will cover an unexpected illness. Look at the ACA Marketplace for these options.

Disability Insurance

Disability insurance provides income if you cannot work due to an illness or accident. Short-term disability typically covers you for 13 to 26 weeks and will pay 40% to 70% of your salary during that period. It costs about 1% to 3% of your salary and may be paid for by your employer as a benefit. The premium depends on your age and occupation. Long-term disability insurance also costs about 1% to 3% of your salary, and these policies begin payment after the period of short-term disability payments ends.
Depending on the policy you select, the long-term disability payments can cover 20 years, 30 years, or until retirement. The Social Security system also pays disability payments under the SSDI program, and, if you qualify, you can collect this in addition to your private insurance. These SSDI payments range from $800 to $1,800 per month, depending on your earnings history, providing only a minimal safety net; they will not replace your income. Buy long-term disability insurance; it is not that expensive if you are part of an employer-sponsored plan, and it protects you and your family.

Auto Insurance

Nearly all fifty states require auto insurance if you own a car. This is mostly to protect other drivers if you are at fault. However, you should also get a provision against uninsured drivers. This will protect you if someone without insurance hits you, causing damage to your car or injuring you. This is a common provision in auto policies. The national average cost of car insurance in the U.S. is $1,400 per year. This will vary if you have an expensive car, have just gotten your license, or have a bad driving record. Below are some potential types of coverage you can have.

Coverage A: Liability Coverage

If an accident is your fault, by law you must pay for the damages. This includes both property damage and bodily injury to another. Some states have no-fault property damage laws, eliminating the need for lawyers and thereby reducing costs. In this scenario, your insurance company fixes your car, and my insurance company fixes my car. However, no-fault does not apply to bodily injury.

Coverage B: Medical Payments Coverage

If you are at fault, your insurance company pays any medical bills for the other person. This is the law.

Coverage C: Uninsured or Underinsured Motorist Coverage

If the other driver is at fault and has no insurance, you are covered by your policy. This is a good provision to have.

Coverage D: Collision and Comprehensive Coverage

Collision covers damages to your car if you are at fault in an accident. Comprehensive covers all other damages or loss to your car, such as theft, vandalism, floods, hail damage, and other unhappy events. The deductible is the amount you pay out of the total of each claim. You can choose the deductible, but it normally ranges from $250 to $1,000. The higher the deductible, the lower the premium, because you will not bother your insurance company every time your car gets a dent or scratch. Many insurance companies also offer a discount for safe drivers. Note that each state has a different minimum insurance coverage required.

Renter's Insurance

If you are a renter, almost every lease has a provision that says that if an event happens that damages your property, the landlord will not pay for it. If someone breaks into your apartment and steals your laptop, the landlord has nothing to do with that and will not reimburse you. This is why you should buy renter's insurance. Renter's insurance is inexpensive, with an average price of about $15 per month in 2020. It covers fire, damage, and theft. You should also ask for liability to be included in the policy. This will protect you if a visitor gets injured in your apartment or if your dog bites the neighbor. Also, it is helpful to keep photos of your important property. Take pictures of your new laptop, your television, and expensive jewelry (include receipts if possible). This will help you establish your claim to the insurance company if you have a loss.
Homeowner's Insurance

For many people, their most valuable asset is their home, so it is important to protect it with homeowner's insurance. Homeowner's insurance should protect your home against fire and wind damage, theft, falling trees, and personal liability. Standard forms of homeowners insurance policies are numbered from HO-1 to HO-8. These are structured with different degrees of coverage, and the premiums are different for each package.

• HO-1: The most basic and limited type of policy for single-family homes; HO-1s are all but nonexistent nowadays.
• HO-2: A more commonly used policy and a slight upgrade from the HO-1.
• HO-3: The most common type of homeowners insurance policy, with broader coverage than the HO-2.
• HO-4: A policy type that is specifically for renters.
• HO-5: The most comprehensive form of homeowners insurance and the second most common policy type for single-family dwellings.
• HO-6: A type of coverage designed for condo owners.
• HO-7: The type of policy you get if you own a mobile or manufactured home.
• HO-8: A special type of homeowners insurance for homes that do not meet insurer standards for other policy forms.

The insurance policy will generally be a cash value policy; if the house is destroyed, you will be paid the current value of the house. Since ground does not burn, what the company is insuring is the structures on the property. You should try to get a replacement value policy, which will increase the insured value of the house according to inflation and pay the cost to replace it. Your policy will also specify the replacement of other structures on the property, such as a detached garage or swimming pool. In addition, the policy will cover personal property in the house, such as furniture, computers, televisions, jewelry, and clothing, if there is fire, theft, or vandalism. There is usually a set limit on the cash amount covered under this. If you have especially valuable items in the house, such as jewelry, paintings, or antique rugs, you should catalog them specifically in the policy or buy a separate valuable items policy. The insurance company may ask you to present an appraisal to establish their value. Liability is also an important part of the homeowner's policy. If someone is hurt on your property, you are covered up to a certain amount. However, you are not covered if you are breaking the law. For example, if you allow your teenager to have a party at your house with underage drinking and someone gets hurt, you will be in deep trouble, and your insurance company will not cover your liability. The national average annual premium for homeowners' insurance is $2,300. However, this is for the average home price of $300,000 and with a $1,000 deductible. Your premium will vary according to several factors, including:

• The value of your house
• The deductible you choose
• Whether your area is subject to earthquakes or wildfires
• Whether there is a fire hydrant nearby

Ask your real estate agent for a recommendation for an insurance broker. After you are comfortable with homeowner's policies, you can get quotes from the direct-to-customer insurance companies. The Insurance Institute recommends these ways to lower the cost of your insurance premiums:

1. Shop around.
2. Before you buy a car, compare insurance costs.
3. Ask for higher deductibles.
4. Reduce coverage on older cars.
5. Buy your homeowners and auto coverage from the same insurer.
6. Maintain a good credit record.
7. Take advantage of low mileage discounts.
8. Ask about group insurance.
9. Ask about other discounts, such as for safe drivers or for installing smoke alarms in your house.
Flood Insurance

If your home is in a location prone to flooding, you will need to buy flood insurance. Your homeowner's policy does not cover flood damage. The Federal Emergency Management Agency (FEMA) offers flood insurance, with an average annual cost of $700. This is a good investment, since one inch of water in your house can cause about $25,000 in damage. FEMA has detailed maps online of frequent and moderate flood zones to help you determine if your property is at risk. Your mortgage lender may insist you buy flood insurance if you are in a flood zone, but you should buy it whether they insist or not.

Life Insurance

Whole life insurance is like having a savings account that you are forced to put money in each year. Its main function is to protect your family with a lump sum payment if you die prematurely. However, as long as you pay the premiums, a whole life policy also accumulates savings (building up a cash value) that you can withdraw at the end of the term. If you purchase a 20- or 30-year whole life policy with a $400,000 face value, at the end of 30 years, even if you have not died, you can withdraw the $400,000. However, a whole life policy is not a good investment from an economic point of view. The insurance company takes your premiums and invests them at a 7% or 8% annual return. They offer you a guaranteed return on the policy of anywhere between 1.5% and 3% per year and keep the difference. This is simply not a good deal for you. Instead, you should purchase a term life policy. This does not build up cash value; rather, it is similar to auto and homeowner's insurance in that you are paying for coverage only against the event of your premature death. Because you are not building up cash value, term life insurance is inexpensive. The premium, of course, depends on your age and health, but the average national cost of a term life policy for a healthy 30-year-old male is $26 per month for a $500,000 policy. The policy premiums will likely increase every year, so be sure to shop around. A 2020 report by McKinsey and Company showed that life insurance companies have suffered from a decade of declining profitability and growth. One major reason is that customers (and financial advisors) question the value of whole life insurance. McKinsey recommends several changes to revive insurance companies, including a more personal connection to customers and the invention of new products, such as whole life policies that can be converted to long-term nursing home care policies. We will have to wait and see.

Umbrella Personal Liability Policy

As your wealth increases, you have an economic incentive to protect it. The least expensive way to do this is by purchasing an umbrella personal liability policy. You likely will be carrying personal liability coverage of about $250,000 total on your auto policy and on your homeowner's policy, but it is expensive to increase the limits on these policies. Instead, you can buy a $1,000,000 umbrella policy for about $200 per year. It will pay any excess claims above the $250,000 limit paid by your auto or homeowner's policy.
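Returning to the whole life versus term comparison above, here is a small Python sketch of the classic "buy term and invest the difference" arithmetic; the premiums and return rates are hypothetical round numbers, not quotes.

```python
# Hypothetical comparison: whole life's credited return vs. buying term
# insurance and investing the premium difference yourself.
whole_life_premium = 350.0   # assumed monthly whole life premium
term_premium = 26.0          # assumed monthly term premium (healthy 30-year-old)
invested = whole_life_premium - term_premium   # monthly amount you invest instead

def future_value(monthly, annual_rate, years):
    """Future value of a monthly contribution compounded at a given annual rate."""
    r = annual_rate / 12
    n = years * 12
    return monthly * ((1 + r) ** n - 1) / r

# Whole life might credit about 2.5%; a diversified portfolio has
# historically averaged far more. Over 30 years the gap is large:
print(f"At 2.5%: ${future_value(invested, 0.025, 30):,.0f}")
print(f"At 7.0%: ${future_value(invested, 0.070, 30):,.0f}")
```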
The Nature of Investing

Generally speaking, an investment is something you put time or money into and get a return from. For example, we talk about investing in a personal relationship. We also talk about investing in the stock market. For this chapter, we will use the financial definition of investing: a financial asset you contribute money to and from which you receive an interest payment, a dividend, or an increase in the market value over time (or all three). You use money from your savings to invest. With your disposable income, you can either spend it or save it:

$disposable\ income = spending + savings$

Unless you hide your cash in the ground, you will take your savings and invest it somewhere you will get a return:

$savings = investment$

Keeping these equations in mind, it is hopefully apparent that the more disposable income you save, the more you are able to invest. But where to invest?

Risk and Reward

On Wall Street, the standard saying is that return follows risk. By choosing a lower risk investment (such as U.S. Treasury Bonds), you will receive a lower return. Treasury Bonds are considered the safest investment possible because the U.S. has always paid those bonds back (except for debts from the Revolutionary and Civil Wars, but that's another story). Almost all long-term interest rates are influenced by the rate on U.S. Treasury Bonds, which is considered a "risk-free" return. In terms of risk, the three traditional investment instruments are stocks, bonds, and cash. We can analyze each in terms of risks and rewards. On Wall Street, risk is measured by beta (the Greek letter β), which measures the volatility, or deviation from the average historical return, of an investment. Stock prices are negatively correlated with recessions. Below, you can see a chart showing the prices and volatility of the general stock market. The grey bars are recessions, so it is easy to see their impact on the stock market. The S&P 500 is a general index of the overall stock market. It was created and is maintained by the financial company Standard & Poor's, which is a major credit rating company. The index consists of the prices of 500 stocks out of the 3,700 public companies listed on American stock exchanges, and the composition of these 500 selected companies reflects the composition of the entire market. Adding bonds tends to lower both risk and potential return. The following chart shows the maximum gains and losses on various portfolios, from a portfolio of all stocks (on the left) through a series of mixed stocks and bonds to a portfolio of all bonds (on the right). As you can see, the maximum gains and losses are greatest with an all-stock portfolio. Finally, some investment advisors suggest you should hold up to 10% cash in your portfolio, either for emergencies or to take advantage of bargains that may arise in the market. This should not be kept in a checking account but in a money market fund. The current annual return on money market funds is 1.7% (Vanguard Prime Money Market Fund).

How Return on Investment is Calculated

Inflation can have a large impact on your investment, so it is important to understand how that works. Let's say you have cash and save it by hiding it in your mattress. Your money gets less valuable every day by exactly the rate of inflation, because money is only worth the goods and services it can buy. If you hide your savings in a mattress and the rate of inflation is 2%, your money is depreciating in value by 2% per year. The same thing happens to money you keep in a checking account that pays no interest: your money is depreciating at a rate of 2% per year. Even further, the money you receive as a dividend or interest on your investment is also depreciating at the annual rate of inflation.
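Here is a minimal Python sketch of that inflation drag, compounding a 2% annual loss of purchasing power on idle cash; the starting amount is hypothetical.

```python
# Purchasing power of idle cash (mattress or 0%-interest checking)
# under 2% annual inflation.
cash = 10_000.0    # hypothetical savings
inflation = 0.02

for year in (1, 5, 10, 20):
    real_value = cash / (1 + inflation) ** year
    print(f"After {year:2d} years: ${real_value:,.0f} of today's purchasing power")
```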
Your money is depreciating at a rate of 2% per year. Even further, the money you receive as a dividend or interest on your investment is also depreciating at the annual rate of inflation. The real quest, then, is to find an investment that gives a return greater than the rate of inflation.

In order to be able to compare the return on all sorts of different investments, like buying stocks or buying a Picasso, we use the same measure to calculate returns. The return on an investment comes from two sources: dividends or interest paid, and price appreciation. For example, you may put your savings in stocks and get a dividend of 2% per year, while the stock price increases by 8% over the course of the year. Your total return for that year would be 2% + 8% = 10%. Alternatively, investors who buy gold or a Picasso do not get any interest or dividends but receive returns from the price appreciation of their asset over time. If you bought gold today at $1,600.00 per ounce and after a year its price was $1,700.00 per ounce, your annual return would be:

ROI = ($1,700 − $1,600) ÷ $1,600 = 0.0625, or a 6.25% annual return

The general calculation of a return on investment (ROI) is the appreciation in the price (value) of the investment (asset) over a year, plus any dividends or interest earned during that year, compared to the original cost of the investment (asset). The calculation is thus:

ROI = (price appreciation + dividends or interest earned) ÷ original cost

This calculation yields a decimal, which is then expressed as a percent annual return. The general formula is a backward-looking calculation:

ROI = ((ending value − original cost) + income received) ÷ original cost

Now, here's an example. Let's say you purchase 100 shares of Apple stock at $5.00 per share. At the end of one year, it is selling on the stock market for $6.00 per share. In addition, you received a dividend of $1.00 per share from Apple during the year. Your return could be expressed like this:

ROI = (($6.00 − $5.00) + $1.00) ÷ $5.00 = $2.00 ÷ $5.00 = 0.40, or a 40% annual return

If your investment is held for multiple years, you would use the total price appreciation of the asset plus all the dividends to calculate your total return, and then divide the total return by the number of years you held the asset to get the average annual return. This annual return allows investors to compare the returns of all sorts of dissimilar investments, such as paintings, antique automobiles, collectibles, and stocks and bonds.

However, there is one more complication to consider: taxes on your investment profits. The profit you make from the price appreciation of an asset is called a capital gain, and it is taxed when you sell the asset and realize the gain. Taxation of the capital gain differs depending on whether you owned the asset for less than one year (a short-term capital gain) or more than one year (a long-term capital gain). A short-term capital gain is simply added to your regular income on your tax return and taxed at your regular income tax rate. On the other hand, if you sell the asset after owning it for more than one year, you will be taxed at the long-term capital gains rate. For a single person, if total taxable income is $39,375 or below, the long-term capital gains rate is 0%; from $39,376 to $434,550, the rate is 15%; above that level, the rate jumps to 20%. The tax rates (or brackets) are somewhat different for married people. Tax laws thus give an incentive to hold investments for over one year.
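If you like to check this kind of arithmetic with a few lines of code, here is a minimal Python sketch of the ROI formula above, using the gold and Apple numbers from this section. (The function name is mine, and I treat the Apple dividend as $1.00 per share, which is an assumption; the text does not say per share explicitly.)

# Minimal sketch of the chapter's ROI formula:
# ROI = (price appreciation + dividends or interest) / original cost,
# averaged per year when the asset is held for multiple years.

def roi(cost, value_now, income=0.0, years=1):
    total_return = (value_now - cost) + income
    return (total_return / cost) / years

# The gold example: bought at $1,600/oz, worth $1,700/oz a year later.
print(f"Gold:  {roi(1600, 1700):.2%}")               # 6.25%

# The Apple example: shares bought at $5.00, worth $6.00 a year later,
# plus a dividend treated here as $1.00 per share (an assumption).
print(f"Apple: {roi(5.00, 6.00, income=1.00):.2%}")  # 40.00%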
Historical Returns on Various Investments

Vanguard Mutual Funds, a not-for-profit, has compiled the historical returns of various portfolios made up of different mixes of stocks and bonds, going all the way back to 1926. Below are the returns of those different allocations.

Income
An income-oriented investor seeks current income with minimal risk to principal, is comfortable with only modest long-term growth of principal, and has a short- to mid-range investment time horizon.

Balanced
A balanced-oriented investor seeks to reduce potential volatility by including income-generating investments in their portfolio, accepts moderate growth of principal, is willing to tolerate short-term price fluctuations, and has a mid- to long-range investment time horizon.

Growth
A growth-oriented investor seeks to maximize the long-term potential for growth of principal, is willing to tolerate potentially large short-term price fluctuations, and has a long-term investment time horizon. Generating current income is not a primary goal.

We can actually trace the historical returns on stocks and bonds going all the way back to 1870. Historical returns do not, of course, guarantee that the same returns will happen in the future, but most stock market investors are wise, and wise investors demand certain minimum returns in order to take on the risk of investing. In their 2019 study of investment returns, The Rate of Return on Everything, 1870–2015, Òscar Jordà, Katharina Knoll, Dmitry Kuvshinov, Moritz Schularick, and Alan M. Taylor calculated returns on stocks, bonds, and housing in 16 developed nations going all the way back to 1870. As Thomas Piketty notes in his book Capital in the Twenty-First Century, housing is important in all developed nations, since it represents approximately one-half of the wealth in a typical economy (2021). Here is a graph of the real rates of return around the world:

The Portfolio Theory of Investing

Asset allocation is spreading your investment across various financial assets to maximize your return while minimizing your risk. However, as I stated before, in order to achieve a higher return, you must take on higher risk. A low-risk portfolio invested entirely in bonds achieved an average annual return of 5.3% from 1926 to 2018, with low volatility. A moderately high-risk portfolio invested entirely in stocks achieved an average annual return of 10.1% over the same period, with much higher volatility.

Professional stock pickers are merely making educated guesses as to which stocks will appreciate the most over the next year, because no one can accurately predict the future. Wall Street is myopic in its focus on short-term returns; in contrast, Warren Buffett, Chair of Berkshire Hathaway, has always focused on long-term profits. Even with all their computer models and data dumps, not a single active stock picker has consistently beaten the overall rise or fall of the market, as measured by stock market indexes such as the Dow Jones Industrial Average, the S&P 500, or the NASDAQ. Therefore, the only way to achieve consistent average returns is to invest in a broadly diversified portfolio of investments. As mentioned before, a portfolio of 100% stocks has achieved a 10.1% average return over the 90+ years from 1926 to 2018.
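To see what the gap between 5.3% and 10.1% means in dollars, here is a minimal Python sketch of those two averages compounded over time. The $10,000 starting amount and 40-year horizon are my own illustrative assumptions, not figures from the text.

# Minimal sketch of what the 1926-2018 average returns above imply when
# compounded. Starting amount and horizon are illustrative assumptions.

start, years = 10_000, 40
for label, rate in (("All bonds,  5.3%", 0.053), ("All stocks, 10.1%", 0.101)):
    print(f"{label}: ${start * (1 + rate) ** years:,.0f}")

# Roughly $79,000 versus $469,000 -- the volatility of stocks is the price
# paid for a dramatically larger long-run result.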
As a small investor, you will not have enough money to diversify by buying stocks yourself; experts say you should have a minimum of 20 diverse stocks in a portfolio. Instead, you should invest in a mutual fund that contains all the S&P 500 stocks. Almost every mutual fund company has a fund that is exactly that. Currently, you do not need to invest in bonds, due to their lower returns. However, when you get within five years of retirement, you will need to rethink that strategy.

Finally, in your portfolio allocation, you do not need to invest in a global stock portfolio. This strategy was popular over fifteen years ago because when the U.S. was in a recession, Europe often was not. This is no longer true: globalization has connected world economies, and the European and U.S. business cycles now move largely in sync. Another reason you do not need a global stock portfolio is that a portfolio of European stocks has consistently underperformed the S&P 500 by about 1% annually. Europe does not have high-flying tech stocks like Apple or Google, which are included in an S&P 500 mutual fund. In any case, most of the largest European companies, like Nestle, BMW, or Mercedes, are also listed on U.S. stock exchanges.

Investing in Money Markets

Money market mutual funds are alternatives to savings accounts or certificates of deposit at banks or credit unions. Their annual returns are higher than bank and credit union savings accounts, but your funds do not have a government guarantee like FDIC insurance (or NCUA insurance at credit unions). Money market mutual funds invest your money in short-term debt, and the returns fluctuate with the market. Vanguard reports that the ten-year average annual return on its money market fund was 0.42%. Unless you want to park your money rather than really invest it, you do not need to put your investment dollars in a money market fund.

Investing in Bonds

You can find quotes on bond yields and prices in The Wall Street Journal or on FINRA's Market Data Center. A bond is basically an I.O.U. or promissory note. A government or a company issues bonds in order to borrow money directly from investors; this is cheaper than borrowing from a bank, because the bank adds overhead and profit to its loans. A bond is a promise to pay interest to the investor every year and then to pay the investor back at the end of a specified time period. A bond's time period is also known as its length, term, or maturity.

For example, the U.S. Government issues Treasury Bonds in order to finance the ongoing annual deficit. Currently, a newly issued 10-year Treasury Bond would likely have the following characteristics:
• Face Value Amount or Par: $1,000
• Coupon or Yield: 0.7%
• Maturity or Term: 10 years

This means that the owner of the bond will receive an interest payment of $7.00 every year until maturity and will receive the $1,000 back at the end of the ten years. The interest payment is calculated as $1,000 × 0.007 = $7.00. The current 0.7% yield on the 10-year Treasury Bond is extremely low and was pushed down by the Federal Reserve in the last two recessions: the Fed purchased trillions of dollars' worth of Treasury Bonds and, through supply and demand, brought down long-term interest rates.

Even though an investor may have purchased a Treasury Bond when it was first issued, they do not have to hold the bond for the next ten years; they can sell it in the secondary market. On average last year, $600 billion worth of Treasury Bonds were bought and sold every day in secondary bond markets. The U.S. government is constantly issuing new Treasury Bonds to finance the fiscal deficit and to refinance existing Treasury Bonds as they mature and must be repaid. The total amount of outstanding U.S. Treasury Bonds is the National Debt, currently about $18 trillion.
According to the Brookings Institution, as of April 2020, the $18 trillion of outstanding U.S. Treasury Bonds was held as follows:
• $3.5 trillion by U.S. households, companies, and governments
• $3 trillion by asset managers
• $2.5 trillion by the Federal Reserve
• $2 trillion by banks and insurance companies
• Nearly $7 trillion (40%) overseas, mostly by foreign central banks

When bonds are sold in the secondary market, the price at which the investor buys them may not be the Face Value, or Par (typically $1,000). That is because the price adjusts to reflect current interest rates in the marketplace. For example, in the table below, you see that although the coupon remains constant at the same annual payment as when the bond was first issued, the marketplace bids the price up or down to achieve the desired yield:

Table 13.1. Bond Prices and Yields (Fixed Dollar Amount Example)
Bond Price   Coupon       Yield
$1,000       $100/year    $100 / $1,000 = 10.0%
$900         $100/year    $100 / $900 = 11.1%
$1,100       $100/year    $100 / $1,100 = 9.1%

You may have read in The Wall Street Journal that bond prices and bond yields move in opposite directions. This is because the coupon is fixed at the issuance date of the bond, so when the price goes up, the yield goes down, and vice versa. The above example used a fixed dollar amount for the interest paid annually on the bond. However, the coupon is really an interest rate that will be paid annually, and the yield fluctuates the same way as in the table above:

Table 13.2. Bond Prices and Yields (Coupon Example)
Bond Price   Coupon            Yield
$1,000       10% ($100/year)   $100 / $1,000 = 10.0%
$900         10% ($100/year)   $100 / $900 = 11.1%
$1,100       10% ($100/year)   $100 / $1,100 = 9.1%
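Here is the same arithmetic as a minimal Python sketch, reproducing the rows of Tables 13.1 and 13.2 (the variable names are mine):

# Minimal sketch of the bond-yield arithmetic in Tables 13.1 and 13.2.
# The coupon payment is fixed when the bond is issued; the yield an
# investor actually earns depends on the price paid in the secondary market.

FACE_VALUE = 1000
coupon_rate = 0.10                          # 10% coupon, fixed at issuance
coupon_payment = FACE_VALUE * coupon_rate   # $100 per year, never changes

for price in (1000, 900, 1100):
    current_yield = coupon_payment / price
    print(f"Price ${price}: yield = {current_yield:.1%}")

# Output: 10.0%, 11.1%, 9.1% -- price up, yield down, and vice versa.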
The U.S. Treasury issues Treasury Bonds of many different maturities for its different borrowing needs (e.g., tax anticipation, long-term deficits). The daily yields of these Treasuries are depicted in a yield curve, which is published every day in The Wall Street Journal. The graph above shows that the historical yields of 10-year U.S. Treasury Bonds have been significantly higher in the past. Because inflation is a component of nominal interest rates, we can use the 10-year Treasury to see how the market anticipates the rate of inflation in the future.

In 1997, the Treasury Department began to issue Treasury Inflation-Protected Securities (TIPS) in response to investor demand. In addition to the coupon yield, the Treasury protects TIPS owners by increasing the principal of the bond at the end of each year, based on inflation. The Consumer Price Index is used as the gauge of inflation, thus guaranteeing that the purchasing power of the bondholder's original investment will not decrease. Nominal interest rates on regular 10-year Treasuries have both a time-value-of-money component (the real interest rate) and an inflation component; 10-year TIPS yields contain only the real interest rate. Therefore, using the difference between the yield on the regular 10-year Treasury Bond and the 10-year TIPS, we can gauge expected inflation. Below, I have added a graph showing the two different yields. For current quotes, visit The U.S. Department of the Treasury.

There are other types of bonds besides U.S. Treasuries. Bonds are classified according to the type of issuer:
• Treasury Bonds
• Corporate Bonds
• Municipal Bonds
• Federal Agency Bonds
• State Agency Bonds

Corporate Bonds

If a company is solid and financially secure, the bonds it issues will have a lot of demand. Citibank, Amazon, General Motors, and all other large public companies issue bonds, because the yield they pay is much cheaper than borrowing from a bank. The yield on top-rated (AAA) 10-year corporate bonds has been, on average, 1.3% above that of 10-year Treasury Bonds. You can see the difference between 10-year Treasury Bonds and 10-year AAA corporate bonds in the graph below. The higher yield is due to the fact that corporate debt is not as safe as U.S. Treasury Bonds. Companies that are less financially strong also issue bonds but must offer higher interest rates to entice investors. Even some very risky ventures can offer bonds but may have to offer yields of 10% or more; such high-yield bonds are commonly called junk bonds. For example, Donald Trump offered junk bonds at a 14% interest rate to refinance his casinos in Atlantic City. The casinos went bankrupt, and the bondholders lost 50% of their money and ended up taking over ownership of the casinos.

Municipal Bonds

Cities and townships can issue bonds to borrow for projects they want to undertake, such as a new sewage treatment plant or a new school. However, in every state except Vermont, cities, townships, and the state itself cannot borrow money to finance operating deficits the way the federal government does; they must balance their budgets every year. Investors in municipal bonds get a break from the IRS: interest on municipal bonds is free of federal tax (though often not of state income tax). Bond issuers can therefore pay a lower interest rate. For example, if the municipality would otherwise have to pay a taxable yield of 8%, and the average federal income tax rate is 25%, the equivalent tax-free yield would be 0.08 × 0.75, or 6%. This is an approximation, of course, because the final yield is determined in the municipal bond market and depends on current interest rates and the creditworthiness of the issuer.
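The tax-free comparison is easy to replay in code. This minimal Python sketch uses the 8% yield and 25% tax rate from the example above; the function names are mine:

# Minimal sketch of the municipal-bond comparison above: an 8% taxable
# yield versus a tax-free municipal yield for an investor in a 25% bracket.

def after_tax_yield(taxable_yield, tax_rate):
    return taxable_yield * (1 - tax_rate)

def taxable_equivalent_yield(muni_yield, tax_rate):
    return muni_yield / (1 - tax_rate)

print(f"{after_tax_yield(0.08, 0.25):.1%}")           # 6.0% -- the text's result
print(f"{taxable_equivalent_yield(0.06, 0.25):.1%}")  # 8.0% -- same comparison, reversed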
Federal Agency Bonds

Many federal agencies also issue bonds, whether to build highways (the Federal Highway Administration) or to provide mortgages to residential home buyers (Freddie Mac and Fannie Mae). Most of these Federal Agency Bonds have a U.S. Government guarantee behind them; the yields are low and comparable to U.S. Treasuries. Because of the federal guarantee, investors have a big appetite for these bonds. During the past two recessions, the Fed bought trillions of dollars of Fannie Mae's and Freddie Mac's bonds, and now the rates on home mortgages (around 3% for a thirty-year mortgage) are the lowest they have ever been.

State Agency Bonds

States have to build highways, regional sewage treatment plants, and other projects. In addition, states often guarantee the bonds of their public universities, so the colleges can borrow at a much lower rate. Interest on some state bonds is exempt from state and federal taxes; when buying state bonds, ask whether a particular bond issue is exempt from federal and/or state income taxes.

Bond Ratings

Standard & Poor's and Moody's are financial services companies that provide risk ratings on bonds. The risk is whether the bond issuer will default on the interest payments, on repaying the principal, or both. The ratings range from AAA down to D for Standard & Poor's and from Aaa down to C for Moody's. New bond issues almost always ask one of these agencies to provide a rating. Bonds rated BBB- or better by Standard & Poor's, or Baa3 or better by Moody's, are considered investment-grade. Bonds with lower ratings are considered speculative and are often referred to as high-yield or junk bonds.

Investing in Stocks

Quick tip: Yahoo Finance is a good place to get price quotes on stocks and their historical price charts.

Most students who have taken my financial literacy courses have wanted to learn as quickly as possible how to become a millionaire (or preferably a billionaire) by investing in stocks and bonds. If this is your goal, lucky for you, as I can show you how to do it. However, you need to know how to evaluate stocks and bonds, and it takes time. To whet your appetite, in the next chapter I show you how to become a millionaire by investing wisely in the stock market early in your career and then being patient. First, however, you need to understand how to evaluate stock prices.

The most used tool for assessing stock prices (that is, whether the market is overvaluing or undervaluing a stock) is the Price/Earnings Ratio (P/E Ratio). In its simplest formulation:

P/E Ratio = today's stock price ÷ last year's earnings per share

Last year's earnings per share means the total net income of the company divided by the number of shares outstanding. For example, suppose the closing price of the stock you are looking at is $20.00 per share, and the earnings (or net profit) per share is $2.00. The P/E Ratio would be:

P/E Ratio = $20.00 / $2.00 = 10

That means you are paying $10.00 for every $1.00 in earnings, or to put it another way, you are receiving $1.00 in return for every $10.00 you invest. Your ROI could then be calculated as:

ROI = $2.00 / $20.00 = 0.10, or 10% per year

To see whether a P/E Ratio of 10 is good, we need to look at the historical averages of the stock market's P/E Ratios. I have summarized a few similar historical P/E Ratios below:

Table 13.3. P/E Ratios of S&P 500 Stocks
P/E Ratio               Source           Dates           P/E Average
One Year Trailing P/E   Robert Shiller   1872 to 2015    15.5
CAPE 10 Year P/E        Robert Shiller   1818 to 2013    16.5
One Year P/E Estimate   FactSet          2000 to 2019    15.2

To calculate what returns these P/E averages would give, plug the known numbers into the formula and solve; then you can calculate the annual return. To simplify, we can assume that earnings are $1 per share. If we paid $15.50 for one share of this company's stock to own $1 per share of net earnings, our ROI would be:

ROI = $1.00 / $15.50 ≈ 0.065, or about 6.5% per year

To reconcile this with the 10% historical return we saw above, we need to use this formula:

total annual return = price appreciation + dividend yield

The price appreciation of a stock is a direct function of the annual growth in earnings per share, and the average annual dividend paid on the S&P 500 stocks is approximately 2%.

However, P/E Ratios are volatile. Below is a chart of what is known as the trailing P/E Ratio of S&P 500 stocks from 1929 to 2019. The trailing P/E Ratio is a historical P/E Ratio; that is, it is calculated as:

trailing P/E Ratio = today's stock price ÷ the last twelve months' earnings per share

Note the volatility of the historical P/E ratio. It certainly gives us pause to think that we could predict next year's value with this tool, even though, as discussed above, the average trailing P/E Ratio from 1872 to 2015 is 15.5. Nevertheless, this data gives us a base for expected P/E ratios in the future, although there are serious theoretical and practical flaws in projecting this historical P/E Ratio into the future, even on average. Calculating the ROI for the average of the One Year P/E Estimate, we get:

ROI = $1.00 / $15.20 ≈ 0.066, or about 6.6% per year
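To recap the arithmetic of this section in code, here is a minimal Python sketch; the function names are mine, and the figures are the ones used in the examples and Table 13.3 above:

# Minimal sketch of the P/E arithmetic above. The earnings yield -- the ROI
# the text computes -- is simply the inverse of the P/E ratio.

def pe_ratio(price, eps):
    return price / eps

def earnings_yield(price, eps):
    return eps / price

print(pe_ratio(20.00, 2.00))                  # 10.0: pay $10 per $1 of earnings
print(f"{earnings_yield(20.00, 2.00):.1%}")   # 10.0% ROI
print(f"{earnings_yield(15.50, 1.00):.1%}")   # ~6.5% at the historical P/E of 15.5
print(f"{earnings_yield(15.20, 1.00):.1%}")   # ~6.6% at FactSet's average estimate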
Using the trailing P/E Ratio as a principal forecasting tool is flawed, largely because of how investors actually behave: investors do not buy a stock for its past earnings but for its expected earnings and dividends. Simply put, buying stock today does not entitle you to past earnings or dividends, but it does entitle you to a proportionate share of future net earnings and dividends. Given a historical P/E Ratio of 15.5, investors are looking to buy a share of stock at a P/E Ratio of 15.5, but the earnings they use to calculate the ratio are expected earnings for the following year. This P/E ratio is usually called the P/E Estimate (or forward P/E) and is calculated as follows:

P/E Estimate = today's stock price ÷ next year's expected earnings per share

Real-world investors price stocks this way, and that causes much of the trailing P/E ratio's volatility. Let's say investors estimate that next year's earnings will be $1 per share. If they want to buy a share at the average P/E Ratio of 15.5, they will pay $15.50 for each one. Now, suppose they paid $15.50 per share and were wrong about their estimate of next year's earnings. The trailing P/E Ratio a year later would then be quite different from 15.5. If the investor overestimated next year's earnings and earnings per share came in at $0.50, the trailing P/E Ratio would be:

trailing P/E = $15.50 / $0.50 = 31

If, on the other hand, the investor underestimated next year's earnings and earnings per share came in at $2.00 instead of $1.00, the trailing P/E Ratio would be:

trailing P/E = $15.50 / $2.00 = 7.75
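The same estimation problem, replayed as a minimal Python sketch using the figures above:

# You pay $15.50 expecting $1.00 of earnings next year (a forward P/E of
# 15.5); the trailing P/E a year later depends on actual earnings.

price_paid = 15.50

for actual_eps in (0.50, 1.00, 2.00):
    trailing_pe = price_paid / actual_eps
    print(f"Actual EPS ${actual_eps:.2f}: trailing P/E = {trailing_pe:.2f}")

# Output: 31.00, 15.50, 7.75 -- identical purchases, wildly different
# trailing P/E ratios, which is much of why the trailing ratio is so volatile.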
Estimated P/E Ratios can vary significantly across industries. If professional investors were able to accurately predict the one-year future earnings per share of the S&P 500 and were looking for a 6.5% return, the one-year P/E Estimate would be constant at approximately 15.5 times earnings. The high volatility of the one-year P/E Estimate simply attests to the fact that it is impossible to accurately predict future earnings.

The calculation of the ROI for the CAPE 10-year Price/Earnings Ratio is:

ROI = $1.00 / $16.50 ≈ 0.061, or about 6.1% per year

The Cyclically Adjusted 10-year Price/Earnings Ratio (CAPE Ratio) is based on the average inflation-adjusted earnings from the previous 10 years. Nobel laureate Robert J. Shiller created this ratio with economist John Campbell and detailed it in his book Irrational Exuberance (2000). Shiller uses an inflation-adjusted (real earnings) 10-year average to smooth out the cyclical volatility of corporate earnings over the business cycle. Divide today's closing stock price by the 10-year average of real earnings per share and you have the CAPE 10-year P/E Ratio. Below is a graph of the CAPE 10-year P/E Ratio compared to long-term interest rates. Note that even though the average CAPE 10-year P/E Ratio is 16.5 for the period 1818 to 2013, it is still quite a volatile measure of the value of stocks.

The Current One Year P/E Estimate

Note that on all the graphs above, the values of all three P/E Ratios are significantly above their long-term averages. This was also true as of January 2020:

Table 13.4. P/E Ratios
P/E Ratio               Source           Dates           P/E Average   P/E Ratio Jan. 31, 2020
One Year Trailing P/E   Robert Shiller   1872 to 2015    15.5          26.1
CAPE 10 Year P/E        Robert Shiller   1818 to 2013    16.5          30.9
One Year P/E Estimate   FactSet          2000 to 2019    15.2          19.1

What does this mean? Well, first of all, let's see why the P/E Ratio I recommend watching (the P/E Estimate) is so high. According to John Butters of FactSet, one year prior (January 18, 2019), the forward 12-month P/E ratio was 15.5. Over the following 12 months (January 18, 2019 to January 17, 2020), the price of the S&P 500 increased by 24.7%, while the forward 12-month earnings-per-share estimate increased by only 3.8%. Thus, the increase in price, not earnings, has been the main driver of the increase in the P/E ratio. This suggests that the prices of S&P 500 stocks are overvalued, which usually leads to a correction through a drop in share prices. We will have to watch the stock market to see if that proves true.

In a recent New York Times article, Robert Shiller notes that his CAPE 10-year P/E reached 33 in January 2018 and was 31 at the time of publication (2020). He further pointed out that it has been as high or higher at only two other times: 1929 and 1999. In 1929, the high CAPE 10-year P/E immediately preceded the Stock Market Crash, during which the stock market lost 85% of its value. Likewise, in 1999, the high CAPE 10-year P/E preceded another bear market, in which stocks lost 50% of their value. According to Shiller, some pundits blame exceptionally low interest rates for the stock market highs; however, he states that low interest rates do not correlate well with the CAPE 10-year P/E Ratio. The opposite is also true: high interest rates do not correlate well with subsequent market crashes.

Shiller attributes the current bull market to what John Maynard Keynes described as Animal Spirits. According to Shiller, he has seen a proliferation of narratives since 1960 about "going with your gut" as opposed to "using your brain" to make decisions. This attitude includes people like President Trump ("I have a gut and my gut tells me more sometimes than anybody else's brain can tell me.") and inexperienced entrepreneurs in Silicon Valley, and it fuels the mania in the market. This is not the method of investing that Shiller advocates: "We have a stock market today that is less sensible and less orderly than usual, because of the disconnect between dreams and expertise" (2009).

However, no matter what happens to S&P 500 share prices, no one can accurately time the market over the long term. For those investing in the market over the long haul, especially those putting regular amounts into their retirement plan each month, the best strategy is to stay the course. As we saw above, over the long term the S&P 500 has returned on average 10% per year.

The S&P 500 includes five hundred companies, but six of them play an outsized role:
• Meta (Facebook)
• Amazon
• Apple
• Netflix
• Alphabet (Google)
• Microsoft

David J. Lynch discussed this in a recent Washington Post article: "…with a combined market value exceeding $7 trillion, these six companies account for more than one-quarter of the entire S&P 500. That explains how so few companies can lift an index of 500 stocks. Since the S&P 500 is weighted by each stock's value, or market capitalization, gains by these larger companies have a greater effect than gains by an equal number of less valuable companies" (2020).

These six companies led the S&P 500 Index back from its 35% pandemic drop to near the all-time high it had set on February 12, 2020. It took about six weeks to fall into a bear market (defined as a drop of 20% or more from a previous high), the fastest such drop in history. The S&P 500 Index then climbed back in 126 days to where it was before the Pandemic Recession, likewise the fastest recovery from a bear market in history, according to The Wall Street Journal.
Bubbles and Busts in the Stock Market

As much as professional stock traders would like us to think that they are rational analyzers of expected future cash flows and P/E ratios, there is still a great deal of speculation, gambling, and herd behavior in the stock markets. For example, take a look at the activity of Tesla's stock just since the beginning of 2020. There is no rational reason for the stock to rise almost 250% in 2020. Tesla had been announcing good news about vehicle deliveries, but there was no reason to expect earnings per share to increase 250% anytime in the near future. Tesla stock is clearly in a bubble.

Tesla, of course, is just one of many instances of speculation and gambling in the stock market. Bitcoin went from under $1,000 per coin in December 2016 to almost $20,000 per coin in December 2017, an increase of 2,372%. It then dropped to under $5,000 per coin and now trades around $9,500 per coin. In 2017, a "lost" Leonardo da Vinci was sold to a Saudi prince for $450 million, although experts disagreed about its provenance.

Despite these prices, the ROIs on art are quite mundane. In the 2015 article "Does it Pay to Invest in Art? A Selection-Corrected Returns Perspective," a group of finance professors from top universities examined the returns on 32,928 paintings sold repeatedly at art auction houses from 1960 to 2013. They found returns (adjusted for selection bias) to be 6.3% annually, and they conclude that art is just not a good investment compared to stocks and other assets. They also computed returns on other assets and compared them to the investment returns for fine art, or what we might call investment art. Most of us will almost assuredly not even get back what we paid for a piece of art when we sell it. When we buy art, we pay the retail price, which is typically double what the gallery paid for it; if we sell it to a gallery, we will receive a wholesale price. If we sell it on eBay, the price depends on the fads of the day. So, if you want to buy art to hang on your wall, buy something because you love it, not because you expect to make money from it.

Day Trading

Many brokerages now advertise to individual investors to get them started trading stocks online. These include not just places like TD Ameritrade, E-Trade, and Robinhood, but also major mutual fund companies such as Fidelity Investments. Due to competition, online trading now has zero trading fees, and the ease of trading online is incredible. Several online stock brokerages have been criticized because they make stock trading feel like a video game and give customers access to large credit lines to trade with. With these options, can we as individual investors do better than the actively managed mutual funds by picking our own stocks? The answer is a resounding no! Mark Hulbert, in his Wall Street Journal article "When Day Traders Do Well, It's Probably Just Luck," says: "There's little doubt that day trading has mushroomed in popularity in recent months, or that some day traders have produced extraordinary profits. According to statisticians, however, there's also little doubt that most of these day traders' good performance is due to luck. They essentially would have just as good a chance of success going to the casino" (2020).

Investing in Stock Options

An option is the right to buy or sell a specific amount of a specific company's stock (or a group of companies' stock) at a specific price. Options always have a set time period within which they may be exercised.
In his Wall Street Journal article "More Investors Play the Stock-Options Lottery," Randall Smith reports that, with individual investors jumping into the market, stock market volume has more than doubled since 2000, while the volume of stock options trading has grown to more than six times its 2000 level (2020). According to the Options Clearing Corporation, average daily trading in options on stocks was about 21,000,000 contracts. On September 13, 2020, the Wall Street Journal reported that options trading on shares was 120% of the buying and selling of the stock shares themselves. These options are mainly on high-tech companies that are flying high right now as a direct result of COVID-19 and the switch to online shopping. Further, the Wall Street Journal reported that the value of shares optioned by small investors was $500 billion.

There are two main types of options: Call Options and Put Options. A Call Option gives you the right to buy shares of a stock; a Put Option gives you the right to sell shares of a stock. If you decide to invest in options and are convinced that the price of a certain stock will go up, you will buy a Call Option. The way you make money is to wait until the stock price goes up and then exercise the right to buy at the lower price. Alternatively, the price of the option itself will rise as the price of the stock rises, so you do not even have to exercise the option to reap your profits; you can just sell the option on the market at a higher price. If you are convinced that the price of a certain stock will go down, you will buy a Put Option. The price of the Put Option will rise when (and if) the price of the stock drops, and you can reap your profits just by selling the option in the market.

Let's look at an example of a Call Option. On January 1, you purchase 100 Call Options to buy Apple stock at $250 per share, expiring on March 31. The price of the options is $2 each, so your cost is 100 × $2.00 = $200. If Apple stock rises to $255.00, the option price will typically rise by about the same amount, to roughly $7.00. You sell the options on the market, and your profit and return are thus:

Profit = 100 × ($7.00 − $2.00) = $500
Return = $500 / $200 = 250%

The value in buying options rather than Apple stock is that your returns are multiplied. If you instead bought 100 shares of Apple stock at $250 (an investment of $25,000) and its price rose to $255, your profit and return would be:

Profit = 100 × ($255 − $250) = $500
Return = $500 / $25,000 = 2%

Looks great, huh? The problem is that your timing could be off. If Apple does not rise in price by March 31 (or drops in value by March 31), your Call Options will expire worthless, and you will lose your $200. The $2.00 per option that you paid is called the time premium, or simply the premium, and it decays (decreases) the closer you get to the expiration date. This means that the option you paid $2.00 for is worth $0 on the expiration date if the stock price has not risen above your exercise price.
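If you want to replay that trade in code, here is a minimal Python sketch of the same example. The assumption that the option price rises dollar-for-dollar with the stock is the text's simplification, not a general rule:

# The call-option example above: 100 calls at $2 each versus 100 shares at
# $250, when Apple rises from $250 to $255 and the option price rises by
# roughly the same $5 (the text's simplifying assumption).

option_cost = 100 * 2.00      # $200 for 100 call options
option_sale = 100 * 7.00      # each option now worth about $2 + $5 = $7
option_profit = option_sale - option_cost
print(f"Options: profit ${option_profit:.0f}, return {option_profit / option_cost:.0%}")  # $500, 250%

stock_cost = 100 * 250.00     # $25,000 for 100 shares
stock_profit = 100 * (255.00 - 250.00)
print(f"Stock:   profit ${stock_profit:.0f}, return {stock_profit / stock_cost:.0%}")     # $500, 2%

# Same $500 profit, vastly different percentage return -- but if the stock
# fails to rise by March 31, the options expire worthless and the $200 is lost.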
For every buyer of a Call Option, there must be someone willing to sell one. There will be some sophisticated investors on the other side of your Call Option betting that Apple stock will not go up (or will decline) in price by the end of March 31. According to Smith's Wall Street Journal article, online brokers such as TD Ameritrade and E*Trade are aggressively promoting options trading to small investors; it is much more profitable for them to sell options than stocks. My advice for the individual investor is to stay away from options. Think about it this way: if about two-thirds of individual investors lose money in options, you would do better to place your money on red or black at the roulette wheel of your local casino. On that bet, your odds are nearly 50/50, and you double your money if you win.

Buying Your Company's Stock

Sometimes, if you work for a large public company, you are given the opportunity to buy your company's stock. If you believe your company is doing well and will do well in the future, then you should buy some of its stock. This can be an especially good deal if the company sells the stock to employees at a discount or finances the purchase for you out of a payroll deduction. However, the general rules of portfolio investing apply here: do not put more than 5% or 10% of your investments in your company's stock. The rest of your savings should be in an S&P 500 mutual fund.

Perhaps a relevant cautionary tale here is the Enron employee pension fund. Enron was an energy company headquartered in Houston, Texas. In the late 1990s, it almost single-handedly deregulated energy markets through lobbying and reaped huge profits by buying and selling electricity and natural gas. However, it was fraudulently hiding losses from its other diversified investments, and when that was discovered by the Wall Street Journal, its stock tanked. It declared bankruptcy in December 2001. Enron had encouraged its employees to invest their entire pension fund in Enron stock. Consequently, when Enron went bankrupt, not only did all the employees lose their jobs, but they also lost all their pension funds.

Investing in Real Estate

The most accurate way to look at the returns on real estate is to look at the publicly traded Real Estate Investment Trusts (REITs). There are two general classifications of REITs: Equity and Mortgage. Equity REITs buy properties and manage them for profits; Mortgage REITs lend money to investors who buy real estate. According to the National Association of Real Estate Investment Trusts, the average annual returns on REITs during the period 1972 to 2019 were as follows:
• All REITs: 11.78% annually
• Equity REITs: 13.2% annually
• Mortgage REITs: 9.4% annually

Obviously, if you want to invest in a REIT, it makes more sense to invest in an Equity REIT, due to the higher historical average return. However, if you want to become a sophisticated REIT investor, you should realize that almost all Equity REITs invest in only a single sector of the real estate market, e.g., office buildings or apartment buildings. Investors, especially institutional investors, want to be able to tailor or spread their exposure to specific segments of the real estate market; they can do this by buying into a REIT that invests only in shopping centers, for example.

As a beginning investor, you will not have enough money to buy a properly diversified portfolio of REITs along with a properly diversified portfolio of stocks. You will be able to diversify safely by investing in a mutual fund that holds all of the S&P 500 stocks, including a good number of real estate stocks. Diversity is the key to reducing risk.

The Biggest Investment Mistakes

Meir Statman, in his Wall Street Journal article "The Mental Mistakes That Active Investors Make," has a good catalogue of the biggest mistakes that active amateur investors make (2020). According to Statman, the biggest mistake of all is believing that you can beat the market (achieve annual returns in excess of the appropriate market index).
Here are just some of the indices used as benchmarks for how your stock picks performed:

Table 13.5. Indices
Index                           Types of Assets Measured
S&P 500                         U.S. stocks: 500 large representative stocks
Barclays Aggregate Bond Index   U.S. bond prices
Dow Jones Industrial Average    U.S. stocks: 30 large U.S. industrial companies
Russell 2000 Index              U.S. stocks: 2,000 small-capitalization companies
MSCI EAFE Index                 International stocks of developed countries
MSCI EM Index                   International stocks of emerging markets

There are also appropriate indices that track a mixture of stocks and bonds. When you invest in a mutual fund, its quarterly and annual reports should tell you the appropriate index to measure its performance against. The only reason to invest in individual stocks or a specific portfolio of stocks is if they will beat the market. Broker fees, or fees for an actively managed portfolio of stocks, will be significantly larger than those for a passively managed mutual fund that invests in, say, all the S&P 500 stocks. Therefore, if your fund cannot beat the S&P 500 fund, you should put your money in the S&P 500 fund and save the fees. Statman goes on to say that for amateur investors, the best bet is low-cost index funds.

Statman then asks: if amateur investors cannot beat the market, even when they invest in an actively managed mutual fund, why do so many try? He blames it on our minds. The mental shortcuts we use to make decisions, according to Statman, turn into mental errors. Below are some common errors.

Framing

According to Statman, amateur investors think of stock trading as a skill that improves with practice, like surgery, carpentry, or driving. This is not the case, because the amateur investor has millions of professional traders working against them. Whereas a rising stock market can be a win/win for every investor, it would be better to frame an individual stock trade as a war: for everyone buying a stock, there is someone selling it. What does the seller know that the buyer does not? Also, the returns that amateur investors achieve in their trading should not be compared to zero, which is what they often do; the returns should be compared to the appropriate benchmark index, which for stock portfolios is most likely the S&P 500 Index.

Overconfidence

When asked, 80% of people think they are above average in intelligence and good looks. Of course, this is a statistical impossibility. One cause of overconfidence among amateur stock traders is that they see stock trading as a solitary skill akin to plumbing or carpentry, instead of something competitive, like tennis. Playing against Rafael Nadal would soon erode your confidence.

Faulty Benchmark or Anchor

If we were looking to sell our house, we would look at the prices of recently sold houses in our immediate neighborhood to set our asking price. Amateur investors often do the same with stocks; that is, they believe that the 52-week high and low of a stock define its range of trading, so they buy the stock at or near its 52-week low and often sell at or near its 52-week high. Unfortunately, according to Statman, this strategy fails to beat the market.

Flip of a Coin

Even among amateur investors who invest only in mutual funds, some move their money every year to the fund that beat the market last year. This is a losing strategy. As I will detail later, research has shown that out of 3,000 mutual funds, none beat the S&P 500 more than two years in a row.
While there are some mutual funds that beat the market, it is not consistently the same fund doing so. Picking the fund that will actually beat the S&P 500 next year is no better than the flip of a coin.

The Availability Heuristic

Amateur investors make decisions based on the information currently available to them, and this information is limited. Often, the available information consists of newspaper articles about a high-flying stock or mutual fund; there is a high correlation between news coverage of a particular stock and trading in that same stock. There are plenty of stocks we never hear about and plenty of information we do not know. Even worse, a stock's price per share is a function of next year's expected earnings. How accurately can an amateur investor predict next year's earnings?

The Thrill of the Hunt

Fidelity Investments, one of the largest mutual fund companies, found in a survey that 54% of amateur investors enjoy the thrill of the hunt. Further, 53% enjoy learning new investment skills, and more than half enjoy sharing trading news with family and friends. It's for fun and profit. Generally, I almost always hear about the wins, but not the losses, of friends who talk to me about their amateur trading. As is typical in situations of incomplete information, this used to make me feel less than competent when a stock I bought was a loser. Having since become much more aware of the actual statistics involved, I do not feel so bad anymore when I lose, and I do not feel superior when I make a winning bet on a stock. Given their lack of information, however, amateur investors are better off at the roulette wheel.

How to Learn From Investment Mistakes

If you make an investment mistake, learn from it instead of just beating yourself up. Everyone makes investment mistakes, and you can recover. Behavioral economics tells us that many amateur investors hold onto stocks that have declined, hoping they will rise again to at least break even. The reason for this is loss aversion: when you sell the stock, you have to admit your mistake and feel the loss. A diversified portfolio like an S&P 500 mutual fund will surely rise again with the market, but the same is not likely for a start-up or a small company. A diversified portfolio will sooner or later deliver average returns, but a single stock could go bankrupt. If a stock is significantly down and the company has fundamental financial problems, dump it and move to a better (diversified) portfolio.
Mutual Funds

When you buy a mutual fund, you are pooling your money with other investors. You put money into a mutual fund by buying units or shares of the fund; as more people invest, the fund issues new units or shares. The investments in a mutual fund are managed by a portfolio manager. All mutual funds have a stated goal for the assets they invest in and a philosophy for how they will invest. According to Statista, there were approximately 7,900 mutual funds in 2019, and they managed over $21 trillion (2020). There are many mutual fund management companies, and each offers many different types of mutual funds; trying to make an informed choice can make you dizzy.

There are four broad categories of mutual funds: those that invest in stocks (equity funds), bonds (fixed-income funds), short-term debt (money market funds), or both stocks and bonds (balanced or hybrid funds). There are also many mutual funds that invest in specific sectors, such as technology, real estate, gold, or banking.

The Advantage of a Mutual Fund

The overwhelming advantage of a mutual fund is diversification. The benefits of diversification include the following:
• It minimizes the risk of loss to your overall portfolio. (Risk is defined as the standard deviation of the returns of your portfolio.)
• It exposes you to more opportunities for return.
• It safeguards you against adverse market cycles.
• It reduces volatility in your portfolio.

In A Random Walk Down Wall Street, author Burton Malkiel explains these benefits: "By the time the portfolio contains close to 20 [similarly weighted] and well-diversified issues, the total risk (standard deviation of returns) of the portfolio is reduced by 70 percent. Further increase in the number of holdings does not produce any significant further risk reduction" (2019). Other investment advisors agree, saying that 20 to 30 stocks provide good diversification.

However, here's the rub: one share of Amazon on August 19, 2020, cost $3,284, and one share of Google on the same day cost $1,561. Meanwhile, Facebook cost $271 per share, and Netflix cost $486 per share. (Along with Apple, these are known as the "FAANG" stocks.) For us mere mortals who are not billionaires, how can we diversify into 20 or more stocks? The answer is, of course, a mutual fund. Warren Buffett, Chairman and CEO of Berkshire Hathaway, once said, "Ninety-eight percent or more of people who invest should extensively diversify and not trade. Specifically, these investors should buy a very low-cost index fund." By buying into an S&P 500 mutual fund, you can own shares of all the stocks in the S&P 500 Index. You can also buy into a diversified fund that owns a mix of 70% stocks and 30% bonds.
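Malkiel's point about diversification can be illustrated with a small simulation. This is a minimal Python sketch under a strong simplifying assumption: the simulated stocks are statistically independent (real stocks are positively correlated, which is why the risk reduction levels off at around 70% with roughly 20 holdings, as Malkiel reports, rather than shrinking indefinitely). The return parameters are mine, purely for illustration.

import random
import statistics

random.seed(1)  # reproducible illustration

def portfolio_returns(n_stocks, n_periods=2000):
    """Equal-weighted portfolio; each stock drawn with 10% mean, 30% stdev."""
    results = []
    for _ in range(n_periods):
        period = [random.gauss(0.10, 0.30) for _ in range(n_stocks)]
        results.append(sum(period) / n_stocks)
    return results

for n in (1, 5, 20, 100):
    stdev = statistics.stdev(portfolio_returns(n))
    print(f"{n:>3} stocks: stdev of portfolio return ~ {stdev:.1%}")

# With independent stocks, risk falls roughly as 1/sqrt(n); correlated real
# stocks see the benefit flatten out after about 20 well-chosen holdings.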
Before we discuss which mutual fund you should buy, let's explore whether you should choose an actively managed fund or an index fund. John Bogle, the founder of Vanguard Mutual Funds, became convinced from his research that not a single actively managed mutual fund consistently beat the market; that is, none had a better return than the index used to benchmark it. Benchmarks are indices of various stock market and stock market sector prices that can be compared on a day-to-day or annual basis. Below are the most watched stock market indices.

The Dow Jones Industrial Average (DJIA)

This is not actually an index but a weighted sum of the current stock prices of 30 of the largest U.S. companies. Almost all of them are household names, like McDonald's, ExxonMobil, and Procter & Gamble. The stocks are weighted in the sum according to their relative prices. For example, a $200-per-share stock is weighted four times as much as a $50-per-share stock.

The Standard and Poor's 500 Index (S&P 500)

Standard & Poor's is a credit rating company that created this stock index in 1957. It is composed of 500 of the largest U.S. publicly listed companies. It disaggregates the U.S. economy into eleven sectors and then selectively chooses stocks from each of these sectors to match the total market capitalization of all the public stocks in those sectors. Below are the eleven sectors and their weights:
1. Information Technology: 24.4%
2. Health Care: 14%
3. Financials: 12.2%
4. Communication Services: 10.7%
5. Consumer Discretionary: 9.9%
6. Industrials: 8.9%
7. Consumer Staples: 7.2%
8. Energy: 3.6%
9. Utilities: 3.5%
10. Real Estate: 3.1%
11. Materials: 2.5%

The NASDAQ Composite (NASDAQ Index)

The National Association of Securities Dealers Automated Quotation Index contains over 2,500 stocks, both domestic and international, that are listed on the NASDAQ stock exchange. The NASDAQ stock exchange began operations on February 8, 1971 as the first electronic stock market. The NASDAQ Composite Index is about 40% tech stocks, so it is a tech-heavy index compared to public stocks overall (which are only about 20% tech stocks).

Let's look at some examples of benchmarks. If an actively managed mutual fund holds a broad range of hundreds of U.S. stocks, its annual returns would be measured against the S&P 500 Index, which tracks the performance of the overall stock market. If an actively managed mutual fund holds a broad range of international stocks, its annual returns would be measured against an index of international stocks, such as the MSCI Europe, Australasia, Far East Index (EAFE), a broad index that represents the performance of foreign developed-market stocks. This index was created by Morgan Stanley Capital International (MSCI) to track foreign developed markets. Many of the largest mutual fund companies, such as Fidelity Investments and the Charles Schwab Company, have created mutual funds that mimic this index.
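The difference between the DJIA's price weighting and the S&P 500's market-cap weighting is easy to see in code. Here is a minimal Python sketch; the company names and figures are invented for illustration, and the single fixed divisor is a simplification of the Dow's actual, periodically adjusted divisor.

# Price-weighted (DJIA-style) versus cap-weighted (S&P-style) indexing.
# All names and numbers below are made up for illustration.

stocks = [
    # (name, share price, shares outstanding)
    ("BigTech",   200.00, 5_000_000_000),
    ("OilCo",      50.00, 2_000_000_000),
    ("RetailInc",  50.00,   500_000_000),
]

# Price-weighted: sum the share prices and divide by a divisor.
divisor = 3
price_weighted = sum(price for _, price, _ in stocks) / divisor
print(f"Price-weighted level: {price_weighted:.2f}")   # BigTech counts 4x OilCo

# Cap-weighted: weight each stock by price x shares outstanding.
caps = [price * shares for _, price, shares in stocks]
total_cap = sum(caps)
for (name, _, _), cap in zip(stocks, caps):
    print(f"{name}: weight {cap / total_cap:.1%}")     # ~88.9%, 8.9%, 2.2%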
Unfortunately, the number of actively managed funds that beat their benchmarks is well below 50%. In Barron's, Daren Fonda reported on this: "Fund managers gave investors yet another reason to avoid their products last year: Well below 50% of actively managed mutual funds beat their benchmark in 2019—and it would have taken a stroke of luck to pick a winner. Just 29% of active U.S. stock fund managers beat their benchmark after fees in 2019. That declined from 37% of funds beating their benchmarks in 2018, the average success rate over the past 15 years" (2020). In November 2019, Barron's gave a similar report card to actively managed funds: "Fund flows continue to favor index funds over actively managed funds…We found that 22% of active funds (182 out of 840 with 10-year records) beat the S&P 500 Index's 13.35% annualized return for the last decade through Nov. 21. The vast majority of them were growth funds" (Coumarianos).

Further, Wallick et al. summarize the research on whether investors can rely on past performance to predict a mutual fund's future performance: "It has long been stated that past performance is not indicative of future results, but many investors are still tempted to select mutual funds by recent performance. Philips (2012) confirms that past performance is no more reliable than a coin flip in identifying active managers who will outperform in the future. Not only is past performance an unreliable predictor, but according to significant research, most other quantitative measures of fund attributes or performance (such as fund size, star ratings, active share, etc.) are equally undependable when used to identify future outperformers" (2013).

The other issue to note here is the cost of actively managed funds. Wallick et al. report that the average annual fee of actively managed funds is 0.87%, while the average annual fee of index funds is 0.17% (2013). Nerdwallet reported almost the same fee averages for 2020. Vanguard states: "However, the traditional value proposition for many advisors has been primarily based on their investment acumen and their prospects for delivering better returns than those of the markets. No matter how skilled the advisor, the path to better investment results may not lie with the ability to pick investments or strategies. Historically, active management has failed to deliver on its promise of outperformance over longer investment horizons" (Bennyhoff and Kinniry, 2018).

Can we as individual investors predict the funds that will beat the indices each year? Writing for the Wall Street Journal, Mark Hulbert, a financial analyst who audits and reports on the advice of investment newsletters, says the answer is a resounding no; success is more dependent on luck than skill (2020). Hulbert reports that similar studies by a number of prominent researchers come to remarkably similar conclusions. Bradford Cornell, a retired finance professor at UCLA, measures the role of luck by comparing the dispersion of short-term versus long-term returns of mutual funds. Hulbert applied Cornell's algorithm to several hundred investment newsletters, many of which are popular with day traders: "When applying Prof. Cornell's formula to this data, 92% of the differences in newsletters' annual returns is due to luck. When he [Cornell] applied the same formula to a sample of large-cap U.S. equity mutual funds, he reached the almost-identical conclusion" (2020).

Further, according to Hulbert, Michael Mauboussin, a managing director at Counterpoint Global, a division of Morgan Stanley Investment Management, analyzed how quickly a top-ranked manager falls back to the middle of the pack; Mauboussin's rationale is that the faster this happens, the larger the role luck is playing. Hulbert applied Mauboussin's algorithm to forty years of investment returns from advice given by investment newsletters (1980 to 2020). He tracked newsletters whose returns put them in the top 10% of all newsletter returns in a given year. Hulbert states that if skill were involved, a newsletter's return should be in the top 10% again the following year. Unfortunately, the top-performing newsletters for one year ended up, on average, at the 51st percentile the next year, only slightly better than chance. Finally, S&P Dow Jones Indices (part-owned by the company that owns the Wall Street Journal) found that only 3.84% of U.S. equity funds that were in the top half of performers in 2015 were still in the top half in 2019 (Hulbert, 2020).

So, this is the bottom line: even the experts cannot beat the indices on a regular basis, and it is impossible to guess who will be the lucky few who do beat them in any particular year.
The best path to riches is to invest your money in a diversified index mutual fund with a non-profit mutual fund company like Vanguard or TIAA, reaping annual average returns of 9% to 10%.

For-Profit Mutual Funds

In the United States, there were 7,900 mutual funds in 2019, managing assets worth approximately $21 trillion. The three largest mutual fund companies are BlackRock, Vanguard, and Charles Schwab. As of the third quarter of 2019, BlackRock had approximately $7 trillion in assets under management and Vanguard had approximately $5.6 trillion, while Charles Schwab had $3.7 trillion as of the second quarter of 2019. Almost all mutual fund companies are for-profit, but there are a number of nonprofit mutual fund companies, and these are worth considering.

Nonprofit Mutual Funds

Vanguard

Vanguard was started by John Bogle, who has a cult following similar to Warren Buffett's. Bogle's research showed that no actively managed mutual fund beat its benchmark more than two years in a row, yet these funds were still charging 1% or more per year in management fees. Because of this, Bogle invented index funds, which hold all the same stocks as the benchmarks and therefore do not need active management. Vanguard has 17,600 employees worldwide and offers 170 mutual funds and 80 Exchange Traded Funds (ETFs); there is a fund to meet every investor's interest and risk tolerance. The good thing about Vanguard is that its average mutual fund fee is 0.10%, while the industry average is 0.63%. The owners (that is, the customers) of the Vanguard funds own the company. The low fees are possible because the fees only have to cover the salaries of Vanguard's employees plus the overhead of buildings, utilities, and other operating costs; Vanguard does not have to generate any profit over and above its expenses.
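The gap between a 0.10% fee and the 0.63% industry average (or the 0.87% active-fund average reported by Wallick et al. above) looks small in any one year but compounds dramatically. Here is a minimal Python sketch; the 7% gross return, $10,000 starting amount, and 40-year horizon are my own illustrative assumptions.

# Minimal sketch of fee drag compounding over a working lifetime.

def ending_balance(start, gross_return, annual_fee, years):
    balance = start
    for _ in range(years):
        balance *= 1 + gross_return - annual_fee
    return balance

start, gross, years = 10_000, 0.07, 40
for fee in (0.0010, 0.0063, 0.0087):  # index fund, industry average, active average
    print(f"Fee {fee:.2%}: ${ending_balance(start, gross, fee, years):,.0f}")

# Roughly $144,000 versus $118,000 versus $108,000 from the same gross return.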
Closed-end funds sell shares to investors at the creation of the fund but do not redeem them when the customer wants to sell. Closed-end funds are listed on stock exchanges, and any buying and selling takes place on the exchange. A fund manager actively manages the closed-end fund but, as I said, does not redeem the shares. Closed-end mutual funds have been in existence for almost one hundred years. ETFs are relatively new but have many advantages over closed-end mutual funds. Therefore, you would do better with an ETF than a closed-end fund.

Exchange Traded Funds (ETFs)

An Exchange Traded Fund (ETF) is a collection of tens, hundreds, or sometimes thousands of stocks or bonds in a single fund. ETFs are traded on major stock exchanges, like the New York Stock Exchange and Nasdaq. Of course, you will buy and sell them through a brokerage account at your mutual fund company. Although ETFs and mutual funds share many similarities, there are a couple of distinguishing characteristics that may make ETFs more attractive to some investors, including lower investment minimums when you first start investing and real-time pricing every time you buy and sell.

Mutual funds themselves are not traded on any stock market. The mutual fund owns stocks that are traded on the stock market(s), and the value of the mutual fund is calculated at the end of each day based on the closing prices of the stocks it owns. This is similar to owning a stock portfolio and calculating at the end of each day what your stocks are worth based on their closing prices. ETFs are listed stocks themselves, and an ETF owns a portfolio of stocks just like a mutual fund. However, the ETF's price in the market fluctuates just like a listed stock's, depending on the buying (demand) and selling (supply) of that ETF. For example, Vanguard offers 80 ETFs with various portfolios of stocks and bonds and levels of risk.

Types of Mutual Funds

Bond Mutual Funds

Bond mutual funds invest their money in bonds. Bonds are basically IOUs and can be issued by governments, states, local municipalities, and corporations. Instead of these entities borrowing money from the bank, it is considerably cheaper to go directly to the investors themselves. I talked about bonds in the previous chapter, and we saw that the average annual return on bonds for 94 years was 5.3%. Bond mutual funds tend to specialize in specific types of bonds, including the following:

• International Government Bond Funds
• U.S. Treasury Bond Funds
• Mortgage Bond Funds
• Corporate Bond Funds
• Municipal Bond Funds
• International Bond Funds
• Index Bond Funds

Stock Mutual Funds

Stock mutual funds invest their money in stocks (also called equities). There are many types of stock mutual funds. Some of the more popular ones are below:

• Growth Funds
• Capital Appreciation Funds
• Small-Capitalization Funds
• Mid-Capitalization Funds
• Large-Capitalization Funds
• Equity Income Funds
• Balanced Growth and Income Funds
• Sector Funds
• International Stock Funds
• Index Funds
• Socially Responsible Stock Funds

Real Estate Mutual Funds

Real estate mutual funds invest their money in real estate stocks.
The funds also tend to be specialized, so there are real estate stock mutual funds that invest exclusively in things like these examples:

• Large Shopping Mall Stocks
• Industrial Building Stocks
• Office Building Stocks
• Apartment Building Stocks

Mixed Mutual Funds

Traditional, conservative investment advisors will tell you that you should have a mix of 70% stocks and 30% bonds in your portfolio. This is because stocks rise in price when the economy is in an expansion, and bonds rise in price when the economy is in a recession. There are plenty of mutual funds that offer a mix of stocks and bonds in various proportions, according to your risk tolerance. These usually have "Balanced Fund" in their name to signify that they hold a mix of stocks and bonds.

Hedge Funds

Many people have heard that hedge funds have been a great investment for well-connected and wealthy people and institutions. A recent article in the Wall Street Journal reported that while this may have been the case from 1990 to 2009, hedge funds have seriously underperformed the S&P 500 since 2010 (Chung, 2019):

Table 14.1. Hedge Fund Returns Relative to the S&P 500
Period | Hedge fund performance
Average, 1990 to 2009 | Outperformed the S&P 500 by 5.2% annually
Average, 2010 to 2019 | Underperformed the S&P 500 by 8.9% annually
Source: HFR, Inc. and WSJ

A hedge fund is a mutual fund that by its mission and charter can invest in any multitude of assets. It can buy and hold stocks and bonds, but it can also sell stocks and bonds short; that is, it can bet that stocks or bonds will drop in price. Some hedge funds invest in commodities like gas and oil or corn and wheat. Some use people to pick the assets, but increasingly they are using computers to analyze huge amounts of data to find assets to invest in.

The underperformance of hedge funds hurts investors further through the exorbitant fees they charge. Normal mutual funds charge their investors 1% or less of assets annually. Hedge funds typically charge their clients 2% of assets annually plus keep 20% of the profits they make each year (called "2 and 20"). Critics say this fee structure means that hedge funds are a vehicle to "transfer all the fund money from the pockets of the investors to the pockets of the fund managers." Indeed, hedge fund management has minted a lot of billionaires.

So what happened to hedge funds?

1. Quants and Index Funds: the increase in trading by computers and passive investing funds (like index funds and ETFs) has distorted the way stocks move. Currently, only about 15% of stock trades are made by humans. The quants' computers can spot small mispricings in stocks and take advantage of them.
2. Competition: there were just 530 hedge funds in 1990, and they managed \$39 billion. Now there are 8,200 hedge funds managing \$3.2 trillion of investors' money.
3. Stock Correlations: in recent years, stocks have moved in correlation when financial news hits the market (such as a Federal Reserve Bank action), and this means less mispricing of individual stocks for hedge funds to take advantage of.
4. Low Interest Rates: low interest rates keep shaky companies alive that would have died in higher interest rate environments. These are the companies that hedge funds sell short.

There seems to be no advantage to owning hedge funds now, so do not do it, even if you could.

Domestic and International Stock Funds

In the last chapter, I mentioned that a portfolio of international stocks appears to consistently show an annual return of about 1% less than a portfolio of U.S. stocks.
This means that there seems to be no advantage to diversifying internationally, especially in European stocks. (A 1% gap compounds: at 10.1% a year, \$10,000 grows to roughly \$179,000 over 30 years, while at 9.1% it grows to only about \$136,000.) You may, however, be enamored of emerging economies like the BRICS countries:

• Brazil
• Russia
• India
• China
• South Africa

There are mutual funds that invest just in the stocks of those countries. I would not, however, put all of my investment in that one basket. Ten or twenty percent of your cash seems reasonable.

But why does a mutual fund of U.S. domestic firms outperform a European portfolio of stocks? First, Europe (and the BRICS nations) does not have the innovative, high-flying tech companies that the U.S. does. The top tech companies are often referred to as the FAANGs: Facebook, Apple, Amazon, Netflix, and Google (now called Alphabet). Some investment gurus put Microsoft in this exclusive club (the FAANGMs) and some do not; those who do not say that the recent growth of Microsoft's stock has not been as meteoric as the FAANG stocks'. Twenty years ago it was a better idea to diversify with European stocks, as at the time the economies of the U.S. and Europe were countercyclical. That is, when the U.S. was in a recession, Europe was not; European companies were doing well when U.S. companies were slumping. This is no longer true: globalization is so widespread that the U.S. and European economies now move together through the business cycle. Finally, most of the largest European companies are listed on both a European stock exchange and the New York Stock Exchange or NASDAQ. If you buy a widely diversified stock mutual fund like an S&P 500 index fund, you will still get stock of the largest European companies in the fund.

Diversification Advice

My advice is that, when you are investing (especially for retirement), you put all your money into an S&P 500 index fund. (Of course, having said that, I am also going to make the case in the next section for investing in an ESG fund.) In the last chapter, I showed you that this will earn you an average annual return of 10.1% per year. That return, from 1926 to 2018, includes both the bear and the bull markets. Of course, you must have the patience to endure recessions and not panic and sell stocks when the market enters a bear phase. This panic is the hallmark mistake of amateur investors. However, recessions are a short-run phenomenon. We have had 12 recessions (and expansions) since the end of World War II, including the current Pandemic Recession, and the average length of these recessions has been 11 months. As long as you are not within five years of retirement, you have the time to ride out a recession and achieve your 10.1% annual return.

One more note: if you want to take risks and try your own luck at the stock market, do not invest any more than ten percent of your current cash/stocks in the market. If you make a big mistake, you can recover from a ten percent loss.

Social Investing Funds (ESG Funds)

What is loosely called Social Investing is the wave of the future. You should strongly consider investing in these funds instead of an S&P 500 mutual fund. Companies that pollute, do not treat their stakeholders fairly, or engage in unethical behavior will not survive for long. The public and investors are increasingly demanding that firms behave responsibly on ESG criteria. An ESG fund is essentially an S&P 500 mutual fund that filters out any company that is not

• Environmentally Responsible
• Socially Responsible
• Governance Responsible

There are a number of different interpretations that mutual funds use to claim that their ESG funds fulfill the above responsibilities.
Therefore, you need to read what the fund means by these terms to assure yourself that it is a true ESG fund. Here is what these terms should mean, although this is not a complete list:

Environmentally Responsible

This should mean that the firms in the portfolio minimize greenhouse gas emissions, minimize air and water pollution, manage energy appropriately, and create recyclable packaging for their products.

Socially Responsible

This should mean that the firms in the portfolio respect human rights, exercise fair labor practices, promote diversity in hiring and promotion, insist on fair labor standards in their supply chains, engage in good community relations, and treat their customers fairly.

Governance Responsible

This should mean that the firms in the portfolio engage in good safety and health practices for their workers, are transparent and honest in their financial reporting, have fair and equal compensation practices, are ethical in their business practices, source their materials from fair trade suppliers, and do not engage in anti-competitive behavior.

Table 14.2. Examples of Authentic ESG Funds
Fund Name | Investment Type | # Stocks or Bonds
Global ESG Select Stock Fund (VEIGX) | Mutual Fund | 50
ESG U.S. Stock ETF (ESGV) | ETF | 1,500 (Indexed)
ESG International Stock ETF (VSGX) | ETF | 3,000 to 4,000 (Indexed)
FTSE Social Index Fund (VFTAX) | Mutual Fund | 500 (Indexed)
ESG U.S. Corporate Bond ETF (VCEB) | ETF | 200 to 300 (Indexed)

These ESG funds exclude companies that do the following:

• Produce alcohol, tobacco, gambling, and adult entertainment
• Produce civilian firearms or controversial and conventional weapons
• Produce nuclear power
• Do not meet certain diversity criteria
• Violate the labor rights, human rights, anti-corruption, and environmental standards defined by the UN Global Compact Principles
• Own proved or probable reserves in fossil fuels such as coal, oil, or gas*

*This excludes any company that FTSE determines has a primary business activity in the exploration and drilling for, as well as producing, refining, and supplying, oil and gas products; the supply of equipment and services to oil fields and offshore platforms; the operation of pipelines carrying oil, gas, or other forms of fuel; integrated oil and gas companies that provide a combination of the services listed above, including the refining and marketing of oil and gas products; or the exploration for or mining of coal.

Here are a few examples of BlackRock's ESG funds.

BlackRock Advantage ESG International Equity Fund

Invests at least 80% of its assets in equity securities or other financial instruments that are components of, or have market capitalizations similar to, the securities included in the MSCI EAFE® Index.

BlackRock Advantage ESG U.S. Equity Fund (BIRIX)

Invests in a portfolio of equity securities of companies with positive aggregate societal impact outcomes, as determined by BlackRock.

BlackRock Advantage ESG Emerging Markets Equity Fund (BLZIX)

Invests at least 80% of its assets in equity securities or other financial instruments that are components of, or have market capitalizations similar to, the securities included in the MSCI Emerging Markets® Index.

BlackRock ESG Aware Moderate Allocation Index

Designed to measure the performance of a portfolio composed of equity and fixed income iShares ESG ETFs, intended to represent a moderate risk profile strategy with a 60% allocation to fixed income and 40% allocation to equities.
BlackRock ESG Aware Growth Allocation Index

Designed to measure the performance of a portfolio composed of equity and fixed income iShares ESG ETFs, intended to represent a growth risk profile with a 60% allocation to equities and 40% allocation to fixed income.

BlackRock ESG Aware Conservative Allocation Index

Designed to measure the performance of a portfolio composed of equity and fixed income iShares® ESG ETFs, intended to represent a conservative risk profile with a 70% allocation to fixed income and 30% allocation to equities.

Buy a Vanguard ESG fund, because Vanguard's fees are generally lower than BlackRock's. Research also shows that adding international stocks to a portfolio does not increase returns, nor does adding a greater number of stocks. My advice is to invest in the FTSE Social Index Fund (VFTAX), a U.S. stock fund that holds 500 stocks, like an S&P 500 index mutual fund but with an ESG filter.

The United Nations has 17 Sustainable Development Goals that can work as a framework for your investments. Every large American company (and every large company in the world) is now a global company, so the goals of the U.N. have relevance here. You can evaluate how your ESG fund meets each of these goals.

Non-Fossil Fuel Funds

Since you are investing for the long term and not day-trading on the volatility of the stock market, you should avoid investing in companies whose business is in fossil fuels. Over the next twenty years (and maybe sooner), these companies will perform very poorly. A number of coal mining companies are declaring bankruptcy right now as electricity generating plants switch to natural gas, which is cheaper and pollutes less. Every automobile manufacturer is adding electric vehicles to its lineup in anticipation of national and state standards mandating cleaner vehicles. Finally, clean energy is becoming cheaper and competitive with fossil fuels. On top of this, the principal way to reduce greenhouse gases is to eliminate fossil fuel burning.

Divestment Movements

The fossil fuel divestment movement began with student protests calling for university endowments to divest from any company involved with fossil fuels. The movement was quite effective over time as university endowments pulled out of fossil fuels. In addition, the divestment movement has expanded to demand that mutual fund managers pull out of fossil fuel companies and that endowments and mutual funds invest in clean energy. A 2013 study by HSBC bank found that between 40% and 60% of the market value of BP, Royal Dutch Shell, and other European fossil fuel companies could be wiped out because of stranded assets caused by carbon emission regulation. The reaction of energy companies has been mixed. For example, BP announced it will pivot from fossil fuel exploration to become a clean energy company. ExxonMobil, on the other hand, has announced it will continue exploring for fossil fuels and has committed to a massive new investment program in fossil fuel exploration (Matthews, 2020).
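To make the mechanics of an ESG or fossil-free screen concrete, here is a minimal sketch of how such a filter works. The company list, sector labels, flags, and exclusion rules below are hypothetical stand-ins, not the actual FTSE or BlackRock screening methodology:

```python
# Hypothetical universe: (ticker, sector, UN-Global-Compact violation flag).
# Real index providers use far richer data; this only shows the filtering idea.
UNIVERSE = [
    ("AAA", "technology", False),
    ("BBB", "oil_and_gas", False),
    ("CCC", "healthcare", False),
    ("DDD", "coal_mining", False),
    ("EEE", "retail", True),  # flagged for a labor-rights violation
]

EXCLUDED_SECTORS = {"oil_and_gas", "coal_mining", "tobacco", "gambling"}

def esg_screen(universe):
    """Keep only companies outside excluded sectors with no violations."""
    return [ticker for (ticker, sector, violation) in universe
            if sector not in EXCLUDED_SECTORS and not violation]

print(esg_screen(UNIVERSE))  # -> ['AAA', 'CCC']
```

An index ESG fund like VFTAX is essentially a broad index with a screen of this kind applied, with the surviving stocks re-weighted.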
Goals for Retirement

There are two cardinal rules to remember about saving for retirement:

1. The money you put in a (traditional) retirement fund is tax-deferred, so it reduces your tax burden now. You only pay taxes on it when you withdraw it at retirement (although you can also withdraw it early for certain hardships).
2. The earlier you contribute to your retirement plan, even if it is a small amount, the richer you will be at retirement due to the magic of compound interest.

Your retirement goals may be slightly different from mine but probably not too much. Personally, I would like to be completely out of debt and maintain approximately the lifestyle I have now. Unfortunately, Social Security, as wonderful a program as it is, will not accomplish that. As I will explain a little later, Social Security is just a safety net that will not support you in the style to which you are accustomed. Social Security benefits are much more modest than many people realize; the average Social Security retirement benefit in June 2019 was about \$1,470 a month, or about \$17,640 a year. If you retire at the full retirement age (which is now 67), the maximum you can receive in monthly Social Security benefits is \$3,011, if you have earned a good salary and paid your 6.2% Social Security tax each pay period (as a payroll deduction). This is only \$36,132 per year, and it may be partly subject to income taxes. You can earn a bit more if you delay receiving Social Security benefits until the age of 70, but generally the economics are better if you start receiving them at 67, even if you are still working.

In a recent survey, close to 50% of people 18 years and older reported that they have no retirement savings: a national tragedy! We have to assume this means retirement savings other than Social Security, because if you are working, the 6.2% Social Security payroll tax is deducted from your pay by your employer and sent to the IRS.

A good goal for retirement is to have enough income to match 70% of your pre-retirement disposable income. You will likely have your mortgage paid off and will not have any work-related expenses such as transportation, so 70% is a reasonable goal. I have read some recent articles that say 60% of pre-retirement income is adequate based on lower expenses; however, I am not ready to advocate that. In any case, these instruments can help you reach 70% of your pre-retirement income:

• Social Security Payments
• Personal Savings
• 401(k)s or 403(b)s
• Individual Retirement Accounts ("IRAs")
• Roth Individual Retirement Accounts ("Roth IRAs")
• Annuities

Social Security Payments

The Social Security Act, part of FDR's New Deal, was passed August 14, 1935, in the midst of the Great Depression. It established an insurance program administered by the government that would act as a safety net for retirees. Every pay period, an employee pays 6.2% of their earnings for Social Security and 1.45% for Medicare taxes. Workers pay the 6.2% Social Security tax on annual earnings up to \$137,700. Meanwhile, the employer pays the same rate per paycheck, adding up to a combined 12.4% Social Security tax and 2.9% Medicare tax.

You can collect your full Social Security benefits at age 67. You can also delay receiving your benefits until age 70 and receive higher benefits at that time; however, it generally does not make economic sense to defer your benefits. Once you turn 67 and start collecting your benefits, you can continue to work, and it will not affect your benefits.
You can also begin collecting at age 62 with reduced benefits, and with further benefit reductions if you earn more than a certain amount of work income. Social Security is an insurance program that you and your employer paid for; it is not welfare. The maximum monthly Social Security benefit that an individual can receive in 2020 is \$3,790, for someone who files at age 70. For someone at full retirement age, the maximum amount is \$3,011, and for someone aged 62, the maximum amount is \$2,265. The benefit is calculated from your average wages and the number of years you worked; the formula assumes 35 years of working life. However, the average Social Security benefit in the U.S. for 2020 is \$1,503.00. Even though benefits are increased every year according to the Consumer Price Index, this is only \$18,036.00 per year. As I said, Social Security is only a safety net. You will need your retirement plan and other savings to have a comfortable retirement.

401(k)s and 403(b)s

Many Baby Boomers (born in the years immediately following World War II) are now collecting defined benefit pensions. Defined benefit pensions pay you a fixed pension benefit every month based on how much you earned and how many years you worked at the company; often this was 60% to 80% of your last salary before retirement. Companies had to put cash for these benefits in trust. However, if these investments lost money, the company had to come up with the payments to the retirees. There were also numerous underfunded pension funds that defrauded employees. Today, very few organizations have defined benefit pensions for their employees. Only 16% of the Fortune 500 public companies still have defined benefit pension plans; the other major organizations that still have them are the military and federal, state, and local governments.

Most organizations with retirement plans today offer their employees what are known as defined contribution retirement plans. Defined contribution (DC) retirement plans are the centerpiece of the private-sector retirement system in the United States. According to a recent report from Vanguard, more than 100 million Americans are covered by DC plan accounts, with assets now in excess of \$8.8 trillion. The vast majority are 401(k) plans.

A 401(k) plan is a defined contribution plan set up by firms for their employees. Typically, the employee contributes an amount each month and the employer matches some or all of the employee's contribution. A typical arrangement is for the employee to contribute up to 6% of their gross salary and the employer to match \$0.50 for each dollar the employee contributes. Vanguard reports these statistics on the millions of retirement accounts they manage:

• 71% of Vanguard-managed plans contribute \$0.50 for each dollar the employee contributes, up to 6% of salary.
• 22% of Vanguard-managed plans contribute \$1.00 for each dollar the employee contributes up to 3% of salary, and \$0.50 for each dollar the employee contributes for the next 3% of salary.
• 6% of plans cap their contribution at \$2,000.

The huge advantage of a 401(k) is that both your contribution and the employer's contribution go in untaxed. The money is typically managed by a bank or mutual fund, and all income from your investments accumulates tax-free. You are only taxed on the money you take out each year in retirement. (Of course, you may begin taking distributions without penalty at age 59½, and you must begin taking them by age 72. The IRS gets its taxes eventually.)
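To see what those matching formulas are worth in dollars, here is a small sketch. The \$60,000 salary is a made-up example; the match rules are the two most common Vanguard formulas just described:

```python
def employer_match(salary, employee_pct, formula):
    """Annual employer match under two common formulas (hypothetical salary)."""
    if formula == "50c_per_dollar_to_6pct":
        # $0.50 per $1.00 the employee contributes, up to 6% of salary
        return 0.50 * min(employee_pct, 0.06) * salary
    if formula == "dollar_to_3pct_then_50c_to_6pct":
        # $1.00 per $1.00 up to 3% of salary, then $0.50 per $1.00 up to 6%
        tier1 = min(employee_pct, 0.03) * salary
        tier2 = max(0.0, min(employee_pct, 0.06) - 0.03) * salary
        return 1.00 * tier1 + 0.50 * tier2
    raise ValueError(formula)

salary = 60_000
for f in ("50c_per_dollar_to_6pct", "dollar_to_3pct_then_50c_to_6pct"):
    print(f, employer_match(salary, 0.06, f))
# 50c_per_dollar_to_6pct          -> 1800.0 (a free $1,800 on a $3,600 contribution)
# dollar_to_3pct_then_50c_to_6pct -> 2700.0 (a free $2,700 on the same $3,600)
```

Either way, contributing at least up to the match earns an instant 50% to 75% return on those dollars, which is why the rule stated below (always contribute the amount matched) matters so much.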
In 2020, an employee can contribute up to \$19,500 to a 401(k) tax-free, no matter how much their employer matches. Non-profit organizations can set up defined contribution plans called 403(b)s; the rules and regulations are almost identical to those of 401(k)s. The most important rule for you as an employee who is eligible for a 401(k) plan is to always contribute at least the amount matched by the employer. The employer's match is free money, and it is tax-deferred until you retire.

Individual Retirement Accounts (IRAs) and Roth IRAs

Whether or not you have a 401(k) or a 403(b) retirement plan, you can also set up an IRA and contribute money to it. The limit on annual contributions to an IRA in 2020 is \$6,000. If you make less than \$63,000 per year as an individual, you receive a tax deduction for your contribution. As with a 401(k), you pay no taxes on the IRA or its investment returns until you retire.

A Roth IRA is an alternative to the traditional IRA. The money you put into a Roth IRA is taxed as regular income when you contribute it. Actually, since it probably comes from your paycheck, you have already paid taxes on it. However, the investment returns are not taxed, and the withdrawals (after age 59½) are not taxed (unlike withdrawals from 401(k)s and 403(b)s). The logic behind setting up a Roth IRA instead of a regular IRA has to do with your perception of where tax rates will be in the future: if you think income tax rates years from now will be higher than income tax rates now, then you would set up a Roth IRA. Finally, Roth IRAs are not allowed for people whose income is above certain limits.

Do not let all of this talk about tax rates confuse you. The fact remains that the IRS will tax you now or tax you later. For the 401(k) and 403(b), you are not taxed on the money now but are taxed on it when you withdraw it. For Roth IRAs, you are taxed on the money now but not when you withdraw it. Finally, there are certain IRS rules on required withdrawals, at age 70½ or 72 depending on your birth year, from 401(k)s, 403(b)s, and IRAs; this assures that the IRS gets its share if you have not yet paid taxes on the money. You should get the advice of your tax accountant on IRAs. However, for a young person, a regular IRA generally makes better financial sense than a Roth IRA.

Annuities

An annuity is a retirement vehicle, a contract with an insurance company or financial institution that provides annual payments for a specified number of years or until your death. Annuities are not substitutes for retirement plans. First, the money you contribute to buy the annuity is not tax-deductible. Second, even though you are not taxed on the investment income from the annuity until you withdraw it, this does not compare favorably with 401(k)s, 403(b)s, IRAs, or Roth IRAs. Next, the financial institution that manages your annuity usually charges high fees. Finally, the average return on an annuity is 3.27%, well below a retirement plan invested in the stock market, so it is best to discuss your retirement options with your accountant.

Companies to Handle Your 401(k)s or IRAs

There are many good companies to manage your money. However, if you are in an employer-sponsored 401(k) or 403(b) plan, your employer will have two or more companies already selected from which you can choose. There are for-profit mutual fund companies that manage funds for investors and manage retirement accounts for companies and their employees.
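Before we look at fund companies, the tax-now-versus-tax-later logic is easy to verify with a sketch. The contribution amount, return, horizon, and tax rates below are all hypothetical assumptions chosen only to illustrate the comparison:

```python
def traditional(contrib, growth, years, tax_at_withdrawal):
    """Pre-tax money grows untaxed; the whole balance is taxed on the way out."""
    return contrib * growth ** years * (1 - tax_at_withdrawal)

def roth(contrib, growth, years, tax_now):
    """After-tax money goes in; growth and withdrawals are untaxed."""
    return contrib * (1 - tax_now) * growth ** years

C, G, Y = 6_000, 1.10, 30  # hypothetical: $6,000 contribution, 10% return, 30 years

print(traditional(C, G, Y, 0.22))  # taxed 22% at withdrawal
print(roth(C, G, Y, 0.22))         # taxed 22% today -> identical result

print(traditional(C, G, Y, 0.12))  # a lower tax rate in retirement:
                                   # the traditional account comes out ahead
```

When the two tax rates are equal, the order of taxation does not matter (the multiplications commute); the choice comes down to comparing your tax rate today with your expected rate in retirement, exactly as described above.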
The largest U.S. for-profit mutual fund companies are BlackRock, the Charles Schwab Company, and Fidelity Investments. There are also nonprofits that manage funds for investors and manage retirement accounts for companies and their employees. The largest nonprofit mutual fund companies in the U.S. are Vanguard and TIAA.

I recommend that you use Vanguard. Currently, Vanguard manages \$5.6 trillion in assets and is a very low-cost mutual fund company, due to the fact that it is a non-profit. Vanguard's average mutual fund expense ratio is 0.10%, whereas the industry average is 0.63%. Their philosophy is based on research showing that no mutual fund has beaten the stock market index averages for more than two years in a row, and that it is impossible to guess which fund will be the one to beat the S&P 500 each year. Jack Bogle, Vanguard's founder, began offering index funds to customers and charging fees far below the industry average. Vanguard has since expanded into many other mutual funds in various sectors. For more, look back to the previous chapters on investing.

Each year, Vanguard reports data on all their investors in How America Saves. The report notes that in 2019, the average account balance in Vanguard retirement accounts was \$106,478. In 2019, 73% of retirement funds were allocated to stocks and the rest to bonds, cash, and other funds. There is no need to look at all the tables of asset allocation by age, but I think the contrast between the asset allocations of 25- and 65-year-olds is interesting:

Table 15.1. Asset Allocation by Age
Age | Stocks | Bonds | Cash | Other
25 to 35 | 87% | 2% | 1% | 10%
65+ | 48% | 10% | 17% | 28%

But which mutual fund should you invest your retirement fund in? The traditional, conservative money manager would advise you to invest in a mutual fund composed of either 60% stocks and 40% bonds or 70% stocks and 30% bonds. Stock prices rise in economic expansions while bond prices tend to fall, and stock prices fall in recessions while bond prices tend to rise. Thus, whatever the economic conditions, your portfolio can achieve some balanced stability. Here are the returns on these conservative portfolios as reported by Vanguard at the end of 2019:

Table 15.2. Rates of Return on Defined Contribution Plans
Period | 60/40 Balanced* | 70/30 Balanced* | S&P 500 | FTSE Global All Cap ex-U.S.
1 year | 20.8% | 22.8% | 31.5% | 21.7%
3 years | 10.0% | 10.9% | 15.3% | 9.8%
5 years | 7.6% | 8.3% | 11.7% | 6.1%
Source: Vanguard

I do not recommend either a 60/40 portfolio or a 70/30 portfolio for your retirement funds. I explain this in more detail in the chapters on investing, but for a simple reason, look at the returns of the benchmark S&P 500 above compared to the 60/40 and 70/30 balanced funds. My recommendation is to invest your retirement funds in an S&P 500 mutual fund. This fund holds all the S&P 500 stocks and will match the increases in the S&P 500 Index. According to historical records, the average annual return from the index's inception in 1926 through 2018 is approximately 10% to 11%.

I also do not agree with the conventional wisdom that you should decrease the share of stocks and increase the share of bonds in your portfolio as you get closer to actual retirement. The traditional advisor will say that your share of bonds should be related to your age: for example, when you are young, invest your retirement fund in a 60/40 mutual fund; when you turn 50, change your allocation to 50/50; at 60, change your allocation to 40/60.
Several mutual fund companies even offer what are known as Target Date Funds, which automatically increase the share of bonds versus stocks in an individual's retirement fund over time once they pass the age of 50. The traditional advice makes no sense to me, given that the average return on the S&P 500 over the historical record (10%) is double the average return on bonds (5%), and I am not the only advisor who feels this way. In my opinion, you should keep all your retirement savings in an S&P 500 mutual fund until you are two years from retirement. Then move two years' worth of your estimated retirement expenses into a bond fund, and replenish the bond fund each year so that it always holds about two years of expenses throughout retirement. That way you will continue to earn double the returns in retirement while protecting yourself from having to sell stocks in a down market to pay for retirement expenses.

Rolling Over Your Retirement Funds

Your 401(k)s, 403(b)s, and IRAs are portable; you can take them with you when you leave an employer. The funds you contribute always belong to you; however, the employer's contribution often has a vesting period (often three years) before the employer's funds belong to you. You can leave the funds with your current mutual fund manager or transfer them to the mutual fund manager at your new employer. Just make sure you have the funds transferred from manager to manager and not sent to you. The mutual fund managers are quite familiar with how to accomplish this.

Imagining Your Retirement

I know it seems quite early to think about what you will be doing in retirement, but it is certainly important to spend a little time doing so. Karl Marx said, "Man is a worker"; we want to do stuff. It is important to develop hobbies or volunteer work you will like to do in retirement. It will keep you physically and mentally active, and more importantly, it will put you in contact with other people. Loneliness will shorten your life. An active and interpersonal retirement will help you live a long and happy life.
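To close this chapter where it began, with the magic of compound interest, here is a minimal sketch of the second cardinal rule. The contribution amounts and the 10% return are hypothetical round numbers in line with the S&P 500 figures used above:

```python
def balance_at_65(monthly, start_age, annual_return=0.10, end_age=65):
    """Future value of steady monthly contributions, compounded monthly."""
    r = annual_return / 12
    months = (end_age - start_age) * 12
    return monthly * (((1 + r) ** months - 1) / r)

early = balance_at_65(200, start_age=25)  # starts at 25, $200 a month
late  = balance_at_65(400, start_age=45)  # starts at 45, saves twice as much
print(f"Start at 25, $200/month: ${early:,.0f}")
print(f"Start at 45, $400/month: ${late:,.0f}")
# Both savers contribute $96,000 in total, yet under these assumptions the
# early saver retires with roughly four times as much money.
```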
The Business Cycle and Recessions

The business cycle is the term we give to the expansion and contraction of an economy, which is measured through Gross Domestic Product (GDP). GDP is the output and sale of goods and services in an economy measured over a period of time (usually one year). Traditionally, GDP is aggregated into four broad categories, as measured by the Bureau of Economic Analysis of the U.S. Commerce Department. These categories are represented in the following equation:

$GDP = C + I + G + (X - M),$

where C is consumption expenditure, I is investment, G is government spending, and (X - M) is net exports (exports minus imports). The largest component of GDP is consumption expenditure; how comfortable consumers are opening their wallets every month has an outsized effect on GDP and the business cycle. Here are the relative values of the components of GDP for 2020 (estimated as of June 2020) in current dollars:

C | \$14.58 trillion | 68%
I | \$3.63 trillion | 17%
G | \$3.85 trillion | 18%
(X - M) | -\$0.53 trillion | -2.5%
GDP | \$21.54 trillion | 100%

The business cycle can be visualized as a graph of the value of GDP over time. Its fluctuations around its trend line are the expansions and recessions of the economy. I show a close-up of the period from 2000 to 2016 so you can see more clearly the fluctuations of actual GDP around the trend line. The graph also includes the last two recessions, March 2001 to November 2001 and December 2007 to June 2009. The blue line is the trend of GDP growth and the red line is actual real GDP; the deviation of the actual from the trend is the business cycle. The gray bars show the periods of official recessions. Note that GDP is below the trend during recessions, meaning GDP has decreased.

Recessions have both a popular definition and an official definition. The popular definition is a drop in economic activity (a drop in GDP) for two successive calendar quarters (six months). On the other hand, the National Bureau of Economic Research (NBER), a group of academic economists from around the U.S., is the official arbiter of when we are in a recession and when a recession is over. The NBER defines a recession as follows:

A recession is a significant decline in economic activity spread across the economy, normally visible in production, employment, and other indicators. A recession begins when the economy reaches a peak of economic activity and ends when the economy reaches its trough. Between trough and peak, the economy is in an expansion.

The terms peak and trough are an analogy to a wave on the ocean. There have been several business cycles in the economic history of the United States. Here is a graph of GDP and recessions (in gray bars). The graph covers 1940 to 2020, so the drops in GDP during recessions may look small. However, note that in the Great Recession, GDP dropped 4.1% and 8,500,000 employees lost their jobs. On the last line of the chart above, it states that since the end of WWII there have been 12 business cycles (recessions and expansions), including the Pandemic Recession. Recessions have lasted on average 11.1 months, while economic expansions have lasted on average 64.5 months (a little over five years).

Does this mean that we can predict recessions? If that were possible, we could all become millionaires. As you will see from the graph below, the stock market (the S&P 500 Index) drops 6 months to one year before a recession and begins trending upward again 6 months or less before the end of the recession. That means if we could predict a recession, we could predict the stock market.
Unfortunately, the time between recessions and, to a lesser extent, the length of recessions are too variable for us to predict them accurately. At an economic conference, I was able to ask Robert Hall, chair of the National Bureau of Economic Research Committee on Business Cycles and professor at Stanford University, whether anyone can predict recessions. Dr. Hall said no one can predict recessions accurately.

There are several characteristics of the business cycle that may not be immediately apparent from the graphs and charts above but are important to understand. Dr. Daron Acemoglu of MIT states these:

• Many aggregate macroeconomic variables move together in the business cycle. In the NBER's definition of a recession, they lay out the most important economic variables they use to determine the business cycle: "…real GDP, real income, employment, industrial production, and wholesale-retail sales" (NBER.org).
• It is very hard, if not impossible, to predict the turning points in the business cycle. As I mentioned earlier, Dr. Hall said it is impossible to predict recessions. It is equally impossible to predict the turning point of a recession, when the economic expansion begins.
• There is a persistence to the rate of economic growth. If the economy is growing in one quarter, it will likely grow in the next quarter as well. Conversely, if the economy is in a recession in one quarter, it is likely to decline again in the following quarter (Acemoglu, Laibson, & List, 2018).

There are strong psychological reasons for the persistence of the rate of economic growth. Economists today call them expectations of the future by consumers and firms. John Maynard Keynes, the father of modern macroeconomics, called these expectations animal spirits; we now call them consumer sentiment and business expectations. The fact remains that humans tend to think the near future will be a replication of the current period and act accordingly (this is an important tenet of Behavioral Economics).

Further, there are important economic reasons for the persistence of the rate of economic growth, mainly the circular flow of the economy. In this simple model, there are two agents: individuals (or households) and businesses (or firms). Individuals sell their labor to businesses and receive income (wages) in return. Businesses use this labor along with factories, equipment, and raw materials (physical capital) to make goods and services. Businesses sell the goods and services to individuals, who use the income they received for selling their labor to pay the businesses for the goods and services they buy (expenditures). The circular flow of an economy contributes to the persistence of the trend of economic growth (or decline).

Recessions most often begin when consumers slow down their spending on goods and services. Historically, this has been caused by a financial crisis of some sort that leads consumers to run up their debts too high. Consumers slow down their spending. The sales of businesses decline due to the decreased spending, usually showing up first as a decline in consumer durables (consumer durables are items that last three years or more, such as automobiles and appliances). With the decline in sales, businesses cut back on making goods and services. Consumers buying fewer goods and services means that businesses do not need as many workers as they currently have. Because a recession is a short-term phenomenon, firms do not sell their factories and equipment; they just lay off workers.
These layoffs mean a decrease in aggregate income for consumers overall, which decreases aggregate expenditure on goods and services, leading to further layoffs. The initial drop in consumer expenditure on goods and services usually leads to further drops in consumer expenditure due to layoffs, making the recession worse. The circular flow also helps explain the persistence of economic growth. Increased purchases of goods and services by consumers result in businesses expanding production and hiring more workers. Then the increased aggregate income of consumers results in more purchases of goods and services and the hiring of more workers to make those goods and services. During these economic fluctuations, the hiring and firing of workers do not happen instantaneously, but they can happen quite quickly and historically have always moved together. Another way of saying this is that there are lags in the co-movement of these two variables.

The Pandemic Recession was an exception to the historical start of a recession (a financial crisis or excessive consumer debt) because the government-mandated COVID-19 lockdown resulted in massive layoffs of workers, especially in the hospitality industry. As you will see in the chapter on the Pandemic Recession, the U.S. government enacted a huge fiscal and monetary policy stimulus in order to counter the economic effects of the lockdown. Despite that, consumers hoarded their money, a result of consumer sentiment. This is another example of the persistence of the rate of economic growth: when the economy is going up, it tends to continue going up, and when it is going down, it tends to continue going down. Of course, the government can do a lot of things to keep the economy rolling and to help bring the economy out of a persistent recession.

Government Tax Policy

The government taxes us for two main reasons. The first is to run the functions of the government and provide services, such as national security, regulation, commerce, and more. The second reason is income redistribution through welfare payments, unemployment compensation, and aid to lower-income people in the country; these payments are called transfer payments. The government not only collects individual income taxes, it also collects corporate, Social Security, unemployment, and other types of taxes. These are deductions from our GDP, and as such, the amount of taxes and the percent of GDP taxed influence the amount of consumer spending, corporate investment, and other aspects of the economy. This can be expressed simply:

$GDP = national\ income.$

This is true theoretically, and with a few minor adjustments, it is also true in the real world. Everything we make, we sell. The income from those sales goes to someone in the United States as income. So taxing GDP is taxing our national income, and the more the government takes, the less there is for consumers and corporations to spend. I am sad to say, though, that "the only certain things in life are death and taxes," so taxes are here to stay.

The U.S. government budget for fiscal year 2020 is below. (The government's fiscal year 2020 runs from October 1, 2019, to September 30, 2020.) The numbers are in billions; for example, the total revenue for 2019, listed as 3,463, is \$3 trillion and \$463 billion.
Table 16.1. U.S. Government Budget for Fiscal Year 2020

Revenues (Billions) | 2019 Actual | 2020 | 2020 as % of 2020 GDP
Individual income taxes | 1,718 | 1,791 |
Payroll taxes | 1,243 | 1,302 |
Corporate income taxes | 230 | 234 |
Other | 271 | 305 |
Total | 3,463 | 3,632 | 16.4%
On-budget | 2,548 | 2,672 |
Off-budget | 914 | 960 |

Outlays (Billions) | 2019 Actual | 2020 | 2020 as % of 2020 GDP
Mandatory | 2,734 | 2,910 |
Discretionary | 1,338 | 1,413 |
Net interest | 375 | 383 |
Total | 4,447 | 4,706 | 21.3%
On-budget | 3,540 | 3,748 |
Off-budget | 907 | 958 |

Deficit (-) or Surplus (Billions) | -984 | -1,073 | -4.9%
On-budget | -992 | -1,075 |
Off-budget | 8 | 2 |

Debt Held by the Public (Billions) | 16,801 | 17,835 | 80.7%
Memorandum: Gross Domestic Product | 21,220 | 22,111 | 100%
Source: Congressional Budget Office, March 2020

Since budget numbers change every year and inflation affects our incomes and the cost of goods and services to the government, we should look at historical data so that we can evaluate whether these numbers are above or below average. Revenues and outlays as percentages of GDP are a good benchmark. We can see those numbers in the table below.

Table 16.2. Revenues, Outlays, Deficits (or Surpluses), and Debt Held by the Public, as a Percentage of GDP

Year | Revenues | Outlays | Deficits/Surpluses | Debt Held by the Public
2007 | 18.0 | 19.1 | -2.4 | 35.2
2008 | 17.1 | 20.2 | -4.4 | 39.4
2009 | 14.6 | 24.4 | -10.7 | 52.3
2010 | 14.6 | 23.3 | -9.2 | 52.3
2011 | 15.0 | 23.4 | -8.9 | 60.8
2012 | 15.3 | 22.0 | -7.1 | 70.3
2013 | 16.7 | 20.8 | -4.3 | 72.2
2014 | 17.4 | 20.2 | -3.0 | 73.7
2015 | 18.0 | 20.4 | -2.6 | 72.5
2016 | 17.6 | 20.8 | -3.3 | 76.4
2017 | 17.2 | 20.6 | -3.7 | 76.0
2018 | 16.4 | 20.2 | -3.9 | 77.4
2019 | 16.3 | 21.0 | -4.7 | 79.2

The government's revenue as a percent of GDP (the first column in the table above) can be considered the average tax rate. This is because GDP is equal to Gross National Income (GNI): for everything made in the United States (the GDP), the money goes to some person or corporation in the United States. According to the table above, when the revenue of the U.S. government as a percentage of GDP decreases (often through tax cuts) and outlays as a percentage of GDP do not decrease (that is, there are no spending cuts), you get big annual deficits.

In theory, the government budget is similar to your household budget. If you spend more than you earn, you have to borrow from your credit cards to make up the difference. If a government spends more than it takes in through taxes, it must issue more Treasury Bonds to finance that deficit, and, by definition, the national debt increases. The problem with increasing your credit card debt, or a government increasing its national debt, is that you each have to pay it back. You or the government must make paying down the debt a priority over spending on anything else, or your credit rating goes down, and more debt decreases your ability to buy goods and services. There is a crucial difference, though: you cannot easily increase your income if you have higher debt and want to maintain your former level of spending, whereas governments can raise taxes to maintain their level of spending.

Government Spending

The government's revenues come principally from individual income taxes and payroll taxes (Social Security and Medicare tax deductions). The spending (outlays) goes to pay for (in order of size) transfer payments, the military, and running the government. Here is a graph of the revenues and outlays of the U.S. Federal Government for 2019. Similar to your household budget, if the government spends more than it collects in revenue, it has to borrow money to finance the deficit.
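The deficit arithmetic is easy to check. A minimal sketch using the CBO's 2020 projections from Table 16.1 (figures in billions of dollars):

```python
# 2020 projections from Table 16.1, in billions of dollars.
revenues = 3_632
outlays = 4_706
gdp = 22_111
debt_held_by_public = 17_835

deficit = revenues - outlays
print(f"Deficit: {deficit} billion")  # -1,074 (the table's -1,073 reflects rounding)
print(f"Deficit as % of GDP: {100 * deficit / gdp:.1f}%")           # -4.9%
print(f"Debt as % of GDP: {100 * debt_held_by_public / gdp:.1f}%")  # 80.7%

# Each year's deficit is financed with new Treasury Bonds, so next year's
# debt is roughly this year's debt plus this year's deficit.
next_year_debt = debt_held_by_public - deficit  # the deficit is negative
print(f"Implied debt a year later: {next_year_debt} billion")
```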
In the case of the U.S., the government does this borrowing by issuing more Treasury Bonds. Since the national debt is defined as the amount of Treasury Bonds outstanding, financing the annual deficit each year with additional Treasury Bonds increases the national debt. The graph below shows the revenues and outlays of the U.S. government through the year 2017. The red part of each year's bar represents the annual deficit. Because of the scale of the graph, the red section may not look large, but in the later years, the deficit is \$1 trillion. In 2019, the federal budget deficit was \$984 billion. Moreover, due to the \$2.2 trillion CARES Act, the federal budget deficit in 2020 is projected to be well over \$2 trillion.

Here is the history of federal revenue and spending through 2017. In July 2020, the Congressional Budget Office (CBO) projected federal revenue and spending through 2030. The difference between the two is the annual deficit, and this deficit must be financed by issuing additional Treasury Bonds. Note that the CBO projections show trillions of dollars in deficits each year that must be borrowed. Every time the government runs a deficit, the Treasury Department must issue more Treasury Bonds to finance it; since the national debt of any country is defined by the outstanding amount of Treasury Bonds, the national debt increases. Note that the government generally highlights only the Treasury Bonds held by the public and not those held by the Federal Reserve Bank or by the Social Security Administration. As of the end of 2019, the U.S. national debt was \$23.2 trillion and is continuing to balloon. The graph below shows total U.S. debt held by the public.

Additionally, the Treasury Department continues to borrow heavily to pay for the economic stimulus programs created by the \$2.2 trillion CARES Act, enacted in March 2020 as counter-cyclical fiscal policy to alleviate the Pandemic Recession. Since there was also pandemic fiscal policy legislation immediately prior to the CARES Act, Congress has committed to approximately \$3.6 trillion in additional discretionary fiscal spending in 2020. The graph below shows the monthly borrowings of the U.S. Treasury to finance its fiscal deficit from 2010 to August 2021. The Treasury Department announced in August 2020 that it planned to borrow a total of \$4.5 trillion in fiscal year 2020 to finance that year's deficit.

As reported by the Peter G. Peterson Foundation, through mid-2022 the various fiscal stimulus plans of both the Trump administration (2017 to 2020) and the Biden administration (2021 to present) cost the U.S. government (and U.S. taxpayers) \$5.3 trillion. The 2022 U.S. GDP is approximately \$25 trillion in current dollars, so this stimulus was over 20% of GDP, a historically unprecedented amount. The stimulus has ballooned the U.S. national debt to \$31 trillion, or 124% of GDP, also historically unprecedented. The national debt has grown every year, under both Republican and Democratic administrations.

Government Recession Counteractions

Congress and the President have power over taxation and government spending. To understand the influence they can have on increasing GDP in a recession, we need only look again at our definition:

$GDP = C + I + G + (X - M).$

If any of the components of GDP increases, then by definition GDP will increase. When we are in a recession, the government can increase GDP by several actions involving taxation and spending.

Taxation

1.
Congress and the President can decrease income tax rates, thereby giving consumers more disposable income. Since consumers on average spend 95% of their disposable income, they will spend most of it on goods and services, thereby increasing GDP. This increases consumption, but it is a much slower way to get money into consumers' hands than a stimulus check.

2. Congress can give consumers a tax rebate; that is, a tax refund to everyone or to a large number of people. For example, during the Pandemic Recession, Congress sent a tax rebate of \$1,200 to everyone who made less than \$75,000. Again, the expectation is that consumers will spend most of the rebate, and this gets money into consumers' hands quickly. However, during the Bush era, most people used their stimulus checks to pay down credit card bills, which does not stimulate the economy.

3. Congress can increase unemployment compensation by increasing the amount paid or lengthening the time that laid-off workers can collect it. Regular unemployment compensation lasts only 26 weeks, and since it is administered by the states, the weekly amount paid varies widely, from \$235 per week in Mississippi to \$649 per week in Connecticut. During the Great Recession, Congress extended unemployment compensation to 99 weeks and funded it with federal money. It also authorized a massive increase in the Food Stamp Program (called SNAP), which was often used by SNAP administrators to replace unemployment compensation after a worker used up their 99 weeks of payments. In the CARES Act, Congress added a \$600 per week payment for everyone collecting unemployment compensation, which expired on July 31, 2020. The CARES Act also added unemployment benefits for the self-employed and gig workers, who are not eligible for state unemployment compensation. This gets money to the unemployed so they do not get desperate, and it helps maintain consumption at previous levels. However, many conservatives believe that the total unemployment compensation for a lot of individuals was more than they earned when working, creating a disincentive to going back to work; that might make it difficult to open up the economy again.

4. Congress can enact a payroll tax cut for individuals. Individual workers have 6.2% of their pay deducted for Social Security insurance and 1.45% for Medicare taxes; these are called payroll taxes, as opposed to income taxes. President Obama reduced payroll taxes for individuals during the Great Recession in order to give more disposable income to consumers. President Trump advocated a payroll tax cut during the Pandemic Recession, but it was rejected by both Republicans and Democrats. A payroll tax cut puts more money in consumers' hands, which they will hopefully spend. However, it is not as quick as a \$1,200 rebate check, and critics say that a payroll tax cut helps the employed but not the unemployed (of whom there were 20 million).

5. Congress can authorize the expansion of other transfer payments, such as food stamps and welfare payments, to help struggling people who are out of work. This gets money to struggling people.

Spending

1. Congress can allocate extra money for construction projects, such as roads and bridges. This increases employment almost immediately and brings more money into the economy, particularly into the hands of construction workers, whose wages are about 55% of construction project costs.
For example, \$105 billion of the \$787 billion stimulus package that the Obama administration passed in 2009 was allocated to "shovel-ready" construction projects by state and local governments, including roads, bridges, rail projects, and internet infrastructure. Government spending increases GDP on a dollar-for-dollar basis, unlike sending stimulus checks to households (who spend 95% and save 5%). It also creates a multiplier effect, as the construction workers spend the wages from these jobs. However, a lot of economists say this program was not successful, for two reasons: several of the "shovel-ready" projects had long delays, and, as economist Robert Hall of Stanford noted, governments that had "shovel-ready" projects had already arranged funding for them, so the stimulus did not increase the number of projects significantly; it merely replaced the financing for existing projects.

2. Congress can authorize aid to state and local governments. The G in the GDP equation includes all spending by federal, state, and local governments, so increased spending by states or local municipalities will increase G, which then increases GDP. State and local government revenue can decline dramatically during lengthy or deep recessions, threatening layoffs of police, firefighters, and teachers. Sending aid to states and local municipalities can at the least avoid these layoffs and, depending on the amount authorized, help stimulate the economy. For example, the Obama-era stimulus package allocated \$144 billion to state and local aid; the Congressional Budget Office estimated this aid saved or created approximately three million jobs. This local aid saves and creates jobs. However, it can be difficult to wean state and local governments off the federal aid.

3. The Federal Government can just hire people. The IRS has faced massive budget cuts over the last twenty years. President Trump ordered large budget cuts at the State Department. There are likely dozens of federal agencies that could use more help. This creates jobs, which is what government stimulus is all about. For conservatives, though, this is seen as expanding the role of "Big Government."

4. The Federal Government can go to war. I dislike bringing up this federal policy alternative, but for the longest time, the perceived wisdom was that war is good for the economy. It sent men and women overseas, thereby decreasing unemployment, and it involved huge government expenditures on arms and personnel. For example, starting in 1940, President Franklin Delano Roosevelt enlisted 16 million soldiers to serve during World War II, which was 30% of a total labor force of 53 million (U.S. Census, 1940). The unemployment rate in the Great Depression (1929-1939) reached 25% of the labor force, so WWII created full employment. A number of historians and economists (including me) opine that FDR willingly embroiled the U.S. in WWII in order to end the Great Depression. This theory may be hard to prove, but the actions of FDR in 1940 were certainly more than coincidental. While it is true that war was viewed as good for the economy in the past, ever since the end of the Vietnam War, the majority of economists have concluded the opposite. Consider the fact that three million U.S. soldiers served in Vietnam, but only 13,000 served in Afghanistan and only 5,000 in Iraq; that is not enough to affect the unemployment rate.
In addition, the Afghanistan and Iraq wars cost over $3 trillion, money that could have been better used for domestic policy purposes.

The Federal Reserve Bank

The Federal Reserve Bank of the United States is the bankers' bank. It issues charters for banks to operate and regulates all the commercial banks in the country. If a bank in the United States does not follow the Fed's rules or if it ends up insolvent, the Federal Reserve will seize the bank and have the FDIC liquidate it. The Federal Reserve Bank also creates the money we use in this country and is responsible for conducting Monetary Policy. Monetary Policy is the active use of interest rates and of increases or decreases in the Money Supply to steer the U.S. economy. In implementing its Monetary Policy, the Fed has two objectives:

1. To achieve full employment in the U.S. economy
2. To keep inflation under control (the target rate of the Fed is 2% annual inflation, defined as a 2% annual increase in the prices of Personal Consumption Expenditures, not including food or energy)

Theoretically, the Federal Reserve Bank can create unlimited amounts of money, but in normal times, the Fed increases money just enough to support the growth of GDP, because everyone needs money to buy goods and services. For example, if a trillion dollars of money buys two trillion dollars of GDP in the course of a year and you expect GDP to grow by 10% this year, the Fed needs to facilitate that growth by creating 10% more money and injecting it into the economy. This relationship between GDP and the Money Supply is central to the Quantity Theory of Money.

The Quantity Theory of Money states that the growth rate of Gross Domestic Product (nominal GDP) and the growth rate of the Money Supply are equal:

$growth\ rate\ of\ Money\ Supply = growth\ rate\ of\ nominal\ GDP \qquad (A)$

This is supported in the long run by empirical evidence. Moreover, we can use this equation to create other equations. Since we can separate the growth rate of nominal GDP into the growth rate of real GDP plus the growth rate of prices over time (inflation), we can write this as:

$growth\ rate\ of\ nominal\ GDP = growth\ rate\ of\ real\ GDP + inflation \qquad (B)$

We can then substitute equation (B) above into equation (A) and get:

$growth\ rate\ of\ Money\ Supply = growth\ rate\ of\ real\ GDP + inflation \qquad (C)$

Rearranging (C) gives us the Inflation Equation:

$inflation = growth\ rate\ of\ Money\ Supply - growth\ rate\ of\ real\ GDP \qquad (D)$

Equation (D) tells us about an important constraint on the Fed's ability to create unlimited amounts of money. If the Fed allows the Money Supply to grow faster than the growth rate of real GDP, prices will rise; that is, we will have inflation. For example, if the Money Supply grows by 7% while real GDP grows by only 3%, equation (D) implies inflation of about 4%. The Inflation Equation prompted Nobel Prize-winning economist Milton Friedman to state, "Inflation is always and everywhere a monetary phenomenon." Friedman advocated for a steady rate of monetary growth at a moderate level. This, he felt, would provide a framework under which a country can have little inflation and much growth. If we look at the data, it appears that the Fed has followed this advice (until just recently).

Unfortunately, the Quantity Theory of Money assumes a constant ratio of annual Gross Domestic Product to annual Money Supply (M2). This ratio (GDP/M2) is called the Velocity of Money. It shows the turnover of M2 or, equivalently, how many dollars of GDP one dollar of M2 buys in a year. When Milton Friedman was doing his Nobel Prize-winning research, this ratio was quite stable. However, this relationship has broken down recently, so we likely need to revise macroeconomic theory. See the graph of the Velocity of Circulation below.

The Federal Reserve is very diligent about trying to fulfill its dual mandate of full employment and low inflation.
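To make the Quantity Theory arithmetic concrete, here is a minimal Python sketch (my own illustration, not from the text; all numbers are hypothetical) of the velocity ratio and the Inflation Equation (D):

```python
# Toy Quantity Theory calculations (all numbers hypothetical).

def velocity(nominal_gdp: float, m2: float) -> float:
    """Velocity of money: dollars of GDP that one dollar of M2 buys in a year."""
    return nominal_gdp / m2

def implied_inflation(money_growth: float, real_gdp_growth: float) -> float:
    """Inflation Equation (D): inflation = money growth - real GDP growth."""
    return money_growth - real_gdp_growth

print(velocity(nominal_gdp=20e12, m2=10e12))                        # 2.0
print(implied_inflation(money_growth=0.07, real_gdp_growth=0.03))   # ~0.04, i.e., 4% inflation
```

Equations aside, the Fed's day-to-day work is conducted through interest rates and the Money Supply.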
They accomplish this by lowering short-term interest rates and increasing the Money Supply during recessions. This makes it cheaper for commercial banks to borrow money in the wholesale credit markets and for bank customers to borrow money. Conversely, the Fed raises short-term interest rates and decreases the Money Supply when inflation appears to be in danger of moving above the target rate of two percent. This makes it more expensive for both the banks and their customers to borrow money.

The short-term interest rate that the Fed controls is the Federal Funds Rate, the rate at which banks lend to each other overnight. However, all short-term interest rates in the market follow the Fed Funds Rate, so this rate effectively becomes the wholesale cost of funds to the banks; that is, it is the rate at which banks borrow money. You can see their historical record in the following graph. (The gray bands are recessions.) The lesson here is that the Fed has dropped short-term interest rates when recessions occur in order to stimulate the economy and, conversely, has raised short-term rates in economic expansions to guard against inflation accelerating. Although forecasting the future is very difficult to do, the Fed tries, with its economic modeling, to anticipate the movements of the business cycle at least a year in the future; based on that, it calibrates its interest rate actions. The full effects of Federal Reserve Monetary Policy actions take approximately eighteen months to filter through the credit markets and the economy.

You can see from the graph above that in the previous recession (December 2007 to June 2009) the Fed Funds Rate was reduced from 5% to 0%, and it was reduced to 0% again in the Pandemic Recession. This means that the banks' cost of money was almost zero, leading all short-term rates to drop precipitously again. Over the long run, the average Federal Funds Rate has been 4.74%, so dropping the Fed Funds Rate to 0% is extraordinary.

Sometimes the dual objectives of the Fed (full employment and low inflation) are in conflict. The inverse relationship between unemployment and inflation is known as the Phillips Curve, after the economist who first wrote about the phenomenon. The historical record shows strong evidence for this relationship. Note in the graph above that when the unemployment rate goes up (the blue line), the inflation rate (the red line) goes down (often with some lag). However, the traditional Phillips Curve (the inverse relationship of inflation and unemployment) has broken down in the last ten years, as you can see from the following graph. The unemployment rate decreased to a fifty-year low of 3.5% in February 2020, and inflation stayed below the 2% target of the Federal Reserve Bank. There are a number of reasons for this, but I do not have the space here to talk about them in detail. For the rate of inflation, I am using the preferred inflation measure of the Federal Reserve Bank: the change in prices of Personal Consumption Expenditures not including food and energy (which the Fed considers too volatile). This price index is known as the PCE Price Index excluding food and energy, and its annual change is the rate of inflation.

The Fed not only lowered short-term interest rates to the lowest in sixty years; beginning in the Great Recession, it also performed some unprecedented actions under Chair Ben Bernanke to bring long-term interest rates down dramatically.
At the start of the Great Recession, the Federal Reserve Bank had approximately $800 billion worth of assets. In 2008, the Fed increased its assets to $2 trillion by increasing the money on its computers. This is sometimes called printing money, but no paper currency is actually printed; instead, money appears electronically out of thin air. This is why money that has no gold or silver standard behind it is called fiat money. If you look at the graph below, you can see that from 2008 to 2015, the Fed increased its assets to approximately $4.5 trillion. It used this money to buy long-term U.S. Treasury bonds and mortgage-backed bonds issued by Fannie Mae and Freddie Mac, the government home mortgage agencies. This brought down long-term interest rates to their lowest in 50 or 60 years for Treasury and corporate bonds, as well as for home mortgages. The financial markets dubbed this action Quantitative Easing, although Ben Bernanke has said that he did not favor the term, instead preferring the name "long-term asset purchases."

As is evident from this graph, the Fed began to dispose of the trillions of dollars in bonds in 2018 by selling small amounts per month on the open market. However, as the Pandemic Recession took hold, the Fed revived its playbook from the Great Recession and increased the money on its computers within one month to $7.1 trillion. This additional money has been used to lend to banks, to buy more Treasury bonds, and to buy more bonds issued by Fannie Mae and Freddie Mac. The Fed is also buying bonds issued by major corporations and has begun a program called "Main Street Lending," whereby the Fed has set aside $500 billion to lend to mid-sized corporations, guaranteed by the U.S. government. The effect of the Fed's increased demand for long-term bonds has brought the 10-year Treasury bond yield to 0.6% and 30-year home mortgages to 2.98%. These are the lowest rates ever in the history of U.S. financial markets. Additionally, Fed Chair Jerome Powell has stated that the Fed will keep both short-term and long-term interest rates at this low level "as long as it takes" for the economy to recover.

The fiscal stimulus passed by Congress and the monetary stimulus enacted by the Federal Reserve Bank worked remarkably well. By 2022, the unemployment rate had dropped from almost 15% back to a low of 3.5%, the lowest in about 60 years. However, in a cruel economic reversal, inflation took off, fueled by supply-chain bottlenecks due to the pandemic and by the Russian invasion of Ukraine. By early 2022, inflation was about 8% nationally in the U.S. The Federal Reserve abruptly reversed its program of monetary stimulus and in 2022 raised short-term rates (the Federal Funds Rate) from effectively 0% to 3%. This caused the 10-year Treasury bond yield to rise to 4.2% and 30-year mortgage rates to rise from 3% to 7%. As a result, a number of economists and financial institutions predicted a recession in early 2023; whether one will occur is unknown as of this writing.

Modern Monetary Theory

The most interesting development in economics within the last decade has been the growing popularity of Modern Monetary Theory (MMT). If you are not an economist, do not worry. I am going to explain this in the simplest way possible. MMT, distilled to its essence, posits that for nations with a fiat currency, the only restraints are real restraints, not financial restraints.
Meaning, the restraints an economy faces are not how much money is contained in the budget but rather what labor and resources it has available to utilize. This theory and its application are built off a few key assumptions:

1. The federal government cannot default on its debts since it issues its own currency. Theoretically, if the government is $27 trillion in debt, it can just issue another $27 trillion to remove the debt. Some of you are immediately going to shout "inflation!" Trust me, that will be addressed later.
2. MMT proponents see (involuntary) unemployment as evidence that the economy is operating under capacity. For the sake of better utilizing our resources and making the country better, we should be striving to reach full employment.

For now, let's just take a step back and analyze some underlying features of the US monetary and fiscal systems.

Fiat Money

In response to inflation in the early 1970s, the Nixon administration made the decision to pull the United States off the gold standard. For years, anyone could trade in their US dollars to the federal government and receive the equivalent worth in gold. This placed an extreme limitation on the amount of money the US government could circulate in the economy; it needed to hold enough gold to meet the demand of those wanting to exchange it. It also meant that trade and fiscal deficits were inherently rocky waters. If a foreign government suddenly asked you to pay your debts in gold, you had better hope you had it on hand. Nixon's decision to untether the US dollar from gold transformed it into a fiat currency, a fancy term which essentially means the US dollar has value not because of its worth in gold but because, well, the government said it does (Kelton, 2020).

This sounds ridiculous, of course. The government just decided some arbitrary bill has value, but how do they possibly enforce that? What's stopping me from issuing my own currency? In fact, many US businesses in the early 1900s did just this. In remote areas where one business would employ all the townspeople, wealthy owners would often pay their workers in "company store dollars." These were only redeemable at stores owned by the same person, which itself was a harsh and exploitative system that prevented workers from breaking out of their socioeconomic class (Richman, 2018). This all changed for two reasons. First, federal laws and greater financial oversight eventually ensured people had to be paid in real money. And second, the government established the federal income tax.

Taxes Make Fiat Currency (and the Economy) Work

While you and I can create our own forms of money, we will still need US dollars to pay taxes. Failing to do so will result in legal penalties, including jail time. Because everyone must pay taxes, everyone has a demand for US dollars (Kelton, 2020). You can choose to hold your assets in bitcoin, emeralds, or property, but every year, you need to convert at least some of that to pay the federal government. Typically, everyone assumes the government levies taxes to finance public expenditure; that is, it relies on our funds to pay for defense spending, education, social welfare, and everything else in its purview. Again, this is misleading, and the way taxes work reveals why. Many people assume the government operates like a household or business.
Essentially, they believe that the government drafts a budget, decides on discretionary spending for the year, builds a tax plan around that spending, collects the taxes, and then redistributes the money into the chosen programs. This sounds logical. After all, if you or I were to start up a business, this is approximately the model we would follow; we would figure out what resources we need and then use our capital to acquire them. However, for the government, this process is inverted. The government does not collect taxes and then redistribute them; rather, it spends money and then collects it back in taxes. The government does not draft its budget and then sit on Capitol Hill waiting for the money truck to arrive and place it in the chosen programs; it spends first and then collects the difference from the people. Often the government runs deficits, meaning it spends more than it collects from taxes. Americans are then filled with anxiety that a day will come when those deficits have to be paid back, and their children and grandchildren will be forced through brutal austerity measures, drained of every dollar they have, leaving our country doomed. According to MMT, this is hyperbolic.

Debts Do Not Matter

Now that we have all the groundwork laid out, it's finally time to discuss the descriptive claims of MMT. The first, and most shocking, is that fiscal deficits do not matter. No matter how much the government is in debt, since it is a currency issuer, it can always pay it back. If China came over tomorrow and asked for its trillions of our dollars back, we could just change a number in a spreadsheet, and it would be done. This is because with a fiat currency, the government is not tethered to the supply of any other resource. And as a currency issuer, the government has no (legal) limit on the amount of money it can choose to generate in an economy. There are ramifications, of course, but we will address those in a bit.

The Point of Taxes

If the government does not need us to pay down its debts and obligations, then why does it bother taxing? Simple: taxes do not exist to help the government pay for things; the government can create as much money as needed in order to pay for what it wants. Instead, taxes are meant to incentivize people to work, to make them interact in an economy (Kelton, 2020). The government does not need you or me to pay its debts down, but it does need us to attend schools, sell T-shirts, see the doctor, and perform just about every other economic (and legal) interaction possible. Since every year we must come up with a certain amount of money to pay taxes, we must all interact in the economy to obtain this money. There are other notable uses for taxes, such as controlling inflation, redistributing wealth and income, and encouraging or discouraging certain behaviors (smoking, pollution, etc.), but those all can wait for now.

It should be noted that this analysis is not applicable to every nation in the world. MMT only applies to countries with fiat currencies that are also currency issuers. Countries in the Eurozone do not have access to MMT: none of them issue their own currency, meaning that if asked to pay their debts, they cannot simply produce that money out of thin air. Greece found itself in this situation after a series of considerable financial crises following the 2008 recession.
Certain EU rules also prohibit declaring bankruptcy (see Adults in the Room by Yanis Varoufakis), but to be brief, had Greece still been using drachmas, and had the drachma existed as a purely fiat currency, Greece might have been able to navigate that situation much better, without Germany imposing brutal austerity measures on it for borrowing (Kelton and Varoufakis, 2020).

But then this does raise the question: if the US government can theoretically pay down its debts today, then why shouldn't it? Well, because historically, this has always ended badly. If you follow patterns throughout the history of the country, every time the US government has been in surplus (and sometimes even paid off its entire debt), a recession or even depression has followed (Kelton, 2020). The most recent example was when the dot-com bubble burst following surpluses under the Clinton administration, but some proponents of MMT have gone so far as to say that the surpluses contributed to the 2008 Great Recession as well. In the 1830s, the US government paid off its entire debt, and what soon followed was one of the worst depressions this country has seen.

The reasons for this are again complicated, but they can be explained in a simple flow model. When the government runs a deficit, it means that it has printed more money than it has taxed out. For example, if the government prints $100 and distributes it to people, then taxes out $90, the people are left with a surplus of $10. However, if the government is in surplus, then the opposite has happened: the government issued $100, taxed out $110, and the people now hold a $10 deficit (Kelton, 2020). Unlike the government, households can become insolvent. Meaning, while the government can hold onto debt as a currency issuer, people cannot hold onto debt indefinitely. Eventually, households will have to pay for their mortgages, and since they cannot issue money, they will default. Hence, people will have less money, they will start spending less, and the entire house of cards will collapse, taking the economy with it. Thus, even though the government could pay off its debts tomorrow, according to MMT, it is not something it should do.

From here on out, I am going to be oversimplifying some technical aspects of how the monetary system works. Unless you intend to major in economics or something finance related, you probably will not need to know all of this on a deep level.

Inflation

It's time to address the elephant in the room, the bogeyman that gives every economist nightmares: inflation. Inflation is the idea that as more money circulates in the economy, the value of each individual dollar (or whatever the chosen unit of account) is comparatively less. Put simply, the more money the general public has, the more the prices of goods and services rise to compensate. The value of money theoretically lies in its scarcity, just as the value of any good or commodity lies in scarcity; this is the underlying principle of economics. If tomorrow everyone suddenly had 10,000 extra dollars, and markets were behaving perfectly with no delays, then we would expect the prices of nearly every good to increase. Some would increase more in price, some would increase less, but let's ignore those technical details for a moment. Economists tend to agree that a moderate level of inflation is a sign of a prosperous and well-functioning economy, but once inflation crosses a certain threshold, it puts an economy into dangerous territory.
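To see the mechanics, here is a toy Python simulation (my own illustration, not from Kelton; it leans on the Quantity Theory logic from earlier in this chapter, with velocity and real output held fixed at hypothetical values):

```python
# Toy model: with velocity V and real output Q held fixed, M * V = P * Q
# implies the price level P rises one-for-one with the money supply M.
# All numbers are hypothetical.

V = 2.0       # velocity of money
Q = 1000.0    # real output, held fixed
M = 500.0     # initial money supply (price level starts at M * V / Q = 1.0)

for year in range(1, 6):
    M *= 1.5  # money supply grows 50% per year while output stagnates
    P = M * V / Q
    print(f"Year {year}: money supply = {M:,.0f}, price level = {P:.2f}")
```

Push the money growth rate high enough in this toy model and prices explode, which is exactly what hyperinflation looks like.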
One of the most famous examples of hyperinflation is what happened in the Weimar Republic. For a period in the 1920s, prices doubled every 3.7 days, and the monthly inflation rate was 29,500% (Toscano, 2014). There are countless stories of people showing up with wheelbarrows full of money to purchase the goods they needed. The government changed currencies to bring stability back to the nation, but the effects were devastating enough to facilitate the birth of fascism, the rise of the Nazi party, and the beginnings of WWII's eastern front. Given that, it makes sense why most economists would be wary of inflation exceeding a certain threshold. Most economists think around 2% per year is a stable, prosperous, and preferable level. The Fed agrees. It targets this rate every year by adjusting interest rates (do not worry about how for now) and tries to maintain a certain level of unemployment in order to prevent hyperinflation.

This is the part where MMT differs considerably from the prevailing economic theory. MMT asserts that the "natural rate" of unemployment is something which cannot be found out through theory but rather discovered after the fact. Kelton writes:

The Fed doesn't like to wait until inflation becomes a problem before acting. Instead, it prefers to fight the inflation monster preemptively before it rears its ugly head… This kind of preemptive bias often leads the Fed to err on the side of overtightening, raising the interest rate even when it may be premature or a false alarm. Errors like these carry real consequences in the form of millions of people unnecessarily locked out of employment (2020).

For years, monetary policy has been the primary tool to battle inflation, but MMT advocates say that with responsible fiscal policy, the US government can achieve both full employment and stable prices. According to these advocates, by adjusting taxes and government spending, the country can continue to print more money. Let me give you a simplified example: if the government decides to spend an extra $1,000 on a federal project, that extra $1,000 is floated through the economy. If the government realizes it is $1,000 over its inflation target, it can just raise taxes to take that money back out, returning the country to its inflation target. Obviously, reality is not this simple, but the logic follows when considering the entire economy as a model and considering the other legal steps required.

Obtaining Full Employment

We know that the government cannot default, and printed money that threatens to increase inflation can be taxed back out. So what do we do with this? MMT proponents advocate the eradication of involuntary unemployment through a public option for work. You may have heard of this as "The Federal Jobs Guarantee," part of the backbone of the Green New Deal and popularized by politicians like Senator Bernie Sanders and Representative Alexandria Ocasio-Cortez. With this guarantee, the federal government establishes a voluntary pool for any unemployed person to enter. Once someone enters that pool, the government directs them to work on some project it has created through discretionary spending. Generally, the plan advocates spending on infrastructure, education, or research and development, fields which historically "pay back." These fields also help connect people to their own local communities and allow them to see the fruits of their labor (Kelton, 2020). While this program would always be in place, it would likely see more use during a recession or depression.
To this day, no economist has figured out how to eradicate the "business cycle" of economic downturns; what a Federal Jobs Guarantee would do is soften its effects. Critics of MMT say this would crowd out the private marketplace, but this should not be the case. The key feature of the jobs guarantee is that it is voluntary. The jobs guarantee is not a program which will always and forever prevent the private market from innovating and growing. Workers will only enter if they cannot find work otherwise, and the program keeps its wages low (but humane) so that when a private sector job is available, workers will leave the program to take it (as a rational decision in the market). Put simply, there is no crowding out because, well, they are not taking any jobs away. Theoretically, if the private market obtains full employment, then the Federal Jobs Guarantee would have zero people in its pool. And if there were people in the program, it would mean they were there because they could not find work on the private market. It helps the workers who need a job, and it should not prevent employers from filling a position. On the contrary, it may be beneficial to the private sector. The longer someone is out of work, the more their skills depreciate, and the less attractive they become to employers (Kelton). A jobs guarantee ensures they would retain their skills, discipline, and talents.

What We Cannot Do

Well, a lot of things. Despite the government being able to spend as much as it decides, it cannot create its ideal workforce, nor can it spawn materials like steel, oil, or renewable energy from thin air. The real constraints on the economy relate to labor and resources, not money. As shown, if the private market is doing well enough that there are not enough workers to build the power plant, then that power plant cannot be built. Similarly, just because there are 7 million people out of work, it does not mean we can suddenly build 100,000 fighter jets without any titanium. According to MMT, the government should not be allocating money; rather, it should be allocating resources. The government should not ask "How will we pay for it?" but "What are we capable of paying for?" and, perhaps most important, "What must we pay for, no matter the fiscal cost?"

China and Trade

We have already established that the government can never become insolvent because it is a currency issuer and can always pay back its debts. Explaining how this works with China is complicated, to say the least. It is built upon the US Treasury system, a tool used by the Fed to set interest rates. For the average person, understanding exactly how this works is probably not necessary. Instead, this quote from Kelton provides a simplified explanation:

"Borrowing from China" involves nothing more than an accounting adjustment, whereby the Federal Reserve subtracts numbers from China's reserve account (checking) and adds numbers to its securities account (savings). It's still just sitting on its US dollars, but now China is holding yellow dollars instead of green dollars. To pay back China, the Fed simply reverses the accounting entries, marking down the number in its securities account and marking up the number in its reserve account. It's all accomplished using nothing more than a keyboard at the New York Federal Reserve Bank.

In other words, paying back China is not something that should fall on the backs of your average American; instead, it could be carried out by accounting entries at the Fed.
However, it should be noted that China asking for its debt to be paid back immediately is also against its own interests. If you want to know exactly how and why, I recommend further reading.

Finally, we must consider trade. Even among economists who do not believe in MMT, protectionism and free trade are still divisive issues. Some claim that free trade is great, and that when businesses decide to outsource abroad, they are not only benefitting the domestic consumer but also foreign countries by placing businesses there. Others claim that this is exploitation, using cheaper labor to create our goods and supplies, though free trade advocates will rebut that it helps those countries develop. Regardless of these differences, most people tend to agree that there is some balance of trade and regulation which is good, just to varying degrees.

The MMT community's approach lands somewhere in the middle. They claim that free trade is beneficial to the United States and, in some ways, beneficial to the host countries. Without getting into the complicated history of colonialism and coups, just know that while there are downsides to more open trade, the MMT group argues that being a total protectionist and equalizing our trade balances are also not good for these developing nations (Kelton). To "develop," they need to stop relying on foreign imports for necessities, which is something we cannot really affect by looking only at protectionism and free trade. Once again, MMT claims that the Federal Jobs Guarantee will allow us to outsource certain businesses abroad without ruining people's livelihoods, so long as we simultaneously ensure Americans still have a public option for work. That way, foreign countries can have more money and jobs available, while Americans can benefit from cheaper commodities and have guaranteed employment.

There are certainly critiques to bring up of this model: it does not address whether people who have worked in a manufacturing plant for decades would suddenly like to switch jobs, and it does not offer a model to help these developing nations actually catch up to the developed world. Also, if a business decides to invest in a foreign country but then suddenly pulls out when things look bad (as has historically happened), that country may be left in financial ruin (Kelton). MMT offers a possible, temporary alleviation of the downsides of trade, but if we really want to help developing countries, there are other ways to do so outside of trade. Trade is generally a very complicated cost-benefit system, and you could probably spend a lifetime trying to understand all of its nuances. Kelton argues that current trade contracts often neglect the needs of the working class, and that even with the Federal Jobs Guarantee, our contracts and trade agreements still need to be fairer. From here, I will leave it to you, the reader, to decide where you stand on trade.

Criticisms

Before I address some of the criticisms, I want to stress that MMT is far more descriptive than it is prescriptive. Although it is used to justify programs like a Federal Jobs Guarantee, MMT itself is a politically neutral idea. MMT is a description of how monetary systems work, while policy like the Federal Jobs Guarantee is a prescription of what to do with these revelations. Many critiques are aimed at these prescriptive implications, arguing why MMT may not be applicable to policy, even if many pieces of its underlying theory are accurate.
N. Gregory Mankiw, a macroeconomist at Harvard, has written a working paper for the National Bureau of Economic Research called "A Skeptic's Guide to Modern Monetary Theory." One of the most striking criticisms he raises is that he worries MMT could have us fall into a feedback loop that would devastate our financial systems. According to MMT, this should not happen with proper fiscal policy, but he leads this into another large critique which perhaps should be taken seriously. The federal government as an apparatus is not the fastest thing in the world, and the economy moves pretty quickly by comparison. He concludes his paper by stating that while theoretically the government could act as an ultimate resource manager, in reality, this is probably too complex with all the bureaucratic layers involved. In other words, Mankiw does not seem to disdain the concept but sees it as ultimately impractical: one of those ideas that sounds good on paper but is too difficult to implement properly. This is where his concern about inflation seems more legitimate; if the government is over its inflation target and cannot act fast enough to reduce spending or raise taxes, the US dollar may become unstable.

Gerald Epstein wrote an entire book critical of MMT. His critiques were based not so much in theory as in presentation and practical implementation. Epstein thinks that most MMT advocates encourage higher spending and deficits without specifying on what that money should be spent. This is a rather nebulous claim, so I do not think it should be regarded too seriously. However, he does raise some legitimate points. Epstein believes that MMT is a privilege afforded to richer, more powerful countries and inherently barred from poor or developing nations. He says that rich countries whose currency is accepted internationally can run up large deficits without fear of consequence, but this is not something most of the world can claim. He also argues against keeping interest rates low, as it can cause the financial sector to behave recklessly and stir up crises. In other words, if the interest rate is kept low, Wall Street may engage in more high-risk transactions that jeopardize the economy, just as it did in 2008. This is somewhat true, but MMT and regulating the financial sector are not contradictory issues. Indeed, tight regulatory mechanisms, low interest rates, and engaging in MMT are all possible simultaneously. But we should take heed of his warning and realize that the country perhaps should not engage in MMT before tightening laws on the financial sector. The final main critique Epstein makes of MMT is that while he thinks it could work in the United States, the window to use it is likely shrinking. The US dollar is a powerful currency, and because of its wide acceptance, the US can borrow cheaply when a crisis comes and still keep interest rates low. Yet this may not last forever. Epstein believes we are trending towards a multi-currency system, and when that happens, MMT will become less and less viable.

Conclusion

After many years in academia, MMT seems to be getting some mainstream attention. As of this writing, the COVID-19 crisis has spurred the government to massively increase spending. Many economists wonder whether the government will engage in MMT's policy recommendations to control high inflation if and when it appears.
The basic idea of the Theory of Consumer Behavior is simple: Given a budget constraint, the consumer buys a combination of goods and services that maximizes satisfaction, which is captured by a utility function. By changing the price of a particular item, ceteris paribus (everything else held constant), we derive a demand curve for that item.

Setting up and solving the consumer's utility maximization problem takes some time. We will proceed slowly and carefully. This chapter focuses on the budget constraint and how it changes when prices or income change. What can be afforded is obviously a key factor in predicting buying behavior, but it is only part of the story. With the budget constraint alone, we cannot answer the question of how much the consumer wants to buy of each product because we are not incorporating any information about the utility gained by consumption. After we understand the budget constraint, we will model the consumer's likes and dislikes. We can then put the constraint and utility components together and solve the model.

The Budget Constraint in Equation Form

The budget constraint can be expressed mathematically like this:

$p_{1}x_{1} + p_{2}x_{2} \le m$

This equation says that the sum of the amount of money spent on good $x_{1}$, which is the price of $x_{1}$ times the number of units purchased, or $p_{1}x_{1}$, and the amount spent on good $x_{2}$, which is $p_{2}x_{2}$, must be less than or equal to the amount of income, m (for money), the consumer has available. Obviously, the model would be more realistic if we had many products that the consumer could buy, but the gain in realism is not worth the additional cost in computational complexity. We can easily let $x_{2}$ stand for "all other goods."

Another simplification allows us to transform the inequality in the equation to a strict equality. We will assume that no time elapses, so there is no saving (not spending all of the income available) or borrowing. In other words, the consumer lives for a nanosecond, buying, consuming, and dying the same instant. Once again, this assumption is not as severe as it first looks. We can incorporate saving and borrowing in this model by defining one good as present consumption and the other as future consumption. We will use this modeling technique in a future application. Since we know we will always spend all of our income, the budget constraint equation can be written with an equal sign, like this:

$p_{1}x_{1} + p_{2}x_{2} = m$

Since we will want to draw a graph, we can write it in the form of the equation of a line ($y = mx + b$) via a little algebraic manipulation:

$p_{1}x_{1} + p_{2}x_{2} = m$
$p_{2}x_{2} = m - p_{1}x_{1}$
$x_{2} = \displaystyle{\frac{m}{p_{2}} - \frac{p_{1}}{p_{2}}x_{1}}$

The intercept, $m/p_{2}$, is interpreted as the maximum amount of $x_{2}$ that the consumer can afford. By buying no $x_{1}$ and spending all income on $x_{2}$, the most the consumer can buy is $m/p_{2}$ units of good 2. The slope, $-p_{1}/p_{2}$, also has a convenient interpretation: It states the rate at which the market requires the consumer to give up $x_{2}$ in order to acquire $x_{1}$. This is easy to see if you remember that the slope of a line is simply the rise ($\Delta x_{2}$) over the run ($\Delta x_{1}$). Then,

$\displaystyle{\frac{\Delta x_{2}}{\Delta x_{1}} = -\frac{p_{1}}{p_{2}}}$

A Numerical Example of the Budget Constraint

STEP Open the Excel workbook BudgetConstraint.xls, read the Intro sheet, and then go to the Properties sheet to see the budget constraint.
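If you want to check the algebra outside of Excel, here is a short Python sketch (my own illustration; it is not part of BudgetConstraint.xls, though it uses the same prices and income as the workbook's example) that computes the intercept and slope of the budget line and tests whether a bundle is affordable:

```python
# Budget line: p1*x1 + p2*x2 = m, i.e., x2 = m/p2 - (p1/p2)*x1.
# Prices and income match the numerical example used in the workbook.

p1, p2, m = 2.0, 3.0, 100.0

intercept = m / p2   # most x2 affordable when x1 = 0
slope = -p1 / p2     # x2 given up per extra unit of x1

def affordable(x1: float, x2: float) -> bool:
    """True if the bundle (x1, x2) costs no more than income m."""
    return p1 * x1 + p2 * x2 <= m

print(round(intercept, 2), round(slope, 2))  # 33.33 -0.67
print(affordable(20, 20))                    # True: costs exactly $100
print(affordable(30, 20))                    # False: costs $120
```

The workbook performs the same arithmetic behind its chart.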
Figure 1.1 shows the organization of the sheet. As you can see, the consumer chooses the amounts of goods 1 and 2 to purchase, given prices and income. With $p_{1}$ = $2/unit, $p_{2}$ = $3/unit, and $m$ = $100, the equation of the budget line can be computed.

STEP Click on the scroll bars to see the red dot (which represents the consumption bundle) move around in the chart.

By rewriting the budget constraint equation as a line and then graphing it, we have a geometric representation of the consumer's consumption possibilities. All points inside or on the budget line are feasible. Points northeast of the budget line are unaffordable. By clicking the scroll bars you can easily see that the consumer has many feasible points. The big question is, which one of these many affordable combinations will be chosen? We cannot answer that question with the budget constraint alone. We need to know how much the consumer likes the two goods. The constraint is simply about feasible options.

Changes in the Budget Line – Pivots and Shifts

STEP Proceed to the Changes sheet.

The idea here is that changes in prices cause the budget line to pivot or rotate, altering the slope but keeping one of the intercepts the same. Changes in income produce a different result, shifting the budget line in or out and leaving the slope unchanged.

STEP To see how the budget line pivots, experiment with cell K9 (the price of good 1). Change it from 2 to 5.

The chart changes to reveal a new budget line. The budget line has rotated around the y intercept because if the consumer decided to spend all income on $x_{2}$, the amount that could be purchased would remain the same. If you lower the price of good 1, the budget line swings out. Confirm that this is true.

STEP Changing cell K10 alters the budget line by changing the price of good 2. Once again, change values in the cell to see the effect on the budget line.

STEP Next, click the button to return the sheet to its initial values and work with cell K13. Cut income in half.

The effect is dramatically different. Instead of rotating, the budget line has shifted in. The slope remains the same because prices have not changed. Increasing income shifts the budget line out. This concludes the basics of budget lines. It is worth spending a little time playing with cells K9, K10, and K13 to reinforce understanding of the way budget lines move when there is a change in a price or income. These shocks will be used again when we examine how a consumer's optimal decision changes when prices or income change. Remember the key lesson: a change in price rotates the budget line, but a change in income shifts it.

Funky Budget Lines

In addition to the standard, linear budget constraint, there are many more complicated scenarios facing consumers. To give you a taste of the possibilities, let us review two examples.

STEP Proceed to the Rationing sheet.

In this example, in addition to the usual income constraint, the consumer is allowed a maximum amount of one of the goods. Thus, a second constraint (a vertical line) has been added. When the maximum is above the $x_{1}$ intercept (50 units), this second constraint is said to be nonbinding. As you can see from the sheet, when the maximum amount constraint is binding, it lops off a portion of the budget line.

STEP Change cell E13 to see how changing the rationed amount affects the budget constraint.
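Here is a small Python sketch (my own illustration, not part of the workbook) of how a ration cap truncates the budget line; `ration` is a hypothetical name for the maximum allowed amount of good 1:

```python
# Budget line with rationing: the consumer faces p1*x1 + p2*x2 <= m
# plus a cap x1 <= ration. The cap lops off the budget line when it binds.

p1, p2, m = 2.0, 3.0, 100.0
ration = 30.0   # maximum allowed units of good 1 (hypothetical cap)

def max_x2(x1: float) -> float:
    """Most x2 affordable at a given x1; -1 if x1 violates the ration or the budget."""
    if x1 > ration or p1 * x1 > m:
        return -1.0
    return (m - p1 * x1) / p2

print(max_x2(20))   # 20.0: on the usual budget line
print(max_x2(40))   # -1.0: blocked by the ration even though affordable
```

The bundle 40,10 costs only $110 of purchasing power... actually $80 on good 1, so it is affordable, but the ration rules it out; that is exactly the lopped-off segment on the sheet.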
STEP Proceed to the Subsidy sheet.

In this example, in addition to the usual income constraint, the consumer is given a subsidy in the form of a fixed amount of the good. Food stamps are a classic example of a subsidy. Suppose the consumer has $100 of income but is given $20 in food stamps (which can only be spent on food), and food ($x_{1}$) is priced at $2/unit. Then the budget constraint has a horizontal segment from 0 to 10 units of food because the most $x_{2}$ (other goods) that can be purchased remains at $m/p_{2}$ from 0 to 10 units of food (since food stamps cannot be used to buy other goods). As we increase the amount of the subsidy, the horizontal segment is extended; the downward sloping part has the same slope, but it is pushed outward.

STEP Change cell E13 to see how changing the given amount of food (which is the dollar amount of food stamps divided by the price of food) affects the budget constraint.

Summary: Consumption Possibilities

The budget constraint is a key component of the optimization problem facing the consumer. Graphing the constraint lets us see the consumer's options. Just like a production possibilities frontier tells us what an economy can produce, the budget constraint shows what a consumer can buy. Any combination on or under the constraint is a feasible option. Points beyond the constraint are unattainable.

Changing prices has a different effect on the constraint than changing income. If prices change, the budget line pivots, swings, and rotates (pick your favorite word and remember it) around the intercept. A change in income, however, shifts the line (out or in) and leaves the slope unaffected.

The basic budget constraint is a line, but there are many other scenarios faced by consumers in which the constraint can be kinked or nonlinear. Subsidies (like food stamps) can be incorporated into the basic model. This flexibility is one of the powerful features of the Theory of Consumer Behavior.

The constraint is just one part of the consumer's optimization problem. The desirability of goods and services, also known as tastes and preferences, is another important part. The next chapter explains how we model satisfaction from consuming goods and services.

Exercises

1. Use Excel to create a chart of a budget constraint that is based on the following information: m = $100 and $p_{2}$ = $3/unit, but $p_{1}$ = $2/unit for the first 20 units and $1/unit thereafter. Copy your chart and paste it in a Word document. A sketch for generating the points to chart appears after this list. STEP Watch a quick, 3-minute video of how to make a chart in Excel by visiting vimeo.com/econexcel/how-to-chart-in-excel.
2. If the good on the y axis is free, what does the budget constraint look like?
3. What combination of shocks could make the new budget line be completely inside and steeper than the initial budget line?
4. What happens to the budget line if all prices and income double?
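For exercise 1, here is a Python sketch (my own illustration, a complement to the Excel chart rather than a substitute) that generates points on the kinked constraint created by the quantity discount:

```python
# Exercise 1's constraint: good 1 costs $2/unit for the first 20 units
# and $1/unit thereafter; p2 = $3/unit; m = $100.

p2, m = 3.0, 100.0

def cost_of_x1(x1: float) -> float:
    """Total spending on good 1 under the quantity discount."""
    if x1 <= 20:
        return 2.0 * x1
    return 2.0 * 20 + 1.0 * (x1 - 20)

def max_x2(x1: float) -> float:
    """Most x2 affordable at a given x1 (negative means x1 alone exceeds income)."""
    return (m - cost_of_x1(x1)) / p2

for x1 in [0, 10, 20, 30, 40]:
    print(x1, round(max_x2(x1), 2))
```

The printed points trace a budget line that kinks at 20 units: the slope is $-2/3$ before the kink and $-1/3$ after it, since the price break makes good 1 relatively cheaper.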
The key idea is that every consumer has a set of likes and dislikes, desires, and tastes, called preferences. Consumer preferences enable the consumer to compare any two combinations or bundles of goods and services in terms of better/worse or the same. Such a comparison has two possible outcomes:

• Strictly preferred: the consumer likes one bundle better than the other.
• Indifferent: the consumer is equally satisfied with the two bundles.

In terms of algebra, you can think of strictly preferred as greater than ($>$) and indifferent as equal ($=$). Since the consumer can compare any two bundles, by repeated comparison of different bundles the consumer can rank all possible combinations from best to worst (in the consumer's opinion).

Three Axioms

Three fundamental assumptions are made about preferences to ensure internal consistency:

1. Completeness: the consumer can compare any bundles and render a preferred or indifferent judgment.
2. Reflexivity: this identity condition says that the consumer is indifferent when comparing a bundle to itself.
3. Transitivity: this condition defines an orderly relation among bundles so that if bundle A is preferred to bundle B and bundle B is preferred to bundle C, then bundle A must be preferred to bundle C.

Completeness and reflexivity are easily accepted. Transitivity, on the other hand, is controversial. As a matter of pure logic, we would expect that a consumer would make consistent comparisons. In practice, however, consumers may make intransitive, or inconsistent, choices. An example of intransitivity: You claim to like Coke better than Pepsi, Pepsi better than RC, and RC better than Coke. The last claim is inconsistent with the first two. If Coke beats Pepsi and Pepsi beats RC, then Coke must really beat RC!

In mathematics, numbers are transitive with respect to the comparison operators greater than, less than, or equal to. Because 12 is greater than 8 and 8 is greater than 3, clearly 12 is greater than 3. Sports results, however, are not like math. Outcomes of games can easily yield intransitive results. Michigan might beat Indiana and in its next game Indiana could defeat Iowa, but few people would claim that the two outcomes would guarantee that Michigan will win when it plays Iowa.

When we assume that preferences are transitive, it means that the consumer can rank bundles without any contradictions. It also means that we are able to determine the consumer's choice between two bundles based on answers to previous comparisons.

Displaying Preferences via Indifference Curves

The consumer's preferences can be revealed by having her choose between bundles. We can describe a consumer's preferences with an indifference map, which is made up of indifference curves. A single indifference curve is the set of combinations that give equal satisfaction. If two points lie on the same indifference curve, this means that the consumer sees these two bundles as tied – neither one is better nor worse than the other. A single indifference curve and an entire indifference map can be generated by having the consumer choose between alternative bundles of goods. We can demonstrate how this works with a concrete example.

STEP Open the Excel workbook Preferences.xls, read the Intro sheet, and then go to the Reveal sheet to see how preferences can be mapped and the indifference curve revealed.

STEP Begin by clicking the button. For bundle B, enter 4, then a comma (,), then a 3, then click OK.
We are using the coordinate pair notation, so 4,3 identifies a combination that has 4 units of the good on the x axis and 3 units of the good on the y axis. The sheet records the bundles that are being compared in columns A and B and the outcome in column C. The choices are being made by a virtual consumer whose unknown preferences are in the computer. By asking the virtual consumer to make a series of comparisons, we can reveal the hidden preferences in the form of an indifference curve and indifference map.

Notice that Excel plots the point 4,3 on the chart. The green square means the consumer chose bundle B. This means that 3,3 and 4,3 are not on the same indifference curve.

STEP Click the button again. Offer the consumer a choice between 3,3 and 2,3.

This time the consumer chose bundle A and a red triangle was placed on the chart, meaning that the point 3,3 is strictly preferred to the point 2,3. These two choices illustrate insatiability. This means that the consumer cannot be sated (or filled up), so more is always better. The combination 4,3 is preferred to 3,3, which is preferred to 2,3, because good $x_{2}$ is held constant at 3 and this consumer is insatiable, preferring more of good $x_{1}$ to less. To reveal the indifference curve of this consumer, we must offer tougher choices, where we give more of one good and less of the other.

STEP Click the button again. This time offer the consumer a choice between 3,3 and 4,2.

The consumer decided that 3,3 is better. This reveals important information about the consumer's preferences. At 3,3, the consumer likes one more unit of $x_{1}$ less than the loss of one unit of $x_{2}$.

STEP Click the button several times more to figure out where the consumer's break-even point is in terms of how much $x_{2}$ is needed to balance the gain from the additional unit of $x_{1}$. Offer 4,2.5 and then try taking away less of good 2, such as 2.7 or 2.9.

Once you find the point where the amount of $x_{2}$ taken away exactly balances the gain in $x_{1}$ of one unit (from 3 to 4), you have located two points on a single indifference curve. If it is difficult to see the points on the chart, use the Zoom control to magnify the screen (say to 200%). You should find that this consumer is indifferent between the bundles 3,3 and 4,2.9.

STEP Now click the button. One hundred pairwise comparisons are made between 3,3 and a random set of alternatives.

It is easy to see that the consumer can compare each and every point on the chart to the benchmark bundle of 3,3 and judge each and every point as better, worse, or the same.

STEP Click the button to display the indifference curve that goes through the benchmark point (3,3), as shown in Figure 2.1.

Your version will be similar, but not exactly the same as Figure 2.1, since the 100 dots are chosen randomly. The indifference curve shows the bundles that are the same to this consumer compared to 3,3. All of the bundles for which the consumer is indifferent to the 3,3 bundle lie on the same indifference curve.

The Indifference Map

Every combination of goods has an indifference curve through it. We often display a few representative indifference curves on a chart, and this is called an indifference map, as shown in Figure 2.2. Any point on the curve farthest from the origin in Figure 2.2 is preferred to any point below it, including the ones on the two lower indifference curves. The arrow indicates that satisfaction increases as you move northeast to higher indifference curves.
There are many (in fact, infinitely many) indifference curves, and they are not all depicted when we draw an indifference map. We draw just a few curves. We say that the indifference map is dense, which means there is a curve through every point.

STEP Build your own indifference map by copying the Reveal sheet and clicking the button, then the button, and then the button. This places a picture of the chart under the chart. This is an Excel drawing object, not a chart object, and it has no fill.

STEP Change the benchmark to 4,4 in cell B1 and click the button to get the indifference curve through the new benchmark point. Click the button. This copies the chart and pastes the drawing object over the first one. Since it has no fill, it is transparent. You can separate the two pictures if you wish (click and drag), then undo the move so it is on top of the first picture.

STEP Add one more indifference curve to your map by changing the benchmark to 5,5 and clicking the button, then clicking the button. You have created an indifference map with three representative indifference curves. Satisfaction increases as you move northeast to higher indifference curves.

Marginal Rate of Substitution

Having elicited a single indifference curve from the virtual consumer in the Excel workbook, we can define and work with a crucial concept in the Theory of Consumer Behavior: the Marginal Rate of Substitution, or MRS. The MRS is a single number that tells us the willingness of a consumer to exchange one good for another from a given bundle. The MRS might be $-18$ or $-0.07$. Read carefully and work with Excel so that you learn what these numbers are telling you about the consumer's preferences.

STEP Return to the Reveal sheet (with benchmark point 3,3) and click the button to copy and paste an image of the current indifference curve below the graph in the Reveal sheet. Now click the button to get a new virtual consumer with different preferences, and then display the indifference curve for this new consumer (by clicking the button).

Notice that the indifference curve is not the same as the original one. These are two different consumers with different preferences. You can use the buttons to offer the new consumer bundles that can be compared with the 3,3 benchmark bundle, just like before. The key idea here is that at 3,3, we can measure each consumer's willingness to trade $x_{2}$ in exchange for $x_{1}$.

Initially (as shown in Figure 2.1 and in the picture you took), we saw that the consumer was indifferent between 3,3 and 4,2.9. For one more unit of $x_{1}$ (from 3 to 4), the consumer is willing to trade 0.1 units of $x_{2}$ (from 3 to 2.9). Then the MRS of $x_{1}$ for $x_{2}$ from 3,3 to 4,2.9 is measured by $\frac{-0.1}{1}$, or $-0.1$. With our new virtual consumer, the MRS at 3,3 is a different number. Let's compute it.

STEP Proceed to the MRS sheet. Click the button.

Not only is the indifference curve through 3,3 displayed for this consumer, but some of the bundles that lie on this indifference curve are also shown. We can use this information to compute the MRS. You can compute the MRS at 3,3 by looking at the first bundle after 3,3. How much $x_{2}$ is the consumer willing to give up in order to get 0.1 more of $x_{1}$? This ratio, $\frac{\Delta x_2}{\Delta x_1}$ (the usual "rise over the run" definition of the slope), is the slope of the indifference curve, which is also the MRS. The MRS also can be computed as the slope of the indifference curve at a point by using derivatives.
Instead of computing $\frac{\Delta x_2}{\Delta x_1}$ along an indifference curve from one point to another, one can find the instantaneous rate of change at 3,3. We will do this later. The crucial concept right now is that the MRS is a number that measures the willingness of a consumer to trade one good for another at a specific point. We usually think of it in terms of giving up some of the good on the y axis to get more of the good on the x axis. Do not fall into the trap of thinking of the MRS as applying to the entire indifference curve. In fact, the MRS is different at each point on the curve. For a typical indifference curve like that in Figure 2.1, the MRS gets smaller (in absolute value) as we move down the curve (as it flattens out).

The MRS is negative because the indifference curve slopes downwards: a decrease in $x_{2}$ is compensated for by an increase in $x_{1}$. We often drop the minus sign because comparing negative numbers can be confusing. For example, say one consumer has an MRS of $-1$ at 3,3 while another has an MRS of $-\frac{1}{3}$ at that point. It is true that $-1$ is a smaller number than $-\frac{1}{3}$; however, we use the MRS to indicate the steepness of the slope. Thus, to avoid confusion, we make the comparison using the absolute value of the MRS. Figure 2.3 shows that the bigger the MRS in absolute value, the more the consumer is willing to trade the good on the y axis for the good on the x axis. Thus, an MRS of $-1$ at 3,3 means the indifference curve has a steeper slope at that point than if the MRS were $-\frac{1}{3}$. We would say the MRS is bigger at $-1$ than at $-\frac{1}{3}$, even though $-1$ is a smaller number than $-\frac{1}{3}$, because we look only at the absolute value of the MRS.

Funky Preferences and Their Indifference Curves

We can depict a wide variety of preferences with indifference maps. Here are some examples.

Example 1: Perfect Substitutes – constant slope (MRS)

If the consumer perceives two things as perfectly substitutable, it means they can get the same satisfaction by replacing one with the other. Consider having one five-dollar bill and five one-dollar bills (as long as we are not talking about several hundred dollars worth of bills). If the consumer does not care about having $10 as a single ten-dollar bill, one five-dollar bill and five one-dollar bills, or ten one-dollar bills, then the indifference curve is a straight line, as shown in Figure 2.4. You could argue that there is an indivisibility here and there are actually just 3 points that should not be connected by a line, but the key idea is that the indifference curve is a straight line in the case of perfect substitutes. It has a constant MRS (the slope of the line is $-\frac{1}{5}$), unlike a typical indifference curve, where the MRS falls (in absolute value) as you move down the curve.

Example 2: Perfect Complements – L-shaped Indifference Curves

The polar opposite of perfect substitutes is perfect complements. Suppose the goods in question have to be used in a particular way, with no room for any flexibility at all, like cars and tires. You need four tires for a car to work. With only three tires the car is worthless. Ignoring the spare, having more than four tires does not help you if you still have just one car. Figure 2.5 illustrates the indifference map for this situation. It says that eight tires with one car gives the same satisfaction as four tires with one car.
It also says that eight tires and two cars is preferred to four tires and one car (or eight tires and one car) because the middle L-shaped indifference curve ($I_{1}$) is farther from the origin than the lowest indifference curve ($I_{0}$).

Notice how the usual indifference curve lies between the two extremes of perfect substitutes (straight lines) and perfect complements (L-shaped). Thus, the typical indifference curve reflects a level of substitutability between goods that is more than perfect complements (one good cannot replace another at all), but less than perfect substitutes (one good can take the place of another with no loss of satisfaction).

Example 3: Bads

What if one of the goods is actually a bad, something that lowers satisfaction as you consume more of it, like pollution? Figure 2.6 shows the indifference map in this case. Along any one of the indifference curves, more steel and more pollution are equally satisfying because pollution is a bad that cancels out the additional good from steel. The arrow indicates that satisfaction increases by moving northwest, to higher indifference curves.

Example 4: Neutral Goods

What if the consumer thinks something is neither good nor bad? Then it is a neutral good and the indifference map looks like Figure 2.7. The horizontal indifference curves for the neutral good on the x axis in Figure 2.7 tell you that the consumer is indifferent if offered more X. The arrow indicates that satisfaction rises as you move north (because Y is a good and having more of it increases satisfaction).

These are just a few examples of how a variety of preferences can be depicted with an indifference map. When we want to describe generic, typical preferences that produce downward sloping indifference curves, as in Figure 2.2, economists use the phrase "well-behaved preferences."

Another technical term that is often used in economics is convexity, as in convex preferences. This means that midpoints are preferred to extremes. In Figure 2.8, there are two extreme points, A and B, which are connected by a dashed line. Any point on the dashed line, like C, can be described by the equation $zA + (1-z)B$, where $0 < z < 1$ controls the position of C. This equation is called a convex combination. If preferences are convex, then midpoints like C are strictly preferred to extreme points like A and B. Convexity is used as another way of saying that preferences are well-behaved.

An important property that arises out of well-behaved or convex preferences is that of diminishing MRS. As explained earlier, the MRS varies along an indifference curve and applies to a specific point (not to the entire curve). The MRS will start large (in absolute value) at the top left corner, like point A in Figure 2.8, and get smaller as we travel down the indifference curve to point B. This makes common sense. The consumer is readily willing to trade a lot of Y for X (so the MRS is high in absolute value) when he has a lot of Y and little X. When the amounts are reversed, such as at point B, a small MRS means he is willing to give up very little Y (since he has little of it) for more X (which he already has a lot of).

Indifference Curves Reflect Preferences

Preferences, a consumer's likes and dislikes, can be elicited or revealed by asking the consumer to pick between pairs of bundles. The indifference curve is that set of bundles that the consumer finds equally satisfying.
The MRS is a single number that measures the willingness of the consumer to exchange one good for another at a particular point. If the MRS is high (in absolute value), the indifference curve is steep at that point and the consumer is willing to trade a lot of Y for a little more X.

Standard, well-behaved preferences yield a set of smooth arcs (like Figure 2.2), but there are many other shapes that depict preferences for different kinds of goods and the relationship between goods.

Exercises

1. What is the MRS at any point if X is a neutral good? Explain why.
2. If the good on the y axis was a neutral good and the other good was a regular good, then what would the indifference map look like? Use Word's Drawing Tools to draw a graph of this situation.
3. If preferences are well-behaved, then indifference curves cannot cross. Use Figure 2.9 to help you construct an explanation for why this claim must be true. Note that point C has more X and Y than point A; thus, by insatiability, C must be preferred to A. The key to defending the claim lies in the assumption of transitivity.
4. Suppose we measure consumer A's and B's MRS at the same point and find that $MRS_{A} = - 6$ and $MRS_{B} = - 2$. What can we say about the preferences of A and B at this point?
Previously, we showed that a consumer has preferences that can be revealed and mapped. The next step is to identify a particular functional form, called a utility function, which faithfully represents the person's preferences. Once you understand how the utility function works, we can combine it with the budget constraint to solve the consumer's optimization problem.

Cardinal and Ordinal Rankings

Jeremy Bentham (1748-1832) was a utilitarian philosopher who believed that, in theory, the amount of utility from consuming a particular amount of a good could be measured. So, for example, as you ate an apple, we could hook you up to some device that would report the number of "utils" of satisfaction received. The word utils is in quotation marks because they do not actually exist, but Bentham believed they did and would one day be discovered with an advanced measuring instrument. This last part is not so crazy: an fMRI machine is exactly what he envisioned.

Bentham also believed that utils were a sort of common currency that enabled them to be compared across individuals. He thought society should maximize aggregate or total utility and utilitarianism has come to be associated with the phrase "the greatest happiness for the greatest number." Thus, if I get 12 utils from consuming an apple and you get 6, then I should get the apple. Utilitarianism also implies that if I get more utils from punching you in the face than you lose, I should punch you. This is why utilitarianism is not highly regarded today.

This view of utility treats satisfaction as if we could place it on a cardinal scale. This is the usual number line where 8 is twice as much as 4 and the difference between 33 and 30 is the same as that between 210 and 207.

Near the turn of the 20th century, Vilfredo Pareto (1848-1923, pronounced pa-RAY-toe) created the modern way of thinking about utility. He held that satisfaction could not be placed on a cardinal scale and that you could never compare the utilities of two people. Instead, he argued that utility could be measured only up to an ordinal scale, in which there is higher and lower, but no way to measure the magnitude between two items.

Notice how Pareto's approach matches exactly the way we assumed that a consumer could choose between bundles of goods: preferring one bundle or being indifferent. We never claimed to be able to measure a certain amount of satisfaction from a particular bundle.

For Pareto, and modern economics, the numerical value from a particular utility function for a given combination of goods has no meaning. These values are like the star ranking system for restaurants. Suppose Critic A uses a 10-point scale, while Critic B uses a 1000-point scale to judge the same restaurants. We would never say that B's worst restaurant, which scored say 114, is better than A's best, a perfect 10. Instead, we compare their rankings. If A and B give the same restaurant the highest ranking (regardless of the score), it is the best restaurant.

Now suppose we are reading a magazine that uses a 5-star rating system. Restaurant X earns 4 stars and Restaurant Y 2 stars. X is better, but can we conclude that X is twice as good as Y? Absolutely not. An ordinal scale is ordered, but the differences between values are not important.

Pareto revolutionized our understanding of utility. He rejected Bentham's cardinal scale because he did not believe that satisfaction could be measured like body temperature or blood pressure.
Pareto showed that we could derive demand curves with the less restrictive more-or-less ranking of bundles. The transition from Bentham's cardinal view of utility to Pareto's ordinal view was not easy. Using the same word, utility, creates confusion (although, to be fair, Pareto tried to create a new word, ophelimity, but it never caught on). It bears repeating that, for a modern economist, although a utility function will show numerical values, these should not be interpreted on a cardinal scale, nor should numerical utilities of different people be compared. Since we cannot make interpersonal utility comparisons to add utilities of different people, we cannot give me the apple or let me punch you.

Monotonic Transformation

Once we reveal the consumer's indifference curve and map, we have the consumer's rankings of all possible bundles. Then, all we need to do is use a function that faithfully represents the indifference curves. The utility function is a convenient way to capture the consumer's ordering. There are infinitely many functions that could work. All the function has to do is preserve the consumer's preference ranking.

A monotonic transformation is a rule applied to a function that changes (transforms) it, but maintains the original order of the outputs of the function for given inputs. Monotonic is a technical term that means always moving in the same direction. For example, star ratings can be squared and the rankings remain the same. If X is a 4-star and Y a 2-star restaurant, we can square them. X now has 16 stars and Y has 4 stars. X is still higher ranked than Y. In this case, squaring is a monotonic transformation because it has preserved the ordering and X is still higher than Y.

Can we conclude that X is now four times better? Of course not. Remember that the star ranking is an ordinal scale so the distance between items is irrelevant. We say that squaring is a monotonic transformation because it maintains the same ordering and we do not care about the distances between the numeric values. Their only meaning is "higher" and "lower," which indicate better and worse. It is a fact that the MRS (at any point) remains constant under any monotonic transformation. This is an important property of monotonic transformations that we will illustrate with a concrete example in Excel.

Cobb-Douglas: A Ubiquitous Functional Form

STEP Open the Excel workbook Utility.xls, read the Intro sheet, and then go to the CobbDouglas sheet to see an example of this utility function: $u(x_1, x_2) = x_1^cx_2^d$

In economics, a function created by multiplying variables that are raised to powers is called a Cobb-Douglas functional form.

STEP Follow the directions on the sheet (in column K) to rotate the 3D chart so you are looking down at it.

A top-down view of the utility function looks like an indifference map. The utility function itself, in 3D, is a hill or mountain (that keeps growing without ever reaching a top, illustrating the idea of insatiability). With a utility function, the indifference curves appear as contour lines or level curves. The curves in 2D space are created by taking horizontal slices of the 3D surface. Every point on the indifference curve has the exact same height, which is utility.

STEP The exponents (c and d) in the utility function express "likes and dislikes." Try c = 4 then c = 0.2 in cell B5.

The higher the c exponent, the more the consumer likes $x_1$ because each unit of $x_1$ is raised to a higher power as c increases.
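One way to see this numerically is through the MRS. For Cobb-Douglas utility, the MRS at a bundle works out to $-\frac{c}{d}\frac{x_2}{x_1}$ (minus the ratio of marginal utilities, a method explained in the next chapter). The Python sketch below is an aside from the Excel workflow; the bundle 3,3 is chosen purely for illustration:

```python
# MRS for Cobb-Douglas utility u = x1**c * x2**d, computed as minus the
# ratio of marginal utilities: MRS = -(c/d) * (x2/x1).
def cobb_douglas_mrs(x1, x2, c, d=1.0):
    return -(c / d) * (x2 / x1)

# At the bundle (3, 3), a bigger exponent c means a steeper indifference
# curve (a bigger MRS in absolute value): the consumer likes x1 more.
for c in (0.2, 1.0, 4.0):
    print(c, cobb_douglas_mrs(3, 3, c))
# c = 0.2 -> MRS = -0.2;  c = 1.0 -> MRS = -1.0;  c = 4.0 -> MRS = -4.0
```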
Notice that when c = 4, the fact that the consumer likes $x_1$ much more than when c = 0.2 is reflected in the shape of the indifference curve. The steeper the indifference curve, the higher the MRS (in absolute value) and the more the consumer likes $x_1$.

STEP Proceed to the CobbDouglasLN sheet, which applies a monotonic transformation of the Cobb-Douglas function. It applies the natural log function to the utility function.

Recall that the natural logarithm of a number x is the exponent on e (the irrational number 2.71828 . . .) that makes the result equal x. You should also remember that there are special rules for working with logs. Two especially common rules are $\ln (x^y) = y \ln x$ and $\ln (xy) = \ln x + \ln y$. We can apply these rules to the Cobb-Douglas function when we take the natural log:

$u(x_1, x_2) = x_1^cx_2^d$
$\ln [u(x_1, x_2)] = \ln [x_1^cx_2^d]$
$\ln [u(x_1, x_2)] = c \ln x_1 + d \ln x_2$

The CobbDouglasLN sheet applies the natural log transformation by using Excel's LN() function.

STEP Click on any cell between B12 and Q27 to see the formula. We are computing the natural log of utility, which is $x_1$ raised to the c power times $x_2$ raised to the d power. How does the original utility function compare to its natural log version?

STEP Go back and forth a few times between the two (click on the CobbDouglas sheet tab and then the CobbDouglasLN sheet tab). It is obvious that the numbers are different. But did you notice something curious?

STEP Compare the cells with yellow backgrounds in the two sheets to see that these two combinations continue to lie on the same indifference curve, even though the utility values of the two functions are different.

The fact that the cells remain on the same indifference curve after undergoing the natural log transformation demonstrates the meaning of a monotonic transformation. The utility values are different, but the ranking has been preserved. The two utility functions both maintain the same relationship between 1,14 and 2,7 and every other bundle.

So now you know that a Cobb-Douglas utility function can be used to faithfully represent a consumer's preferences (including tweaking the c and d exponents to make the curves steeper or flatter) and that we can use the natural log transformation if we wish. In addition, economists often use the Cobb-Douglas functional form for utility (and production) functions because it has very nice algebraic properties where lots of terms cancel out. The Cobb-Douglas function is especially easy to work with if you remember the following rules:

Algebra Rules: $\frac{x^a}{x^b} = x^{a - b}$ and $(x^a)^b = x^{ab}$

Calculus Rule: $\frac{d(ax^b)}{dx} = bax^{b - 1}$

These rules may seem irrelevant right now, but we will see that they make the Cobb-Douglas function much easier to work with than other functions. This goes a long way in explaining the repeated use of the Cobb-Douglas functional form in economics.

Expressing Other Preferences with Utility Functions

STEP Proceed to the PerfSub sheet and look around. Scroll down (if needed) and look at the two charts.

Notice how this functional form is producing straight line indifference curves (in the 2D chart). If the consumer treated two goods as perfect substitutes, we would use this functional form instead of Cobb-Douglas. The coefficients (a and b) can be tweaked to make the lines steeper or flatter.

STEP Proceed to the PerfComp sheet. This shows how the min() functional form produces L-shaped indifference curves.
The min() function outputs the smaller of the two terms, $ax_1$ and $bx_2$. This means that getting more of one good while holding the amount of the other good constant does not increase utility. This produces an L-shaped indifference curve.

Finally, the Quasilinear sheet displays indifference curves that are actually curved, but rather flat.

STEP Go to the Quasilinear sheet and click on the different functional form options. These are just a few of the many transformations that can be applied to $x_1$ and then added to $x_2$ to produce what is called quasilinear utility. Later, we will see that this functional form has different properties than Cobb-Douglas.

Note that we can represent many different kinds of preferences with utility functions. An important point is that there are infinitely many possible utility functions available to us. We would choose one that faithfully reflects a particular consumer's preferences. We can always apply a monotonic transformation and it will not alter the consumer's preferences.

Computing the MRS for a Utility Function

Now that we have utility functions to represent a consumer's preferences, we are able to compute the MRS from one point to another (as we did in the previous chapter) or by using the instantaneous rate of change, better known as the derivative.

This is not a mathematics book, but economists use math so we need to see exactly how the derivative works. The core idea is convergence: make the change in x (the run) smaller and smaller and the ratio of the rise over the run (the slope) gets closer and closer to its ultimate value. The derivative is a shortcut that gives us the answer without the cumbersome process of making the change smaller and smaller. But this is way too abstract. We can see it in Excel.

STEP Proceed to the MRS sheet to see how the MRS can be computed via a discrete-size change versus an infinitesimally small change.

The utility function is $x_1x_2$. This is Cobb-Douglas with exponents (implicitly) equal to 1. Suppose we are interested in the indifference curve that gives all combinations with a utility of 10. Certainly 5,2 works (since 5 times 2 is 10). It is the red dot in the graph on the MRS sheet (and in Figure 2.10).

From the bundle 5,2, if we gave this consumer 1 more unit of $x_1$, by how much would we have to decrease $x_2$ to stay on the $U=10$ indifference curve? A little algebra tells us. We know that $U = x_1x_2$ and the initial bundle 5,2 yields $U = 10$. We want to maintain U constant with $x_1 = 6$ because we added one unit to $x_1$, so we have:

$U = x_1x_2$
$10 = 6x_2$
$x_2 = \frac{10}{6}$

We have two bundles that yield $U = 10$ (5,2 and 6,$\frac{10}{6}$). We can compute the MRS as the change in $x_2$ divided by the change in $x_1$. The delta (or difference) in $x_2$ is $-\frac{1}{3}$ (because $\frac{10}{6}$ is $\frac{1}{3}$ less than 2) and the delta in $x_1$ is 1 (6 - 5), so starting from the point 5,2, the MRS from $x_1$ = 5 to $x_1$ = 6 is $-\frac{1}{3}$. This is what Excel shows in cell C18.

Another way to compute the MRS uses the calculus approach. Instead of a "large" or discrete-size change in $x_1$, we take an infinitesimally small change, computing the slope of the indifference curve not from one point to another, but as the slope of the tangent line (as shown in Figure 2.10). We use the derivative to compute the MRS at a particular point.
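The two approaches, and the sense in which the discrete measure converges to the derivative measure, can be seen in a few lines of Python (a sketch of the same logic the MRS sheet implements in Excel; the step sizes match the ones used in the STEPs that follow):

```python
# U = x1*x2. Starting at (5, 2) on the U = 10 indifference curve,
# measure the MRS by moving along the curve by a step in x1.
def mrs_along_curve(x1, step, U=10):
    x2_before = U / x1
    x2_after = U / (x1 + step)            # stay on the U = 10 curve
    return (x2_after - x2_before) / step  # rise over run

for step in (1, 0.5, 0.1, 0.01):
    print(step, mrs_along_curve(5, step))
# step = 1 gives -1/3 (cell C18); as the step shrinks, the measure
# approaches the derivative-based MRS at x1 = 5:
print(-10 / 5**2)  # dx2/dx1 = -10/x1**2 evaluated at 5, i.e., -0.4
```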
For this simple utility function, holding U constant at 10, we can rewrite the function as $x_2$ in terms of $x_1$, then take the derivative. $U = x_1x_2$ $x_2 = \frac{10}{x_1}$ $\frac{dx_2}{dx_1} = - \frac{10}{x_1^2}$ At $x_1 = 5$, substitute in this value and the MRS at that point is $-\frac{10}{25}$ or -0.4. This is what Excel shows in cell D18. If you need help with derivatives, the next chapter has an appendix that reviews basic calculus. Computing the MRS this way relies on the ability to write $x_2$ in terms of $x_1$. If we have a utility function that cannot be easily rearranged in this way, we will not be able to compute the MRS. There is, however, a more general approach. The procedure involves taking the derivative of the utility function with respect to $x_1$ (called the marginal utility of $x_1$) and dividing by the derivative of the utility function with respect to $x_2$ (called the marginal utility of $x_2$). Do not forget to include the minus sign when you use this approach. Here is how it works. With $U = x_1x_2$, the derivatives are simple: $\frac{dU}{dx_1} = x_2$ and $\frac{dU}{dx_2} = x_1$. Thus, we can substitute these into the numerator and denominator of the MRS expression: Because we are considering the point 5,2, we evaluate the MRS at that point (which means we plug in those values to our MRS expression), like this: Note that minus the ratio of the marginal utilities gives the same answer as the $\frac{dx_2}{dx_1}$ method. Both are using infinitesimally small changes to compute the instantaneous rate of change of the indifference curve at a particular point. Also note that the ratio of the marginal utilities approach requires that you divide the marginal utility of $x_1$ (the good on the x axis) by the marginal utility of $x_2$ (the good on the y axis). Since we used $\frac{\Delta y}{\Delta x}$ in the discrete-size change approach, it is easy to confuse the numerator and denominator when computing the MRS via the derivative. Remember that $\frac{dU}{dx_1}$ goes in the numerator. Comparing $\Delta$ and d Methods So far, we know there are two ways to get the MRS: move from one point to another along the indifference curve (discrete change, $\Delta$) or slope of the tangent line at a point (infinitesimally small change, d). We also know that we have two ways of doing the latter (solve for $x_2$ then take the derivative or compute the ratio of the marginal utilities.) But you may have noticed a potential problem in that the two procedures to get the MRS yield different answers. In the MRS sheet and our work above, the discrete change approach tells us that the MRS as measured from $x_1 = 5$ to $x_1 = 6$ is $-\frac{1}{3}$, whereas the derivative method says that the MRS at $x_1 = 5$ is -0.4. This difference in measured MRS is due to the fact that the two approaches are applying a different size change in $x_1$ to a curve. As the discrete-size change gets smaller, it approaches the derivative measure of the MRS. You can see this clearly with Excel. STEP Change the step size in cell B7 to 0.5 and watch how cell C18 changes. Notice that the chart is also slightly different because the point at $x_1 = 6$ is now at 5.5. You have made the size of the change in $x_1$ smaller so the point is now closer to the initial value, 5. STEP Do it again, this time changing the step size in cell B7 to 0.1. The point with $x_1 = 5.1$ is so close to 5 that it is hard to see, but it is there. Do one last change to the step size, setting it at 0.01. 
With the step size at 0.01, you cannot see the initial and new points because they are so close together, but they are still a discrete distance apart. Excel displays the point-to-point delta computation in cell C18. It is really close to the derivative measure of the MRS in cell D18 because the derivative is simply the culmination of this process of making the change in $x_1$ smaller and smaller.

In Figure 2.10, the discrete change approach is computing the rise over the run using two separate points on the curve, while the calculus approach is computing the slope of the tangent line.

STEP Look at the values of the cells in the yellow highlighted row. The MRS values for a given approach are exactly the same. In other words, columns C, H, and M are the same and columns D, I, and N are the same. This shows that the MRS remains unaffected when the utility function is monotonically transformed.

Utility Functions Represent Preferences

Utility functions are equations that represent a consumer's preferences. The idea is that we reveal preferences by having the consumer compare bundles, and then we select a functional form that faithfully reflects the indifference curves of the consumer. In selecting the functional form, there are many possibilities and economists often use the Cobb-Douglas form. The values of utility produced by inputting amounts of goods are meaningless and any monotonic transformation (because it preserves the preference ordering) will work as a utility function. Monotonic transformations do not affect the MRS.

The MRS is an important concept in consumer theory. It tells us the willingness to trade one good for another and this measures the consumer's likes and dislikes. Willingness to trade a lot of y for a little x produces a high MRS (in absolute value) and this indicates that the consumer values x more than y. The MRS can be computed from one point to another ($\Delta$), but it can also be computed using the derivative (d) at a point. Both are valid and the resulting number for the MRS is interpreted the same way (willingness to trade).

Exercises

The utility function, $U = x - 0.03x^2 + y$, has a quasilinear functional form. Use this function to answer the questions below. You can see what it looks like by choosing the Polynomial option in the Quasilinear sheet.

1. Compute the value of the utility function at bundle A, where x = 10 and y = 1. Show your work.
2. Working with bundle A, find the MRS as x rises from x = 10 to x = 20. Show your work.
3. Find the MRS at the point 10,1 (using derivatives). Show your work.
4. Why do the two methods of determining the MRS yield different answers?
5. Which method is better? Why?
What you know so far:

1. The budget constraint shows the consumer's possible consumption bundles. The standard, linear constraint is $p_1x_1 + p_2x_2 = m$. There are many other situations, such as subsidies and rationing, which give more complicated constraints with kinks and horizontal/vertical segments.
2. The indifference map shows the consumer's preferences. The standard situation is a set of convex, downward sloping indifference curves. There are many alternative preferences, such as perfect substitutes and perfect complements. Preferences are captured by utility functions, which accurately reflect the shape of the indifference curves.

Our job is to combine these two parts, one expressing what is affordable and the other what is desirable, to find the combination (or bundle) that maximizes satisfaction (as described by the indifference map or utility function) given the budget constraint. The answer will be in terms of how much the consumer will buy in units of each good.

The optimal solution is depicted by the canonical graph in Figure 3.1. The word canonical is used here to mean standard, conventional, or orthodox. In economics, a canonical graph is a core, essential graph that is understood by all economists, such as a supply and demand graph. It is no exaggeration to say that Figure 3.1 is one of the most fundamental and important graphs in economics. It is the foundation of the Theory of Consumer Behavior and with it we will derive a demand curve.

One serious intellectual obstacle with Figure 3.1 is that it is highly abstract. Below we work on a concrete problem, with actual numbers, to explain what is going on in this fundamental graph. Before we dive in, we need to discuss solution strategies. There are two ways to find the optimal solution:

1. Analytical methods using algebra and calculus: this is the conventional, paper-and-pencil approach that has been used for a long time.
2. Numerical methods using a computer, for example, Excel's Solver: this is a modern solution strategy that uses the computer to do most of the work.

Analytical Approach

Unfortunately, constrained optimization problems are harder to solve than unconstrained problems. The appendix to this chapter offers a short calculus review along with a few common derivative and algebra rules. If the material below makes little sense, go to the appendix and then return here.

Because this is a constrained optimization problem, the analytical approach uses the method developed by Joseph Louis Lagrange. His brilliant idea is based on transforming a constrained optimization problem into an unconstrained problem and then solving by using standard calculus techniques. In the process, a new endogenous variable is created. It can have a meaningful economic interpretation.

Lagrange gave us a recipe to follow that requires four steps:

1. Rewrite the constraint so that it is equal to zero.
2. Form the Lagrangean function.
3. Take partial derivatives with respect to $x_1$, $x_2$, and $\lambda$.
4. Set the derivatives equal to zero and solve for $x_1\mbox{*}$, $x_2\mbox{*}$, and $\lambda\mbox{*}$.

Suppose a consumer has a Cobb-Douglas utility function with exponents both equal to 1 and a budget constraint, $2x_1 + 3x_2 = 100$ (which means the price of good 1 is \$2/unit, the price of good 2 is \$3/unit, and income is \$100). The problem is to maximize utility subject to (s.t.) the budget constraint. It is written in equation form like this:

$\max\limits_{x_1,x_2}U(x_1,x_2)=x_1x_2 \quad \textrm{s.t.} \quad 100 = 2x_1 + 3x_2$
This problem is not solved directly. It is first transformed into an unconstrained problem, and then this unconstrained problem is solved. Here is how we apply the recipe developed by Lagrange.

1. Rewrite the constraint so that it is equal to zero.

$0 = 100 - 2x_1 - 3x_2$

2. Form the Lagrangean function.

$\max\limits_{x_1,x_2,\lambda} {\large\textit{L}} = x_1x_2 + \lambda(100 - 2x_1 - 3x_2)$

Most math books use a fancy script L for the Lagrangean, like this $\mathcal{L}$, but this is difficult to do in Word's Equation Editor (which you will be using) so an extra-large L will work just as well. Also, many books spell Lagrangean with an i, Lagrangian, but both spellings are acceptable.

Note that the Lagrangean function, L, is composed of the original objective function (in this case, the utility function) plus a new variable, the Greek letter lambda, $\lambda$, times the rewritten constraint. Called the Lagrangean multiplier, $\lambda$ is a new endogenous variable that is introduced as part of Lagrange's solution strategy.

The next step in Lagrange's recipe can be intimidating. This is not the time to rush through and turn the page. Refer to the appendix at the end of this section if things start to get confusing.

3. Take partial derivatives with respect to $x_1$, $x_2$, and $\lambda$.

$\frac{\partial L}{\partial x_1} = x_2 - 2\lambda$
$\frac{\partial L}{\partial x_2} = x_1 - 3\lambda$
$\frac{\partial L}{\partial \lambda} = 100 - 2x_1 - 3x_2$

The derivative used here is a partial derivative, denoted by $\partial$, a stylized way of writing a lowercase letter d (the lowercase Greek letter $\delta$ is also sometimes used). The partial derivative symbol is usually read as the letter d, so the first equation read out loud would be "d L d x one equals x two minus two times lambda." It is also common to read the derivative in the first equation as "partial L partial x one."

The partial derivative is a natural extension of the regular derivative. Consider the function $y = 4x^2$. The derivative of y with respect to x is $\frac{dy}{dx} = 8x$. Suppose, however, that we had a more complicated function, like this: $y = 4zx^2$. This multivariate function says that y depends on two variables, z and x. We can explore the rate of change of this function along the x axis by treating it as a partial function, meaning that we hold the z variable constant. Then the partial derivative of y with respect to x is $\partial y/ \partial x = 8zx$. If we hold x constant and vary z, then the partial derivative of y with respect to z is $\partial y/ \partial z = 4x^2$.

Applying this logic to the Lagrangean in step 2, when we take the partial derivative with respect to $x_1$, the first term is $x_2$ because it is as if we had $4x$ and took the derivative with respect to x, getting 4. If we multiply $\lambda$ through the parenthetical expression in the Lagrangean, we get:

$\lambda (100 - 2x_1 - 3x_2) = \lambda 100 - \lambda 2x_1 - \lambda 3x_2$

The first and third terms of this expanded expression do not have $x_1$ in them, so their derivatives with respect to $x_1$ are zero (just like the derivative of a constant is zero). The derivative with respect to $x_1$ of the middle term produces $- \lambda 2$, which is written by convention as $- 2 \lambda$. Can you do the other two derivatives in step 3?

4. Set the derivatives equal to zero and solve for $x_1\mbox{*}$, $x_2\mbox{*}$, and $\lambda\mbox{*}$.

There are many ways to solve this system of equations, which are known as the first-order conditions. Sometimes, this is the hardest part of the Lagrangean method. Depending on the utility function and constraint, there may not be an analytical solution.
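For this problem an analytical solution does exist, and a computer algebra system can find it straight from the first-order conditions. Here is a minimal sketch using Python's SymPy library (an aside; the book's workflow uses Excel and pencil-and-paper algebra):

```python
from sympy import symbols, solve

x1, x2, lam = symbols("x1 x2 lam", positive=True)

# First-order conditions from L = x1*x2 + lam*(100 - 2*x1 - 3*x2),
# each set equal to zero.
foc = [x2 - 2*lam,           # dL/dx1 = 0
       x1 - 3*lam,           # dL/dx2 = 0
       100 - 2*x1 - 3*x2]    # dL/dlam = 0

print(solve(foc, [x1, x2, lam]))
# x1* = 25, x2* = 50/3 (= 16 2/3), lam* = 25/3 (= 8 1/3)
```

The by-hand algebra below shows where this answer comes from.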
A common strategy involves moving the $\lambda$ terms in the first two equations to the right-hand side and then dividing the first equation by the second one.

$x_2 = 2 \lambda$
$x_1 = 3 \lambda$
$\frac{x_2}{x_1}=\frac{2 \lambda}{3 \lambda}$

The $\lambda$ terms then cancel out, leaving us with two equations (the one above and the third equation from the original three first-order conditions) and two unknowns ($x_1$ and $x_2$).

$\frac{x_2}{x_1} = \frac{2}{3}$
$100 - 2x_1 - 3x_2 = 0$

The top equation has a nice economic interpretation. It says that, at the optimal solution, the MRS (slope of the indifference curve) must equal the price ratio (slope of the budget constraint).

From the top equation, we can solve for $x_2$.

$x_2=\frac{2}{3}x_1$

We can then substitute this expression into the bottom equation (the budget constraint) to get the optimal value of $x_1$.

$100 - 2x_1 - 3\left(\frac{2}{3}x_1\right) = 0$
$100 - 4x_1 = 0$
$x_1\mbox{*} = 25$

Then we substitute $x_1\mbox{*}$ into the expression for $x_2$ to get $x_2\mbox{*}$.

$x_2=\frac{2}{3}[25]$
$x_2\mbox{*}=16\frac{2}{3}$

The asterisk is used to represent the optimal solution for a choice variable. This work says that this consumer should buy 25 units of good 1 and $16 \frac{2}{3}$ units of good 2 in order to maximize satisfaction given the budget constraint. We can use either equation 1 or 2 from the original first-order conditions to find the optimal value of $\lambda$. Either way, we get $\lambda\mbox{*} = 8 \frac{1}{3}$.

For many optimization problems, we would be interested in knowing the numerical value of the maximum by evaluating the objective function (in this case the utility function) at the optimal solution. But recall that utility is measured only up to an ordinal scale and the actual value of utility is irrelevant. We want to maximize utility, but we do not care about its actual maximum value. The fact that utility is ordinal, not cardinal, also explains why the optimal value of lambda is not meaningful. In general, the Lagrangean multiplier tells us how the maximum value of the objective function changes as the constraint is relaxed. With utility as the objective function, this interpretation is not applicable.

Numerical Approach

Instead of calculus (via the method of Lagrange) and pencil and paper, we can use numerical methods to find the optimal solution. To use the numerical approach, we need to do some preliminary work. We have to set up the problem in Excel, carefully organizing things into a goal, endogenous variables, exogenous variables, and constraint. Once we have everything organized, we can use Excel's Solver to get the solution.

STEP Open the Excel workbook OptimalChoice.xls, read the Intro sheet, and then go to the OptimalChoice sheet to see how the numerical approach can be used to solve the problem we worked on above.

Figure 3.2 reproduces the display you see when you first arrive at the OptimalChoice sheet. Notice how the sheet is organized according to the three components of the optimization problem: goal, endogenous, and exogenous variables. The constraint cell displays how much of the consumer's budget remains available for buying goods. The consumer in Figure 3.2 is not using all of the income available so we know satisfaction cannot be maximized at the point 20,10.

STEP Let's have the consumer buy $x_2$ with the remaining \$30. At \$3/unit, 10 additional units of $x_2$ can be purchased. Enter 20 in the $x_2$ cell (B13) and hit the Enter key.
The chart refreshes to display the point 20,20, which is on the budget constraint, and draws three new indifference curves. Although 20,20 does exhaust the available income, it is not the optimal solution. While you know the answer is 25,$16 \frac{2}{3}$, there is another way to tell that the consumer can do better.

STEP Look carefully at the display below the chart. It reveals the MRS does not equal the price ratio. This immediately tells us that something is amiss here.

MRS $> p_1/ p_2$ tells us that the slope of the indifference curve at that point is greater than the slope of the budget constraint. The consumer cannot change the slope of the budget constraint, but the MRS can be altered by choosing a different combination of goods. This consumer needs to lower the MRS (in absolute value) to make the two equal. This can be done by moving down the budget constraint. If the consumer buys 10 more of good 1 (so 30 units of $x_1$ total), consumption of $x_2$ must fall by $6 \frac{2}{3}$ units to $13 \frac{1}{3}$.

STEP Enter 30 in cell B12 and the formula $=13 + 1/3$ in B13. Now you are on the other side of the optimal solution. The MRS is less than the price ratio. You could, of course, continue adjusting the cells manually, but there is a faster way.

STEP Click the Data tab in Excel's Ribbon (on the top of the screen) and click Solver (in the Analyze group) or execute Tools: Solver in older versions of Excel to bring up the Solver Parameters dialog box (displayed in Figure 3.3).

If you do not have Solver available as a choice, bring up the Add-in Manager dialog box and make sure that Solver is listed and checked. If Solver is not listed, you must install it. Solver is included in a standard installation of Excel. For help, try support.office.com or www.solver.com.

Note how Excel's Solver includes information on the objective function (the target cell), the choice variables (the changing cells), and the budget constraint. These have all been filled in for you, but you will learn how to do this yourself in future work.

STEP Since all of the information has been entered into the Solver Parameters dialog box, simply click the Solve button at the bottom of the dialog box.

Excel's Solver works by trying different combinations of $x_1$ and $x_2$ and evaluating the improvement in the target cell, while trying to stay within the constraint. When it cannot improve very much more, it figures it has found the answer and displays a message as shown in Figure 3.4. Although Solver gets the right answer in this problem, we will see in future applications that Solver is not perfect and does not deserve blind trust.

STEP Click the Sensitivity option under Reports and click OK; Excel puts the Solver solution into cells B12 and B13. It also inserts a new sheet into the workbook with the Sensitivity Report.

STEP Click on cells B12 and B13. Notice that Excel did not get exactly 25 and $16 \frac{2}{3}$. It got extremely close and you can certainly interpret the result as confirming the analytical solution, but Solver's output requires interpretation and critical thinking by the user. We will focus on the issue of the exactly correct answer later.

STEP Proceed to the Sensitivity Report sheet (inserted by Solver) to confirm that this numerical method gives substantially the same absolute value for the Lagrangean multiplier that we found via the Lagrangean method ($8 \frac{1}{3}$).
We postpone explanation of this because utility's ordinal scale makes interpretation of the Lagrangean multiplier pointless. For now, we simply note that Solver can report optimal lambda and its results agreed with the Lagrangean method.

You might notice that Excel reports a Lagrangean multiplier value of -8.33 (with a few more trailing 3s) yet our analytical work did not produce a negative number. It turns out that we ignore the sign of $\lambda^{*}$. If we set up the Lagrangean as the objective function minus (instead of plus) lambda times the constraint or rewrite the constraint as $0 = 2x_1 + 3x_2 - 100$ (instead of $0 = 100 - 2x_1 - 3x_2$), we would get a negative value for $\lambda^{*}$ in our analytical work. The way we write the constraint or whether we add or subtract the constraint is arbitrary, so we ignore the sign of $\lambda^{*}$. To be clear, unlike the sign, the magnitude of $\lambda^{*}$ can be meaningful, but it is not in this application because utility is not cardinal. We will, however, see examples where the value of $\lambda^{*}$ is useful and has an economic interpretation.

Using Analytical and Numerical Methods to Find the Optimal Solution

There are two ways to solve optimization problems:

1. The traditional way uses pencil and paper, derivatives, and algebra. The Lagrangean method is used to solve constrained optimization problems, such as the consumer's choice problem.
2. Advances in computers have led to the creation of numerical methods to solve optimization problems. Excel's Solver is an example of a numerical algorithm that can be used to find optimal solutions.

In the chapters that follow, we will continue to use both analytical and numerical approaches. You will see that neither method is perfect and both have strengths and weaknesses.

Exercises

The utility function, $U = 10x - 0.1x^2 + y$, has a quasilinear functional form. Use this utility function to answer the questions below.

1. Suppose the budget line is $100 = 2x + 3y$. Use the analytical method to find the optimal solution. Show your work.
2. Suppose the consumer considers the bundle 0,33.33, buying no x and spending all income on y. Use the MRS compared to the price ratio logic to explain what the consumer will do and why.
3. This utility function can be written in a more general form with letters instead of numbers, like this: $U = ax - bx^c + dy$. If a increases, what happens to the optimal consumption of $x\mbox{*}$? Explain how you arrived at your answer.

Appendix: Derivatives and Optimization

A derivative is a mathematical expression that tells you how y in a function $y = f(x)$ changes given an infinitesimally small change in x. Graphically, it is the slope, or rate of change, of the function at that particular value of x.

Linear functions have a constant slope and, therefore, a constant value for the derivative. For the linear function $y = 6 + 3x$, the derivative of y with respect to x is written $\frac{dy}{dx}$ (pronounced "d y d x") and its value is 3. This tells you that y changes three times as fast as x: if x increases by 1 unit, y will increase by 3 units. This is easy to see in Figure 3.5.

Nonlinear functions have a changing slope and, therefore, a derivative that takes on different values at different values of x. Consider the function $y = 4x - x^2$. Figure 3.6 graphs this function. Its derivative is $\frac{dy}{dx} = 4 - 2x$. When evaluated at a specific point, such as $x = 1$, the derivative is the slope of the tangent line at that point.
Unlike the previous case, this derivative has x in it. This means this function is nonlinear. The slope depends on the value of x. At $x=1$, the derivative is 2, but at $x=2$, it is zero ($4-2[2]$) and at $x=3$, it is -2 ($4-2[3]$).

In addition, because it is nonlinear, the size of the change in x affects the measured rate of change. For example, the change in y from $x = 1$ to $x = 2$ is 1 (because we move from $y = 3$ to $y = 4$ as we increase x by 1). If we increase x by a smaller amount, say 0.1 (from 1 to 1.1), then $\frac{\Delta y}{\Delta x}=\frac{3.19-3}{1.1-1} = 1.9$. By taking a smaller change in x, we get a different measure of the rate of change.

If we compute the rate of change via the derivative, by evaluating $4 - 2x$ at x = 1, we get 2. The derivative computes the rate of change for an infinitesimally small change in x. The smaller the change in x, the closer $\frac{\Delta y}{\Delta x}$ gets to $\frac{dy}{dx}$. You can see this happening as $\frac{\Delta y}{\Delta x}$ went from 1 to 1.9 as $\Delta x$ fell from 1 to 0.1. If we go even smaller, making $\Delta x$ = 0.01 (going from 1 to 1.01), then $\frac{\Delta y}{\Delta x}=\frac{3.0199-3}{1.01-1} = 1.99$.

Optimizing with the Derivative

An optimization problem typically requires you to find the value of an endogenous variable (or variables) that maximizes or minimizes a particular objective function. We can use derivatives to find the optimal solution. This is called an analytical approach.

If we draw tangent lines at each value of x in Figure 3.6, only one would be horizontal (with derivative and slope of zero) and that would be the one at the top. This gives us a solution strategy: to find the maximum, find the value of x with the flat tangent line. This is equivalent to finding the value of x where the derivative is zero. By solving for the value of x where $\frac{dy}{dx} = 0$, we find the optimal solution. For $y = 4x - x^2$, this is easy. We set the derivative equal to zero and solve for $x\mbox{*}$.

$\frac{dy}{dx} = 4 - 2x\mbox{*} = 0$
$4 = 2x\mbox{*}$
$x\mbox{*} = 2$

The equation that you make when you set the first derivative equal to zero is called the first-order condition. The first-order condition is different from the derivative because the derivative by itself is not equal to anything: you can plug in any value of x and the derivative expression will pump out an answer that tells you whether and by how much the function is rising or falling at that point. The first-order condition is a special situation in which you are using the derivative to find a horizontal tangent line to figure out where the function has a flat spot.

A reduced form is the answer that you get when the derivative is set equal to zero and solved for the optimal solution. It may be a number or a function of exogenous variables. It cannot have any endogenous variables in the expression. Sometimes, you cannot solve explicitly for $x\mbox{*}$. We say there is no closed form solution in these cases. The solution may exist (and numerical methods may be used to find it), but we cannot express the answer as an equation.

The second derivative is the derivative of the first derivative. It tells you the slope of the slope function. For example, if a function has a constant slope, we saw that its first derivative is a constant value (like 3 in the first example above). Then the second derivative is zero.
Second derivatives are useful in optimization for the following reason: when you find the value of the endogenous variable that makes the first derivative equal to zero, the point that you have located could be either a maximum or a minimum. If you want to be sure which one you have found, you can check the second derivative. For $y = 4x - x^2$, the first derivative is $4 - 2x$ and the second derivative is, therefore, -2. Because the second derivative is negative, we know that our flat spot at x = 2 is a maximum and not a minimum. In this book, we will not use second derivatives to check that our solutions are truly maxima or minima. Our functions will be (mostly) well behaved and we will focus on the economics of the problem, not the mathematics.

In summary, derivatives are used to measure the rate of change of a function based on a vanishingly small change in x. If we set a derivative equal to zero, we are trying to find an optimal solution by finding a value for x where the tangent line is flat. This solution strategy is based on the idea that a point where the tangent line is horizontal must mean that we are at the top of the function (or bottom, if we are minimizing).

Useful Math Facts

This appendix concludes with a short list of common rules for taking derivatives and working with exponents. The idea here is to sharpen your math skills so you can solve optimization problems analytically.

A derivative can be computed by directly applying the definition, i.e., taking the limit of the ratio of the change in y to the change in x as the change in x approaches zero. Fortunately, however, there is an easier way. Differentiation rules have been developed that make it much less tedious to take a derivative. Most calculus books have inside covers that are full of rules. Many students never grasp that these rules are actually shortcuts. Here is a short list, with special emphasis on those used in economics.

The derivative rules are followed by a few algebra rules relating to legal operations on exponents. We will use these rules often to find optimal solutions and reduce complicated expressions to simpler final answers. Reading these equations is boring and tedious, but may save a lot of time and effort in the future (especially if your math is rusty). You should consider writing out the examples for a different number, say 6. So, instead of $x^4$, what is the derivative with respect to x for $x^6$?

Derivative Rules

Let x be the variable and a be a constant. The rules used most often in this book are:

$\frac{d}{dx}(a) = 0$
$\frac{d}{dx}(ax) = a$
$\frac{d}{dx}(x^a) = ax^{a-1}$

When you take a derivative of a function with respect to a variable, you apply the rules to the different parts of the function. For example, if $y = 4x - x^2$, then you apply the $\frac{d}{dx}(ax) = a$ rule to $4x$, getting 4. You apply the $\frac{d}{dx}(x^a) = ax^{a-1}$ rule to $- x^2$ and get $- 2x$. Thus, the derivative of y with respect to x is $\frac{dy}{dx} = 4 - 2x$. There are other calculus rules, of course, such as the chain rule, but we will explain them when they are needed.
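A computer algebra system applies these same shortcut rules mechanically. The following short sketch, using Python's SymPy library (an aside to the book's Excel-and-paper workflow), verifies the appendix's running example:

```python
from sympy import symbols, diff, solve

x = symbols("x")
y = 4*x - x**2

dydx = diff(y, x)      # the derivative rules give 4 - 2x
print(dydx)            # 4 - 2*x
print(solve(dydx, x))  # first-order condition: the flat spot is at x* = 2
print(diff(y, x, 2))   # second derivative is -2, so x* = 2 is a maximum
```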
We know there are two approaches to solving optimization problems.

1. Analytical methods using algebra and calculus (conventional, paper and pencil, using the Lagrangean method): The idea is to transform the consumer's constrained optimization problem into an unconstrained problem and then solve it using standard unconstrained calculus techniques, i.e., take derivatives, set equal to zero, and solve the system of equations.
2. Numerical methods using a computer (Excel's Solver): Set up the problem in Excel, carefully organizing things into a goal, endogenous variables, exogenous variables, and constraint; then use Excel's Solver. Use the Sensitivity Report in the Solver Results dialog box to get $\lambda\mbox{*}$.

In this chapter, we apply both methods on a new problem.

Quasilinear Utility Practice Problem

A utility function that is composed of a nonlinear function of one good plus a linear function of the other good is called a quasilinear functional form. It is quasi, or sort of, linear because one good increases utility in a linear fashion and the other does not. A general example is $u(x_1, x_2) = f(x_1) + x_2$, where f is a nonlinear function; a more specific example is $u(x_1, x_2) = x_1^c + x_2$.

If $c < 1$, then the quasilinear utility function says that utility increases at a decreasing rate as $x_1$ increases, but utility increases at a constant rate as $x_2$ increases. The optimization problem is to maximize this utility function subject to the usual budget constraint. It is written in equation form like this:

$\max\limits_{x_1,x_2}x_1^c + x_2 \quad \textrm{s.t. } p_1x_1 + p_2x_2 = m$

We will solve the general version of this problem, with letters representing exogenous variables instead of numbers, using the Lagrangean method.

1. Rewrite the constraint so that it is equal to zero.

$0 = m - p_1x_1 - p_2x_2$

2. Form the Lagrangean function.

$\max\limits_{x_1,x_2,\lambda} {\large\textit{L}} = x_1^c + x_2 + \lambda(m -p_1x_1 - p_2x_2)$

Note that the Lagrangean function, L, has the quasilinear utility function plus the Lagrangean multiplier, $\lambda$, times the rewritten constraint.

Unlike the concrete problem in the previous chapter, which used numerical values, this is a general problem with letters indicating exogenous variables. General problems, without numerical values for exogenous variables, are harder to solve because we have to keep track of many variables and make sure we understand which ones are endogenous versus exogenous. If the solution can be written as a function of the exogenous variables, however, it is often easy to see how an exogenous variable will affect the optimal solution.

3. Take partial derivatives with respect to $x_1$, $x_2$, and $\lambda$.

$\frac{\partial L}{\partial x_1} = cx_1^{c-1} - \lambda p_1$
$\frac{\partial L}{\partial x_2} = 1 - \lambda p_2$
$\frac{\partial L}{\partial \lambda} = m - p_1x_1 - p_2x_2$

Remember that the partial derivative treats other variables as constants. Thus, the partial derivative of the quasilinear utility function with respect to $x_1$ has no $x_2$ variable in it.

4. Set the derivatives equal to zero and solve for $x_1\mbox{*}$, $x_2\mbox{*}$, and $\lambda\mbox{*}$.

We use the same solution method as before, moving the lambda terms to the right-hand side and then dividing the first equation by the second, which allows us to cancel the lambda terms.

$\frac{cx_1^{c-1}}{1} = \frac{\lambda p_1}{\lambda p_2} = \frac{p_1}{p_2}$

By canceling the lambda terms, we have reduced the three equation, three unknown system to two equations with two unknowns. Remember that not all variables are the same. The endogenous variables, the unknowns, are $x_1$ and $x_2$. The other letters are exogenous variables.
From the first equation, we can solve for the optimal quantity of good 1 (see the appendix to the previous section if these steps are confusing).

$cx_1^{c-1} = \frac{p_1}{p_2}$
$x_1^{c-1} = \frac{p_1}{cp_2}$
$x_1\mbox{*} = \left(\frac{p_1}{cp_2}\right)^{\frac{1}{c-1}}$

Notice that we used the rule that $(x^a)^b = x^{ab}$. Because we wanted to solve for $x_1$, we raised both sides to the $\frac{1}{c-1}$ power so that the $c - 1$ exponent on $x_1$ times $\frac{1}{c-1}$ would equal 1.

Usually, when we have the MRS equal to the price ratio, we need to solve for one of the x variables in terms of the other and substitute it into the budget constraint. However, a property of the quasilinear utility function is that the MRS only depends on $x_1$; thus by solving for $x_1$, we get the reduced form solution. When solving a problem in general terms, the answer must be expressed as a function of exogenous variables alone (no endogenous variables) and this is called a reduced form.

To get $x_2$, we simply substitute $x_1\mbox{*}$ into the budget constraint and solve for $x_2$.

$x_2\mbox{*} = \frac{m - p_1\left(\frac{p_1}{cp_2}\right)^{\frac{1}{c-1}}}{p_2}$

It is a bit messy, but it is the answer. We have an expression for the optimal amount of $x_2$ that is a function of exogenous variables alone.

To get the optimal value of lambda, we can use the second first-order condition, which simply says that $\lambda \mbox{*} = \frac{1}{p_2}$. If you use the first condition, substituting in the value for optimal $x_1$, it will take a little work, but you will get the same result.

Practice with the MRS = $\frac{p_1}{p_2}$ Logic

Economists stress marginal thinking. The idea is that, from any position, you can move and see how things change. If there is improvement, continue moving. The optimal solution is on a flat spot, where improvement is impossible. When we move the lambda terms over to the right-hand side and divide the first equation by the second equation, we get a crucial statement of the fact that improvement is impossible and we are optimizing. The familiar MRS equals the price ratio expression, along with the third first-order condition, which says that the consumer must be on the budget line (exhausting all income), is a mathematical way of describing marginal thinking.

The MRS condition tells us that if the MRS is not equal to the price ratio, there are two possibilities, depicted in Figure 3.7. In Panel A, the slope of the indifference curve at point A is greater than the slope of the budget line (in absolute value). This consumer should crawl down the budget line, reaching higher indifference curves, until the MRS equals the price ratio. At this point, the slope of the indifference curve will exactly equal the slope of the budget line and the consumer's indifference curve will just touch the budget line. The consumer cannot possibly get to a higher indifference curve and stay on the budget constraint. This is the best possible solution.

In Panel B, the story is the same, but reversed. The slope of the indifference curve at point B is less than the slope of the budget line. This consumer should crawl up the budget line, reaching higher indifference curves, until the MRS equals the price ratio. At this point, the slope of the indifference curve will exactly equal the slope of the budget line and the consumer's indifference curve will just touch the budget line.

Numerical Approach to Quasilinear Practice Problem

STEP Open the Excel workbook OptimalChoicePractice.xls, read the Intro sheet, and then go to the QuasilinearChoice sheet to see how the numerical approach can be used to solve this problem.

It is easy to see that the consumer cannot afford the bundle 5,20 given the prices and income on the sheet.
If she buys five units of $x_1$, what's the maximum $x_2$ she can buy?

STEP Enter this amount in cell B12. Do the chart and cell B21 confirm that you got it right?

If you entered 13 in B12, then the chart updates and shows that the consumer is now on the budget line. In addition, the constraint cell, B21, is now zero. Without running Solver or doing any calculations at all, is she maximizing at 5,13?

The answer is that she is not. It's hard to see on the chart whether the indifference curve is cutting the budget line, but the information below the chart shows that the MRS is not equal to the price ratio. That tells you that the indifference curve is, in fact, not tangent to the budget line so the consumer is not optimizing. Because the MRS is greater than the price ratio (in absolute value) we also know that the consumer should buy more $x_1$ and less $x_2$, moving down the budget line until the marginal condition is satisfied. Let's find the optimal solution.

STEP Run Solver. Select the Sensitivity Report to get $\lambda\mbox{*}$.

How does Excel's answer compare to our analytical answer? Recall that we found:

$x_1\mbox{*} = \left(\frac{p_1}{cp_2}\right)^{\frac{1}{c-1}}$
$x_2\mbox{*} = \frac{m - p_1x_1\mbox{*}}{p_2}$

STEP Create formulas in Excel to compute these two solutions (using cells C11 and C12 would make sense). This requires some care with the parentheses. Here is the formula for good 1: =(p1_/(c_*p2_))^(1/(c_-1)).

You should discover that Excel's Solver is quite close to the exactly correct solution, 6.25, 12.75. We conclude that the two methods, analytical and numerical, substantially agree.

It is true, however, that Solver is ever so slightly off the computed analytical result. In general, there are two reasons for minuscule disagreement between the two methods.

1. Excel cannot display the algebraic result to an infinite number of decimal places. If the solution is a repeating decimal or irrational number, Excel cannot handle it. Even if the number can be expressed as a decimal (for example, one-half is 0.5), precision error may occur during the computation of the final answer. This is not the source of the discrepancy in this case.
2. Excel's Solver often misses the exactly correct answer by small amounts. Solver has a convergence criterion (that you can set via the Options button in the Solver Parameters dialog box) that determines when it stops hunting for a better answer. Figure 3.8 offers a graphical representation of Solver's algorithm in a one-variable case.

The stylized graph (which means it represents an idea without using actual data) in Figure 3.8 shows that Solver works by trying different values and seeing how much improvement occurs. The path of the choice variable (on the x axis) is determined by Solver's internal optimization algorithm. By default, it uses Newton's method (a steepest descent algorithm), but you can choose an alternative by clicking the Options button in the Solver dialog box.

When Solver takes a step that improves the value of the objective function by very little, determined by the convergence criterion (adjustable via the Options button), it stops searching and announces success. In Figure 3.8, Solver is missing the optimal solution by a little bit because, if we zoomed in, the objective function would be almost flat at the top. Solver cannot distinguish additional improvement.

When we say that the analytical method agrees with Solver, we do not mean that the two methods exactly agree, but simply that they correspond, in a practical sense. If Solver is off the exact answer in the 15th decimal place, that is agreement, for all practical purposes.
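The reduced forms make it easy to check Solver's answer independently. The Python sketch below uses parameter values (c = 0.5, p1 = 2, p2 = 10, m = 140) that the text never states directly but that are consistent with the bundles discussed (5,13 exhausts income and the optimum is 6.25, 12.75), so treat them as assumptions:

```python
# Reduced forms for quasilinear utility u = x1**c + x2:
#   x1* = (p1 / (c * p2)) ** (1 / (c - 1))
#   x2* = (m - p1 * x1*) / p2
c, p1, p2, m = 0.5, 2, 10, 140  # assumed values consistent with the text

x1_star = (p1 / (c * p2)) ** (1 / (c - 1))
x2_star = (m - p1 * x1_star) / p2
print(x1_star, x2_star)  # 6.25 12.75, the exactly correct solution
```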
Furthermore, it is easy to conclude that Solver must give an exact answer because it displays so many decimal places. This is incorrect. Solver’s display is an example of false precision. It is not true that the many digits provide useful information. The exact answer is 6.25 and 12.75. What you are seeing is Solver noise. You must learn to interpret Solver’s results as inexact and not report all of the decimal places.

There is another way in which Solver can fail us and it is much more serious than incorrectly interpreting the results.

Solver Behaving Badly

STEP Start from $x_1 = 1, x_2 = 20$ to see a demonstration that Solver is not perfect. After setting cells B11 and B12 to 1 and 20, respectively, run Solver. What happens?

A miserable result (an actual, technical term in the numerical methods literature) occurs when an algorithm reports that it cannot find the answer or displays an obviously erroneous solution. Figure 3.9 displays an example of a miserable result. Solver is clearly announcing that it cannot find an answer. If you look carefully at the spreadsheet (click Cancel or OK if needed to return to the sheet), you will see that Solver blew up when it tried a negative value for $x_1$. The objective function cell, B7, is displaying the error #NUM! because Excel cannot take the square root of a negative number. To be clear, when we start from 1,20, Excel tries to move left and crosses over the y axis into negative $x_1$ territory. Since the utility function includes $x_1^{0.5}$, it tries to take the square root of a negative number, producing an error and crashing the algorithm.

When Solver fails, there are three basic strategies to fix the problem (the third is demonstrated in the short sketch at the end of this section):

1. Try different initial values (in the changing cells). If you know roughly where the solution lies, start near it. Always avoid starting from zero or a blank cell.

2. Add more structure to the problem. Include non-negativity constraints on the endogenous variables, if appropriate. In the case of consumer theory, if you know the buyer cannot buy negative amounts, add this information.

3. Completely reorganize the problem. Instead of directly optimizing, you can put Solver to work on equations that must be met. In this problem, you know that MRS $= \frac{p_1}{p_2}$ is required. You could create a cell that is the difference between the MRS and the price ratio and have Solver find the value of the choice variable that forces this cell to equal zero.

Let’s try the second strategy.

STEP Reset the initial values to 1 and 20, then launch Solver (click the Data tab and click Solver) and click the Add button (at the top of the stacked buttons on the right). Solver responds by popping up the Add Constraint dialog box.

STEP Select both of the endogenous variables in the Cell Reference field, select $>=$, and enter 0 in the Constraint field so that the dialog box looks like Figure 3.10. Click OK.

You are returned to the main Solver Parameters dialog box, but you have added the constraint that cells B11 and B12 must be non-negative. You might notice that you could have simply clicked the Make Unconstrained Variables Non-Negative option, but adding the constraint explicitly shows how to work with constraints.

STEP Once back at the main Solver Parameters dialog box, click Solve. This time, Solver succeeds. Adding the non-negativity constraint prevented Solver from trying negative $x_1$ values and producing an error.
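The third strategy can also be tried outside Excel. Here is a minimal Python sketch (not part of the original workbook), again assuming the inferred parameter values $c = 0.5$, $p_1 = 2$, $p_2 = 10$, $m = 140$: instead of maximizing utility directly, we hand a root-finder the equation MRS minus price ratio equals zero.

from scipy.optimize import brentq

c, p1, p2, m = 0.5, 2.0, 10.0, 140.0
mrs_gap = lambda x1: c * x1**(c - 1) - p1 / p2   # MRS minus the price ratio
x1_star = brentq(mrs_gap, 0.01, 100)             # bracket excludes zero, so no #NUM!-style crash
x2_star = (m - p1 * x1_star) / p2                # the budget line pins down x2
print(x1_star, x2_star)                          # 6.25 12.75

Because the root-finder works on a bracket that stays strictly positive, it never evaluates the square root of a negative number, which is precisely what crashed Solver above.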
Perfect Complements Practice Problem

Recall that L-shaped indifference curves represent perfect complements, which are reflected via the following mathematical function:

$u(x_1,x_2) = \min\{ax_1, bx_2\}$

Suppose $a = b = 1$ and the budget line is $50 = 2x_1 + 10x_2$.

First, we want to solve this problem analytically. The Lagrangean method cannot be applied because the function is not differentiable at the corner of the L. The Lagrangean method, however, is not the only analytical method available. Figure 3.11 shows that when $a = b = 1$, the optimal solution must lie on a ray from the origin with slope $+1$. The optimal solution has to be on the corner of the L-shaped indifference curves because at a non-corner point (on either the vertical or horizontal part of the indifference curve) the consumer is spending money on more of one of the goods without getting any additional satisfaction.

Thus, we know that the optimal solution must lie on the line $x_2 = x_1$. We can combine this optimal solution equation with the budget constraint to find the optimal solution. The two equation, two unknown system can be solved easily by substitution:

$50 = 2x_1 + 10x_1 = 12x_1 \implies x_1\mbox{*} = \frac{50}{12} = 4\frac{1}{6}$

Of course, we know $x_2 = x_1$, so optimal $x_2$ is also $4\frac{1}{6}$.

Can Excel do this problem and do we get the same answer? Let’s find out.

STEP Proceed to the PerfectComplements sheet to see how we set up the spreadsheet in Excel. Click on cell B7 to see the utility function.

STEP Run Solver and get a Sensitivity Report.

Solver can be used to generate a value for the Lagrangean multiplier (via the Sensitivity Report) even though we could not use the Lagrangean method in our analytical work. As with the previous problem (with quasilinear utility), we find that Solver and the analytical approach substantially agree. The answer is a repeating decimal, so Excel cannot get the exact answer, $4\frac{1}{6}$, but it is really close.

Previously, we saw that Solver could crash and give a miserable result. Now, let’s learn that Solver can really misbehave.

STEP Starting from $x_1 = 1, x_2 = 1$, run Solver. What happens?

You are seeing an example of a disastrous result, which occurs when an algorithm reports that it has found the answer, but it is wrong. There is no obvious error and the user may well accept the answer as true. Solver reports a successful outcome, but the answer it gives is 1,1 and we know the right answer is $4\frac{1}{6}$ for both goods.

Disastrous results include an element of interpretation. In this case, we might notice that 1,1 is way inside the budget constraint and, therefore, the algorithm has failed. A truly disastrous result occurs when there is no way to independently test or verify the algorithm’s wrong answer.

Miserable and disastrous results are well-defined, technical terms in the mathematical literature on numerical methods. Disastrous results are much more dangerous than miserable results. The latter are frustrating because the computer cannot provide an answer, but disastrous results lead the user to believe an answer that is actually wrong. In the world of numerical optimization, they are a fact of life. Numerical methods are not perfect. You should never completely trust any optimization algorithm.

Understanding Solver: Be Skeptical

This chapter enabled practice solving the consumer’s constrained optimization problem with two different utility functions, a quasilinear function and perfect complements. In both cases, we found that Excel’s Solver agreed, practically speaking, with the analytical method.
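One way to put this skepticism into practice (a sketch, not from the original text) is to automate a simple check of any reported optimum: verify that it is on the budget line and that no nearby feasible point does better. The parameter values below are the perfect complements problem from this section.

def sanity_check(x1, x2, u, m=50.0, p1=2.0, p2=10.0, step=0.1):
    # A reported optimum must exhaust income...
    if abs(m - p1*x1 - p2*x2) > 1e-6:
        return False                      # Solver's disastrous 1,1 fails right here
    # ...and must beat its feasible neighbors on the budget line.
    for x1_new in (x1 - step, x1 + step):
        x2_new = (m - p1 * x1_new) / p2
        if x1_new >= 0 and x2_new >= 0 and u(x1_new, x2_new) > u(x1, x2) + 1e-9:
            return False                  # found an improvement: not optimal
    return True

u = lambda a, b: min(a, b)                # perfect complements with a = b = 1
print(sanity_check(1, 1, u))              # False: way inside the budget constraint
print(sanity_check(50/12, 50/12, u))      # True: the corner of the L on the budget line

This kind of independent check is exactly what saves you from a disastrous result: the 1,1 answer fails immediately, even though Solver announced success.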
The ability to solve optimization problems with two independent methods means we can be really sure we have found an optimal solution when they give the same answers. In addition, we explored how Solver actually works. It evaluates the objective function for different values of the choice variables. It continues searching for a better solution until it cannot improve by much (an amount determined by the convergence criterion).

Solver can fail by reporting that it cannot find a solution (called a miserable result) or, even worse, by reporting an incorrect answer with no obvious error (which is a disastrous result). It is easy to believe that a result displayed by a computer is guaranteed to be correct. Do not be careless and trusting: numerical methods can and do fail, sometimes spectacularly.

This point deserves careful repetition. You run Solver and it happily announces that a solution has been found and offers up a 15 or 16 digit number for your inspection. The problem, however, is that the solution is way off. Not in the millionth or even tenth decimal place, but completely, totally wrong. How this might happen takes us too far afield into the land of numerical optimization, but suffice it to say that you should always ask yourself if the answer makes common sense.

Solver really is a powerful way to solve optimization problems, but it is not perfect. You need to always remember this. After running Solver, format the results with an eye toward ease of understanding and think about the result itself. Do not mindlessly accept a Solver result. Stay alert even if Solver claims to have hit pay dirt: it may be a disastrous result!

More explanation of Solver is available in the SolverInstructions.doc file in the SolverCompStaticsWizard folder.

Exercises

1. In the quasilinear example in this chapter, use the first equation in the first-order conditions to find $\lambda\mbox{*}$. Show your work.

2. Use analytical methods to find the optimal solution for the same perfect complements problem as presented in this chapter, except that $a = 4$ and $b = 1$. Show your work.

3. Draw a graph (using Word’s Drawing Tools) of the optimal solution for the previous question.

4. Use Excel’s Solver to confirm that you have the correct answer. Take a picture of the cells that contain your goal, endogenous variables, and exogenous variables.
This chapter applies the consumer choice model to a real-world example. We will see that the model can be used to explain why someone would illegally sell food stamps. We also tackle an important policy question: If cash dominates food stamps, why not just help low-income people by giving them cash?

A Short History of Food Assistance in the United States

The primary responsibility for ensuring that poor people (including children) in the United States have enough to eat lies with the Department of Agriculture (USDA). It runs a program that enables low-income people to spend government-provided benefits on eligible food in stores. The USDA’s web page, www.fns.usda.gov/snap/short-history-snap, is the source of the information below. The Data and Research tab on the USDA’s website has usage and cost data: there are around 40 million participants and the program spends roughly $70 billion per year. This is one of the largest transfer programs in the fight against poverty. It offers critical support for low-income households.

The first Food Stamp Program, in 1939, was very different from today’s version. Originally, "the program operated by permitting people on relief to buy orange stamps equal to their normal food expenditures. For every $1 worth of orange stamps purchased, 50 cents worth of blue stamps were received. Orange stamps could be used to buy any food. Blue stamps could only be used to buy food determined by the Department to be surplus."

Important changes were made in the 1960s and, in 1977, the purchase requirement was eliminated. Households below the poverty line who met other criteria (such as work or study requirements) were eligible to receive food stamps. Figure 3.12 shows that these stamps were like paper currency; they were rectangular, but only about half the size of a dollar bill. There were different dollar denominations in a booklet. When buying food at the supermarket, the consumer tore out the stamps and paid for the food. Any non-food items had to be paid for with cash or a check.

In 2008, the program was renamed the Supplemental Nutrition Assistance Program (SNAP) to avoid stigma. It could be embarrassing to pay with food stamps since everyone in line immediately knew that you were receiving government assistance. Today, both names, food stamps and SNAP, are used.

SNAP has always been battered by politics, with benefits expanding and contracting depending on the rhetoric of the day. There are the usual arguments over administrative costs, but cheating on the part of recipients has been an especially contentious issue. In 2002, all states were required to use Electronic Benefits Transfer (EBT) cards. This was supposed to stop the illegal sale of food stamps (and reduce stigma), but fraud remains a focus of critics.

We can model and analyze food stamps with the Theory of Consumer Behavior. We will focus on how food stamps can be incorporated into the consumer’s optimization problem and why selling food stamps is so difficult to stop.

Food Stamp Theory

Recall from the Budget Constraint chapter that food stamps are a subsidy that produces a budget constraint with a horizontal segment, as shown in Figure 3.13. We use the $x_1$ variable on the x axis to represent units of food. The $x_2$ variable on the y axis captures all other goods lumped together. We get the flat part of the constraint because food stamps can be used to buy only food.

STEP Open the Excel workbook FoodStamp.xls and read the Intro sheet. Proceed to the BudgetConstraint sheet. Change cell E13 from 10 to 20.
Notice that the horizontal segment, which is the monetary value of the food stamps divided by the price of food, gets longer. Also notice that the chart on the right, showing the budget constraint if the food stamp amount were treated as cash, has no horizontal segment. In the chart on the right, the value of the food stamp subsidy is computed (xbar times the price of food) and then added to income as if it were cash; hence the name, cash-equivalent subsidy.

It should be quite clear that the cash-equivalent subsidy provides consumption possibilities that are unattainable above the horizontal segment of the food stamp budget constraint. The most other goods the food stamp recipient can buy is $33\frac{1}{3}$ units, while the cash-equivalent consumer can buy 40 units of $x_2$.

STEP Proceed to the Inframarginal sheet. It combines a food stamp budget constraint with a Cobb-Douglas utility function.

The word inframarginal (or submarginal) means below the edge or margin. The edge in this case is the kink in the budget constraint. This consumer is inframarginal because his optimal solution is on the downward sloping part of the budget line, below the kink. He will use up his food stamp allotment on food and then spend some of his cash income to get additional food. The sheet reveals that he buys 35 units of food (valued at $70, as shown in cell B15), 20 of which he obtains with food stamps and the remaining 15 he buys with cash. We can easily see that he is optimizing because the “MRS equals the price ratio” condition is met. This is reflected in the graph, where the highest attainable indifference curve is just touching the budget constraint.

STEP Click on cell B25 to see the formula for the budget constraint.

This formula uses an IF statement to implement the constraint in Excel. Expressed as equations, the budget line looks like this:

$x_2 = \frac{m}{p_2} \textrm{ if } x_1 \leq \overline{x_1} \ x_2 = \frac{m}{p_2} - \frac{p_1}{p_2}(x_1 - \overline{x_1}) \textrm{ if } x_1 > \overline{x_1}$

The first equation says that if the consumer buys an amount of food that is less than or equal to xbar, that frees up his whole cash income to spend on good 2. This is the horizontal line component. Things are more complicated if the consumer wants more than xbar of food. The second equation says that the consumer will have to use cash to buy amounts of $x_1$ greater than xbar and it computes the amount of $x_2$ that can be purchased as a function of $x_1$.

This constraint (rewritten to equal zero) has been entered in a single cell with an IF statement:

=IF(x1_<x1bar,m/p2_-x2_,m/p2_-(p1_/p2_)*(x1_-x1bar)-x2_)

The underscore (_) character is used in the variable names to distinguish them from cell addresses; e.g., p2_ is not cell P2. From Excel’s Help on the IF function: Returns one value if a condition you specify evaluates to TRUE and another value if it evaluates to FALSE. Use IF to conduct conditional tests on values and formulas. Syntax: IF(logical_test, value_if_true, value_if_false).

Applying this information to the formula in cell B25, we can see that it has three parts, separated by commas. The first part says that if $x_1 < \overline{x_1}$ (that is the condition being evaluated), then the consumer can buy $\frac{m}{p_2}$ amount of $x_2$ (this second part produces the horizontal line in the budget constraint), else (the third part is what happens if $x_1$ is not less than x1bar) the consumer can buy $x_2$ along the downward sloping part of the budget line.

This problem shows that Excel can be used to handle complicated examples in the Theory of Consumer Behavior.
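As an aside (not in the original), the same IF logic is easy to mirror outside Excel. In this Python sketch, the parameter values ($m = 100$, $p_1 = 2$, $p_2 = 3$, $\overline{x_1} = 20$) are inferred from the numbers reported for the Inframarginal sheet, so treat them as assumptions.

def x2_on_budget(x1, m=100.0, p1=2.0, p2=3.0, x1bar=20.0):
    # Mirrors the Excel formula: =IF(x1_<x1bar, m/p2_, m/p2_-(p1_/p2_)*(x1_-x1bar))
    if x1 <= x1bar:
        return m / p2                              # flat segment: stamps cover the food
    return m / p2 - (p1 / p2) * (x1 - x1bar)       # sloped segment: cash buys the rest

print(x2_on_budget(10), x2_on_budget(35))          # 33.33... on the flat part; 23.33... at 35 units of food

The second printed value is the amount of other goods the inframarginal consumer can afford at his food choice of 35 units.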
This food stamp problem has a kinked budget constraint, but using Excel’s IF statement allows us to implement the constraint in the workbook and use Solver to find the optimal solution. This problem can also be solved via analytical methods, but it is cumbersome and difficult to deal with the kinked budget constraint. We will use the easier numerical approach to conduct our analysis.

STEP Proceed to the Distorted sheet.

This sheet is exactly the same as the Inframarginal sheet with one crucial exception: the preferences, in cells B21 and B22, are different. The consumer in the Distorted sheet prefers other goods more and food less than the consumer in the Inframarginal sheet. The change in exponents in the Cobb-Douglas utility function has affected the indifference map. The curves are much flatter in the Distorted sheet compared with the Inframarginal sheet.

The Distorted sheet opens with the optimal values for food and other goods from the Inframarginal sheet. It is obvious that the MRS does not equal the price ratio and the indifference curve is cutting the budget constraint at the current bundle of $x_1$ and $x_2$. This consumer is not optimizing at this point.

Corner Solution

STEP Run Solver on the Distorted sheet.

Solver announces it has found the optimal solution, yet the MRS still does not equal the price ratio. Is this really the optimal solution? Yes, it is the optimal solution. We have encountered what is called a corner solution (or boundary optimum). In this case, the equimarginal condition, MRS $= \frac{p_1}{p_2}$, does not hold because the optimal solution is found at one of the end points (or corners) of the constraint.

STEP To see what is happening here, copy the optimal solution from the Inframarginal sheet (copy cells B13 and B14) and paste it into the Distorted sheet (select cells B13 and B14 and then paste).

The graph and the MRS are immediately updated and you can see that the distorted consumer would not select the inframarginal consumer’s bundle. Which way should this consumer move, up or down the budget line? The graph makes clear that up is the right way to go, but you should notice that the marginal condition, MRS $< \frac{p_1}{p_2}$, tells you the same thing.

STEP Click the button. Click a few more times and pay attention to the chart and the MRS in cell H26. Also keep an eye on utility in cell B9.

Each click lowers the amount of $x_1$ by one unit and increases the amount of $x_2$ by $\frac{2}{3}$. By moving up the budget line, this consumer is improving her satisfaction and closing the gap between the MRS and the price ratio. Do not be misled by the display – the indifference curves are not shifting. Remember that the indifference map is dense, meaning that every point has an indifference curve through it. We cannot draw in all of the indifference curves because the graph would then be solid black. The consumer is simply moving from one indifference curve to another one that was not previously displayed.

STEP Keep clicking the button. Eventually, you will hit the kink in the budget line and you will not be able to move northwest any longer. Instead, you will be on the horizontal segment and as you move strictly west, utility falls.

Notice that the price ratio is now showing zero. On the flat part of the budget line, when the amount of food purchased is less than or equal to how much food can be bought with food stamps alone, it makes sense that additional food is free, in terms of spending cash on food.
The consumer simply has to use the available food stamps to acquire food and this does not reduce cash income. Once you are on the flat part of the budget line, you should see that the graph and the marginal condition point you to choosing more food.

STEP Click on the button repeatedly to move east and, eventually, down the budget line. Use the two buttons to crawl up and down until you find the bundle that maximizes utility.

You should end your travels at the kink – and the MRS does not equal the price ratio there! This happens because the complicated constraint is producing a corner solution. The distorted consumer wishes she could continue crawling up the downward sloping line, consuming less than the food stamp allotment of food and more of other goods, but she cannot do this. She cannot use food stamps to buy other goods. Thus, her best, or optimal, solution is at the kink.

In a corner solution, we accept that the "MRS equals the price ratio" condition is not met. We really are maximizing even though the MRS does not equal the price ratio. We have found the best we can do given the constraints on our choices. Another way to explain what is happening is that we always want to minimize $|MRS - \frac{p_1}{p_2}|$. With an interior solution, we can make this difference zero, but with a corner solution, we cannot because a constraint is preventing us from reaching MRS $= \frac{p_1}{p_2}$. However, a corner solution does give us the lowest $|MRS - \frac{p_1}{p_2}|$ value and we are doing the best we can at this solution. Corner solutions are an important concept and we will see them again in future work. They arise whenever we are prevented from continuing to improve by going in a particular direction.

Cash Instead of Food Stamps

STEP Proceed to the Cash sheet. Notice that cell B24 computes the cash value of the food stamps and that the chart has a linear budget constraint with no kink. Click cell B25 to see that the constraint is the familiar income minus expenditures, with total income equal to cash income plus the cash value of the food stamps.

The idea here is that instead of giving food stamps, we provide low-income people with the cash-equivalent value. They are no longer constrained to buy food alone, but can purchase any goods with the cash received. The cash subsidy shifts the budget line out, with no kink or horizontal segment like we saw with the food stamp program.

The sheet opens with the inframarginal consumer’s optimal solution. It is the same as before, when he was given food stamps. Cash or food stamps are the same to this consumer.
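As an aside (not from the workbook), we can preview the comparison numerically. The Cobb-Douglas exponents for the distorted consumer are on the sheet but not printed in the text, so the values below ($x_1^{0.2}x_2^{0.8}$) are purely hypothetical stand-ins for a consumer who likes food relatively little; the prices, income, and allotment are the values inferred earlier ($p_1 = 2$, $p_2 = 3$, $m = 100$, $\overline{x_1} = 20$).

# Utility at the food stamp corner (the kink) versus the cash-equivalent optimum.
a, b = 0.2, 0.8                      # hypothetical distorted preferences
p1, p2, m, x1bar = 2.0, 3.0, 100.0, 20.0

u = lambda x1, x2: x1**a * x2**b
stamps = u(x1bar, m / p2)            # corner solution at the kink (20, 33.33)

m_cash = m + p1 * x1bar              # cash-equivalent income: 140
x1_cash = a * m_cash / p1            # Cobb-Douglas share formulas, interior solution
x2_cash = b * m_cash / p2
cash = u(x1_cash, x2_cash)

print(stamps, cash, cash > stamps)   # cash yields higher utility for this consumer

Exactly as Figure 3.14 shows, a distorted consumer reaches higher utility with cash, while an inframarginal consumer (whose interior food choice already exceeds $\overline{x_1}$) gets the same bundle either way.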
The distortion results in a decrease in satisfaction for this consumer.

The Carte Blanche Principle and Deadweight Loss

Carte blanche, a term of obvious French origin (literally, “blank document”), means unconditional authority or freedom to act in any way you wish. In economics, the Carte Blanche Principle means that cash is always as good as or better than in-kind. Cash allows the consumer to buy anything, while in-kind transfers, such as food stamps, restrict the set of choices.

Figure 3.14 shows the Carte Blanche Principle in action. Cash dominates food stamps. If you are an inframarginal consumer, cash and food stamps are the same. This consumer is going to buy more food than can be purchased with the allotment of food stamps anyway, so if you gave him the cash-equivalent value, he would spend the cash on food. If you are a distorted consumer, however, you are better off if you are given cash because cash can be used to buy the other goods that you prefer over food. With food stamps, when you maximize utility and do the best you can, you end up at a lower level of utility than if you had the cash equivalent.

In economics, deadweight loss is a measure of inefficiency. It is a number that tells you how much a given solution differs from the best solution. In this application, deadweight loss is the difference in utility due to using food stamps instead of cash. We could try to compute, for each consumer, the maximum utility with cash minus the maximum utility with food stamps. For the inframarginals, this number would be zero, but it would be positive for the distorted consumers.

Unfortunately, this approach would be exceedingly difficult to actually carry out. Even if we managed to do it, remember, we cannot simply add the utility values for different people. Utility is ordinal, ranking only by higher or lower, with no meaningful information about distance or magnitude. Thus, we can never add the utilities of different people. Theory tells us deadweight loss exists, but the inability to make interpersonal utility comparisons means we are severely limited in how we can measure the sum of deadweight losses of two or more people.

As a first pass, we can try to figure out how many distorteds and inframarginals there are. After all, if there are only a few distorted consumers, then we would know that food stamps were not affecting the decisions of too many people.

A Food Stamp Experiment

The empirical work described below comes from Whitmore’s “What are Food Stamps Worth?” available at arks.princeton.edu/ark:/88435/dsp01z603qx42c. Whitmore describes two controlled experiments carried out by the USDA in the early 1990s. In the San Diego experiment, around 1,000 people who were receiving food stamps were randomly selected to participate in the experiment. Half were randomly assigned to the control group and given food stamps as usual, while the other half, the treatment group, were given cash-equivalent aid (checks).

Of the roughly 500 people given checks, about 100 were distorted: they bought less food compared to what they bought when they were given food stamps. But what were these distorted consumers buying instead of food? This is a crucial question. Most economists are willing to let individuals choose what to buy because the Theory of Consumer Behavior is built on rational, optimizing decision making. The fundamental world view of economic theory is that individuals know best how to spend their money.
Others, however, argue that low-income consumers make poor decisions if left free to choose what to buy. They think distortion is a good thing because they want aid recipients to buy food. Whitmore (p. 3) says this:

To some, this distortion is the best part of the food stamp program: the government can ensure that needy families get enough to eat and that they don’t spend the money on other things. To others, this distortion represents a waste of resources – it is inefficient to give in-kind transfers instead of cash.

At its most extreme, the issue can be stated this way: Taxpayers will support buying food for the poor, but not drugs, alcohol, and other wasteful consumption. But exactly how distorted consumers would spend cash is an empirical question and Whitmore has the data to answer it.

Researchers in the San Diego experiment kept careful food diaries. When Whitmore compared the purchases of the distorted treatment group to the food stamp control group, she found a marked decrease in a few specific items, like juice and soda, for distorteds. So, surprisingly,

Even though spending on food declines for the treatment group, the food diary data from San Diego provide no firm evidence that cashing-out food stamps leads to declines in nutritional intake, and suggest that it may actually reduce extreme over-consumption of calories, an important contributing factor to obesity. (Whitmore, p. 35)

The picture that many have of the indigent as drug addicts or exceptionally poor decision makers is unsupported by Whitmore’s data. It is true that if forced to spend a subsidy on food, low-income households will spend more on food, but that does not imply that this is better. By definition, low-income people are struggling to pay for not just food, but a whole host of necessities, including shelter, clothing, transportation, and utility bills. A cash-equivalent subsidy means they can buy food if that is the greatest need or make other important purchases.

The Illegal Sale of Food Stamps

The Theory of Consumer Behavior can be used to explain what most people find puzzling when they first hear about it: there is an active, illegal market in food stamps. Whitmore (p. 4) estimated that food stamps sold for 61 cents on the dollar. The theory can also explain why it has proven incredibly difficult to stop the illegal sale of food stamps.

STEP Proceed to the Selling sheet.

Observe that the budget constraint has been modified yet again. The segment below the food stamp allotment (x1bar) is no longer horizontal. We have enabled the consumer to sell food stamps and move up the budget constraint. The slope of this portion of the budget constraint is $ER \cdot \frac{p_1}{p_2}$, where ER is the exchange rate of food stamps for cash. With ER initially set at 0.6 (in cell B24), a seller of food stamps gets 60 cents for every dollar of food stamps sold. The slope of the budget line is then 60% of the $\frac{p_1}{p_2}$ ratio: $0.6 \times \frac{2}{3} = 0.4$.

Notice that cell B16 has been added and it reports the income generated by the sale of food stamps. It shows zero because the opening position is at the kink (20, 33.33), so this distorted consumer is not selling any food stamps.

STEP Change cell B13 to 10 and watch how the cells and the chart change.

B16 now reports that the consumer is making $12 from the sale of food stamps. She "sold" ten units’ worth of food stamps, which would buy food valued at $20 in cash, but receives only 60% of that. With $p_2 = 3$, the $12 of food stamp income buys four more units of $x_2$.

STEP Set cell B14 to 37.33 to move the consumer to the budget line.
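To see where 37.33 comes from (a worked step, not in the original), write the selling segment of the budget constraint explicitly. It starts at the kink $(\overline{x_1}, \frac{m}{p_2})$ and rises with slope $ER \cdot \frac{p_1}{p_2}$ as food purchases fall below $\overline{x_1}$:

$x_2 = \frac{m}{p_2} + ER \cdot \frac{p_1}{p_2}(\overline{x_1} - x_1) \textrm{ for } x_1 < \overline{x_1}$

At $x_1 = 10$, this gives $x_2 = 33\frac{1}{3} + 0.6 \times \frac{2}{3} \times 10 = 37\frac{1}{3}$.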
But is this the optimal solution? In fact, comparing cell G27 to H26 tells you that it is not. The consumer is selling too many food stamps at this point.

STEP Run Solver.

You should get a result like Figure 3.15, which shows the consumer choosing just under 15 units of food and adding $6.29 of food stamp income (explaining how she managed to buy more than $33\frac{1}{3}$ units of $x_2$). Notice also that, once again, the MRS ($-0.4$) equals the slope of the budget constraint ($-0.4$) on the relevant part of the budget line.

The consumer maximizes utility and reaches a higher level of satisfaction than what is attainable by staying at the kink and not selling the food stamps. The ability to get higher satisfaction explains the unintended consequence of an active illegal trade in food stamps. This analysis does not incorporate the costs of selling food stamps, including the risk of getting caught. There is no doubt that EBT cards make it more difficult to sell food stamps, but the inability to stop the illegal trade testifies to the forces at play: the search for higher satisfaction is powerful indeed.

One Last Question

If the Carte Blanche Principle is true, then why does the government use food stamps instead of cash to help the poor? Whitmore devotes the conclusion of her paper (p. 38) to answering this question:

A crucial aspect of the success of the Food Stamp Program is its political popularity. The Food Stamp Program is not an entitlement program, so its budget must be approved annually in the Farm Bill. The program’s budget has always been fully funded, due largely to two factors: its popularity as a targeted welfare program among voters, and its popularity among farmers because they think it increases demand for food. (footnote omitted)

As a practical matter, it is not true that, in general, the poor will squander cash subsidies or make terrible buying decisions. Giving aid in the form of food stamps generates a deadweight loss for those distorted consumers who would have been better off with cash. As Whitmore points out, however, it is politically impossible to imagine what is today a $70 billion program being funded annually as a pure cash giveaway. Economics meets politics and the result is a flawed, but functioning, anti-poverty program.

Exercises

1. Which parameter in the Selling sheet, with the exchange rate set to 0.9, would have to be changed to represent the case of a distorted consumer who decides not to sell food stamps for cash? What would the value of this parameter be?

2. Explain under what condition the MRS equals the price ratio rule (as a condition that the optimal solution has been found) can be violated.

3. A seller of food stamps would obviously prefer a higher price, but what would be the advantage of a higher price in terms of the Theory of Consumer Behavior?
The Carte Blanche Principle says that cash is always as good as or better than in-kind. There is a corollary from the public finance literature: Lump sum taxes are better than quantity taxes.

Public finance is a field of economics that studies the role of government in the economy. Budgeting, collecting taxes, and government spending are some of the areas studied by public finance economists.

There are, of course, many different kinds of taxes. A lump sum tax is a fixed amount that must be paid, regardless of how much is purchased. A head tax, where a fee is charged to each person, is an example of a lump sum tax. A quantity tax is an amount for each unit sold, so it is added to the price of the product. Federal, state, and local governments levy quantity taxes on gasoline, alcoholic beverages, and tobacco. Unlike a lump sum tax, if more is bought, more quantity tax is paid. Most people are familiar with the sales tax, but this is yet another variant. Like a quantity tax, more is paid as more is purchased, but a sales tax is a percentage of the total purchase value. This is an ad valorem tax, which is Latin for "according to value."

The goals of taxation can be complicated. The primary motivation for taxes is to pay for government spending, but taxes can also be used to discourage particular activities. Both of these motivations are at play in the case of cigarettes.

Cigarette Smoking and Taxes

The average number of cigarettes sold per day in the United States and Japan since 1900 is shown in Figure 3.16. Visit ourworldindata.org/smoking to see an interactive version of this chart and add other countries. The pattern is the same around the world: rising smoking rates reach a peak, then a rapid decline begins. American soldiers were given cigarettes during the two world wars and this drove the sharp increase in cigarette smoking; the collapse in Japan's smoking rate in the 1940s shows that Japan did not do this. In both countries, awareness of the damaging health effects of smoking triggered the decline.

As consumption underwent this long rise and fall, cigarette tax policies also changed dramatically. Tobacco products have always been taxed, but cigarette taxes have risen dramatically in the last few decades. Figure 3.17 shows tax rates in US states in 2019. There is wide variation in state cigarette tax rates. In 2019, New York and Connecticut had the highest state tax of $4.35 per 20-pack of cigarettes. Missouri had the lowest, $0.17 per pack. Other governmental levels also tax cigarettes. New York City, for example, adds a $1.50 per pack tax, bringing state and local taxes to $5.85 per pack. To this we add the federal tax rate of $1.0066 per pack. Finally, smokers pay a sales tax on the total price paid (including the quantity taxes). In New York City, a pack of cigarettes cost over $10 in 2019.

We will analyze the quantity tax by using the Theory of Consumer Behavior. We will also compare it to a lump sum tax, an option that is not currently being used by the government. To make a good comparison, we have to make sure that the taxes are revenue neutral. This means that the tax revenues generated by the tax proposals are the same. It would not be fair to compare a quantity tax that generated $50 in revenues to a $100 lump sum tax.

Quantity Tax

STEP Open the Excel workbook CigaretteTaxes.xls, read the Intro sheet, and proceed to the QuantityTax sheet. Cell B21 enables us to levy a quantity tax. The sheet opens with cell B21 = 0, which means there is no tax.
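As a quick reference (a summary added here, not in the original), each tax type enters the consumer's budget constraint in a different place. With a quantity tax $t$, an ad valorem rate $\tau$, and a lump sum tax $T$:

$(p_1 + t)x_1 + p_2x_2 = m \quad\quad (1+\tau)p_1x_1 + p_2x_2 = m \quad\quad p_1x_1 + p_2x_2 = m - T$

The quantity tax changes the relative price of good 1, the ad valorem tax scales it, and the lump sum tax leaves relative prices alone, shrinking income instead. This last fact drives the main result of this section.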
The sheet also opens with the consumer considering the bundle 20,60. The MRS is greater than the price ratio (in absolute value) and the consumer can move down the budget constraint, so we know utility is not being maximized.

STEP Utility is maximized at 1250 by consuming 25 units of cigarettes ($x_1$) and 50 units of other goods ($x_2$). Run Solver to confirm this result.

Suppose we impose a $1/unit quantity tax on cigarettes. What effect does this have on the consumer?

STEP You can find the consumer’s optimal solution after levying the tax by changing cell B21 to 1 and running Solver.

Notice how the chart updated when B21 was set to one. The red budget constraint shows how the line rotated and swung in when the tax was imposed. This is the same as increasing the price of good 1. After running Solver, you can see that the consumer responds by buying fewer cigarettes.

We can also find the optimal solution using analytical methods by solving the following constrained optimization problem:

$\max\limits_{x_1,x_2,\lambda}U(x_1,x_2) = x_1x_2 \ \textrm{s.t. } 100 = (2 + Q\_Tax)x_1 + x_2$

The consumer wishes to maximize utility (which is Cobb-Douglas with both exponents equal to 1), subject to the budget constraint, with parameter values for income and prices plugged in. We leave $Q\_Tax$ as an exogenous variable so we can find the optimal solution as a function of $Q\_Tax$. We have worked on this problem before, except $p_2 = 1$ (instead of 3) and we have added the quantity tax. The Lagrangean procedure remains the same and we walk through the four steps to find the answer.

1. Rewrite the constraint so that it is equal to zero.

$0 = 100 - (2 + Q\_Tax)x_1 - x_2$

2. Form the Lagrangean function. Notice that we are working with a mixed concrete and general problem. We have numerical values for prices, income, and the utility function exponents, but we have the amount of the quantity tax as a variable. We use this strategy whenever we want to find the optimal solution as a function of a particular exogenous variable.

$L = x_1x_2 + \lambda(100 - (2 + Q\_Tax)x_1 - x_2)$

3. Take partial derivatives with respect to $x_1$, $x_2$, and $\lambda$.

$\frac{\partial L}{\partial x_1} = x_2 - \lambda(2 + Q\_Tax) \ \frac{\partial L}{\partial x_2} = x_1 - \lambda \ \frac{\partial L}{\partial \lambda} = 100 - (2 + Q\_Tax)x_1 - x_2$

4. Set the derivatives equal to zero and solve for $x_1\mbox{*}$, $x_2\mbox{*}$, and $\lambda\mbox{*}$.

We use the usual solution method, moving the lambda terms to the right-hand side and then dividing the first equation by the second, which allows us to cancel the lambda terms:

$\frac{x_2}{x_1} = 2 + Q\_Tax \implies x_2 = (2 + Q\_Tax)x_1$

Finding an expression for $x_2$ seems like an answer, but it is not because it is a function of $x_1$. To be a solution (which is called a reduced form), we must solve for $x_1$ as a function of exogenous variables alone. We must keep working. Canceling the lambda terms has moved us closer to an answer: we have reduced the three equation, three unknown system to two equations in two unknowns. We substitute the first equation into the second (the budget constraint) and solve for the optimal amount of good 1:

$100 = (2 + Q\_Tax)x_1 + (2 + Q\_Tax)x_1 \implies x_1\mbox{*} = \frac{50}{2 + Q\_Tax}$

Then, we substitute this into our expression for $x_2$ to get the optimal amount of good 2: $x_2\mbox{*} = 50$.

We can check this solution with Solver’s result by substituting $Q\_Tax = 1$ into the reduced form solution for the two goods. Optimal cigarette consumption is $\frac{50}{3}$ or $16\frac{2}{3}$. Because $Q\_Tax$ does not appear in the optimal solution for good 2, its value is simply 50 for any value of $Q\_Tax$.

Lump Sum Tax

Let’s see how the consumer would optimize with a lump sum tax that raised the same tax revenue for the government.
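Before turning to the lump sum tax, here is one more cross-check of the quantity tax reduced form outside Excel (a sketch, not part of the workbook; it assumes Python with scipy):

from scipy.optimize import minimize

for q_tax in (0.0, 1.0, 2.0):
    neg_u = lambda x: -(x[0] * x[1])
    budget = {'type': 'eq',
              'fun': lambda x, q=q_tax: 100 - (2 + q) * x[0] - x[1]}
    numerical = minimize(neg_u, x0=[10, 50], bounds=[(0, None)] * 2,
                         constraints=[budget]).x[0]
    print(q_tax, numerical, 50 / (2 + q_tax))   # numerical vs reduced form

Each line should show the numerical and analytical answers agreeing to several decimal places: 25, $16\frac{2}{3}$, and 12.5 units of cigarettes as the tax rises.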
STEP Making sure that you have run Solver in the QuantityTax sheet with B21 = 1 so that B11 is approximately $16\frac{2}{3}$, proceed to the LumpSumTax sheet.

The quantity tax imposed in the QuantityTax sheet has been replaced with a revenue-neutral lump sum tax. With a $1/unit quantity tax, the consumer purchases $16\frac{2}{3}$ units of $x_1$, which means the state generates $16.67 of revenue from the quantity tax. It could have generated the same revenue by taxing the consumer $16.67, regardless of how much $x_1$ or $x_2$ the consumer bought. This is called a lump sum tax because you pay a fixed amount (that’s the “lump sum” part) no matter what you decide to buy.

The difference in the way the lump sum tax operates is reflected in the budget constraint equation. Instead of being part of the price of good 1 like a quantity tax, the lump sum tax is subtracted from income.

$100 = (2 + Q\_Tax)x_1 + x_2 \ 100 - Lump\_Tax = 2x_1 + x_2$

The two charts show how the lump sum tax works differently than the quantity tax. Instead of rotating, the new budget line (in red) in the LumpSumTax sheet has shifted inwards. How would the consumer respond to this tax?

STEP Run Solver to find the optimal solution with the lump sum tax.

Before we compare the quantity and lump sum tax solutions, we confirm Solver’s answer in the LumpSumTax sheet by solving the problem analytically.

STEP Try your hand at this problem. Check your work (or peek if you get stuck) by clicking the button. Remember, Solver gave you an answer, so you can be quite sure you are correct if your analytical work gives the same result.

Comparing Quantity and Lump Sum Taxes

We now have the data needed to compare the two tax schemes, as shown in Figure 3.18. The first row shows that the consumer will buy the bundle 25,50 when there is no tax, generating an optimal utility of 1250. Obviously, there is no revenue because there is no tax. The second row shows that utility falls to $833\frac{1}{3}$ with an optimal solution of $16\frac{2}{3}$,50 with a $1/unit of $x_1$ quantity tax. The tax produces $16.67 of revenue for the government. The last row shows that a revenue-neutral lump sum tax of $16.67 would result in purchases of $20\frac{5}{6}$ and $41\frac{2}{3}$, which would give a level of utility of approximately 868.

The primary lesson is that, for this consumer, if the government needed to raise $16.67 of tax revenue, the lump sum tax is better than the quantity tax because the consumer’s maximum utility is higher under the lump sum tax. Notice that we are not violating the rule against interpreting utility values as being meaningful. We are not comparing two consumers. We are not treating utility as if it were on a cardinal scale by saying, for example, that there is a gain of 868 minus $833\frac{1}{3}$ equals $34\frac{2}{3}$ utils of increased satisfaction. We are merely saying that satisfaction is higher under the lump sum tax scheme than the revenue-neutral quantity tax.

A graph can be used to explain this rather curious result that lump sum taxes enable higher utility than equivalent revenue quantity taxes. It is a complicated graph, so we will build up to it in stages. The first layer is simply the initial solution, before any tax is applied. It is shown in Figure 3.19.

Figure 3.20 shows what happens with a quantity tax. The budget constraint rotates in because the price paid by the consumer (composed of the price of the product plus the tax) has increased. The consumer is forced to re-optimize and find a new optimal solution, labeled Quantity Tax.
Utility has clearly fallen since we are on a lower indifference curve. Then we add a final layer to show the lump sum tax, as shown in Figure 3.21. This enables comparison of the two tax schemes.

The lump sum tax budget constraint has to go through the optimal choice bundle with the quantity tax so that the lump sum tax raises the same revenue as the quantity tax. It also has to be parallel to the original budget constraint. Because it cuts the indifference curve at the quantity tax’s optimal solution, we know we can move down the budget line and reach a higher indifference curve than the quantity tax solution.

Figure 3.21 shows that, starting from the Original Choice point, we can compare a quantity tax and a revenue-neutral lump sum tax. Figure 3.21 makes clear that the lump sum tax enables attainment of a higher level of utility than the quantity tax because the indifference curve attainable under the lump sum tax is higher than the indifference curve that maximizes utility with the quantity tax. The reason why the lump sum tax is better is that it is non-distorting: it leaves the relative prices of the two goods unchanged.

The Lesson and a Follow-up Question

The lesson is that the Theory of Consumer Behavior has been used to show that lump sum taxes are better than quantity taxes. Generating the same amount of revenue, lump sum taxes enable the consumer to reach a higher level of satisfaction than quantity taxes.

This raises a question: Why do we see quantity taxes instead of lump sum taxes? Why are cigarettes (and alcohol and gasoline) so heavily quantity taxed? The answer lies in the diversity of consumers. The lesson holds only for each individual consumer. It is a fact that there is a revenue-neutral lump sum tax that leaves each individual consumer better off. The amount, however, of the preferable lump sum tax is different, in general, for each consumer. It depends on how many cigarettes (or how much alcohol or gasoline) each consumer buys. In other words, the lesson does not hold for all consumers taken as a whole. Thus, a single lump sum tax for all consumers will not necessarily yield higher utility than a quantity tax for each consumer.

This point is obvious if you consider a consumer who does not buy the taxed product at all. This consumer would prefer any size quantity tax to a lump sum tax. After all, if you do not smoke, you do not have to pay any quantity tax on tobacco. The collapse in smoking (see Figure 3.16) goes a long way toward explaining why cigarette taxes have soared.

Lump Sum Corollary to the Carte Blanche Principle

We used the Theory of Consumer Behavior to demonstrate a corollary to the Carte Blanche Principle: for consumers of a particular product, a lump sum tax is better than a revenue-neutral quantity tax. If given the option between a quantity tax and a revenue-neutral lump sum tax, a consumer who buys the taxed good would prefer the lump sum tax because it will leave the consumer with a higher level of utility. Unlike the quantity tax, the lump sum tax will not distort the relative prices faced by the consumer.

Although the Lump Sum Corollary is true, we see quantity taxes for various products because the Lump Sum Corollary does not apply to all consumers taken as a group. It is not true that there is a single lump sum tax that is preferred to a quantity tax by all consumers.

Exercises

1. Return to the CigaretteTaxes.xls workbook and apply a $2/unit quantity tax. Run Solver. Find the solution by evaluating the reduced form. Show your work.
Do the two methods agree?

2. Repeat this analysis for the lump sum tax. Find the revenue-neutral solution via Solver, evaluate the reduced form expression at the new Lump_Tax, and compare the two methods. Do the two methods agree?

3. Would the percentage change in the consumer’s consumption of $x_1$ be more affected by a quantity tax if her indifference curves were flatter, assuming a Cobb-Douglas utility function? Describe your procedure in answering this question.
• 4.1: Engel Curves

• 4.2: More Practice with Engel Curves

• 4.3: Deriving a Demand Curve

• 4.4: More Practice with Deriving Demand
This section derives the demand curve from two different utility functions, quasilinear preferences and perfect complements, to provide practice deriving demand curves. Nothing new here, just practice applying the tools, techniques, and concepts of the economic way of thinking.

• 4.5: Giffen Goods
In introductory economics courses around the world, demand is always drawn downward sloping so that as price rises, ceteris paribus, quantity demanded falls. Economists have long been intrigued, however, by a perplexing possibility: quantity demanded rising as price rises. An upward sloping demand curve! Can this happen? Yes, but it is quite rare and it took decades to figure it out.

• 4.6: Income and Substitution Effects
Without a doubt, the demand curve is the most important idea in the Theory of Consumer Behavior. This section remains focused on the demand curve, extending the analysis of the consumer’s optimal response to a change in price. The core concept is that the total effect on quantity demanded (given by the demand curve) for a given change in price can be broken down into two separate effects, called income and substitution effects.

• 4.7: More Practice with IE and SE

• 4.8: A Tax-Rebate Proposal

04: Comparative Statics

The Theory of Consumer Behavior is built on an optimization problem: maximize utility subject to a budget constraint. It is written in equation form like this:

$\max\limits_{x_1,x_2}U(x_1,x_2) \ \textrm{s.t. } p_1x_1 + p_2x_2 = m$

This problem can be solved analytically or with numerical methods and the solution can be displayed by a canonical graph, as in Figure 4.1. But it turns out that this is just a first step in how economists think. The material in this chapter gets to the heart of the economic approach: we explore how the optimal solution responds to a shock, a change in an exogenous variable, holding everything else constant. This is called comparative statics.

The most important comparative statics exercise is based on changing a price, enabling us to derive a demand curve. We start, however, by shocking income and tracking the response. This produces an Engel curve. Starting here gives you a chance to absorb and master the logic of comparative statics before diving into the demand curve.

Initial, Shock, New, Compare

To do comparative statics analysis, we follow a four-step procedure.

1. We find the initial solution.

2. We change a single exogenous variable, called the shock, holding all other exogenous variables constant. Economists use a Latin phrase, ceteris paribus, as shorthand. This literally means with other things held equal and economists use the phrase to mean everything else held constant.

3. We find the new optimal solution.

4. Finally, we compare the new to the initial solution to see how the optimal solution responded to the shock.

Comparative statics is the fundamental methodology of economics. It gives a framework for interpreting observed behavior. This framework has been given many names, including: the method of economics, the economic approach, the economic way of thinking, and economic reasoning. While comparative clearly points to the comparison between the new and initial solution, the meaning of statics (not to be confused with statistics) is less obvious. It means that we are going to focus on positions of rest and not worry about the path of the solution as it moves from the initial to the new point.
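The four steps translate directly into code. The sketch below (an aside, not from the workbook) uses Python with scipy and the Cobb-Douglas problem that appears later in this chapter ($U = x_1x_2$, $p_1 = 2$, $p_2 = 3$, values taken from the EngelCurves.xls discussion below):

from scipy.optimize import minimize

def x_star(m, p1=2.0, p2=3.0):
    # Solve max x1*x2 s.t. p1*x1 + p2*x2 = m for given exogenous variables.
    neg_u = lambda x: -(x[0] * x[1])
    budget = {'type': 'eq', 'fun': lambda x: m - p1*x[0] - p2*x[1]}
    return minimize(neg_u, x0=[m/(2*p1), m/(2*p2)], bounds=[(0, None)]*2,
                    constraints=[budget]).x

initial = x_star(m=100)             # step 1: initial solution, about (25, 16.67)
new = x_star(m=150)                 # steps 2 and 3: shock income, re-solve, about (37.5, 25)
print(initial, new, new - initial)  # step 4: compare

The ceteris paribus discipline is enforced by changing only the m argument while every other parameter keeps its default value.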
There are a few complications and additional issues to be aware of when doing comparative statics analysis. Analytical and numerical methods can be used, but they do not always exactly agree. In addition, we have several ways of comparing the new and initial solutions. A qualitative comparison focuses only on direction (up or down), while quantitative comparisons compute magnitudes of the change in response (either as a difference or a percentage change). Finally, we can display the comparative statics analysis in the canonical graph itself or in a separate chart. These three issues will be demonstrated via example.

Elasticity Basics

Elasticity is a pure number (it has no units) that measures the sensitivity or responsiveness of one variable when another changes. Elasticity, responsiveness, and sensitivity are synonyms. An elasticity number expresses the impact one variable has on another. The closer the elasticity is to zero, the more insensitive or inelastic the relationship.

Elasticity is often expressed as "the something elasticity of something," like the price elasticity of demand. The first something, the price, is always the exogenous variable; the second something, in this case demand (the amount purchased), is the response or optimal value being tracked. A less common, but perhaps easier, way is to say, “the elasticity of something with respect to something.” The elasticity of demand with respect to price clearly shows that demand depends on and responds to the price.

Unlike the difference between the new and initial values, elasticity is computed as the ratio of percentage changes in the values. The endogenous or response variable always goes in the numerator and the exogenous or shock variable is always in the denominator. The percentage change, $\frac{new - initial}{initial}$, is the change (or difference), $new - initial$, divided by the initial value. This affects the units in the computation. The units in the numerator and denominator of the percentage change cancel and we are left with a percent as the units. If we compute the percentage change in apples from 2 to 3 apples, we get 50%. The change, however, is $+1$ apple.

If we divide one percentage change by another, the percents cancel and we get a unitless number. Thus, elasticity is a pure number with no units. So if the price elasticity of demand for apples is $-1.2$, there are no apples, dollars, percents, or any other units. It’s just $-1.2$. The lack of units in an elasticity measure means we can compare wildly different things. No matter the underlying units of the variables, we can put the dimensionless elasticity number on a common yardstick and interpret it. Figure 4.2 shows the possible values that an elasticity can take, along with the names we give particular values.

Empirically, elasticities are usually low numbers, around one in absolute value. An elasticity of $+2$ is extremely responsive or elastic. It means that a 1% increase in the exogenous variable generates a 2% increase in the endogenous variable. The sign of the elasticity indicates direction (a qualitative statement about the relationship between the two variables). Zero means that there is no relationship, i.e., that the exogenous variable does not influence the response variable at all. Thus, $-2$ is extremely responsive like $+2$, but the variables are inversely related, so a 1% increase in the exogenous variable leads to a 2% decrease in the endogenous variable.
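A quick made-up illustration (not in the original): suppose a price rise from $2.00 to $2.02 (a 1% increase) reduces purchases from 10 units to 9.8 units (a 2% decrease). Then

$\frac{\%\Delta Q}{\%\Delta P} = \frac{\frac{9.8 - 10}{10}}{\frac{2.02 - 2.00}{2.00}} = \frac{-2\%}{+1\%} = -2$

with no units on the final number, exactly as described above.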
One (both positive and negative) is an important marker on the elasticity number line because it tells you whether a given percentage change in an exogenous variable results in a smaller percentage change (when the elasticity is less than one in absolute value), an equal percentage change (elasticity equal to one), or a greater percentage change (elasticity greater than one) in the endogenous variable.

Elasticities are a confusing part of economics. Below are six common misconceptions and issues surrounding elasticity. Reading these typical mistakes will help you better understand this fundamental, but easily misinterpreted, concept.

1. Elasticity is about the relationship between two variables, not just the change in one variable. Thus, do not read a negative elasticity as meaning that the response variable must decrease. The negative means that the two variables move in opposite directions. So, if the age elasticity of time playing sports is negative, that means both that time playing sports falls as age increases and that time playing sports rises as age decreases.

2. Elasticity is a local phenomenon. The elasticity will usually change if we analyze a different initial value of the exogenous variable. Thus, any one measure of elasticity is a local or point value that applies only to the change in the exogenous variable under consideration from that starting point. You should not think of a price elasticity of demand of $-0.6$ as applying to an entire demand curve. Instead, it is a statement about the movement in price from one value to another value close by, say $3.00/unit to $3.01/unit. The price elasticity of demand from $4.00/unit to $4.01/unit may be different. There are constant elasticity functions, where the elasticity is the same all along the function, but they are a special case.

3. Elasticity can be calculated for different size changes. To compute the $x$ elasticity of $y$, we can go from one point to another, $\frac{\%\Delta y}{\%\Delta x}$, or use the derivative’s infinitesimally small change at a point, $\frac{dy}{dx}\frac{x}{y}$. These formulas will be explained below, but the point now is that economists are sloppy in their language and do not bother to distinguish elasticity calculated at a point via calculus (for an infinitesimal change) from elasticity calculated for a finite distance from one point to another. If the function is nonlinear, these two methods give different results. If an economist mentions a point elasticity, it is probably calculated via calculus as an infinitesimally small change.

4. Elasticity always puts the response variable in the numerator. Do not confuse the numerator and denominator in the computation. In the $x$ elasticity of $y$, $x$ is the exogenous or shock variable and $y$ is the endogenous or response variable. Students will often compute the reciprocal of the correct elasticity. Avoid this common mistake by always checking to make sure that the variable in the numerator responds to, or is driven by, the variable in the denominator.

5. You already know this, but remember that elasticity is unitless. The $x$ elasticity of $y$ of 0.2 is not 20%. It is 0.2. It means that a 1% increase in $x$ leads to a 0.2% increase in $y$.

6. Perhaps the single most important thing to remember about elasticity is: Do not confuse elasticity with slope. This may be the most common confusion of all and deserves careful consideration. Economists, unlike chemists or physicists, often gloss over the units of variables and results.
If we carefully consider the units involved, we can ensure that the difference between the slope and elasticity is crystal clear. The slope is a quantitative measure in the units of the two variables being compared. If $Q\mbox{*} = \frac{P}{2}$, then the slope, $\frac{dQ\mbox{*}}{dP} = \frac{1}{2}$. This says that an increase in P of \$1/unit will lead to an increase in $Q\mbox{*}$ of $\frac{1}{2}$ a unit. Thus, the slope would be measured in units squared per dollar (so that when multiplied by the price, we end up with just units of Q). Elasticity, on the other hand, is a quantitative measure based on percentage changes and is, therefore, unitless. The P elasticity of $Q\mbox{*}$ = 1 says that a 1% increase in P leads to a 1% increase in $Q\mbox{*}$. It does not say anything about the actual, numerical \$/unit increase in P, but speaks of the percentage increase in P. Similarly, elasticity focuses on the percentage change in $Q\mbox{*}$, not the change in terms of number of units. Thus, elasticity and slope are two different ways to measure the responsiveness of a variable as another variable changes. Elasticity uses percentage changes, $\frac{\%\Delta y}{\%\Delta x}$, while the slope does not, $\frac{\Delta y}{\Delta x}$. They are two different ways to measure the effect of a shock and mixing them up is a common mistake. Comparative Statics Analysis of Changing Income STEP Open the Excel workbook EngelCurves.xls, read the Intro sheet, and proceed to the OptimalChoice sheet. We have run Solver and the initial solution, $x_1\mbox{*} \approx 25$ and $x_2\mbox{*} \approx 16 \frac{2}{3}$, is displayed. Our first attempt at comparative statics analysis is straightforward: change income, ceteris paribus, and compute the response in $x_1\mbox{*}$ and $x_2\mbox{*}$. STEP Change cell B18 to 150 (this is the shock) and then run Solver to find the new optimal solution. The budget line shifts out and the consumer takes advantage by re-optimizing and moving to a new, highest attainable indifference curve. STEP Compare the initial and new values of $x_1 \mbox{*}$ and $x_2 \mbox{*}$ given the \$50 increase in income. In qualitative terms, we would say that the increase in income has led to an increase in optimal consumption of the two goods. In quantitative terms, we can compute the response as the change in the own units of the two variables. The own units statement of comparative statics for $x_1 \mbox{*}$ is $\frac{\Delta x_1 \mbox{*}}{\Delta m}$. Income rose by \$50 and optimal consumption of good 1 went up by 12.5 units (from 25 to 37.5). We compute $\frac{37.5 - 25}{150 - 100} = \frac{1}{4}$, so we say that we get an increase of $\frac{1}{4}$ unit of good 1 for every \$1 increase in income. Elasticity is another way to present a quantitative comparative statics result. We use a formula that multiplies the slope by the initial values. Income elasticity of $x_1 \mbox{*} = \frac{\Delta x_1 \mbox{*}}{\Delta m}\frac{m}{x_1 \mbox{*}} = [\frac{37.5 - 25}{150 -100}][\frac{100}{25}] = 1$. An elasticity of one is called unit elastic. This means that a 1% change in income leads to a 1% change in the optimal purchase of good 1. We had a 50% increase in income and that produced a 50% increase in $x_1 \mbox{*}$. The elasticity formula seems mysterious, but it is easily derived from the definition of the ratio of percentage changes.
$\frac{\% \Delta x_1 \mbox{*}}{\% \Delta m} = \frac{\frac{\Delta x_1 \mbox{*}}{x_1 \mbox{*}}}{\frac{\Delta m}{m}} = \frac{\Delta x_1 \mbox{*}}{x_1 \mbox{*}}\frac{m}{\Delta m} = \frac{\Delta x_1 \mbox{*}}{\Delta m}\frac{m}{x_1 \mbox{*}}$ The algebra above shows how slope and elasticity are connected. Multiplying the slope by the initial values of the exogenous and endogenous variables is the same as computing the ratio of the percentage changes. While it is certainly possible to do comparative statics analysis by running Solver to find the initial solution, changing a parameter on the sheet, running Solver again to find the new solution, and then comparing the initial and new solutions, the tediousness of this manual approach is obvious. Fortunately, there is a better way. It involves using the Comparative Statics Wizard Excel add-in. STEP Click the button to make sure you start from the initial parameter values. STEP Install the Comparative Statics Wizard add-in, Cswiz.xla, from the MicroExcel archive. Instructions and documentation are available in the CompStatics.doc file in the SolverCompStaticsWizard folder. You can see which add-ins are installed by accessing the Add-ins Manager dialog (in Excel 2019, File: Excel Options: Add-ins: Go). STEP Once the Comparative Statics Wizard add-in is installed, from the OptimalChoice sheet, click the Add-ins tab on the Ribbon, then click Wizard and Comp Statics (in earlier versions, execute Tools: Wizard: Comp Statics) to bring up the main dialog box of the CSWiz add-in, shown in Figure 4.3. STEP Click on the button and answer the three questions posed. You are providing Excel with the information it needs to organize the results. Clearly, the goal is cell B7, so you will click on cell B7 when prompted by the first question. Excel enters the absolute reference to that cell ($B$7) in the dialog box and you click OK. Follow the same procedure for the next two questions. The endogenous variables are in cells B11:B12 and the exogenous variables are in cells B16:B20, so you can click and drag to select those cells. Notice how the Comparative Statics Wizard add-in presumes that you have properly organized and set up the problem on the spreadsheet. STEP Once you have provided the goal, endogenous and exogenous variable cells, click the button. Step 2 uses Excel’s Solver to find the initial solution. It temporarily hides the Comparative Statics Wizard and brings up Solver so you can use it to find the optimal solution. STEP At the Step 2 screen, click the button to bring up the Solver dialog box. Click Solve to have Solver find the initial solution. Read the message in the box after you have run Solver. It explains what you have done so far. Having found the initial solution, we are ready to input the shock. STEP At the Step 3 screen, click the button. As in the first screen, you are asked three questions. The first question asks for the shock variable itself. In this case, click on cell B18 (the income variable value, not the label). The second question is the amount of change. Enter 50. The third question is the number of shocks. The default value is 5. Accept this value by clicking the OK button. You have asked Excel to change income, holding the other variables constant, from 100 to 150 to 200 to 250 to 300 to 350: five jumps of 50 each from the 100 initial value. STEP After verifying that you have entered the shock information correctly, click the button to continue. The Step 4 screen is the heart of the add-in.
You have provided the goal, endogenous and exogenous variable information, Solver found the initial solution, and you have told Excel which variable to shock and how. Excel is ready to run the problem over and over again for each of the shock variable values you provided. It is essentially the manual approach, but Excel does all of the tedious work. STEP Click the button. The bar displays Excel’s progress through the repeated optimization problems. It runs Solver at each value of income, but it is very fast. STEP Click the button, read the information in the box, and click the button. Excel takes you to a sheet it has inserted into the workbook with all of the comparative statics results. This sheet is similar to the CS1 sheet. Notice how the results are arranged. It begins with the initial parameter values (widen column A if needed), then displays a table with income in column A, followed by maximum utility and the optimal values of the two goods. The results produced by the Comparative Statics Wizard can be further processed as shown in the CS1 sheet. STEP Proceed to the CS1 sheet. Columns F and G contain slope and elasticity calculations. Click on the cells to see the formulas. Notice that you have to be careful with parentheses when doing percentage change calculations in Excel. Simply entering “= C14 - C13/C13” will not do what you want because Excel’s order of operations rule will divide C13 by C13 (which is 1) and subtract that from C14. You need “=(C14 - C13)/C13” to compute the percentage change. Income Consumption and Engel Curves There are two graphs on the CS1 sheet. They appear to be the same, but they are not. One graph is an income consumption curve and the other is an Engel curve. They are related and understanding their connection is important. Ernst Engel (not to be confused with Karl Marx’s benefactor and friend, Friedrich Engels) was a 19th century German statistician who analyzed consumer expenditure data. He found that food purchases increased as income rose, but at a decreasing rate. This became known as Engel’s Law. A graph of quantity demanded for a good as a function of income, ceteris paribus, is called an Engel curve. The income consumption curve (ICC) shows the effect of the increase in income in the canonical indifference-curves-and-budget-constraint graph. In other words, the ICC shows the comparative statics analysis on the underlying, canonical graph. Panel A in Figure 4.4 shows the income consumption curve. Panel B shows that the Engel curve for $x_1$ plots the relationship between income and optimal $x_1$. This presentation graph shows only the optimal value of the endogenous variable ($x_1$) as a function of the shock variable (m) and hides everything else. There is an Engel curve graph for $x_2$, but it is not displayed. STEP Use your comparative statics results to make Engel and income consumption curves. This will help you understand the relationship between the two curves. For the Engel curve, select data in m (in column A) and $x_1$ (in column C). For the ICC, you need to select $x_1$ and $x_2$ (in columns C and D). After selecting the data, click the Insert tab in the Ribbon and choose the Scatter chart type in the Charts group. The slope of the Engel curve reveals if the good is normal or inferior. A normal good, as in Figure 4.4, has a positively sloped Engel curve: when income rises, so does optimal consumption. An inferior good has a negatively sloped Engel curve: increases in income lead to decreases in optimal consumption of the good. Figure 4.5 shows this case.
Hamburger is the classic inferior good example. As income rises, the idea is that you eat less hamburger meat and more of better cuts of beef. The example also serves to point out that goods are not either normal or inferior due to some innate characteristic, but that the relationship is a local phenomenon. Figure 4.6 shows how a consumer might react across the full range of income. Do you understand the story this graph is telling? Figure 4.6 shows that hamburger is normal at low levels of income (with increasing consumption as income rises), but inferior at higher levels of income. Our Cobb-Douglas utility function cannot generate this complicated Engel curve. Analytical Comparative Statics Analysis of Changing Income We can derive the Engel curve for the problem in the EngelCurves.xls workbook via analytical methods. As usual, we rewrite the constraint and form the Lagrangean, then take derivatives, and solve the system of equations. The novelty this time is that we leave m as a letter so that our final answer is a function of income. This enables us to derive an Engel curve. 1. Rewrite the constraint so that it is equal to zero. $0 = m - 2x_1 - 3x_2$ 2. Form the Lagrangean function. With the workbook’s Cobb-Douglas utility function (the $c = d = 1$ case, $U = x_1x_2$), it is $L = x_1x_2 + \lambda(m - 2x_1 - 3x_2)$. We take derivatives and set them equal to zero: $\frac{\partial L}{\partial x_1} = x_2 - 2\lambda = 0$, $\frac{\partial L}{\partial x_2} = x_1 - 3\lambda = 0$, and $\frac{\partial L}{\partial \lambda} = m - 2x_1 - 3x_2 = 0$. To solve for the optimal values of $x_1$ and $x_2$, move the lambda terms in the top two equations to the right-hand side and divide the first equation by the second to eliminate lambda (and give the familiar MRS = $\frac{p_1}{p_2}$ condition, $\frac{x_2}{x_1} = \frac{2}{3}$). Then solve for optimal $x_2$ in terms of $x_1$: $x_2 = \frac{2}{3}x_1$. Substitute this expression for $x_2$ into the third first-order condition, $m - 2x_1 - 3(\frac{2}{3}x_1) = 0$, and solve for optimal $x_1$: $x_1 \mbox{*} = \frac{1}{4}m$. We can evaluate this expression at any value for m. If we substitute in $m = 100$, we get $x_1 \mbox{*} = 25$, which is what we got when we solved this problem with an income of \$100. Our reduced form expression for $x_1 \mbox{*}$ agrees with the values in columns A and C of the CS1 sheet that we produced via the numerical approach using the Comparative Statics Wizard. The numerical method picks individual points off the Engel curve function that we derived here. There is also an Engel curve for $x_2 \mbox{*}$. It is $x_2 \mbox{*} = \frac{1}{6}m$. Of course, these Engel curves are for this particular consumer, with this particular utility function and set of exogenous variables. Different preferences will give different Engel curves. If we make the problem more general, in the sense of substituting letters for numbers in the Lagrangean, then these exogenous variables will appear in the reduced form expression. In other words, the one-quarter and one-sixth constants in the Engel curves will be changed into an expression with the exogenous variables. Evaluating that expression at the current values of the exogenous variables will give one-quarter and one-sixth. If you change an exogenous variable other than income, you will no longer move along the Engel curve. Instead, you will shift the entire Engel curve. To compute an own units response in $x_1 \mbox{*}$ given a change in income, we can simply take the derivative with respect to m, which is $\frac{1}{4}$. This means the slope of the reduced form is constant at any value of m. The elasticity at a given value of m can be computed via the following formula: $\frac{dx_1 \mbox{*}}{dm}\frac{m}{x_1 \mbox{*}}$ Because it is calculated at a particular point, this is called a point elasticity, as opposed to an elasticity measured from one point to another.
Economists usually compute and report point elasticities, but they often omit the adjective and simply call the result an elasticity. Notice how the point elasticity formula is similar to the elasticity formula from one point to another, $\frac{\Delta x_1 \mbox{*}}{\Delta m}\frac{m}{x_1 \mbox{*}}$. We have simply replaced the delta with a d; this shows that the two formulas are the same except for the size of the change in m. Instead of a discrete-size change, the point elasticity formula is based on an infinitesimally small change in m. At m = 100, the point income elasticity of $x_1 \mbox{*} = (\frac{1}{4})(\frac{100}{25}) = 1.$ Good $x_2$ also has a constant unit income elasticity. Rays from the origin always have constant unit elasticities. The utility function plays a crucial role in comparative statics outcomes. Cobb-Douglas utility functions always yield linear Engel curves with constant unit income elasticities. We do not believe that, in the real world, Engel curves are always linear and unit income elastic. While there are other utility functions with less restrictive results, they are more difficult to work with mathematically. Ease of algebraic manipulation helps explain the popularity of the Cobb-Douglas functional form. An Engel Curve is Comparative Statics Analysis This chapter introduced comparative statics analysis. It focused on tracking the optimal solution as income changes. The resulting relationship is called an Engel curve. Comparative statics analysis, including elasticities, can be done via numerical and analytical methods. The Comparative Statics Wizard handles much of the tedious work in the numerical approach. We can compute an elasticity in two ways: at a point and from one point to another. The former uses the derivative and the latter is based on a discrete-size change in the exogenous variable. Both elasticities are based on percentage changes, but the derivative uses infinitesimally small changes in the exogenous variable. We will often compare the two methods. In this case, the two methods agreed perfectly. This will not always be true. Exercises 1. Change the price of good 1 from 2 to 3 in the OptimalChoice sheet of the EngelCurves.xls workbook. From $m = 100$, use the Comparative Statics Wizard to create a graph of the Engel curve for good 1. Title the graph and label the axes. Take a picture of your graph and paste it in your Word document. 2. Why is the slope of your graph different than the one in the CS1 sheet? 3. Compute the income elasticity of demand for good 1 from $m = 100$ to 200. Show your work. 4. Compute the income elasticity of demand for good 1 at $m = 100$. Show your work. 5. Why are your answers in question 3 and 4 the same?
This section derives Engel curves via numerical and analytical methods for different utility functions. It applies the same logic as the previous chapter. This is mastery by repetition. Recognizing how the same steps are used is essential to thinking like an economist. Quasilinear Preferences This example uses a quasilinear utility function, $U = x_1^{\frac{1}{2}} + x_2$. The budget constraint is $140 = 2x_1 + 10x_2$. We begin with the analytical approach. We rewrite the constraint and form the Lagrangean, leaving m as a letter (since we want to derive an Engel curve). We take derivatives and set them equal to zero. To solve for the optimal values of $x_1$ and $x_2$, we follow our usual approach, moving the $\lambda$ terms over to the right-hand side and dividing the two equations to cancel the $\lambda$s. Notice that the MRS, $\frac{1}{2}x_1^{-\frac{1}{2}}$, is a function of $x_1$ alone. This is a property of the quasilinear utility function. We can solve for $x_1 \mbox{*}$ from the MRS equal to the price ratio equation: $\frac{1}{2}x_1^{-\frac{1}{2}} = \frac{2}{10}$ gives $x_1 \mbox{*} = 6.25$. Next, we plug this value into the third first-order condition and solve for $x_2 \mbox{*} = \frac{m - 12.5}{10}$, which equals 12.75 at $m = 140$. To compute an own units response in $x_1 \mbox{*}$ given a change in m, we can simply take the derivative with respect to m, which is zero (because m does not appear in the $x_1 \mbox{*}$ reduced form). Thus, increases in income leave $x_1 \mbox{*}$ unchanged. In other words, the Engel curve for good 1 is horizontal at 6.25. The own units response for $x_2 \mbox{*}$ is $\frac{dx_2 \mbox{*}}{dm} = \frac{1}{10}$. This means that an additional dollar in income leads to a $\frac{1}{10}$ unit increase in good 2. We can use the income elasticity formula, $\frac{dx_1 \mbox{*}}{dm}\frac{m}{x_1 \mbox{*}}$, to compute the income elasticity. At m = 140, the income elasticity of $x_1 \mbox{*}$ = (0)(140/6.25) = 0, which is perfectly inelastic. This means that changes in m have no effect at all on $x_1 \mbox{*}$. These results seem a little strange. Perhaps the numerical approach and Excel can shed some light on what’s going on here. STEP Open the Excel workbook EngelCurvesPractice.xls, read the Intro sheet, then go to the QuasilinearChoice sheet. It shows the optimal solution, (6.25, 12.75), for m = 140. Change income to 160. As expected, the budget line shifts out. STEP Run Solver to find the new optimal solution. The resulting chart looks like Figure 4.7. Figure 4.7 and your screen show that the value of $x_1 \mbox{*}$ remained unchanged as income rose from \$140 to \$160. This consumer maximizes utility by spending all of the extra \$20 in income on good 2. Figure 4.7 also displays a key property of the quasilinear functional form: the indifference curves are vertically shifted and parallel to each other. Thus, when we increase income, the new point of tangency is found directly, vertically up from the original solution. STEP Return income to its initial value of \$140. Run the Comparative Statics Wizard, applying 5 shocks to income in \$10 increments. Your results should look like the CS1 sheet. STEP Create Engel and income consumption curves. For the Engel curves, this requires making a chart of $x_1 \mbox{*}$ as a function of m and another chart of $x_2 \mbox{*}$ as a function of m. For the income consumption curve, the chart is $x_2 \mbox{*}$ as a function of $x_1 \mbox{*}$. Each point on this chart is a point of tangency between the budget line and maximum attainable indifference curve.
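Before making these charts, it helps to know what the cleaned-up numbers should look like. The short Python sketch below previews the schedule; it is a minimal sketch that assumes the reduced form expressions derived above, $x_1 \mbox{*} = 6.25$ for any m and $x_2 \mbox{*} = \frac{m - 12.5}{10}$, whereas the CSWiz produces the same points by running Solver at each income level.

# Preview of the quasilinear Engel schedule, assuming the reduced forms
# derived above: x1* = 6.25 regardless of m, and x2* = (m - 12.5)/10.
for m in range(140, 191, 10):
    print(f"m = {m}: x1* = 6.25, x2* = {(m - 12.5) / 10:.2f}")

The formula returns exactly 6.25 for good 1 every time, which is the benchmark for the Solver noise discussed next.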
Your first attempt at making a chart of $x_1 \mbox{*}$ as a function of m will not yield a horizontal line at 6.25. Look closely, however, at the y axis scale. The problem is that Solver is reporting numbers very close to, but not exactly, 6.25 as income changes. But these slight differences in optimal $x_1$ are not meaningful. They are Solver noise. In fact, for all of these values of m, optimal $x_1$ really is exactly 6.25. We need to clean up Solver’s results. Simply changing the display to fewer decimals will not work. This will change the display of the y axis, but Excel will still have the same number in its memory. Instead, we have to use Excel’s ROUND function to change the numbers produced by Solver. The ROUND function has two arguments, the cell you want to round and the number of decimal places. So, ROUND(123.456,1) evaluates to 123.5. STEP Enter this formula in a blank cell, “=ROUND(123.456,-2)”, to see what a negative argument does (it rounds to the left of the decimal point, so this evaluates to 100). We can use the ROUND function to round Solver’s results to the hundredths place. Cell F12 shows how this strategy is implemented. STEP Apply Excel’s ROUND function to your comparative statics results and then make a chart of the Engel curve for good 1 using the rounded data. Your final chart should look like the one in the CS1 sheet. Finally, we can use the CSWiz results to examine the responsiveness of the endogenous variables to the changes in income we applied. STEP Compute the response to the income changes in own units and income elasticities for $x_1 \mbox{*}$ and $x_2 \mbox{*}$. Check your work with the results in the CS1 sheet. Notice that the responsiveness results from the numerical method are the same as those via the analytical approach. Perfect Complements STEP Proceed to the PerfCompChoice sheet to practice on another utility function. This function reflects preferences in which the two goods are perfect complements. This gives L-shaped indifference curves, but our analysis proceeds as usual. The problem is to maximize the perfect complements utility function subject to the budget constraint. The PerfCompChoice sheet shows that $p_1 = 2, p_2 = 10, a = b = 1.$ We do the problem first via the analytical method, leaving m as a letter so we can find $x_1 \mbox{*} = f(m)$ and $x_2 \mbox{*} = f(m)$; these are Engel curves for goods 1 and 2. In section 3.2, we showed how to solve this problem by finding the intersection of two lines on which the solution must lie. Since $a = b = 1$, the optimal solution must be where $x_1 = x_2$ (a ray from the origin with slope $+ 1$). Of course, the solution must also lie on the budget line, so we can solve this system of two equations and two unknowns by substituting in $x_1$ for $x_2$ in the budget constraint equation: $m = 2x_1 + 10x_1 = 12x_1$, so $x_1 \mbox{*} = \frac{m}{12}$. Since $x_2$ must equal $x_1$ at the optimal solution, we know $x_2 \mbox{*} = \frac{m}{12}$ as well. To compute an own units response in $x_1 \mbox{*}$ given a change in income, we can simply take the derivative with respect to m, which is $\frac{1}{12}$. This slope is constant and the Engel curve is linear. The income elasticity at a given value of m can be computed via the point elasticity formula, $\frac{dx_1 \mbox{*}}{dm}\frac{m}{x_1 \mbox{*}}$. At $m = 50$, the income elasticity of $x_1 \mbox{*} = \frac{1}{12}\frac{50}{4.167} = 1$. This means that a 1% change in m will result in a 1% change in $x_1 \mbox{*}$. STEP Run the Comparative Statics Wizard on the PerfCompChoice sheet (you can make the change in income \$10) and create Engel and income consumption curves.
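Again, a preview of what the add-in should produce can be computed directly; a minimal sketch, assuming the reduced form just derived ($x_1 \mbox{*} = x_2 \mbox{*} = \frac{m}{12}$) and starting from the sheet’s initial income of \$50:

# Preview of the perfect complements Engel schedule, assuming the
# reduced form derived above: x1* = x2* = m/12, starting from m = 50.
for m in range(50, 111, 10):
    print(f"m = {m}: x1* = x2* = {m / 12:.3f}")

The points lie on a line with slope 1/12, which is the own units response computed above.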
STEP Compute the response to the income changes in own units and income elasticities for $x_1 \mbox{*}$ and $x_2 \mbox{*}$. Check your work with the results in the CS2 sheet. Notice that the results in Excel are the same as those from the analytical approach. The Utility Function Determines the Shape of the Engel Curve This section ran a comparative statics analysis of a change in income on quasilinear and perfect complement utility functions. This enabled practice in deriving Engel curves and income consumption curves, along with computing responsiveness in own units and elasticities. The quasilinear function has the peculiar result that the income elasticity of $x_1 \mbox{*}$ is zero. This happens because the indifference map of a quasilinear utility function is a series of vertically parallel curves. Thus, when the budget line shifts out, the new optimal solution is found directly above the initial solution and $x_1 \mbox{*}$ remains unchanged. With the perfect complements utility function, we were able to find an analytical solution even though we could not use the Lagrangean method. The Engel curve for $x_1 \mbox{*}$ has a constant slope and a unit income elasticity. These are the same properties for the Engel curve we found in the previous chapter using the Cobb-Douglas functional form. The shape of the Engel curve, its slope, and its income elasticity are all influenced by the consumer’s utility function. The relationship is complicated, so there is no rule or simple statement about how the functional form of utility determines the Engel curve. Ernst Engel wanted to know how spending on food changed as income rose. He believed food purchases would increase at a decreasing rate as income increased, as shown in Figure 4.8. This makes common sense. As you get richer and richer, you can buy a much nicer house and cars, but it is difficult to spend a lot more on food. This is known as Engel’s Law. None of the three utility functions we have encountered thus far (Cobb-Douglas, quasilinear, and perfect complements) are capable of generating an Engel curve that conforms to Engel’s Law for food purchases. If we were interested in food, we would have to find and use a utility function with an Engel curve that conformed to Engel’s Law. Such functions exist, but as you can imagine, they are more complicated than the computationally simple functions we have used thus far. Exercises 1. In the QuasilinearChoice sheet, copy cell B11 and paste it in cell C11. Set income to \$200 and run Solver to find the new optimal solution. In cell D11, enter a formula to find the difference between cell C11 and B11. Is this tiny difference meaningful? Explain. 2. Having changed income and run Solver in question 1, if you connected the initial and new solutions on the chart, you would get a vertical line. Why is this happening? Will this happen with every consumer? 3. Having changed income and run Solver in question 1, is good 1 a normal or an inferior good? Explain. 4. Use Word's Equation Editor to solve the general version of the perfect complements problem. In other words, find $x_1 \mbox{*}$ and $x_2 \mbox{*}$ for $U = \min \{ax_1, bx_2\}$ subject to $m = p_1x_1 + p_2x_2$.
We know how to find the initial optimal solution in the Theory of Consumer Behavior and we have explored the comparative statics properties of a change in income. We are well prepared to embark on the most important comparative statics analysis in the Theory of Consumer Behavior: deriving a demand curve. Numerical Comparative Statics Analysis of Changing Price STEP Open the Excel workbook DemandCurves.xls and read the Intro sheet, then go to the OptimalChoice sheet. The problem is set up, but the consumer is not optimizing because the MRS does not equal the price ratio and the consumer can move to higher indifference curves by traveling up the constraint. STEP Run Solver to find the initial solution: $x_1 \mbox{*} = 25$ and $x_2 \mbox{*} = 16 \frac{2}{3}$. Next, we explore how this initial optimal solution changes as the price of good 1 changes, ceteris paribus. This comparative statics analysis will produce a demand curve. Before we actually do it, can you anticipate what will happen when we increase the price of good 1? Believe it or not, if you try to figure it out first, before actually seeing it, you will learn more. Take a moment to think: what will happen to the graph on your screen when we increase the price of $x_1$? Let’s see how you did. STEP Shock: Change cell B16 to 3. Figure 4.9 shows how your screen should look. With a higher $p_1$, the budget constraint rotates in, pivoting on the $x_2$ intercept. The consumer now has fewer consumption possibilities and needs to re-optimize to find the new optimal solution. STEP New: Run Solver to find the new optimal solution. We have completed initial, shock, and new; the last step is to compare. Figure 4.10 shows a table that displays the comparative statics results. In qualitative terms, we can see that $x_1 \mbox{*}$ falls as $p_1$ rises, but $x_2 \mbox{*}$ remains unchanged. Quantitatively, we can compute the own units response in good 1 as new minus initial $x_1 \mbox{*}$, which is $16 \frac{2}{3} - 25 = - 8 \frac{1}{3}$, divided by 1 (from $3-2$). This is the value displayed in the table. The own units response in $x_2$ is zero since it did not change. Responsiveness in percentage terms is the price elasticity of demand. We need to compute the percentage change in $x_1 \mbox{*}$ divided by the percentage change in $p_1$. The numerator is $- 33\%$ because $\frac{16 \frac{2}{3} - 25}{25} = - \frac{1}{3}$. The denominator is $\frac{3 - 2}{2} = 0.5$ or 50%. So, a 50% increase in price, from $p_1 = 2$ to 3, caused a 33% decrease in quantity demanded. Thus, the price elasticity of demand is $\frac{- 0.33}{0.5}= - \frac{2}{3}$ or roughly $- 0.67$. This number is displayed in the table in Figure 4.10. The same calculation can be performed on $x_2$. Since we are considering the effect on good 2 from a shock to the price of good 1, we call this a cross price analysis. The term cross is used in economics when we examine the effect of i on j; an own effect, for example, would be $p_1$ on $x_1$. We quickly realize that the cross price elasticity, the $p_1$ elasticity of $x_2$, is zero because the numerator is zero. This is perfectly inelastic or completely unresponsive. Comparative statics via numerical methods is easier with the Comparative Statics Wizard add-in. If it is not installed, return to the beginning of this chapter to load the CSWiz add-in. STEP Analyze the effect of a change in $p_1$ by running the CSWiz add-in and changing the price of good 1 by \$1 increments (for five shocks).
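The points the add-in picks off can be previewed with a few lines of Python; this is a minimal sketch that assumes the Cobb-Douglas reduced form derived later in this section, $x_1 \mbox{*} = (\frac{c}{c+d})\frac{m}{p_1}$, evaluated at $c = d = 1$ and $m = 100$, so $x_1 \mbox{*} = \frac{50}{p_1}$ and $x_2 \mbox{*} = \frac{50}{3}$ no matter what $p_1$ is.

# Preview of the price shocks: p1 runs from 2 to 7 in $1 jumps, using the
# reduced forms x1* = 50/p1 and x2* = 50/3 (Cobb-Douglas, c = d = 1, m = 100).
for p1 in range(2, 8):
    print(f"p1 = {p1}: x1* = {50 / p1:.2f}, x2* = {50 / 3:.2f}")

The first two rows reproduce the initial and new solutions in Figure 4.10, and the unchanging $x_2 \mbox{*}$ column is the zero cross price elasticity in action.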
You can see a slightly different comparative statics analysis in the CS1 sheet. Instead of changing price by one dollar increments, the analysis in the CS1 sheet was performed with a shock size of 0.1. STEP Use your comparative statics results to make a demand curve, a graph of $x_1 \mbox{*} = f(p_1)$. To do this, select the $p_1$ data in column A, then hold down the ctrl key (and keep holding it) while selecting the $x_1$ data in column C. With cells in columns A and C selected, select the Scatter chart type. Title the graph and label the axes. Another way to display the comparative statics results is via the price consumption (or offer) curve, as shown in Panel A of Figure 4.11 for a utility function that is not Cobb-Douglas; it is not meant to display the increasing price analysis that you just completed. Instead, a price decrease is shown. There is a lot going on in Figure 4.11. The graph on the left (Panel A) shows a price decrease swinging the budget constraint out. It uses numbers to indicate the initial and new optimal solutions. Panels B and C show demand, but look closely: the axes have been flipped. Instead of graphing $x_1$ as a function of $p_1$, the exogenous variable ($p_1$) is on the y axis in Panel B. This is a backwards, but common, presentation in economics. The roots of this strange way of presenting the results can be traced back in the history of economics to Alfred Marshall in 1890. Modern economists call the graph in Panel B of Figure 4.11 an inverse demand curve because it is plotted as $P = f(Q)$. The demand curve, the mathematically correct version, is $Q = f(P)$ because we plot $y = f(x)$ with y as the dependent variable that is determined by x. In introductory economics, the inverse demand curve is used. The professor just draws a downward sloping line or curve and pronounces that it is obvious that as price goes up, quantity demanded falls (we will soon see that this is not guaranteed). As the level of sophistication rises, especially if we are doing econometrics and trying to estimate a demand curve, economists use the mathematically correct demand curve. Economists are used to both ways of presenting demand. It is confusing at first, but you can get the hang of it pretty quickly. STEP Read the information in the CS1 sheet. It explains how the ROUND function was used to create the price consumption curve from the comparative statics results. Notice that the price consumption curve for changes in $p_1$ in the Excel workbook is horizontal. This is a property of the Cobb-Douglas utility function and is not especially realistic. The indifference map in Figure 4.11 is not based on a Cobb-Douglas utility function because the price consumption curve is not horizontal. Another useful Excel skill to master that is especially relevant right now involves controlling the x and y axes. Excel’s default is that the leftmost column of selected data goes on the x axis. If we want to make a demand curve with the data in the CS1 sheet, this is convenient. We select the data in column A ($p_1$), hold down the ctrl key and select the data in column C ($x_1$). When you make a Scatter chart, Excel puts price on the x axis and quantity on the y axis. But what if we want to make an inverse demand curve, with $p_1$ on the y axis? One easy way to do it is by directly editing the SERIES formula in the chart. STEP Visit vimeo.com/econexcel/using-series-formula to watch a quick, 5-minute video of how the SERIES formula works. After you watch the video, try it on your demand curve chart.
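In case you cannot watch the video, the gist is this: each plotted series in an Excel chart is defined by a formula of the form =SERIES(name, x_values, y_values, plot_order). With hypothetical cell ranges for the CS1 results, a demand curve might be “=SERIES(,CS1!$A$13:$A$18,CS1!$C$13:$C$18,1)”. Swapping the second and third arguments, “=SERIES(,CS1!$C$13:$C$18,CS1!$A$13:$A$18,1)”, puts $p_1$ on the y axis and produces the inverse demand curve. The exact ranges on your sheet may differ.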
Can you flip the axes by directly editing the SERIES formula? Click on your demand curve, then switch columns A and C in the x and y arguments in the SERIES formula. To see an example of this, click on the series in the chart in the CS1 sheet. Analytical Comparative Statics Analysis of Changing Price We take the opportunity here to extend our previous analytical work. We could just leave $p_1$ as a letter since we want to derive a demand curve, but we will be more aggressive and leave all exogenous variables as letters. This will give us the most general answer we can get. We rewrite the constraint and form the Lagrangean, $L = x_1^cx_2^d + \lambda(m - p_1x_1 - p_2x_2)$. Although it seems more formidable than when numbers are used in place of letters, we can apply the usual strategies for taking derivatives and solving the first-order conditions to find the optimal solution. We take derivatives and set them equal to zero. To solve for the optimal values of $x_1$ and $x_2$, we move the lambda terms to the right-hand side and divide the first equation by the second. This gets rid of lambda and gives the familiar MRS = $\frac{p_1}{p_2}$ condition, which can then be solved for optimal $x_2$ as a function of optimal $x_1$: $x_2 = \frac{d}{c}\frac{p_1}{p_2}x_1$. We substitute this expression into the third first-order condition (the budget constraint) and solve for optimal $x_1$: $x_1 \mbox{*} = (\frac{c}{c+d})\frac{m}{p_1}$. This expression contains the demand curve for $x_1$ because it shows the quantity demanded at a given $p_1$. It also contains an Engel curve because it shows how $x_1$ varies with income. It also shows how $x_1$ moves when c or d, the consumer’s tastes and preferences, change, although such a graph is unnamed. Furthermore, this expression can be evaluated for any combination of exogenous variable values. For example, suppose $c = d = 1, p_1 = 2$, and $m = 100$. Then it can be seen easily that optimal $x_1$ = 25. In fact, you can readily see that the reduced form expression for optimal $x_1$ agrees with the numerical approach using the Comparative Statics Wizard to recalculate the optimal solution at given values of $p_1$. We can use our reduced form expression to calculate an own units response to a shock in $p_1$ by taking the derivative with respect to $p_1$: $\frac{dx_1 \mbox{*}}{dp_1} = -(\frac{c}{c+d})\frac{m}{p_1^2}$. This formidable-looking expression is the instantaneous rate of change of the demand curve at a particular point. Because $x_1 \mbox{*}$ is a nonlinear function of $p_1$, its derivative with respect to $p_1$ contains $p_1$. The fact that the demand curve is not a line explains why we get different results when we compute responsiveness with $\Delta$ versus d. STEP Read the CS1 sheet carefully. Your primary goal is to understand the relationship between $\Delta$ in cells F14 and G14 versus the derivative in cells I13 and J13. The key idea is this: as $\Delta$ gets smaller, it approaches d. Thus, earlier, we computed the price elasticity of demand from $p_1=2$ to 3 and got $-0.67$. But the CS1 sheet shows an elasticity of $-0.95$ (in G14) as we go from $p_1=2$ to 2.1. When we use the derivative formula, which is based on an infinitesimally small change in $p_1$, we get an elasticity of $-1$. Notice that, unlike the demand curve, $x_1 \mbox{*} = f(p_1)$, the Engel curve, $x_1 \mbox{*} = f(m)$, is a line for the Cobb-Douglas utility function. We say, "x one star is nonlinear in p one" and "x one star is linear in m." Because the Engel curve is a line, $\Delta m$ and the derivative with respect to m give identical results. The size of the change in m does not matter if the relationship is linear.
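You can watch $\Delta$ approach d directly. This minimal Python sketch recomputes the price elasticity from $p_1 = 2$ for smaller and smaller price changes, using the reduced form evaluated at $c = d = 1$ and $m = 100$ (so $x_1 \mbox{*} = \frac{50}{p_1}$); the values march toward the derivative-based answer of $-1$.

# The elasticity from one point to another approaches the point elasticity
# as the price change shrinks; here x1* = 50/p1 (c = d = 1, m = 100).
p1, x1 = 2, 25
for dp in [1, 0.1, 0.01, 0.001]:
    new_x1 = 50 / (p1 + dp)
    print(f"dp = {dp}: elasticity = {((new_x1 - x1) / x1) / (dp / p1):.4f}")

The first two lines of output reproduce the $-0.67$ and $-0.95$ values just discussed.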
The unit price elasticity is a property of a Cobb-Douglas utility function. We can use the reduced form expression for $x_1 \mbox{*}$ to show that we always get a $-1$ price elasticity: $\frac{dx_1 \mbox{*}}{dp_1}\frac{p_1}{x_1 \mbox{*}} = -(\frac{c}{c+d})\frac{m}{p_1^2} \cdot \frac{p_1}{(\frac{c}{c+d})\frac{m}{p_1}} = -1$. So Cobb-Douglas produces three constant elasticities: 1. Unit income elasticity 2. Unit own price elasticity 3. Zero cross price elasticity None of these are especially realistic. Cobb-Douglas is common because it is easy to work with, not because it produces sensible elasticities. A Point Off the Demand Curve? Unlike an introductory economics course where demand curves appear out of the blue as downward sloping lines or curves, understanding where demand curves come from and what they actually represent are major goals for us. So far, we have a mechanical understanding of the derivation of demand. Yes, it is true that changing $p_1$, ceteris paribus, and tracking how $x_1 \mbox{*}$ changes is how a demand curve is derived. And, yes, it is true that at every price, quantity demanded is the solution to an optimization problem for that price. But let’s try a thought experiment not included in introductory economics. If we consider what it means to be at a point off the demand curve, such as point Z in Figure 4.12, it helps us understand that the demand curve is really like a ridgeline across the top of a mountain range. With a point Z to the right of the inverse demand curve, we know that the consumer is buying too much $x_1$, as shown by the vertical dashed line in the graph on the left of Figure 4.12. We cannot precisely plot the point Z on the indifference curve graph because we do not know how much good 2 the person is buying at point Z. We do know, however, that she is not optimizing. In other words, at point Z, this consumer is failing to maximize satisfaction and is not at the tangency of the budget line and highest attainable indifference curve. Considering the meaning of a point off the demand curve reveals that a demand curve is a geometrical object with a special characteristic: every point on the demand curve is a point of maximum utility given prices and income. If we added an axis for utility, the demand curve would show itself as a 3D object that displayed the maximum utility at each given price. In other words, the demand curve is a ridgeline that connects mountain peaks, as shown in the sketch on the right in Figure 4.12. A Demand Curve Is a Comparative Statics Exercise Deriving a demand curve is the most important comparative statics exercise in the Theory of Consumer Behavior. Demand and supply (the latter being the most important comparative statics exercise in the Theory of the Firm) are at the heart of the market mechanism. Given a particular functional form for utility, demand curves can be derived via numerical methods, picking off individual points on the demand curve for explicit values of price, ceteris paribus. Slopes and elasticities can be computed. Demand curves can also be derived via analytical methods by finding the reduced form expression as a function of price (and any other exogenous variables). Slopes and elasticities can be computed by using the derivative. For Cobb-Douglas utility, we found that $x_1 \mbox{*} = (\frac{c}{c+d})\frac{m}{p_1}$. For this reduced form, the numerical and analytical methods yield different values for slopes and elasticities based on changing $p_1$ because the demand curve is a curve, instead of a line (like the Engel curve). The smaller the discrete change in $p_1$ used in the numerical method, the closer it gets to the analytical result.
We can also "derive" a demand curve with graphs, as shown in Figure 4.11. We can display the effect of a price change by rotating the budget line and showing the initial and new points of tangency. If we display the $p_1$ and corresponding optimal amount of $x_1$ in a separate graph, we have graphically derived a demand curve (or inverse demand curve, if we flip the axes). Finally, if we work out the implications of a point off the demand curve, we can see the demand curve in a new light: it is actually a 3D object represented in 2D space. All of the points on the demand curve are actually points of maximum utility subject to the budget constraint. Exercises 1. In the OptimalChoice sheet, click the button and reproduce Figure 4.10 with a decrease (instead of an increase) in $p_1$ from \$2/unit to \$1/unit. Use Word’s Table feature to create the table and fill in the cells. 2. Use Word’s Drawing Tools to create a graph of the price consumption curve and demand curve for $x_1$ (as in Figure 4.11) that accurately reflects the shock and results from question 1. 3. What is the difference between a demand curve and an inverse demand curve?
This section derives the demand curve from two different utility functions, quasilinear preferences and perfect complements, to provide practice deriving demand curves. Nothing new here, just practice applying the tools, techniques, and concepts of the economic way of thinking. Quasilinear Preferences We begin with the analytical approach. Rewrite the constraint and form the Lagrangean, leaving $p_1$ as a letter so we can derive a demand curve. $\max _{x_{1}, x_{2}, \lambda} L=x_{1}^{1 / 2}+x_{2}+\lambda\left(140-p_{1} x_{1}-10 x_{2}\right)$ STEP Follow the usual Lagrangean procedure to solve this problem. For help, refer back to section 4.2, where we solved this same problem except with $m$, instead of $p_1$, left as a letter. You should find reduced form expressions like this: \begin{aligned} x_{1}^{*} &=\frac{25}{p_{1}^{2}} \\ x_{2}^{*} &=14-\frac{2.5}{p_{1}} \end{aligned} The first expression, $x_1 \mbox{*} = \frac{25}{p_1^2}$, is a demand curve for $x_1 \mbox{*}$ because it gives the quantity demanded of $x_1$ as a function of $p_1$. If we rewrite the equation in terms of $p_1$ like this, $p_1^2 = \frac{25}{x_1 \mbox{*}} \rightarrow p_1 = \frac{5}{\sqrt{x_1 \mbox{*}}}$ then we have an inverse demand curve, with price on the y axis as a function of quantity on the x axis. The derivative of $x_1 \mbox{*}$ with respect to $p_1$ tells us the slope of the demand curve at any given price. \begin{aligned} x_{1}^{*} &=25 p_{1}^{-2} \\ \frac{d x_{1}^{*}}{d p_{1}} &=-2 \cdot 25 p_{1}^{-3}=-\frac{50}{p_{1}^{3}} \end{aligned} The own price elasticity of demand is: $\frac{d x_{1}^{*}}{d p_{1}} \cdot \frac{p_{1}}{x_{1}^{*}}=-\frac{50}{p_{1}^{3}} \frac{p_{1}}{\frac{25}{p_{1}^{2}}}=-2$ The constant elasticity of demand for good 1 is a property of the quasilinear utility function. Notice that 2 is the reciprocal of the exponent on $x_1$ in the utility function. In fact, with $U = x_1^c + x_2$, the price elasticity of demand for $x_1$ is $-\frac{1}{1-c}$ for values of $c$ that yield interior solutions. The expression for optimal $x_2$ is a cross price relationship. It tells us how the quantity demanded for good 2 varies as the price of good 1 changes. The equation can be used to compute a cross price elasticity, like this: $\frac{d x_{2}^{*}}{d p_{1}} \cdot \frac{p_{1}}{x_{2}^{*}}=\frac{2.5}{p_{1}^{2}} \frac{p_{1}}{14-\frac{2.5}{p_{1}}}=\frac{2.5}{p_{1}\left(14-\frac{2.5}{p_{1}}\right)}=\frac{2.5}{p_{1}\left(\frac{14 p_{1}-2.5}{p_{1}}\right)}=\frac{2.5}{14 p_{1}-2.5}$ Unlike the own price elasticity, the cross price elasticity is not constant. It depends on the value of $p_1$. It is also positive (whereas the own price elasticity was negative). When $p_1$ rises, optimal $x_2$ also rises. This means that goods 1 and 2 are substitutes. Complements, on the other hand, are goods whose cross price elasticity is negative. This means that an increase in the price of good 1 leads to a decrease in consumption of good 2. Demand can also be derived via numerical methods. STEP Open the Excel workbook DemandCurvesPractice.xls, read the Intro sheet, then go to the QuasilinearChoice sheet. The consumer is maximizing satisfaction at the initial parameter values because the marginal condition, MRS = $\frac{p_1}{p_2}$, is met at the point (6.25, 12.75) (ignoring Solver’s false precision) and income is exhausted. We can explore how this initial optimal solution varies as the price of good 1 changes via numerical methods. We simply change $p_1$ repeatedly, running Solver at each price, while keeping track of the optimal solution at each price.
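The logic of that repetition is easy to see outside of Excel; here is a minimal Python sketch, using the reduced forms just derived ($x_1 \mbox{*} = \frac{25}{p_1^2}$ and $x_2 \mbox{*} = 14 - \frac{2.5}{p_1}$), whereas the add-in gets the same points by running Solver at each price.

# Manual comparative statics: raise p1 in 10-cent jumps and record the
# optimal bundle, using x1* = 25/p1^2 and x2* = 14 - 2.5/p1.
for step in range(6):
    p1 = 2 + 0.1 * step
    print(f"p1 = {p1:.1f}: x1* = {25 / p1 ** 2:.4f}, x2* = {14 - 2.5 / p1:.4f}")

Notice that $x_1 \mbox{*}$ falls and $x_2 \mbox{*}$ rises as $p_1$ climbs, exactly the substitutes pattern described above.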
The Comparative Statics Wizard add-in handles the tedious, cumbersome calculations and outputs the results in a new sheet for us. STEP Run the Comparative Statics Wizard on the QuasilinearChoice sheet. Increase the price of good 1 by 0.1 (10 cent) increments. You can check your comparative statics analysis by comparing your results to the CS1 sheet, which is based on \$1 (instead of \$0.1) shocks. Of course, the numbers will not be exactly the same since the $\Delta p_1$ shock size is different. The columns of price and optimal $x_1$ are points on the demand schedule. The numerical approach via the CSWiz essentially picks individual points on the demand curve for the given prices. If you plot these points, you have a graph of the demand curve. The analytical approach, on the other hand, gives the demand function as an equation. You can evaluate the expression at particular prices and generate a plot of the demand curve. The two approaches, if done correctly, will always yield the same graphical depiction of the demand curve. They may not, however, yield the same slopes or elasticities. STEP Using your results, create demand and price consumption curves. Compute the own unit changes and elasticities for $x_1 \mbox{*}$ and $x_2 \mbox{*}$. The CS1 sheet shows how to do this if you get stuck. You can click on cells to see their formulas. Think about how the formulas work and how they compute the answer. It is critical that you notice that your own unit changes and elasticities are closer to the instantaneous rates of change in columns I and J of the CS1 sheet because you have smaller changes in $p_1$ and, for this utility function, $x_1 \mbox{*}$ is nonlinear in $p_1$. Take a moment to reflect on what is going on in the calculations presented in the CS1 sheet. The color-shaded cells invite you to compare those cells. Now, let’s walk through this slowly. STEP Click on cell F13 to see its formula. It is computed as the change in optimal $x_1$ for a \$1 increase in $p_1$. There is a decrease of about 3.47 units when price increases by 1 unit. STEP Click on cell I12 to see its formula. It is computed by substituting the initial price, \$2/unit, into the expression for the derivative (displayed as an equation above the cell). The result of the formula, $-6.25$, is the instantaneous rate of change. In other words, optimal $x_1$ is falling at a rate of 6.25 units per dollar increase in $p_1$ at that point. STEP Go to your CSWiz results and, if you have not done so already, compute the change in optimal $x_1$ for a \$0.1 increase in $p_1$. You should find that your slope is about $-5.8$. The change in optimal $x_1$ is about $-0.58$, but you have to divide by the change in price, 0.1, to get the slope. Notice that your answer is much closer to the derivative-based rate of change ($-6.25$). This is because you took a much smaller change in price, 0.1, than the one dollar change in price in the CS1 sheet and you are working with a curve. STEP Return to the CS1 sheet and compare cells G13 and J12. The same principle is at work here. Because the demand curve is nonlinear, the two cells do not agree. Cell G13 is computing the elasticity from one point to another, whereas cell J12 is using the instantaneous rate of change (slope of the tangent line) at a point. If you compute the price elasticity from 2 to 2.1 (using your CS results), you will find that it is much closer to $-2$.
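The reduced form lets you verify this claim directly. From $p_1 = 2$ to 2.1, $x_1 \mbox{*}$ falls from $\frac{25}{4} = 6.25$ to $\frac{25}{4.41} \approx 5.669$, so the elasticity is $\frac{(5.669 - 6.25)/6.25}{(2.1 - 2)/2} = \frac{-9.3\%}{5\%} \approx -1.86$. That is much closer to the point elasticity of $-2$ than the $-1.11$ produced by the \$1 change from $p_1 = 2$ to 3.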
Finally, you might notice that unlike the Cobb-Douglas utility function, which produced a horizontal price consumption curve (PCC), the quasilinear utility function in this case is generating a downward sloping price consumption curve. In fact, the slope of the price consumption curve tells you the price elasticity of demand: an upward sloping PCC means that demand is inelastic, a horizontal PCC yields unit elastic demand (as in the Cobb-Douglas case), and a downward sloping PCC gives elastic demand (as in this case). Perfect Complements We begin with the analytical approach. $U(x_1, x_2)=\min\{ax_1,bx_2\}$ For $a = b = 1$, we know that we can find the intersection of the optimal choice and budget lines to get the reduced form expressions for the endogenous variables, $x_1 \mbox{*} = \frac{m}{p_1 + p_2}$ (which is the same for $x_2 \mbox{*}$ since $x_1 \mbox{*} = x_2 \mbox{*}$). This solution says that when a and b are the same in a perfect complements utility function, the optimal amounts of each good are equal and found by simply dividing income by the sum of the prices. The reduced form expression contains Engel and demand curves. Holding prices constant, we can see how m affects consumption. Likewise, holding m and $p_2$ constant, we can explore how optimal $x_1$ varies as $p_1$ changes. This, of course, is a demand curve for $x_1$. As usual, we find the instantaneous rate of change by taking the derivative with respect to $p_1$. The $p_1$ elasticity of $x_1$ is the derivative multiplied by $\frac{p_1}{x_1 \mbox{*}}$. \begin{aligned} \frac{d x_{1}^{*}}{d p_{1}} &=-\frac{m}{\left(p_{1}+p_{2}\right)^{2}} \\ \frac{d x_{1}^{*}}{d p_{1}} \cdot \frac{p_{1}}{x_{1}^{*}} &=-\frac{m}{\left(p_{1}+p_{2}\right)^{2}} \cdot \frac{p_{1}}{\frac{m}{p_{1}+p_{2}}}=-\frac{p_{1}}{p_{1}+p_{2}} \end{aligned} We can also derive demand for a perfect complements utility function via numerical methods. STEP Proceed to the PerfCompChoice sheet and run the Comparative Statics Wizard with an increase in the price of good 1 of 0.1 (10 cents). Can you guess what we will do next? The procedure is the same every time: we solve the model, then explore how the optimal solution responds to shocks. STEP Create demand and price consumption curves based on your comparative statics results. Compute the own units changes and elasticities for $x_1 \mbox{*}$ and $x_2 \mbox{*}$. The CS2 sheet shows how to do this if you get stuck. As before, you will want to concentrate on how your own units changes and elasticities are closer to the instantaneous rates of change than the $\Delta p_1$-based results in columns F and G of the CS2 sheet because you have smaller changes in $p_1$ and we are dealing with a nonlinear relationship. The lesson is clear: whenever the demand curve is not a line, that is, whenever $x_1 \mbox{*}$ is nonlinear in $p_1$, results based on $\Delta p_1$ will not exactly equal those based on $dp_1$. As the size of the discrete change in price gets smaller, the numerical method result will approach the result based on the derivative. Although the two methods might not exactly agree, they are usually pretty close. How close depends on the curvature of the relationship and the size of the discrete shock. This means you can always check your analytical work by doing a manual $\Delta$ shock and computing the change from one point to another. Notice also that the price consumption curve is upward sloping and the price elasticity is less than one (in absolute value).
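Evaluating the elasticity formula at the PerfCompChoice values ($p_1 = 2$, $p_2 = 10$) confirms this: the own price elasticity is $-\frac{p_1}{p_1 + p_2} = -\frac{2}{12} \approx -0.17$, comfortably inelastic, which is consistent with the upward sloping PCC. Notice that this elasticity depends only on the two prices, not on income.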
Deriving Demand from the Consumer’s Utility Maximization Problem The primary purpose of this section was to provide additional practice in deriving demand with different utility functions. Clearly, the demand curve is strongly influenced by the utility function that is being maximized given a budget constraint. Two examples were used to demonstrate how the analytical and numerical methods are related. Calculus is based on the idea of infinitesimally small changes. You can see calculus in action by using the CSWiz to take smaller changes in price, which drives the numerical method ever closer to the derivative-based result. Exercises 1. Return to the QuasilinearChoice sheet and click the button. Now change the exponent on good 1 from 0.5 to 0.75. Use the Comparative Statics Wizard to derive a demand curve for this utility function. 2. Working with the same utility function as in the first question, derive the demand for $x_1 \mbox{*}$ via analytical methods. Use Word’s Equation Editor as needed. Show your work. 3. Using your results from questions 1 and 2, compute the own price elasticity via numerical and analytical methods. Do they agree? Why or why not? Show your work and take screen shots as needed.
Demand curves are derived by doing comparative statics on the consumer’s optimization problem: Change price, ceteris paribus, and track optimal consumption of a good. In introductory economics courses around the world, demand is always drawn downward sloping so that as price rises, ceteris paribus, quantity demanded falls. Economists have long been intrigued, however, by a perplexing possibility: quantity demanded rising as price rises. An upward sloping demand curve! Can this happen? Yes, but it is quite rare and it took decades to figure it out. We begin with a definition: Giffen goods are goods that have upward sloping demand curves. Giffen’s connection to this counterintuitive demand relationship (price rises and you want to buy more?) is controversial. Giffen and the Irish Potato Famine The Great Irish Famine took place during 1845-1848. To put the disaster in proper perspective, the famine killed at least 12 percent of the population over a three-year period. Another 6-8 percent migrated to other countries. In terms of the percentage of population affected, the 1845-48 famine is one of the largest ever recorded. Other famines have killed more people in total because the affected populations were larger, not the percentage of exposure. For instance, the 30 million or more people who perished in the Chinese famine of 1958-62 were 5 percent or 6 percent of the population. (Rosen, 1999, p. S303) Why did so many people die? This is a difficult question to answer comprehensively. The economics of famine are complicated. The proximate answer is that the Irish ate a lot of potatoes and a potato blight destroyed the food source. Rosen (1999, p. S303) says this: As difficult as it is to imagine today, on the eve of the famine, per capita consumption of potatoes is reliably estimated to have averaged 9 pounds (40-50 potatoes) per person per day (Bourke 1993). Diets were astonishingly concentrated on potatoes, especially in rural areas. Grain was grown in rural Ireland but was either sent to towns or exported abroad. When blight wiped out the potato crop, why didn’t the Irish eat something else or just import food? This is hard to understand. Books have been written on the subject. The Biblio sheet in GiffenGoods.xls has references. In fact, Amartya Sen won a Nobel Prize in Economics for his work on famine. It turns out that it is not simply a matter of too little food; amazingly, food can be just a few miles away and yet many people can be starving! But our focus is on Giffen goods and the story picks up decades after the famine. Although there is no evidence that he ever said anything close to "price increase led to higher quantity demanded," Sir Robert Giffen (1837–1910) is credited with using the behavior of potato prices and quantities to state the claim that quantity demanded rose as prices rose. Figure 4.13 shows Irish potato prices before, during and after the famine. Although consumption fell when price spiked in 1847 to more than double the 1846 price, somehow the legend grew that quantity demanded increased as prices rose in this time period. Thus, the Irish potato became the canonical example of a Giffen good, even though there is no evidence that price and quantity moved in the same direction. Economists began arguing over whether or not quantity demanded rose as the price spiked and, even if it did not, whether it was theoretically possible. It would take decades of contentious debate before the matter was settled.
Two Common Mistakes in the Giffen Debate

Before explaining how we could, in theory, get a Giffen good, we need to clear up two mistakes in thinking about Giffen goods. Both mistakes involve violating the strict ceteris paribus requirement that underlies a demand curve. The first mistake has a long history in econometrics and the second is easily corrected once we remember that we must hold everything else constant.

Estimating demand from observed prices and quantities is quite difficult. It turns out that plotting price and quantity data over time and fitting a line is no way to estimate a demand curve. Suppose that the observed quantity of potatoes sold and consumed really had increased as the price spiked in 1847. Would that have been a good way to support the Giffen good claim? Absolutely not. The problem is that the price and quantity data in different time periods do not fulfill the ceteris paribus requirement. It is true that price and quantity changed over time, but presumably so did other factors that affect demand and supply.

STEP Open the Excel workbook GiffenGoods.xls, read the Intro sheet, then go to the ID sheet and read it carefully. Make sure to click the buttons and think about the charts that are displayed.

This sheet walks you through a simple example and shows why fitting a line to observed market price and quantity data is a really bad move. The heart of the confusion lies in the inability to extract the individual supply and demand curves that produce the observed data. This is called the identification problem. So, even if it is true that we see prices and quantities moving together, that is not a demonstration of Giffen behavior.

The second mistake is less easy to forgive. No complicated issues of estimation are involved. We simply forget that demand requires that the ceteris paribus condition hold. Suppose you notice that a particular brand of jeans has become increasingly popular and suddenly more people want it as its price rises. Have we discovered a Giffen good? Absolutely not. We are violating the crucial ceteris paribus part of the definition of a demand curve by failing to hold constant everything except a change in price. In this case, the increased popularity of a particular brand is a shock to the demand curve, shifting it right. This is not a Giffen good because we are not working with a single, fixed demand curve. Instead, as in the second chart in the ID sheet, changes in demand are driving new equilibrium price–quantity combinations.

Having seen two common mistakes in trying to understand and show Giffen behavior, both involving violation of the strict ceteris paribus condition, the natural question then is: Can true Giffen goods, ones that meet the specific requirements of a demand function, exist? The answer is yes.

Giffen Goods in Theory

The left graph in Figure 4.14 shows the canonical graph of the Theory of Consumer Behavior displaying a Giffen good, while the right shows its associated upward sloping demand curve. Notice that the indifference curves require a little tweaking and somewhat odd placement to make $x_1$ be a Giffen good. Remember that indifference curves cannot cross, but they do not have to be similarly shaped and equally separated. For $x_1$ to be Giffen, point 2 in Figure 4.14 has to lie to the left of point 1 so that the decrease in $p_1$ leads to a decrease in optimal $x_1$. Do not be confused by the decrease in $x_1$. Quantity demanded fell, but so did price.
Thus, we have a positive relationship between price and quantity demanded (they are moving together) and an upward sloping demand curve. This is a Giffen good. To be crystal clear, it is not the fact that optimal $x_1$ decreased that tells us we have a Giffen good, but that it decreased as price fell. If we started at point 2 and raised the price, the budget constraint would swing in, and we would move to point 1, with an increase in optimal $x_1$. We would have Giffenness because $x_1$ rose as $p_1$ increased. We would be traveling up the upward sloping demand curve.

A version of Figure 4.14 is depicted in every microeconomics book that discusses Giffen goods and, make no mistake, this is a canonical graph in micro theory. But dead graphs on a printed page (or computer screen) force the reader to reconstruct individual elements and can be difficult to disentangle. With Excel at our disposal, we can walk through a numerical example to gain complete mastery of the concept of Giffenness.

STEP Proceed to the Optimal1 sheet and look at the utility function.

The sheet models a Giffen good. The utility function is admittedly quite complicated, but a simple functional form like Cobb-Douglas or quasilinear is never going to produce Giffenness. The U1 sheet shows that this functional form meets the requirements of well-behaved preferences. The coefficients have been set to values that do not violate the axioms of revealed preference in the range we are working in. The indifference curves, for example, will never intersect.

Another example of a utility function that exhibits Giffen behavior is $U=ax_1+\ln{x_1}+\frac{x_2^2}{2}$. This is implemented in the Optimal2 sheet. We will use the Optimal1 sheet here and save the Optimal2 sheet for Q&A work. These are just two of the many functional forms that meet the requirements of well-behaved utility that could exhibit Giffen behavior.

The Optimal1 sheet opens with $x_1$ = 44 and $x_2$ = 11. A single indifference curve is displayed and it does not have the curvature we have been used to seeing. Recall that perfect substitutes are straight lines, so we can infer that this utility function is expressing preferences with a high degree of substitutability between the two goods. Without running Solver, we know this is the optimal solution because the MRS equals the price ratio.

STEP It is hard to see that the budget line is just touching the indifference curve, but if you click the button, you will see that the tangency condition is clearly met.

Since we are working on Giffen behavior, we want to explore the effects of a change in price on the quantity demanded. We will increase the price of $x_1$ and see how the consumer responds. Before we do, think through what will happen. How will the constraint change and where must the new tangency point lie if $x_1$ is a Giffen good?

STEP Change $p_1$ to 1.1. What happens?

The budget line pivots around the y intercept. It may look like a parallel shift, but it really is not.

STEP Click the button to see that the price increase has, as expected, rotated the budget line in.

The 44,11 initial optimal bundle is no longer affordable. The consumer must re-optimize.

STEP Run Solver. What happens?

Figure 4.15 shows the result. Optimal consumption of good 2 has collapsed from 11 to around 1.5 and the consumer now wants to buy 48.6 units of good 1, which is more than the initial amount of 44. This is amazing!
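The same experiment can also be scripted outside of Excel: pick a price, find the utility-maximizing bundle on the budget line, change the price, and re-optimize. Below is a minimal Python sketch of that logic. To keep it self-contained and easy to verify, it uses a simple Cobb-Douglas stand-in utility (which, as noted above, is never going to produce Giffenness) rather than the Optimal1 sheet's complicated functional form; the income and price values are hypothetical placeholders. Swapping in a Giffen-capable utility function, such as the Optimal2 form above, is a one-line change.

```python
import numpy as np

def demand_x1(p1, utility, m=100, p2=3, grid=100_000):
    """Brute-force demand: search the budget line for the best bundle.
    m, p2, and the grid size are hypothetical placeholder values."""
    x1 = np.linspace(0.001, m / p1 - 0.001, grid)  # affordable x1 values
    x2 = (m - p1 * x1) / p2                        # rest of income on x2
    return x1[np.argmax(utility(x1, x2))]

cobb_douglas = lambda x1, x2: x1 * x2  # stand-in utility; never Giffen

# Change price, ceteris paribus, and track optimal x1:
for p1 in (1.0, 1.1, 1.2):
    print(f"p1 = {p1:.1f} -> x1* = {demand_x1(p1, cobb_douglas):.2f}")
# Cobb-Douglas output falls as p1 rises (x1* = m/(2 p1)); a Giffen
# utility function would show x1* rising over some price range.
```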
The price of good 1 went up by 10 cents (from 1 to 1.1) and the optimal amount of good 1 increased by 4.6 units (from 44 to 48.6). Price rose, ceteris paribus, and so did quantity demanded! This is a concrete, numerical example of a Giffen good.

We can use the Comparative Statics Wizard to explore more carefully the demand curve resulting from this bizarre utility function.

STEP Use the Comparative Statics Wizard to trace the demand curve from 0.1 to 3. Set cell B16 to 0.1, then apply 300 (yes, 300) shocks by increments of 0.01 with the CSWiz add-in. Finally, create a graph of the inverse demand curve, $p_1$ as a function of $x_1 \mbox{*}$.

Your results should look like Figure 4.16, which is also in the CS1 sheet. That is certainly a strange looking demand curve. It is Giffen in a range. In other words, a Giffen good is not intrinsically and everywhere a Giffen good. Giffenness is a local phenomenon.

The demand curve pictured in Figure 4.16 has three different behaviors. As price rises from zero, quantity demanded falls. This continues until a price of about 70 cents. From there, penny increases lead to increased consumption of good 1. In this range, $x_1$ is a Giffen good. There is a third region, at prices such as $2 and $3, where the good is not Giffen.

So, this example has shown that Giffen goods are not only possible, they can be modeled by the Theory of Consumer Behavior. We now know that there are utility functions that reflect well-behaved preferences that generate Giffen behavior.

Giffen Goods in Theory and Practice

A Giffen good is a strange creature in economics. The phenomenon of quantity demanded rising as price increases was first purportedly sighted during the Irish potato famine and named after Sir Robert Giffen, even though there is no evidence that Giffen actually claimed to have seen quantity demanded rise as prices rose, ceteris paribus.

Certainly there are utility functions that give rise to Giffen goods. Certainly individual consumers may have well-behaved preferences that yield Giffen behavior. But has a Giffen good ever been spotted? Do Giffen goods exist in the real world in the sense that a market demand curve is upward sloping? This is the subject of much debate. Ceteris paribus is a difficult requirement to meet. The actual sighting of a Giffen good in the real world remains contentious. We know for sure that the original example, potatoes during the Great Irish Famine, was flawed and there is little evidence that it was a Giffen good. The Biblio sheet has a few references that can start you learning more about the history of Giffen goods in economics.

The next section gives an even deeper explanation for Giffen goods. It establishes the specific conditions needed for Giffenness to occur.

Exercises

1. Use the results in the CS1 sheet to find the price range for which we see Giffen behavior. Report your answer and describe your procedure.

2. Use the Optimal1 sheet utility function and parameter values to find the optimal solution via analytical methods. Show your work. Note that $x_1 < \frac{a}{b}$, so the utility function is $U = ax_1 - \frac{b}{2}x_1^2 + cx_2 + \frac{d}{2}x_2^2$.

3. Use Word’s Drawing Tools to reproduce Figure 4.14, depicting $x_1$ as a Giffen good, but use a $p_1$ increase (instead of a decrease).
Without a doubt, the demand curve is the most important idea in the Theory of Consumer Behavior. We have derived the demand curve analytically and numerically. The demand curve tells us the optimal amount to buy at a given price. It also tells us how quantity demanded will change as price changes, ceteris paribus.

This section remains focused on the demand curve, extending the analysis of the consumer’s optimal response to a change in price. The core concept is that the total effect on quantity demanded (given by the demand curve) for a given change in price can be broken down into two separate effects, called income and substitution effects. Our attention is still on the change in quantity demanded as price changes, ceteris paribus, but by breaking apart the observed response when price changes, we get a deeper explanation of demand. We also explain how we might get a Giffen good.

Intuition

Before diving into complicated graphs and math, let’s review the story behind income and substitution effects. Seeing the big picture improves your chances of really understanding what income and substitution effects are all about.

Suppose that, ceteris paribus, price rises. We know the consumer has to re-optimize. We know the consumer will choose a new optimal combination of goods. We can see the consumer buy a different amount after the price changes. If we simply compute the change in the amount purchased of $x_1$ before and after the price change, we are comparing two points on the demand curve. This is called the total effect of a price change.

The breakthrough idea is that the increase in price has two channels by which it affects the consumer. One channel focuses on the fact that a price increase is like a decrease in purchasing power. After all, given an income level, if prices double, then I can buy half of what I bought before. My income has not changed, but my purchasing power has fallen just the same as if my income had been cut in half. The income effect reflects the fact that price changes affect optimal quantity demanded by altering purchasing power.

The other channel is called the substitution effect. The idea is that a price change in one good alters the relative prices faced by the consumer and induces substitution of the relatively cheaper good for the relatively more expensive one. When $p_1$ rises, $x_1$ is relatively more expensive than $x_2$ and so I am naturally going to avoid $x_1$ and be attracted to $x_2$.

Figure 4.17 shows the two channels below the total effect; they are submerged and not directly observed. Added together, they make up the total effect. We will see that the income effect can be either positive or negative, but the substitution effect is always negative (assuming well-behaved preferences). When price goes up, the substitution effect says "buy less." Of course, if price falls, the reverse occurs and, according to the substitution effect alone, consumption increases. The reason the income effect is ambiguous in sign is the fact that there are normal and inferior goods. If the good is normal, then optimal $x_1$ rises as income increases, but if the good is inferior, then consumption and income are inversely related.

Finally, it helps to know the underlying motivation behind the discovery of income and substitution effects. Economists were arguing about the existence of Giffen goods. The Law of Demand said price and quantity were inversely related.
Income and substitution effects explained under which conditions Giffen behavior (an upward sloping demand curve) is possible. We will see that if the income and substitution effects work together, then the demand curve is guaranteed to be downward sloping. Understanding income and substitution effects will allow us to give a more refined, precise definition of the Law of Demand.

Numerical Example of Income and Substitution Effects

STEP Open the Excel workbook IncSubEffects.xls, read the Intro sheet, and proceed to the OptimalChoice sheet.

We have the usual Cobb-Douglas utility function with a conventional budget line. We have done this problem before and the initial optimal solution is 25,$16 \frac{2}{3}$.

STEP Decrease $p_1$ by 1 to $1/unit (in cell B17).

Figure 4.18 displays what is on your screen. The red line is the familiar new budget line (after the price decrease). There is, however, a dashed line that has not been used before. This dashed line represents the outcome of a thought experiment.

STEP Click the button to see a second graph of the situation. It has the axes scale adjusted so you can see better what is going on.

The dashed line is critical to understanding the splitting of the total effect into income and substitution effects. It has the same slope as the new budget line, yet it goes through the initial optimal solution. What we have done is pretend to take away enough income from the consumer to enable him to buy the initial bundle with the new, lower $p_1$. We took away income (shifting down the budget constraint relative to the new budget line) because the fall in price implies an increase in purchasing power. Had there been a price rise, we would have had to increase income to compensate for the price increase.

We will find a tangency solution on the dashed line and this will allow us to split the total effect into the income and substitution effects. Of course, nothing like this actually happens in the real world. When the price falls, the consumer re-optimizes, buying a new optimal bundle, and that is the end of the story. But for the purposes of understanding the demand curve, we figure out what the consumer would buy on the imaginary dashed line and we use that to split the total effect into the substitution and income effects.

But this is all way too abstract. Let’s actually do it so you can see how it works. To figure out how much income to take away to cancel out the changed purchasing power from the price change, we use the Income Adjuster Equation. $\Delta m = x_1 \mbox{*}\Delta p_1$

Applied to this problem, we know that $x_1 \mbox{*}$ is 25 (from the initial optimal solution) and the change in $p_1$ is $-1$ (because the price fell from 2 to 1, so $new - initial$ is $1 - 2$); thus, we have: $\Delta m = x_1 \mbox{*}\Delta p_1 = [25][-1] = -25$

The minus sign tells us that we have to take away income. The dashed line is based on an income of $75, $p_1 = 1$, and $p_2 = 3$.

In summary, we have three budget lines when we work with income and substitution effects: (1) the usual initial line, (2) the usual new line from the change in price, and (3) the imaginary (dashed) line that has been adjusted to pass through the initial optimal solution. We find the usual new optimal solution so we can compute the total effect first, then we use the dashed line to find the income and substitution effects.

STEP With $p_1 = 1$, run Solver.

Figure 4.19 shows that the consumer chooses the 50,$16 \frac{2}{3}$ combination.
Thus, we have two points to consider so far:

• Point A: Initial: At $m = 100, p_1 = 2$, $x_1 \mbox{*} = 25, x_2 \mbox{*} = 16 \frac{2}{3}$.
• Point C: New: At $m = 100, p_1 = 1$, $x_1 \mbox{*} = 50, x_2 \mbox{*} = 16 \frac{2}{3}$.

Notice that Excel displays three indifference curves around the current optimal solution, but there are actually an infinite number of curves going through every point in the quadrant. With $c = d = 1$ being held constant, the indifference map is not changing in any way. We are simply displaying different indifference curves whenever $x_1$ and $x_2$ in cells B12 and B13 change.

Points A and C are two points on the price consumption curve and two points on the demand curve. The total effect of a $1/unit decrease in the price of good 1 can be found by measuring the movement from A to C: for $x_1$, the total effect is $+25$ units and for $x_2$, the total effect is zero ($x_2 \mbox{*} = 16 \frac{2}{3}$ before and after the price shock).

The total effect can be directly observed. With the initial price, we can see the consumer purchase 25 units of good 1 and $16 \frac{2}{3}$ of good 2. We see the price of good 1 fall by $1/unit and watch the consumer respond by buying 25 units more of $x_1$ and leaving the amount of $x_2$ unchanged.

We are now ready for the key move. We will hypothetically take away exactly $25 of income so we can find the optimal solution on the imaginary, dashed line. The consumer does not actually have income taken away. It is a thought experiment. Working out what the consumer would do in this hypothetical situation allows us to split the total effect into its constituent parts.

STEP Change income to $75 (notice that the budget line now lies on top of the dashed budget line) and run Solver.

You can safely ignore the steeper line in the chart; all we want is point B, the optimal solution with the dashed budget line. Solver tells us that point B is 37.5,12.5. This gives us three points to consider:

• Point A: Initial: At $m = 100, p_1 = 2$, $x_1 \mbox{*} = 25, x_2 \mbox{*} = 16 \frac{2}{3}$.
• Point B: Unobserved: At $m = 75, p_1 = 1$, $x_1 \mbox{*} = 37 \frac{1}{2}, x_2 \mbox{*} = 12 \frac{1}{2}$.
• Point C: New: At $m = 100, p_1 = 1$, $x_1 \mbox{*} = 50, x_2 \mbox{*} = 16 \frac{2}{3}$.

Look carefully at the three points and concentrate on how points B and C differ: C uses new $p_1$ with original m, while B is based on new $p_1$ with adjusted m (adjusted in a special way so that the dashed line goes through point A). With these three points, we can compute total, income, and substitution effects for $x_1$ and $x_2$.

The three effects are shown by arrows on the axes of Figure 4.20. This is a complicated graph. Take your time and read it with care. Try to separate the different elements and lines to different parts of the problem: initial (A), new (C), and intermediate (B) positions. There are effects measured from one point to another for both $x_1$ and $x_2$. These $\Delta$s are calculated the usual way as $new - initial$. For $x_1$, we find:

• SE: A to B: $37 \frac{1}{2} - 25 = 12 \frac{1}{2}$
• IE: B to C: $50 - 37 \frac{1}{2} = 12 \frac{1}{2}$
• TE: A to C: $50 - 25 = 25$

Notice that the total effect (TE) can be found by computing the difference from A to C ($50 - 25 = 25$) or taking advantage of the fact that SE + IE = TE, so $12.5 + 12.5 = 25$. The effects for $x_1$ are all computed along the x axis in terms of units of $x_1$.
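These numbers are easy to check with a few lines of code. The minimal Python sketch below uses the reduced form demand for this Cobb-Douglas example, $x_1 \mbox{*} = \frac{m}{2p_1}$ (derived formally later in this section), together with the Income Adjuster Equation, to reproduce points A, B, and C and the three effects for $x_1$.

```python
def x1_star(p1, m):
    # Reduced form demand for Cobb-Douglas with c = d = 1: x1* = m/(2 p1)
    return m / (2 * p1)

m, p1_old, p1_new = 100, 2, 1

A = x1_star(p1_old, m)            # initial solution: 25
C = x1_star(p1_new, m)            # new solution: 50

# Income Adjuster Equation: delta_m = x1* x delta_p1 = 25 x (-1) = -25
delta_m = A * (p1_new - p1_old)
B = x1_star(p1_new, m + delta_m)  # point B on the dashed line: 37.5

print("SE (A to B):", B - A)      # 12.5
print("IE (B to C):", C - B)      # 12.5
print("TE (A to C):", C - A)      # 25.0
```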
Analyzing the effect on $x_2$ of a change in $p_1$ gives us cross income and substitution effects for $x_2$, which are shown by arrows on the y axis in Figure 4.20.

• SE: A to B: $12 \frac{1}{2} - 16 \frac{2}{3} = - 4 \frac{1}{6}$
• IE: B to C: $16 \frac{2}{3} - 12 \frac{1}{2} = 4 \frac{1}{6}$
• TE: A to C: $16 \frac{2}{3} - 16 \frac{2}{3} = 0$

On $x_2$, the income and substitution effects work against each other. The substitution effect, from A to B, lowers the amount of $x_2$ since $p_1$ fell, making $x_2$ more expensive relative to $x_1$. But when we move from B to C, the income effect exactly cancels out the SE. The fall in $p_1$ has increased our purchasing power and, since $x_2$ is a normal good, we want to buy more of it. It is a property of the Cobb-Douglas utility function that the cross IE and SE effects cancel each other out, leaving a zero total effect. This is not a usual or common result and it demonstrates how the functional form imposes structure on the demand curve.

Let’s return now to $x_1$ and focus on its substitution effect, which we know is always negative. This leads immediately to a question: If the SE is always negative, then why is it $+12.5$ in Figure 4.20? The answer to this apparent contradiction is that the negative refers to the relationship, not the actual value of the SE. Given that price fell, an increase in quantity purchased is consistent with a negative effect because it is the relationship between the two variables that is being described as negative.

Likewise, the sign of the income effect can be tricky. The key is to pay attention to which shock variable is being considered. The income effect measured as the response to a change in income is positive, in this case, because as I move from B to C, my income is increased and I respond by increasing my optimal consumption of good 1.

Now you might ask, "If the two effects work together, then how is the substitution effect negative and the income effect positive?" This is because we defined the income effect as the response to a change in income, like the movement from point B to C in Figure 4.20. But, if you remember, this example began with a decrease in the price of good 1. The decrease in the price of good 1 can be interpreted as an increase in income, in the sense of greater purchasing power. If we tie the 12.5 increase in good 1 from the income effect to the decrease in the price of good 1, we see that this negative relationship reinforces the negative substitution effect and gives a negative total effect.

Now that we know how the income and substitution effects combine to form the total effect of a price change, we can show how easy it is to compute them from a reduced form solution. We first have to solve the model analytically and get a reduced form expression as a function of $m$ and $p_1$. We have done this before for a Cobb-Douglas utility function and found $x_1 \mbox{*} = (\frac{c}{c+d})\frac{m}{p_1}$

If we substitute in $c=d=1$, we have $x_1 \mbox{*} = \frac{m}{2p_1}$

At $m=100$ and $p_1=2$, $x_1 \mbox{*}=25$. This is the initial solution (point A). If $p_1$ falls to $1/unit, then we plug in $m=100$ and $p_1=1$, which gives the new solution (point C), $x_1 \mbox{*}=50$. The total effect is $50 - 25 = 25$. To find the SE, we need point B. We use the reduced form expression to compute quantity demanded with adjusted m ($75) and new $p_1$ ($1/unit).
$x_1 \mbox{*} = \frac{m}{2p_1} = \frac{[75]}{2[1]} = 37.5$

Once we have point B, we have split the total effect from A to C and we can compute the SE and IE by going from A to B and B to C, respectively. The SE is $37.5 - 25 = 12.5$ and the IE is $50 - 37.5 = 12.5$. These results agree with our earlier work.

Income and Substitution Effects via Graphs

Income and substitution effects are complicated. Figure 4.20 is not easy to understand. There are three budget lines and a lot going on. So what is so important about income and substitution effects that makes it worthwhile to master them? Income and substitution effects hold the key to explaining how we can get a Giffen good. They mark real progress in economics, settling a long debate about whether or not upward sloping demand curves are possible. We will deconstruct the income and substitution effect graph (Figure 4.20), examining each layer one at a time, to show the source of Giffen behavior.

We begin with Figure 4.21. On the left we have the initial optimal solution and the right displays a single point on the demand curve (not shown).

Next, we decrease the price of good 1, as shown in Figure 4.22, which creates a new budget line. We know the consumer will re-optimize and choose a new optimal solution along the new, flatter line, but Figure 4.22 does not show this new solution quite yet. Instead, it shows point B, the optimal solution for the hypothetical situation with lower $p_1$ and adjusted m, on the dashed line drawn with the income that would have to be taken away to cancel out the increased purchasing power from the price decrease. The rightward pointing arrow on the x axis shows the SE for $x_1$, the movement from point A to B.

The dashed line has a flatter slope (new $p_1$ is less than initial $p_1$) through point A. This guarantees that B is to the right of A. This is why the SE is always negative. It is impossible to draw a point B to the left of A without making the indifference curves cross. With MRS $= \frac{p_1}{p_2}$ at A, lowering $p_1$ and adjusting m so that the dashed line goes through A means the consumer must move southeast to find the highest indifference curve tangent to the dashed line.

Now, we are ready to show point C. We have a known negative substitution effect and all that remains to be done is to find the indifference curve tangent to the new budget line (with lower $p_1$). The key insight is that there are several possible positions for point C. Figure 4.23 shows three possibilities: the final position of point C depends on whether the good is normal or inferior, with a subcategory of inferior goods that are Giffen.

• C1: Good 1 is a normal good so the income effect from B to C works together with the movement from A to B and we end up at point C1. In this case, and for any point C to the right of B, we get a downward sloping demand curve.
• Good 1 is an inferior good so the income and substitution effects work against each other. The movement from B to C will be to the left and leave us with a point C to the left of B. There are two possibilities:
1. C2: The income effect pushes the consumer to buy less $x_1$, but it is less than the substitution effect (which leads to buying more $x_1$ as $p_1$ falls). We end up at point C2 between A and B and the demand curve is still downward sloping.
2. C3: The income effect not only works against the substitution effect, it is stronger, swamping it.
The movement from B to C is in the opposite direction from A to B and is bigger than A to B. This leaves the consumer to the left of B at point C3. The demand curve is upward sloping. This is a Giffen good.

It can be difficult to draw a Giffen good correctly because the indifference curves cannot cross. So, in Figure 4.23, the space available for point C3 is tight; C3 can only fit to the left of A and to the right of the indifference curve that is shown tangent to B.

Figure 4.23 also makes clear that it is the indifference curves, which come from the utility function, that determine how quantity demanded responds to a change in price. How a good generates utility (i.e., whether utility is Cobb-Douglas, quasilinear, perfect complements, or another functional form) determines whether it is normal, inferior, or Giffen. The decomposition of the total effect into income and substitution effects provides the condition which must hold for Giffen behavior: the income effect must work against the substitution effect and be bigger. We can reinforce this key insight with a mathematical expression that gives more detail on exactly how we get Giffenness.

The Slutsky Equation

In 1915, decades after the supposed spotting of a Giffen good during the Irish potato famine, Eugen Slutsky published a paper in an Italian journal that showed how to decompose the total effect of a price change into income and substitution effects. He had a mathematical expression that showed how it was possible to get an upward sloping demand curve! Unfortunately, his work went unnoticed. Twenty years later, John R. Hicks (a Nobel laureate in 1972) and R. G. D. Allen rediscovered the ideas in Slutsky’s paper. Sometimes, the idea of income and substitution effects is referred to as Slutsky-Hicks or Slutsky-Hicks-Allen. We will keep it simple and call it the Slutsky Equation.

The Slutsky Equation, which we will not derive, says in mathematical terms something that we already know: the total effect of a price change can be expressed as the sum of a substitution and an income effect. It turns out that there are several ways to express the decomposition with a Slutsky Equation. Here are two versions:

$\frac{\Delta x_1}{\Delta p_1}=\frac{\Delta x_1^{SE}}{\Delta p_1}+\frac{\Delta x_1^{IE}}{\Delta p_1}$

$\frac{\Delta x_1}{\Delta p_1}=\frac{\Delta x_1^{SE}}{\Delta p_1}-x_1 \mbox{*}\frac{\Delta x_1}{\Delta m}$

Both equations say the same thing: the total effect, $\frac{\Delta x_1}{\Delta p_1}$, is equal to the substitution effect, $\frac{\Delta x_1^{SE}}{\Delta p_1}$, plus the income effect. Where they differ is how they express the income effect. Look carefully at the denominators. The income effect in the first equation has a $\Delta p_1$ denominator, like the other two terms. What Slutsky figured out was that the income effect of a price change, $\frac{\Delta x_1^{IE}}{\Delta p_1}$, could be written as $-x_1 \mbox{*}\frac{\Delta x_1}{\Delta m}$. In other words, the income effect channel of the price change can be expressed as the amount of good 1 initially purchased times the change in $x_1$ as income changes (the slope of the Engel curve). Notice the minus sign, which picks up the fact that when price falls, that is like an increase in income.

Now we can really see how to get a Giffen good, which has an upward sloping demand curve so $\frac{\Delta x_1}{\Delta p_1} > 0$.
Since the first term, the substitution effect, is always negative, we definitely need an inferior good, with $\frac{\Delta x_1}{\Delta m} < 0$, so that the second term is positive. Obviously, if the good is extremely inferior, so that $\frac{\Delta x_1}{\Delta m}$ is much less than zero, we might get a Giffen good.

But the Slutsky Equation reveals another way to get Giffen behavior. A large opposing income effect can be obtained by the good being inferior and the consumer buying a lot of it so that $-x_1 \mbox{*}\frac{\Delta x_1}{\Delta m}$ is a big positive number that outweighs the negative substitution effect. If the good is merely inferior, but the consumer buys little of it, then it is less likely to be Giffen.

This is why we look for Giffen behavior in staples, basic commodities that comprise a large share of the budget. Potatoes for the Irish, rice for Asians, and tortillas for Mexicans are three examples that economists have examined for Giffen behavior. For a poor person, these items could be consumed in large quantities, yet, as income rises, quantity demanded falls so they are inferior goods. The combination of a large $x_1 \mbox{*}$ and $\frac{\Delta x_1}{\Delta m} < 0$ could produce a large, positive $-x_1 \mbox{*}\frac{\Delta x_1}{\Delta m}$ term that is bigger than the negative substitution effect.

Remember how we generated Giffen behavior with GiffenGoods.xls in the previous section? We increased the price from $1/unit to $1.1/unit and optimal $x_1$ rose from 44 to 48.6, while optimal $x_2$ fell dramatically from 11 to around 1.5. Notice how $x_1$ is a staple, dominating the amounts purchased of the two goods. We know it is Giffen, but is $x_1$ also inferior? Let’s find out.

STEP Open GiffenGoods.xls and proceed to the Optimal1 sheet. Click the button and run Solver to make sure you are at the optimal initial solution of 44,11. Increase m to 60 and run Solver. What happens?

Yes, as we know must be true (since we know $x_1$ is a Giffen good), $x_1$ is an inferior good: optimal $x_1$ fell (to 39) as income increased to $60. Giffenness requires that $x_1$ be inferior and this example also reflects the fact that concentration of the consumer’s budget on an inferior good contributes to the production of a Giffen response.

The Biblio sheet in GiffenGoods.xls, from the previous section, has several references to papers trying to find Giffen goods, yet the jury is still out. What is unquestioned, however, is the theoretical requirement: it must be an inferior good so that the IE is in the opposite direction and larger than the SE.

The Slutsky Equation also enables us to fine-tune a statement that is, strictly speaking, false. Introductory economics students around the world learn the Law of Demand: when price increases, ceteris paribus, quantity demanded must fall. In other words, holding everything else constant, quantity demanded and price are inversely related and demand is always downward sloping. This is fine at the introductory level, where we do not want to confuse beginning students, but we know that an upward sloping demand curve is possible; it is called a Giffen good. Giffen goods are a violation of the "Law" of Demand and we know they could exist. When their price rises, so does quantity demanded. Can we rehabilitate the Law of Demand so there is no exception? Yes, we can. Our knowledge of income and substitution effects points the way. We can more precisely define the Law of Demand.
By inserting a qualifying clause, we can get the Law of Demand to be exactly right: if the good is normal, then quantity demanded falls as price rises, ceteris paribus. That is guaranteed to be true because a normal good has an income effect that works together with the substitution effect. Thus, there is no way to get Giffenness.

The Cobb-Douglas utility function cannot give Giffen behavior. The reduced form solution, $x_1 \mbox{*} = (\frac{c}{c+d})\frac{m}{p_1}$, means that $\frac{dx_1 \mbox{*}}{dm} = (\frac{c}{c+d})\frac{1}{p_1} > 0$ so the income effect, $- x_1 \mbox{*}\frac{dx_1 \mbox{*}}{dm}$, is negative. This means the IE and SE are both negative and work together so there is no way the Cobb-Douglas utility function can generate Giffenness.

TE = SE + IE

Income and substitution effects are used by economists to better understand the demand curve and to explain Giffen behavior. By disassembling the total effect of a price change, the Slutsky Equation shows how a Giffen good, with its upward sloping relationship between price and quantity demanded, can arise if the income effect opposes and swamps the substitution effect.

Given a utility function and budget constraint, we find the initial optimal solution (point A). A price change will lead to a new optimal solution (point C), which we can use to compute the total effect. We can then use the Income Adjuster Equation to find a hypothetical point B that splits the total effect into substitution and income effects. Given a reduced form expression $x \mbox{*} = f(p,m)$, we can find points A, B, and C by evaluating the expression at the appropriate p and m values.

The Slutsky Equation is a mathematical presentation of income and substitution effects. The math gives us the insight that the income effect, $-x_1 \mbox{*}\frac{\Delta x_1}{\Delta m}$, is composed of initial optimal $x_1$ times the response of $x_1$ to an income change. This reveals that Giffenness is more likely to be found in inferior goods that also attract a high concentration of the consumer’s budget.

There are even more ways to express the Slutsky Equation than the two used in this section. Instead of altering income to allow the consumer to buy the initial bundle of goods, you can change income to allow the consumer to be on the initial indifference curve. This is sometimes referred to as the Hicks substitution effect.

Exercises

1. Reproduce, using Word’s Drawing Tools, Figures 4.21, 4.22, and 4.23, explaining each graph in your own words.

2. Repeat question 1, with one key change: apply a price increase in good 1 (instead of a price decrease).

3. In stating the Law of Demand, some economists choose to include a condition that the good is normal, like this: If the good is a normal good, then price and quantity demanded are inversely related, ceteris paribus. Why is the normal good clause needed?

4. Given the demand function $x_1 \mbox{*} = 20 + \frac{m}{20p_1}$, compute the total, income, and substitution effects when price falls from $5 to $4/unit, with income of $1000. Show your work.

5. Use the Optimal1 sheet in GiffenGoods.xls to find points A, B, and C for a shock in $p_1$ from $1 to $1.1/unit. Compute the TE, SE, and IE for $x_1$. Show your work and explain what you did.
This chapter uses a quasilinear utility function to provide practice working with income and substitution effects. There is a surprising twist when using the quasilinear functional form. See how fast you can figure it out.

STEP Open the Excel workbook IncSubEffectsPractice.xls, read the Intro sheet, then go to the OptimalChoice sheet.

Notice that the absolute value of the MRS is less than the price ratio. Because the slope of the indifference curve at 16.25,10.75 is less than the slope of the budget constraint, we know the consumer should travel northwest along the budget constraint, buying more $x_2$ and less $x_1$, until the MRS $= \frac{p_1}{p_2}$.

STEP Run Solver to find the initial optimal solution. Figure 4.24 shows this result.

STEP Proceed to the CS1 sheet. It shows a comparative statics analysis of an increase in the price of good 1 from $2/unit to $7/unit in $1 increments. It also charts the results as an inverse demand curve for $x_1$.

The demand curve tracks the total effect of a price change. When the price of good 1 rises from $2 to $3, the quantity demanded falls from $6 \frac{1}{4}$ to $2 \frac{7}{9}$. By subtracting the new from the initial value, we see that the total effect is a decrease of $3 \frac{17}{36}$ units of $x_1$, displayed in cell F13 as $-3.47222$.

Income and substitution effects explain how this total effect came to be by dismantling the total effect into two parts that add up to the total. The substitution effect tells us how much less the consumer would have purchased when price rises strictly from the fact that the relative prices of the two goods have changed. We compute how much income we have to give the consumer to cancel out the reduced purchasing power caused by the price increase to focus exclusively on the relative price change. The substitution effect is always negative.

Figure 4.25 shows a typical decomposition of the total effect (TE) into the substitution effect (SE) and income effect (IE) with indifference curves suppressed to highlight the budget lines under consideration. From point A, price rose and the consumer will now be at point C on the new budget line (labeled $p_1 \uparrow$). The dashed line is the result of a hypothetical scenario in which the consumer has been given enough income to purchase the initial bundle A. Notice how the original budget line and the dashed line go through point A. The dashed line has a higher price, but also a higher income. Thus, the movement from point A to point B reflects solely the different relative prices of the goods, without any change in purchasing power. This is the substitution effect.

While the substitution effect is focused on relative prices, the income effect is that part of the response in quantity demanded when price changes that is due to changed purchasing power. From point B, a decrease in income from the dashed to the new budget line leads to a decrease in $x_1$ (at point C). Thus, $x_1$ is a normal good from point B to C in Figure 4.25 and the two effects are working in tandem. The demand curve is guaranteed to be downward sloping for this price change.

In the CS1 sheet, we have seen that the demand curve is downward sloping because quantity demanded falls when price rises. But an open question still remains: Do the income and substitution effects work as in Figure 4.25? We know point A, the initial optimal solution, is $x_1 \mbox{*} = 6.25$ when $p_1$ = $2/unit, and point C is about 2.78 units of $x_1$ when price rises to $3/unit.
We need point B to do the income and substitution effects analysis. The first step in finding point B is to use the Income Adjuster Equation to compute how much income to give the consumer in order to cancel out the effect of the reduced purchasing power. $\Delta m = x_1 \mbox{*}\Delta p_1 = [6.25][+1] = 6.25$

STEP On the OptimalChoice sheet, set cell B16 to 3. The chart updates, showing the new budget constraint in red (swinging in since price rose) and the dashed line.

To find point B, we need the optimal solution for the dashed line constraint, so we need to change the income on the sheet.

STEP Set cell B18 to 146.25. This applies the dashed line budget constraint to this problem. Run Solver to find point B.

Your result might surprise you. Solver says the optimal solution is about 2.78 for $x_1$, but that is the same answer we had for point C. What is going on here?

We turn to analytical work to shed light on this mysterious result. Following the procedure in section 3.2, we found this reduced form solution for the quasilinear utility function, $U = x_1^c + x_2$: $x_1 \mbox{*} = (\frac{p_1}{cp_2})^\frac{1}{c-1}$

We use the initial values of c and $p_2$ in the OptimalChoice sheet to simplify things a bit: $x_1 \mbox{*} = (\frac{p_1}{[0.5][10]})^\frac{1}{[0.5]-1} = (\frac{p_1}{5})^\frac{1}{-0.5} = (\frac{p_1}{5})^{-2} = (\frac{5}{p_1})^2 = \frac{25}{p_1^2}$

This is the same kind of expression, $x_1 \mbox{*} = f(p_1, m)$, that we used in the previous section for a Cobb-Douglas utility function, $x_1 \mbox{*} = \frac{m}{2p_1}$, to find points A, B, and C. You might be puzzled. Exactly where is m in the quasilinear reduced form expression for $x_1$? It is not there, although a mathematician might say that we could easily include it by writing the reduced form expression like this: $x_1 \mbox{*} = \frac{25}{p_1^2} + 0m$

The fact that m does not affect optimal $x_1$ for a quasilinear utility function is the source of the surprising result for point B. We can apply the usual procedure for finding points A, B, and C with a reduced form expression to show this.

Point A is the initial optimal $x_1$ solution, so we plug in $p_1 = 2$ and find $x_1 \mbox{*} = \frac{25}{2^2} = 6.25$. Point C is the new optimal $x_1$ solution, so we plug in $p_1 = 3$ and find $x_1 \mbox{*} = \frac{25}{3^2} = \frac{25}{9} = 2 \frac{7}{9}$. Point B is found using new $p_1$ and adjusted m, $146.25. But notice that adjusted m is irrelevant because it does not affect $x_1$. Point B is $x_1 \mbox{*} = 2 \frac{7}{9}$, the same as point C.

Figure 4.26 shows what is going on here. Unlike the typical case, there is no income effect at all with quasilinear utility, so TE = SE. As usual, the substitution effect is the move from point A to B and the income effect is the movement from B to C. The IE is zero because C is directly below B. The total effect is A to C.

It is the utility function that is driving this result. A utility function with the functional form $U = f(x_1) + x_2$ has no income effect because the indifference curves are vertically parallel. If you shift the budget line via an income shock, the new tangency point will be directly above or below the initial point. In other words, the income consumption curve is vertical. Thus, the total effect is composed entirely of the substitution effect. This is the curious twist produced by the quasilinear functional form. We saw that the income consumption curve is vertical and the Engel curve is horizontal in section 4.2 (see Figure 4.7).
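A few lines of code confirm the zero income effect. The minimal Python sketch below uses the reduced form just derived, $x_1 \mbox{*} = \frac{25}{p_1^2}$, along with the Income Adjuster Equation. The initial income of $140 is inferred from the $146.25 adjusted figure in the STEP above (and, as the reduced form shows, it does not matter for $x_1$ anyway).

```python
def x1_star(p1, m):
    # Quasilinear reduced form: x1* = 25/p1^2, plus "0*m" to echo the text
    return 25 / p1 ** 2 + 0 * m

m, p1_old, p1_new = 140, 2, 3     # m = 140 inferred from 146.25 - 6.25

A = x1_star(p1_old, m)            # 6.25
delta_m = A * (p1_new - p1_old)   # Income Adjuster: 6.25 * (+1) = 6.25
B = x1_star(p1_new, m + delta_m)  # 2.777...
C = x1_star(p1_new, m)            # 2.777...

print("SE (A to B):", B - A)      # -3.4722... (equals the total effect)
print("IE (B to C):", C - B)      # 0.0: no income effect
print("TE (A to C):", C - A)      # -3.4722...
```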
Economics is certainly cumulative and ideas learned are often worth remembering because they tend to show up again. Finally, notice that we now know that quasilinear preferences cannot yield Giffen behavior. After all, if the substitution effect is always negative and the income effect is zero, there is no way for the total effect to ever be positive.

Quasilinear Preferences Yield Zero Income Effects

Splitting a total effect into income and substitution effects works for any utility function. After finding the total effect, the Income Adjuster Equation can be used to determine the income needed to cancel out the change in purchasing power from the price change (i.e., setting the imaginary, dashed budget line). Finding the optimal solution with the new price and adjusted income budget constraint determines point B and allows us to split the total effect in two parts. Of course, the component parts, SE and IE, need not be equal nor share the same sign. We know that Giffen goods arise when the income effect opposes and swamps the always negative substitution effect.

In the case of quasilinear preferences, we have a situation where there is no income effect. The Slutsky decomposition still applies, however, with the total effect being entirely composed of the substitution effect.

Exercises

1. Click the button on the OptimalChoice sheet and apply a price decrease for good 1 from $2/unit to $1.90/unit. Compute the total, substitution, and income effects. Show your work.

2. Use Word’s Drawing Tools to draw a graph similar to Figure 4.26 that shows the total, substitution, and income effects from the 10 cent decrease in price from question 1.

Questions 3 and 4 are difficult. Revisit questions 2 and 3 in EngelCurvesPracticeA.doc (in the Answers folder in the MicroExcel archive) for more detail on the corner solution for this utility function at low levels of income.

3. With quasilinear utility, the income consumption curve is vertical and the Engel curve horizontal only above a threshold income level. At very low levels of income, we get a corner solution. Click the button on the OptimalChoice sheet and set income to 10. This will generate a corner solution. Compute the total, substitution, and income effects from a 10 cent price increase in good 1 (from 2 to 2.1). Show your work.

4. Use Word’s Drawing Tools to draw a graph depicting your results for question 3.
A Tax-Rebate Proposal

This section examines a tax-rebate plan that provides further practice with the logic of income and substitution effects. This application shows that they are more than an intellectual curiosity.

The heart of the idea is for the government to reduce consumption of a particular good, for example, gasoline, without hurting the consumer. The idea is to tax a good and then turn around and rebate (give back) all of the tax revenue to the consumer. Can we alter the consumer’s choices without lowering satisfaction? We keep things simple by ignoring administrative costs of collecting the tax and rebating it, so the tax and rebate leave the consumer’s income unchanged.

Proponents point out that the government is not making any money (all of the tax revenue raised is refunded back) so the consumer is not going to be hurt. Opponents contend that this scheme will have no effect because the rebated tax will immediately be spent on the taxed good and we will end up right where we started. Who is right? We use the Theory of Consumer Behavior to find out. Along the way, income and substitution effects will come into play.

A Concrete Example

STEP Open the Excel workbook TaxRebate.xls and read the Intro sheet, then go to the QuantityTax sheet.

We have a Cobb-Douglas utility function with an option to apply a per unit (quantity) tax on good 1. The workbook opens with no tax and the consumer maximizing satisfaction by buying the bundle 25,50, yielding $U^* = 1250$. We begin by applying a quantity tax.

STEP Change cell B21 to 1. Notice that a new budget line appears. The consumer cannot afford the original bundle and must re-optimize. Run Solver to find the new optimal solution.

You should find that the consumer will now buy the bundle $16 \frac{2}{3}$,50 and maximum utility falls to 833.33. Cell B22 shows that the government collects $16.67 ($1/unit tax on the 16.67 units purchased).

The idea behind the tax-rebate proposal called for rebating the tax revenue so that the consumer would not be hurt by the tax. We need to implement the rebate part of the proposal.

STEP Change cell B18 to 116.67. This shifts the budget constraint out. Run Solver to find the optimal solution.

You should find that the consumer optimizes by purchasing 19.445 units of $x_1$ and 58.335 units of $x_2$. This result presents us with a problem. This is not the tax-rebate scheme the government envisioned. After all, the government is collecting more tax revenue ($19.445) than the consumer is getting as a rebate ($16.67). Instead of giving the consumer $16.67, let’s give her $19.445. What does the consumer do in this case?

STEP Change cell B18 to 119.445. This shifts the budget constraint out a little bit more. Run Solver to find the optimal solution.

Now the consumer buys a little more $x_1$, just over 19.9 units. But we still do not have a revenue neutral policy. We need to increase m again. This process of repeatedly doing the same thing is called iteration.

STEP Set the cell B18 value to $100 (initial m) plus the amount of tax revenue in cell B22. Run Solver.

You can see that we are converging because the increases to income keep getting smaller and smaller. There is a tax rebate that yields an optimal $x_1$ that generates a tax revenue that exactly equals the tax rebate. The value of this tax rebate is $20.

STEP Set cell B18 to $120. Run Solver. You should see that the optimal solution is 20,60 and maximum utility is 1200.
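The iteration you just performed by hand is a fixed-point calculation: a rebate R produces income 100 + R, which determines optimal $x_1$, which determines tax revenue, which becomes the new R. Here is a minimal Python sketch of that loop, using the demand-with-tax expression given in exercise 1 at the end of this section; it converges to the $20 rebate found above.

```python
p1, tax, m0 = 2, 1, 100

def x1_star(m):
    # Cobb-Douglas demand with a quantity tax: x1* = m / (2 (p1 + tax))
    return m / (2 * (p1 + tax))

R = 0.0
for i in range(8):
    R = tax * x1_star(m0 + R)    # tax revenue generated by this rebate
    print(f"iteration {i + 1}: rebate = {R:.4f}")
# 16.6667, 19.4444, 19.9074, ... -> converges to 20
# (Analytically: R = (100 + R)/6 implies R = 20.)
```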
If Solver’s solution is off by a little bit (this is false precision), you can enter 20 and 60 in cells B11 and B12. Since the consumer buys 20 units of $x_1$, she is paying $20 in tax. Since she is getting a tax rebate of $20 (m is set to 120), the tax she pays is exactly canceled out. We are ready to evaluate this program.

Who’s Right?

Proponents argued that by taxing the good and then turning around and rebating (giving back) the tax revenues to the consumer, we can alter the consumer’s choices without lowering satisfaction. Since the government is not making any money (all of the tax revenue raised is refunded back), the consumer is not going to be hurt.

Clearly the supporters of the tax-rebate proposal are wrong. The consumer had an initial $U^* = 1250$ and now has a new $U^* = 1200$. While we cannot meaningfully say that utility has fallen by 50 (because utility is measured on an ordinal, not cardinal, scale), we can say that utility has fallen. Thus, in fact, the consumer is hurt by the tax-rebate proposal.

Critics, on the other hand, believed that this scheme would have no effect since the rebated tax would immediately be spent on the taxed good and we would end up right where we started. Because the consumer went from an initial bundle of 25,50 to 20,60 after the $20 tax-rebate, it is obvious that the critics are wrong also. This consumer has altered purchasing plans and is, in fact, buying less $x_1$.

So, wait, who is right: the critics or the supporters of the scheme? Neither. They are both wrong. Income and substitution effects will help us explain why.

We return to the original problem without a tax or rebate and the initial solution of 25,50. The $1/unit tax is just like a price increase. We can find point B and compute the substitution and income effects from such a price change. We first use the Income Adjuster Equation. $\Delta m = x_1 \mbox{*}\Delta p_1 = [25][+1] = 25$

This result says that a $25 increase in income to $125 will allow us to buy the initial bundle.

STEP Set income in cell B18 to 125 (and confirm that there is a $1/unit tax in cell B21) and run Solver.

The optimal solution is $20 \frac{5}{6},62 \frac{1}{2}$. We have points A, B, and C so we can compute total, substitution, and income effects of the $1/unit price increase due to the tax without any rebate.

• SE (A to B): $20 \frac{5}{6} - 25 = - 4 \frac{1}{6}$
• IE (B to C): $16 \frac{2}{3} - 20 \frac{5}{6} = - 4 \frac{1}{6}$
• TE (A to C): $16 \frac{2}{3} - 25 = - 8 \frac{1}{3}$

Figure 4.27 displays these results with each point signifying a tangency between the budget line and an indifference curve (not drawn in to make it easier to read the graph).

The tax-rebate proposal is closely related to Figure 4.27. The tax is like a price increase that moves the consumer from A to C and the rebate is like an income effect that moves the consumer from C to B. However, if you look carefully, the changes in income are not the same. In the tax-rebate proposal, the revenue-neutral rebate is $20, whereas in our income and substitution effect work we gave the consumer $25 to be able to purchase the original bundle. A $25 rebate is not revenue neutral because the consumer buys only $20 \frac{5}{6}$ units of $x_1$ so the government ends up losing revenue. The rebate has to be $20 to be consistent with the break-even logic of the proposal.

In addition to the income and substitution effects, Figure 4.28 adds point D, which shows the optimal solution given the tax-rebate proposal.
Point D (at coordinates 20,60) has utility of 1200, which is, of course, lower than point B (the combination $20 \frac{5}{6},62 \frac{1}{2}$ yields just over 1300 units of utility). More importantly for the purposes of evaluating the proposal, utility at point D is less than utility at point A (where 25,50 generates $U^* = 1250$).

The key to the analysis lies with point D in Figure 4.28. It has to be on the initial budget line to fulfill the revenue-neutral condition of the proposal. But we know point A was the initial optimal solution on that budget line, so we can deduce that the consumer prefers point A to point D (and any other point on the initial budget line) and will suffer a decrease in satisfaction if the tax-rebate proposal is implemented.

Tax-rebate Schemes

Taxes are often used to pay for government services and fund programs deemed worthy by society, but they can also be corrective. Taxes on specific products can discourage particular activities (think cigarettes and smoking). Simultaneously taxing a good and rebating the tax revenue periodically appears as a policy proposal (often with regard to gasoline). Proponents claim the rebate cancels out the price increase from the tax.

The scheme is related to income and substitution effects. The tax is like a price increase and the rebate is like an income effect. Although similar to income and substitution effects, there is one important difference in tax-rebate proposals: a revenue-neutral rebate does not return enough income to allow the consumer to buy the pre-tax bundle or to reach the pre-tax level of satisfaction. Thus, the consumer cannot reach the initial level of satisfaction. It is true, however, that a tax-rebate policy will alter consumption patterns. Whether the loss in utility is compensated by the changed consumption pattern is a different question.

Exercises

1. Analytically, we can show that the demand curves for goods 1 and 2 with a Cobb-Douglas utility function (where c = d) are $x_1 \mbox{*} = \frac{m}{2(p_1+Q_Tax)}$ and $x_2 \mbox{*} = \frac{m}{2p_2}$. Use these demand functions to compute the income, substitution, and total effects for $x_1$ for a $1/unit tax. Show your work.

2. We know that the tax-rebate scheme gives back too little income to return the consumer to the initial level of utility (1250 units). With a $1/unit tax, find the level of rebate where the consumer is made whole in the sense that $U^* = 1250$. Describe your procedure in answering this question.

3. At point D in Figure 4.28, is the MRS greater or smaller in absolute value than the price ratio before the tax-rebate scheme is implemented? How do you know this?
textbooks/socialsci/Economics/Intermediate_Microeconomics_with_Excel_(Barreto)/04%3A_Compartive_Statics/4.08%3A_A_Tax-Rebate_Proposal.txt
This chapter introduces a wrinkle to the standard consumer theory model that greatly enhances its applicability. Instead of treating income as a given cash amount, we model the consumer as having a given initial endowment of goods that can be traded for other goods. This transforms the consumer into a combined consumer and seller. Although the power of this approach may not be immediately obvious, we will see that a wide variety of examples such as saving/borrowing, charitable giving, and much more can be handled with this modification. The Budget Constraint in an Endowment Model Instead of the usual income (m) variable, an Endowment Model is characterized by a budget constraint that equates expenditures and revenues from sales out of the initial endowment. $p_1x_1 + p_2x_2 = p_1 \omega _1 + p_2 \omega _2$ The term on the right-hand side says that the consumer has a given amount of each good, $\omega _1$ and $\omega _2$ (this is the Greek letter omega, so we have omega-one and omega-two). Because the initial amounts of each good are given, $\omega _1$ and $\omega _2$ are exogenous variables. The starting amount of each good, the coordinate pair $\omega _1$, $\omega _2$, is called the initial endowment. If we multiply the initial amount of each good by the price of that good, as done on the right-hand side of the budget constraint equation, we get a dollar-valued amount that represents the total income that can be raised by selling the entire endowment. Thus, the budget constraint says that spending (on the left-hand side) must equal the value of the consumer’s assets (on the right-hand side). The classic example to illustrate someone operating with an Endowment Model constraint is a farmer who goes to market with his crop. He sells his produce and, with the revenue obtained by selling, buys other goods. The core idea is that the farmer is a buyer and a seller. Perhaps a more modern example is eBay. People sell all kinds of products and turn around and buy different products. It is a massive online garage-sale community. Once again, the core idea is that eBayers sell and buy. In an Endowment Model, what the agent can buy depends on how much revenue is generated by sales. High prices for goods to be sold are a good thing from the agent’s point of view because they generate a lot of revenue with which to buy other goods. Because Endowment Models transform the consumer into a combined buying-selling agent, we can get different results than we saw in the Standard Model. One critical difference is that price increases lead to decreases in quantity demanded (assuming the good is normal), as usual, but as price keeps rising, we can cross the zero barrier and get negative quantity demanded! We will see that the agent switches from being a buyer to being a seller. This is a key idea (a numerical sketch at the end of this section illustrates the switch). Let’s put these abstract ideas into concrete examples so we can understand what is going on with the Endowment Model. STEP Open the Excel workbook EndowmentIntro.xls, read the Intro sheet, then go to the MovingAround sheet. Follow the instructions on the sheet to learn how we can create a budget line from a single point. Just as in the Standard Model, the agent faces a consumption possibilities frontier, also known as the budget line, that shows the feasible combinations. Bundles beyond the line are unattainable. STEP Proceed to the Properties sheet.
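To fix the buyer-to-seller idea, here is a minimal Python sketch. It uses made-up numbers (an endowment of 10 units of each good, $p_2 = 1$, and Cobb-Douglas utility with equal exponents), not values from EndowmentIntro.xls.

    w1, w2, p2 = 10, 10, 1  # initial endowment and price of good 2

    for p1 in [0.5, 1, 2, 4]:
        m = p1 * w1 + p2 * w2   # value of the endowment at these prices
        x1 = m / (2 * p1)       # Cobb-Douglas demand with equal exponents
        net_demand = x1 - w1    # positive: net buyer; negative: net seller
        print(p1, x1, net_demand)

    # p1 = 0.5 -> net buyer of 5 units; p1 = 4 -> net seller of 3.75 units:
    # as p1 rises, net demand crosses zero and the agent switches roles

Notice how the rising price works through the endowment's value: a higher $p_1$ raises the revenue from selling $\omega_1$, which is exactly why the agent can end up on the selling side.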
The Endowment Model Extends the Standard Model The Endowment Model is the Standard Model of the Theory of Consumer Behavior with an initial endowment of goods instead of cash income. This transforms the consumer into the dual role of seller and buyer of goods. The driving force in the agent’s decision making remains utility maximization. Many of the ideas behind the Standard Model (such as equating the MRS and the price ratio) carry over to the Endowment Model. Of course, the framework for presenting and understanding the model, comparative statics analysis, remains the same. It may seem that replacing income with an initial endowment is a minor twist, but we will see that the Endowment Model enables analysis of a wide range of choice problems. Exercises 1. Perform a comparative statics analysis of c, the exponent on $x_1$, using the Comparative Statics Wizard. Use increments in c of 0.1. State the effect of changing c on $x_1 \mbox{*}$. Describe your procedure and take screen shots of your results as needed. 2. Use your comparative statics results to find the c elasticity of $x_1 \mbox{*}$ from 1 to 1.1. Show your work. 3. Use the reduced form expression in this chapter to find the c elasticity of $x_1 \mbox{*}$. Show your work. 4. Compare your answers from questions 2 and 3. Explain why they are the same or differ.
textbooks/socialsci/Economics/Intermediate_Microeconomics_with_Excel_(Barreto)/05%3A_Endowment_Models/5.01%3A_Introduction_to_the_Endowment_Model.txt
Suppose the government wants to stimulate saving by workers so they won’t be poor when they retire. Individual Retirement Accounts (IRAs) and 401(k) plans (named for their section in the tax code) enable savings to grow tax free, so the interest rate earned is higher than if returns were taxed. A higher interest rate should stimulate more saving. But how much more? Typically, estimates of the interest rate elasticity of savings are positive, but quite small, say 0.15. If someone had this elasticity, would attempts to stimulate saving by increasing the interest rate be effective? No, because the low interest rate elasticity of savings means that saving is not responsive to changes in the interest rate. Suppose the interest rate doubles so we have a huge 100% change. Because the elasticity is 0.15, that means we will see only a 15% increase in savings. A more realistic 10% increase in the interest rate would generate a small 1.5% increase in savings. The small elasticity tells us that shocks to the interest rate are not going to move the amount saved by very much. This is an example of interpreting an elasticity. Computing an elasticity is important (and you will continue to see examples of how to do it), but understanding what an elasticity is telling us is even more critical. Now that we know the elasticity is low and what that means, this leads to a second question: What would make the interest rate elasticity of savings so small? The rest of this chapter offers an application of the Endowment Model to answer this question. In addition, income and substitution effects play a major role in the explanation. There is no doubt about it, learning economics is a cumulative undertaking: the same ideas keep popping up again and again. The Intertemporal Choice Model Intertemporal choice means the agent faces a decision that spans time periods. Saving during the working years means less consumption now, but it allows for more consumption when retired. We model the agent as deciding what to consume every year over their lifespan. Just as when we modeled the consumer buying just $x_1$ and $x_2$ instead of many goods and services, we make a simplifying assumption that collapses many time periods into two: present and future. In the present, right now, the agent works, and in the future, one year later, she does not (she retires). In addition, there is another implied simplifying assumption: the agent knows with certainty how long she will live. She is born and works as a one-year-old, is retired as a two-year-old, and dies on the last day of her second year. She decides, as soon as she is born, how much she will consume in year 1 (the present) and year 2 (the future). Instead of having two goods $x_1$ and $x_2$, we have consumption of a single good in the present, $c_1$, and the future, $c_2$. The price of the single good is $1/unit so if you have, say, $40, you can buy 40 units. There is no inflation so the price is the same in both time periods. Notice the usual modeling technique at work here: realistic details are simply assumed away. Most people’s lives unfold as follows: Childhood becomes teen-aged years, and then a long period of working adult life eventually turns to retirement years and death. The Intertemporal Choice Model collapses all of that into two time periods. It also assumes away complications from not knowing exactly when we die. Faced with criticisms about the unrealistic nature of the model, economists respond by saying that we are not interested in realism.
We reduce the complex real world to a model that can be analyzed with comparative statics to produce testable predictions. For economists, the goal is not to describe reality, but to predict via comparative statics. We strip away all complications to create an unreal, incredibly simple model that contains the kernel of the problem so we can work out how the agent responds to shocks. Modeling is not easy. There is science (and math) and art involved. Users and consumers of these models need sharp critical thinking skills: sometimes important elements are assumed away. We continue building the model by defining the initial endowment as the amount of present and future income you start with. The initial endowment in the first year is $m_1$ and in the second year $m_2$. The first year’s initial endowment is income from working and the second year’s initial endowment is income from sources like Social Security. Thus, it makes sense that $m_1 > m_2$, which says that income is higher during the working year than the retired year. Since the price is $1/unit, the initial endowment incomes are also initial endowment consumption in the two periods. We are ready to work on the optimization problem itself. We follow the usual approach, modeling the budget constraint, then satisfaction, then putting the two together to find the initial solution. Of course, after finding the initial optimum we will do comparative statics analysis, where we will answer the question: What causes the interest rate elasticity of savings to be so small? The Budget Constraint STEP Open the Excel workbook IntertemporalChoice.xls and read the Intro sheet, then go to the MovingAround sheet. The consumer begins at the initial endowment point, 80,20, where 80 represents her income and consumption in time period 1 (remember that the price of the good is $1/unit). Income and consumption of 20 in time period 2 is lower (given that she is not working). These numbers are arbitrary and do not have any special meaning. A critical concept for the Endowment Model is that the agent does not have to stay at the initial position. In this application, she can move by saving or borrowing. Saving means you consume less in the present and carry over the unconsumed portion into the future. Saving is like selling present consumption and buying future consumption. Suppose she saves 30 units of consumption in year 1 by saving $30. What would be her position in the second year? STEP Change cell B19 to 50. This implements the plan to increase future consumption, but look at cells B21 and B22. Instead of simply reallocating from 80,20 to 50,50, by saving 30 units, she got an extra 6 units in interest on her savings. If you save $30 for one year at 20%, you end up with $56. The $30 you saved (called the principal) and interest earned of $30 x 20% = $6 make your savings worth $36 in the future, and we add this to the $20 of initial future income to get the grand total of $56. There is an equation that gives us the value of $c_2$ for any chosen value of $c_1$. $c_2 = m_2 + (m_1 - c_1) + r(m_1 - c_1)$ The equation says that the amount of consumption in time period 2 equals the initial endowment amount in time period 2, $m_2$, plus the principal saved, $m_1 - c_1$, plus the interest earned on the amount saved, $r(m_1 - c_1)$. We can rewrite this in a simpler form by collecting the savings term. $c_2 = m_2 + (1+ r)(m_1 - c_1)$ This is the equation of the budget constraint in this model.
It shows that the intercept is $m_2 + (1+ r)m_1$ and the slope is $-(1+r)$ (just distribute the $(1+r)$ and collect terms). The slope tells us that saving $1 will yield $1 + r$ dollars in time period 2. What would be the maximum consumption possible in time period 2? We have two ways to answer this question. STEP Change cell B19 to 0. She consumes nothing now and ends up with 116 units in the future. "But she will starve if she consumes nothing in period 1." That would be another constraint that is not being modeled. We are not saying she will consume nothing in the present time period, we are merely exploring the consumption possibilities. Saving everything (the same as consuming nothing in the present) can also be found by computing the value of the y intercept. We can evaluate $m_2 + (1+ r)m_1$ at $m_1 = 80, m_2 = 20$, and $r=20\%$, yielding $20 + (1+0.2)80 = 116$. This is the same answer that we got with Excel. The y intercept tells us the future value of the agent’s initial endowment, measuring income in both periods in terms of time period 2. Instead of saving, the agent can borrow. Suppose the agent decided to consume more than 80 units in time period 1. How could she do this? Easy: borrow against her time period 2 income. As before, however, we have to be careful. The interest rate plays a role. STEP Change cell B19 to 90. She borrows $10 from her future income. Does she end up with 90,10, subtracting 10 from $c_2$ and adding it to $c_1$? No way. As Excel shows, she has to pay interest on the borrowed funds. If she borrows $10, she ends up with only $8 in the future because she has to pay back the principal ($10) and the interest ($2). What is the most she could consume in time period 1? STEP Change cell B19 to 100. What happens? She cannot do this. She cannot choose negative $c_2$. She does not have enough future income to enable 100 units of time period 1 consumption. STEP Continue entering numbers in cell B19 until you drive $c_2$ (in cells B23 and B24) to zero. The x intercept is $96 \frac{2}{3}$. It is the present value of her endowment, measuring income in both periods from the standpoint of time period 1. STEP Proceed to the Properties sheet. Our work in the MovingAround sheet makes it easy to understand the budget line displayed in the Properties sheet. Clearly, given an initial endowment, movement up the budget line is saving and down is borrowing. These are just consumption possibilities. We do not know what this person will do until we incorporate her preferences. We do know she can be anywhere on the constraint (including the initial endowment point). It all depends on her indifference map and where the highest attainable indifference curves lie. STEP Proceed to the Changes sheet. Change the interest rate, cell L8, to 50%. Your screen will look like Figure 5.5. Our work with the Endowment Model in the previous section enables us to easily interpret the result. As before, the budget constraint swivels around the initial endowment point. Above the initial endowment point, the increase in r is a good thing, increasing consumption possibilities. If the agent is a saver, the shock is welcome. Borrowers, however, would not be happy with an increase in r. This is a price increase to present consumption and reduces consumption possibilities for borrowers. STEP Click the button. Change $m_1$ and $m_2$ to see how these shocks are like an income shock. Changing the endowment incomes maintains the slope, but shifts the budget constraint.
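The constraint arithmetic is easy to check. Here is a minimal Python sketch using the MovingAround sheet's values ($m_1 = 80$, $m_2 = 20$, r = 20%); it simply evaluates the budget constraint equation derived above.

    m1, m2, r = 80, 20, 0.20

    def c2(c1):
        # endowment tomorrow plus principal saved plus interest earned
        return m2 + (1 + r) * (m1 - c1)

    print(c2(50))              # save 30: end up with 56 in the future
    print(c2(90))              # borrow 10: only 8 left in the future
    print(c2(0))               # y intercept: future value of endowment = 116
    print(m1 + m2 / (1 + r))   # x intercept: present value = 96 2/3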
Now that we understand how the budget constraint works, we are ready to turn to the agent’s goal, maximizing utility. Preferences The agent has preferences over present and future consumption that can be captured by the indifference map. We use the usual Cobb-Douglas functional form to express preferences as a utility function. STEP Proceed to the Preferences sheet. Compare the utility functions with d = 0.5 and d = 0.1. The utility function allows us to model different preferences. Figure 5.6 shows two different agents with different rates of time preference for future consumption. The person on the right exhibits a strong preference for present consumption, while the person on the left is more willing to wait. A personality that demands more immediate gratification is represented on the right side of Figure 5.6. We would say this person is more impatient: he likes present much more than future consumption. The exponent d is much smaller than c, which means inputs into the utility function through $c_2$ provide much less utility than via $c_1$. The steep indifference curves reveal that he is willing to trade a great deal of future consumption for just a little more present consumption. His MRS at a given point (for example, 6,6) is higher (in absolute value) than the MRS of the person on the left. We do not say the person on the right has "bad preferences" (although the language used in this example, such as impatience, does seem to connote disapproval). Economists take preferences as given. We are not supposed to judge them as right or wrong. A person with preferences that substantially ignore the future is treated the same as someone who does not like broccoli or likes the color blue. There is a complication here, however, in that a person’s rate of time preference almost certainly changes over time. A young person may not save much because she does not value the future, but she may regret her decision when she gets older. Deciding whose preferences should rule, young or old you, is a difficult philosophical problem. With the budget line and preferences, we can now solve the constrained utility maximization problem. Finding the Initial Solution STEP Proceed to the OptimalChoice sheet. Figure 5.7 shows the initial display. The current bundle is 80,20, the initial endowment point. The agent is not maximizing satisfaction subject to the budget constraint. The indifference curve is clearly cutting the budget line and, therefore, the agent should move northwest up the budget line to maximize utility. Run Solver to find the initial solution. The agent opts for the point $64 \frac{4}{9},38 \frac{2}{3}$. This means she has decided to save $15 \frac{5}{9}$ of her present consumption. She chooses this present and future combination, implying this level of saving, because this maximizes utility subject to the budget constraint. Notice that the negative net demand is interpreted as saving. It is computed as optimal $c_1$ minus the initial endowment of present consumption. As mentioned earlier, saving is like selling present consumption to buy greater future consumption. We often drop the minus sign so we do not get confused by increases and decreases in saving. Comparative Statics We focus on $r$. We want to know how savings will respond when $r$ changes. Remember our question: Why is the interest rate elasticity of savings so low? Before we begin our comparative statics analysis, we need to be clear about the language used.
Since the shock variable, $r$, is measured as a percent, things can get confusing once we start working on responses and elasticities. We need to keep clear the difference between a percentage point change and a percent change. They sound the same, but the former is a difference ($\Delta$), $\text{new} - \text{initial}$, and the latter is a percent computation, $\dfrac{\text{new} - \text{initial}}{\text{initial}}. \nonumber$ So, if $r$ increases from 20% to 30%, that is a 10 percentage point change since we compute 30 - 20, but a 50 percent change: $\frac{30-20}{20}$. The same language would be used if we were working with unemployment rates. An increase from 5% to 6% is a one percentage point increase and a 20% increase. The finance literature uses basis points for differences in variables measured in percents. There are 100 basis points in one percentage point. If a bond yield rises from 3.25% to 3.35%, that is an increase of 10 basis points. STEP Run the Comparative Statics Wizard, changing the interest rate by 10 percentage point (0.1) increments. Keep track of $c_1$, $c_2$, net demand, and whether the person is a saver or borrower (cells D11 and E11). Your results should be similar to those in the CSr sheet. STEP Use your CSWiz results to compute the interest rate elasticity of savings from r = 20% to 30%. We find that the interest rate elasticity of savings from r = 20% to 30% is about 0.11. (Check the formula in cell I15 in the CSr sheet if needed.) That is quite low. A 50 percent increase in r only increased savings by a little over 5 percent. This elasticity is similar to the 0.15 elasticity at the beginning of this chapter. Why is this happening? Why is saving so unresponsive to changes in the interest rate? The answer lies in the income and substitution effects. For savings, the income and substitution effects from a change in r work in opposite directions (when $c_1$ is a normal good). Thus, they tend to cancel each other out and the total effect ends up being small. To head off serious misunderstanding, you need to know right now that this does not mean that we are dealing with a Giffen good. We will see that we are dealing with cross effects when r rises for a saver, and Giffen goods are defined in terms of own effects. Also, $c_1$ and $c_2$ are both normal goods in a Cobb-Douglas utility function so we know we can’t get Giffenness. STEP To see how the income and substitution effects apply to this problem, return to the OptimalChoice sheet. Suppose r increases to 300%. Change B16 to this absurdly high interest rate. This huge change enables us to see clearly what is happening on the graph. The budget line swivels in a clockwise direction, getting much steeper. Remember that the slope is $-(1+r)$ so an increase in r makes the line steeper. This is good for savers and bad for borrowers. STEP After changing cell B16 to 300%, run Solver to find the new optimal solution. Solver gives the new optimal solution, $c_1 \mbox{*} = 56 \frac{2}{3}$ and $c_2 \mbox{*}=113 \frac{1}{3}$, when $r=300\%$. Optimal savings has increased from $15.56 to $23.33, so that is good news, but this is a pretty weak response to the massive increase in the interest rate from 20% to 300%. Figure 5.8 shows the initial solution (point A) and the new optimal solution (point C). It also includes a dashed line that is parallel to point C’s budget line, but goes through point A. This, of course, is the line that is used to separate the total effect into income and substitution effects using point B.
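A minimal Python sketch can reproduce these savings numbers and the 0.11 elasticity. The Cobb-Douglas exponents c = 1 and d = 0.5 are an assumption here, but they are consistent with the optimal solutions reported above.

    c, d = 1.0, 0.5          # assumed utility exponents (match the reported solutions)
    m1, m2 = 80, 20

    def saving(r):
        fv = m2 + (1 + r) * m1               # future value of the endowment
        c1 = (c / (c + d)) * fv / (1 + r)    # Cobb-Douglas demand for c1
        return m1 - c1                       # positive number = saving

    s20, s30 = saving(0.20), saving(0.30)
    print(s20, s30)                                      # 15.56 and 16.41
    print(((s30 - s20) / s20) / ((0.30 - 0.20) / 0.20))  # about 0.11
    print(saving(3.00))                                  # r = 300%: just 23.33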
How much income ($m_1$) did we have to take away (hypothetically, of course) to cancel out the income effect of the higher interest rate? We can use Excel to answer this question. STEP With r = 300%, enter the initial solution (point A). To minimize rounding error, use a formula with fractions. So, enter "= 64 + 4/9" in B11 and "= 38 + 2/3" in B12. Now, start decreasing $m_1$ (in cell B17). Your goal is to find that value of $m_1$ so that the initial solution is on the budget line, i.e., the constraint cell is zero. A little experimentation should convince you that $m_1 = 69 \frac{1}{9}$ is the value that puts the dashed budget line through the initial solution. If you want to be daring, you could use Solver. Call Solver, then click the button. The objective is the constraint cell (B23) and you want to make the value of it zero by changing $m_1$ (B17). Solver gives the same answer as above. Or, you could use the budget constraint to find the $m_1$ needed to buy the original optimal bundle with $r= 300\%$. Simply plug in the initial optimal solution along with the new value of r (and initial $m_2$) and solve for $m_1$. You are finding the value of $m_1$ that would enable you to buy the initial optimal combination with the higher interest rate. The analytical answer agrees with the numerical approach. STEP Now, with r = 300% and $m_1 = 69 \frac{1}{9}$, run Solver to find point B. Be careful with the interpretation of savings for point B. Remember that income is not really $m_1 = 69 \frac{1}{9}$, but 80. This means that at point B, the agent would save $30.59, not the $19.70 displayed in cell D11. Figure 5.9 shows the results in a table. You can see Figures 5.8 and 5.9 side by side by scrolling down to row 50 or so in the OptimalChoice sheet. Look at how the substitution effect leads to a large increase in savings, but the income effect cancels out part of this increase. The income and substitution effects provide an explanation for the low interest rate elasticity of savings. What is happening is that the two effects are working against each other when r rises and the agent is a saver. Does this mean $c_1$ is an inferior good? No. The reason why the effects are opposing each other is that, for savers, an increase in the interest rate is like a decrease in the price of future consumption, so the effects on $c_1$ and savings are actually cross effects. Look carefully at Figure 5.8. In the region of the graph with points A, B, and C, it is as if we decreased $p_2$, and rotated the budget line up clockwise (with a steeper slope). Saving and Borrowing Explained The Intertemporal Choice Model is an application of the Endowment Model in the Theory of Consumer Behavior. The model says that the agent chooses the amount to consume in time periods 1 and 2 in order to maximize satisfaction given a budget constraint. The model explains saving (or borrowing) as an optimizing move on the part of an agent who is trading off present and future consumption. The model can also explain why the interest rate elasticity of savings is often estimated as a positive, but small number, which means that saving is quite unresponsive to the interest rate. The explanation rests on the fact that the income effect opposes the substitution effect for $c_1$ and savings (for those with negative net demand for $c_1$). Exercises 1. Solve the problem in the OptimalChoice sheet using analytical methods.
In other words, find the reduced form expressions for optimal $c_1$, $c_2$, and saving from $\begin{aligned} &\max _{c_{1}, c_{2}} u\left(c_{1}, c_{2}\right)=c_{1}^{c} c_{2}^{d} \\ &\text {s.t. } c_{2}=m_{2}+(1+r)\left(m_{1}-c_{1}\right) \end{aligned}$ Show your work. 2. Use the parameter values in the OptimalChoice sheet (with r = 20%) to evaluate your answers for question 1. Provide numerical answers for the optimal combination of consumption in time periods 1 and 2 and for optimal saving. 3. Do your answers from question 2 agree with Excel’s Solver results? Is this surprising? Explain. 4. Use your reduced form solution from question 1 to compute the interest rate elasticity of savings at r = 20%. 5. In working through this chapter, you found the interest rate elasticity of savings from r = 20% to 30%. Why is the elasticity computed at a point (in question 4 above) different from this elasticity?
textbooks/socialsci/Economics/Intermediate_Microeconomics_with_Excel_(Barreto)/05%3A_Endowment_Models/5.02%3A_Intertemporal_Consumer_Choice.txt
The phrase "an economic analysis of" is code for "using the framework of optimization and comparative statics to study observed behavior." In this case, we use the Endowment Model from the Theory of Consumer Behavior to study charitable giving. How can economics have anything to say about giving away money? Isn’t charity something really nice people do, not the selfish, rational maximizers that inhabit economics? Doesn’t this mean that thinking like an economist is useless for studying charity? These questions are based on a common misunderstanding that economics applies only to a subset of the world. So, the mistaken thinking goes, you can use economics to study certain things like banking or unemployment, but not war or marriage. This is wrong because modern economics is not defined by content, but by method. Anything involving choice, like going to war or getting married or brushing your teeth or joining a church, can be analyzed with the tools of economics. We will see that the economic approach offers a different view of charitable giving. By casting the problem as a choice, with how much to give as the key endogenous variable, we can apply the optimizing and comparative statics framework of economics. We do not claim this is the only or even the best perspective, but it does provide another way to understand charity. Basic Facts about Giving Each year, people all around the world give away a lot of money, goods, and time (as volunteers). Humans are sympathetic when people close to them are in distress. All religions encourage charity and caring for people less fortunate. Giving USA provides data on philanthropy in the United States. Figure 5.10, from the 2018 Annual Report, shows the breakdown of the $410 billion that was contributed to charities in 2017. To help understand what this number means, we can compare total contributions to the size of the economy and we find a giving rate of about 2.1% of GDP. The 2018 Annual Report contextualizes total giving by tracking giving over time, shown in Figure 5.11. Total giving jumped in the mid-1990s and reached its highest level in 2017. That is good news. The Internal Revenue Service is another source of data on charitable giving because taxpayers claim deductions when they give to charity to lower the tax owed. The IRS also collects data on non-profit organizations, which do not pay tax but have to file Form 990. IRS data can be found at www.irs.gov/statistics. Charitable giving not only varies over time, there is also tremendous individual variation. Many people give nothing, others give a little, and a few people donate a lot. Religions encourage members to tithe, giving 10% of their income. Upon death, some people give substantial fractions of their estates to charity, while others hand it all to their heirs. There are many questions we can ask about charitable giving, but our top three are: 1. Why do people give to charity? 2. What determines how much they give? 3. How can charitable giving be stimulated? Because this is an economic analysis of charity, we are going to answer these questions by using the method of economics. We will set up and solve an optimization problem. This will provide the economic explanation for why people give and what determines how much they give. We will see that charitable giving can be stimulated by changing exogenous variables, ceteris paribus.
Our model will do the usual stripping away of realistic details, making incredible simplifying assumptions, to enable us to solve the model and play comparative statics games. Keep your eye on the procedure as we set up, solve, and compute our key measure: the tax break elasticity of giving. An Endowment Model of Giving As usual, we begin with the budget constraint, then we model preferences, and we use both to find the initial solution to the problem of maximizing satisfaction subject to the budget constraint. The optimization problem is entirely from the donor’s point of view. It is the donor, the giver, who decides how much, if any, to grant to the beneficiary, the recipient. Figure 5.12 depicts the donor’s budget constraint in this application. The initial endowment is the coordinate pair that represents the donor’s consumption (on the y axis) and the beneficiary’s consumption (on the x axis). There is only one good (which represents consumption of all goods) and its price is $1/unit. So, if the donor has $100 and the beneficiary only $10, we know the initial endowment is at the point 10,100. Giving is modeled as moving down the budget line in Figure 5.12. If the donor gives $20 away, then she will have $80 and the beneficiary will have $30. Of course, the donor could give all of her money away, choosing to be at the x intercept. It is easy to see that the donor decides how much, if any, to give, by choosing a point on the budget line which determines both the donor’s own consumption and the beneficiary’s consumption. Thus, at any point on the budget line, we can compute the amount of giving as simply the vertical distance (along the y axis) from the initial endowment to the point on the budget line. If the donor decides to stay on the initial endowment point, then she gives nothing to the beneficiary. The slope of the budget line is $-1$ because there is a dollar-for-dollar exchange from the donor to the beneficiary. Notice that this budget line does not extend left or northwest from the initial endowment because that would imply taking money from the beneficiary. The donor cannot do that. Finally, because we will (of course) be doing comparative statics analysis, we point out that a tax break for those who donate money means that the budget line will have a shallower slope. If the donor gives $1 and is rewarded, for example, with a 30-cent decrease in taxes, then the recipient gets $1, but the donor actually gave only 70 cents. The slope is not $-1$, but $-(1 - TaxBreak)$. By adjusting the tax break, we can see how the agent responds. This is too abstract. It is time to go to Excel to understand how the tax break really works. STEP Open the Excel workbook Charity.xls and read the Intro sheet, then go to the MovingAround sheet. All you see is a single point at 20,80; this is the initial endowment. The donor gives nothing and there is no tax break. STEP Change cell C5, the amount the donor gives, to 20. The beneficiary gets the 20, adding it to his initial 20, and the new red dot is at 40,60. The slope of the constraint is $-1$, displayed in I5. Without a tax break, every dollar given is subtracted from the donor and added to the beneficiary. But the tax code incentivizes giving by lowering the donor’s tax liability. STEP Change E5, the amount of the tax break, to 40%. The red dot jumped up. Hit ctrl-z a few times to move back and forth between zero and a 40% tax break.
With or without the tax break, the beneficiary still gets 20, but a tax break on charitable donations affects how much the donor actually gave up. With a 40% tax break, the sheet shows that the donor really gave up only 12 because taxes are lowered by 8 (40% of 20). Thus, the slope of the constraint is $-0.6$. Wait, if the donor gives 12 and the recipient gets 20, who makes up the difference? The government. The beneficiary gets the full donation, but the donor pays less tax to the government. Clearly, by manipulating the tax break, the government can make charitable giving less expensive to donors. So, if the tax break increases, what happens to the budget line? Think it through. You can check yourself when we get to the OptimalChoice sheet. But before we get there, we have to consider the donor’s preferences. The constraint is only about possibilities. To know what the donor will do, we need to know the donor’s utility function. The neat trick here is to enable the beneficiary’s consumption to affect the donor’s satisfaction. The way we model giving is to have the self-interested agent care about others. The usual Cobb-Douglas functional form will represent the donor’s satisfaction derived from her own consumption and the beneficiary’s consumption. $U=BeneficiaryCon^cDonorCon^d$ As usual, the exponents allow us to model different preferences. If c and d are equal, the donor gets as much satisfaction from her own consumption as the beneficiary’s consumption. She is a saint. Although possible, this is unlikely. Most people get more satisfaction from their own consumption and, thus, d is greater than c. We will use the OptimalChoice sheet with different exponent values to see the effect on the graph, but it is worth thinking through two scenarios. What would happen to the indifference curves, starting from $c=d$, as we lowered c? What would happen to the indifference curves if c fell all the way to zero? Again, thinking this through and testing yourself is a good way to learn; you can check your answer in the OptimalChoice sheet. It is worth remembering that preferences are not right or wrong. We take them as given and we model the agent as maximizing based on given preferences. It can be difficult to do this; we naturally disapprove of someone who doesn’t care about others. Another source of confusion is that preferences can and do change, but that is not to say that they are chosen by the agent. Changes to preferences are like shocks to other exogenous variables: they are imposed by forces outside the agent’s control and then the agent re-optimizes in the new environment. STEP Proceed to the OptimalChoice sheet to see how the donor’s optimization problem can be implemented in Excel. The sheet shows a mathematical expression of the constrained utility maximization problem. The constraint is different than usual. If we write the constraint as an equation, we need to compute the y intercept and incorporate the fact that the donor cannot take from the recipient (the empty space in the northwest corner of Figure 5.12). We cannot use the usual Lagrangean method to deal with this complicated constraint because it only works with equality constraints. There is an analytical method called Kuhn-Tucker that can be used, but it is beyond the scope of this book. Fortunately, the numerical method is still available.
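Outside of Excel, the same numerical approach can be sketched in Python. This is not the workbook's implementation: it sidesteps the two-constraint setup by choosing the gift directly and bounding it below by zero, which enforces the no-taking condition. The endowment values (donor 80, beneficiary 20) match the MovingAround sheet.

    from scipy.optimize import minimize_scalar

    def optimal_gift(c, d, tax_break=0.0, donor0=80, benef0=20):
        price = 1 - tax_break                   # donor's cost per dollar received
        def neg_utility(g):
            benef = benef0 + g                  # beneficiary consumption
            donor = donor0 - price * g          # donor consumption
            return -(benef ** c) * (donor ** d)
        res = minimize_scalar(neg_utility, bounds=(0, donor0 / price),
                              method="bounded")
        return res.x                            # dollars received by beneficiary

    print(optimal_gift(c=1, d=1))      # interior solution
    print(optimal_gift(c=0.2, d=1))    # pinned at the lower bound (a corner)

Choosing the single variable g as the endogenous variable is a design choice: it builds the budget line into the objective, so the only constraint left is the bound that keeps the gift non-negative.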
For Excel and Solver, the complicated constraint is easily handled by adding a second constraint (cell B26) and incorporating it as an inequality: this allows the donor to choose $m_1$ or greater for the beneficiary. The usual budget line constraint is in cell B25. Applying both constraints gives Solver the equivalent of Figure 5.12 and it has no trouble finding the optimal solution. Figure 5.13 shows the starting position. The endogenous variables are consumption by beneficiary and donor. These are chosen by the donor to maximize utility subject to the budget constraint. The exogenous variables include the amount of the tax break (initially set at zero so the slope of the budget constraint is $-1$), prices normalized to one, the initial endowment, and the impact of donor and beneficiary consumption on the donor’s utility. With $c=d$, the donor cares as much about the beneficiary as herself and the MRS $> \frac{p_1}{p_2}$ at the initial endowment. We know the donor can increase her satisfaction by traveling down the budget line. For example, suppose the agent decided to donate $10. How would this affect the chart? STEP Change cell B11 to 30 and B12 to 70. The MRS is now closer to the price ratio and utility has risen (from 1600 to 2100). The agent has moved down the budget line and is on a higher indifference curve. STEP Run Solver to find the initial optimal solution. The agent chooses the point 50,50 to maximize utility (at 2500), which means she donates $30 to the beneficiary. The net demand is the amount of giving and we express it as a dollar amount and as a percentage of the donor’s income (cell D13). This is one mighty nice donor. She has an incredibly high giving rate of 37.5%. Because $c = d$, she cares as much about the beneficiary as she does herself. It makes common sense that she picks an equal 50,50 split as her optimal solution. Comparative Statics There are several shocks to consider. We start with preferences. STEP Change the exponent for the beneficiary’s consumption to 0.2. This answers the earlier question about the effect of c on the indifference curves: they become much flatter as c falls, ceteris paribus. With $c=0.2$, the donor does not care as much about the beneficiary as before. The shape of the indifference curve is tied to the MRS. With $c=0.2$, the MRS at 50,50 has fallen to 0.2 (in absolute value). The low MRS and flat indifference curve mean that the donor is willing to trade only a little of her consumption for a lot of additional beneficiary consumption. The culmination of lowering c is a donor who does not care about the beneficiary at all. With $c=0$, the indifference curves become horizontal, the MRS is zero, and beneficiary consumption is a neutral good. It is obvious that the donor with $c=0.2$ is not going to be as generous as before when $c=1$, but how much will she give? STEP Run Solver. Figure 5.14 displays the result. The result is a surprise. The best the agent can do is to donate nothing so that is what she does. Even though the MRS does not equal the price ratio, this donor is optimizing. This is a corner solution. Our work thus far provides answers to two of the three questions we initially asked. 1. Why do people give to charity? To maximize satisfaction. A donor gives because the consumption of others affects his or her utility. Notice that giving is perfectly compatible with self-interest. The economic model says that the donor feels good when she gives and that is why she gives. 2. What determines how much they give? Clearly preferences matter.
How much the donor cares about others (the exponent c in the donor’s utility function) plays a major role. Of course, the constraint also matters. Donor’s income, beneficiary’s income, and the slope of the constraint affect the amount of giving. 3. How can charitable giving be stimulated? Let’s work on the third question. We could try to convince people to care more about others, increasing c (certainly this is a primary goal of religion), but a way to stimulate giving is to lower the price of giving. As we saw earlier, dollars given to charity reduce the donor’s taxable income and reduce tax owed. If the donor is in a 30% tax bracket, every dollar donated to charity saves the donor 30 cents in taxes. Thus, the beneficiary receives the dollar, but the donor is actually paying only 70 cents, with Uncle Sam picking up the remaining 30 cents. What effect will a 30% tax break have on the budget constraint and charitable giving of a donor with $c=0.2$? Apply the shock in Excel and find out. STEP Change the tax_break variable (B16) to 30% and note that $p_1$ becomes 0.70 and the budget line swings out. The new red budget line is flatter than the original because of the tax break. This answers the earlier question about the effect of a tax break on the budget constraint: the bigger the tax break, the more the line swings and flattens out. This is just like lowering $p_1$ in the Standard Model. Notice that the MRS is greater than the slope of the new budget line. This agent can improve her utility by traveling down the constraint. This means she will donate to the beneficiary, as shown in Figure 5.15. But exactly how much giving does the tax break generate? Let’s find out. STEP With $c=0.2$ and $tax\textunderscore break=30\%$, run Solver. In this case, the tax break has induced charitable giving. It is hard to see on the graph, but the MRS $= \frac{p_1}{p_2}$ condition (under the chart) tells you the indifference curve is now tangent to the budget line. Figure 5.15 shows what happened. With a tax break of 30%, we get $1.67 of giving, which is 2.1% of the donor’s income (the American giving rate in 2018). We can also explore how responsive our donor would be to further shocks in the tax break. We will compute the tax break elasticity of giving. STEP Change the tax_break cell to 40%. That’s a 10 percentage point change in the tax break and a rather hefty 33% change. The budget line swings out a little bit more, but it is hard to see the change in the chart. We know, however, since the MRS does not equal $\frac{p_1}{p_2}$, that we need to re-optimize. STEP Run Solver. Charity increased from $1.67 to $3.33. That is a big response: a doubling, or 100% increase, in giving was generated from a 33% increase in the tax break. That is a tax break elasticity of giving of 3. STEP Proceed to the CS1 sheet to see a more detailed comparative statics analysis. Notice that the shock was 1 percentage point, not 10. Notice also that the elasticity from a tax break of 30% to 31% is about 2.87 (H17), not 3. Even though we do not have a reduced form expression, the fact that the measured elasticity depends on the size of the shock tells us that giving is a non-linear function of the tax break. But regardless of whether it is 3 or 2.87, that high an elasticity is really good news, right? If giving is super-responsive to a tax break, little tweaks in the tax break will generate big increases in giving. But we need to be careful in how we interpret our result.
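A minimal Python sketch can reproduce these giving levels and the elasticity of 3. It assumes d = 1 (with c = 0.2) and measures giving as the donor's out-of-pocket outlay, the price of giving times the dollars received, which matches the dollar figures in the text.

    def giving(c, tax_break, donor0=80, benef0=20, d=1):
        price = 1 - tax_break
        # Cobb-Douglas demand for beneficiary consumption on the kinked budget:
        benef = (c / (c + d)) * (price * benef0 + donor0) / price
        gift = max(benef - benef0, 0)   # the donor cannot take from the beneficiary
        return price * gift             # donor's out-of-pocket giving

    g30, g40 = giving(0.2, 0.30), giving(0.2, 0.40)
    print(g30, g40)                                      # about 1.67 and 3.33
    print(((g40 - g30) / g30) / ((0.40 - 0.30) / 0.30))  # elasticity of 3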
We do not know whether these preferences and other exogenous variables are representative of many donors. That is an empirical question that requires real-world data. For example, with $c = 0.5$, tax break increases are much less effective in stimulating more giving. STEP Click the button, change c to 0.5 and the tax break to 30%, and run Solver. Charitable giving is at $17.33. This makes sense since giving is much higher than it was when $c=0.2$ and $tax\textunderscore break=30\%$. But what is the tax break elasticity of giving? STEP Change the tax break cell to 40% and run Solver. Charitable giving rises to $18.67. Ponder the computation for a moment. There are a lot of numbers floating around. How would you compute the tax break elasticity of giving? It is the percentage change in giving divided by the percentage change in the tax break. The numerator is $\frac{18.67-17.33}{17.33} \approx 7.7\%$. The denominator is 33% ($\frac{0.4-0.3}{0.3}$; notice that it doesn’t matter if you use the percents version, $\frac{40\%-30\%}{30\%}$). Thus, the tax break elasticity is $\frac{7.7\%}{33\%} = 0.23$. This result is much less favorable for a policymaker looking to increase charitable giving by manipulating the tax break. For this donor, giving is insensitive to tax break increases. The Theory of Consumer Behavior can explain a wide variety of giving outcomes. Unfortunately, theory alone does not tell us about the magnitude of a particular effect in the real world. By changing c, we see that the tax break elasticity of giving is drastically affected, ranging from extremely elastic (3) to quite inelastic (0.23). We must gather data and employ econometric techniques to estimate the responsiveness of giving as the tax break changes in the real world. Theory does, however, give us a framework for analyzing the problem. The Economic Approach Is Widely Applicable Charitable giving can be viewed through the lens of an Endowment Model using the Theory of Consumer Behavior. The initial endowment is the consumption of the donor and the beneficiary. The donor can choose to give part, all, or none of her endowment to the beneficiary. The amount she gives is determined by the point that maximizes her satisfaction subject to the budget constraint. We can stimulate giving by lowering the price of giving. This rotates the budget line and yields a new optimal solution. The amount of the increase in giving is an empirical question that cannot be answered by theory alone. If we view giving as the solution to an optimization problem, we are doing an economic analysis of giving. “An economic analysis” is a phrase often used to communicate that the behavior under consideration will be cast in the framework of optimization and comparative statics. Many people think economics is about stocks, business, and money. This content-based definition of economics is too limited. Economics is a method of analysis and it can be applied to such “non-economic” issues as charity and many, many other areas. Seeing charitable giving through the lens of economics does not mean that this is the only way to study charity. The hope is that it provides insight and furthers understanding of what is surely a multifaceted, complex process. Exercises 1. The total change in charitable giving can be explained via the income and substitution effects for giving. For $c = 0.5$, compute the income and substitution effects when the tax break changes from 30% to 40%. Describe your procedure. 2.
Use Word’s Drawing Tools to draw a rough sketch of the income and substitution effects for giving, labeling points A, B, and C and using arrows to show the income, substitution, and total effects. Do not include the indifference curves to reduce clutter. 3. Income and substitution effects were originally used to explain Giffen goods. If the tax break increase leads to a decrease in charitable giving, is this Giffen behavior? Why or why not?
textbooks/socialsci/Economics/Intermediate_Microeconomics_with_Excel_(Barreto)/05%3A_Endowment_Models/5.03%3A_An_Economic_Analysis_of_Charity.txt
Why do people buy insurance? If you are an economist, the answer is easy: because it makes them better off. According to economists, people solve an optimization problem and it turns out that those who buy insurance end up with greater satisfaction, on a higher indifference curve, than if they did not buy insurance. We will use an Endowment Model to explain how and why insurance is an optimal choice. We will see yet another application of how to solve a constrained utility maximization problem and perform comparative statics analyses. But the really deep lesson is that the Theory of Consumer Behavior is amazingly flexible and can answer questions from a wide range of problems. In this chapter, we have explored why people save and borrow, give to charity, and, now, buy insurance. First, we will set up the problem with the usual constraint, indifference curves, and initial optimal solution (with the MRS equal to the slope condition). The presence of risk, a probability that an event occurs, throws a curveball into the analysis, but we will convert things into our usual framework. Second, we will do comparative statics. For example, we derive a demand curve for insurance. We can explore the effects of a higher premium, the price of insurance, on the quantity of insurance demanded. We are on the lookout for the premium elasticity of insurance. An Endowment Model of Insurance There are three parts to every optimization problem. In this case, we have the following: 1. Goal: maximize satisfaction (as represented by the utility function). 2. Endogenous variables: consumption in two states of nature, good and bad; by choosing the amount of insurance, we control two choice variables at once. 3. Exogenous variables: initial assets, potential loss, probability of loss, insurance premium, and preferences over the states of nature. As usual, we start with the constraint, then turn to preferences, and finally use the constraint and utility function to find the initial solution. STEP Open the Excel workbook Insurance.xls and read the Intro sheet, then go to the Constraint sheet. The idea is that you have an asset, say your car or house, which may suffer a given amount of damage from an accident, called the PotentialLoss, with a known probability, $\pi$ (the Greek letter, pi), that the damage occurs. Initially, the PotentialLoss is $10,000, which is only a fraction of the value of the house. You can buy K dollars of insurance, called the InsuredAmount, by paying a price (called a premium) of $\gamma$ (the Greek letter, gamma) per $100 of insurance coverage. On opening, you are not buying any insurance. If you buy insurance, then if the accident occurs, you get reimbursed for the loss. You can buy insurance in $100 increments, up to the PotentialLoss, in which case you would be fully insured. The trade-off is that you have to pay for insurance up front, before you know if the accident will happen or not. After you decide how much insurance to buy, there are two possible outcomes, known as states of nature: the bad and good outcomes. STEP Click on cell B18 to see the formula for your consumption in the bad outcome. The ConsumptionBad outcome means the accident actually occurred, leaving your consumption as $InitialAssets - PotentialLoss + K - \gamma K$. You subtract the loss that occurred and the amount you paid for insurance ($\gamma K$), but you add the amount K that the insurance company pays you because you suffered the accident. You could be fully covered, but you do not have to be.
You decide how much insurance to buy. Your consumption in the good state of nature is simply $InitialAssets - \gamma K$. You do not suffer the accident, but you still have to pay for the insurance. STEP Click on cell B19 to see the formula for the good outcome. Cells B23:B25 display in which state of nature you end up. Cell B23 has the formula "=RAND()." This draws a number from a uniform distribution on the interval [0,1]. STEP Hit the F9 key on your keyboard repeatedly to understand how Excel’s RAND() function works. Each time you hit the F9 key, Excel draws a random number from 0 to 1 in cell B23. The number drawn is never smaller than zero or bigger than one. Cell B24 converts the random draw in the cell above it into a zero or a one: zero means the accident did not happen (good outcome) and one means it did (bad outcome). It uses an IF statement to display a "1" (the accident happened) when the random draw is less than 0.01 (the value of $\pi$ in cell B8). It is hard to see that anything is really happening in cell B24 because the probability of the accident occurring is so small. STEP Change $\pi$ to 50%, then hit F9 a few times. You should be able to see cell B24 flip from 0 to 1 and back again as the random draw is less than 0.5 and greater than 0.5. Notice that the FinalAssets variable, cell B25, depends on whether or not the accident occurred. Next, let’s buy some insurance to see what that does to the spreadsheet. STEP Click the button and set cell B13 to $1000. This will cost you $10. Notice the values for the good and bad states of nature. You have altered both. If the accident occurs, your consumption is $25,990, which is $990 better than the $25,000 for the bad outcome when you did not buy insurance. Of course, consumption in the good outcome is $10 lower (at $34,990) because you have to pay for the insurance even when the accident does not occur. STEP Click the button. Click OK to the "4" points default option and read each text box as it appears. At the end, the budget line is displayed (see Figure 5.16). From the initial endowment ($C_b, C_g$ without insurance), you can move down the budget line by buying insurance. You lower your consumption in the good state of nature ($C_g$ is on the y axis), but raise it in the bad state of nature ($C_b$ is on the x axis). The terms of trade (the slope of the budget line) are determined by gamma (the insurance premium). The slope of the budget line is $-\frac{\gamma}{1-\gamma}$, which with $\gamma = 0.01$ is $-\frac{1}{99} = -0.0101...$ (the "01" keeps repeating forever). The graph rounds the slope to five decimal places. Changes in initial assets or potential loss shift the budget constraint. We are interested, however, in deriving a demand curve for insurance so we will shock the insurance premium (the price of insurance). This will pivot or rotate the budget line. STEP Change the insurance premium to $1.20 per $100 of insurance coverage. You see the familiar swinging in (clockwise rotation) from a $p_1$ increase. A buyer of insurance would be disappointed in this shock because her consumption possibilities are diminished. Now that we understand the constraint, we turn to the agent’s tastes. We model utility as preferences over the two states of nature. The fact that there is risk involved in which state of nature occurs complicates things. Instead of having utility simply depend on the amount of consumption in the good and bad outcomes, we include the agent’s expectations about the chances of each outcome occurring.
Fortunately, our usual Cobb-Douglas functional form can incorporate this new information. We use the exponents in the Cobb-Douglas functional form to represent the agent’s beliefs about the probability of the accident occurring. There are two simplifying assumptions. The first is that the agent accurately gauges the probability of loss, which means we can use $\pi$ as the exponent in the utility function. The second assumption uses the fact that there are only two mutually exclusive outcomes, so the bad outcome occurs with probability $\pi$ and the good outcome has likelihood $1-\pi$. The possibility of a partial loss is assumed away. The utility function is then $U=C_b^\pi C_g^{1-\pi}$ The idea behind the utility function is simple: The higher the probability of loss, the more the agent will care about the bad outcome. In terms of the indifference map, the higher $\pi$, the steeper the indifference curves. This means the agent cares more about consumption in the bad state of nature as risk rises. Unlike the Standard Model, where the exponents in the Cobb-Douglas utility function can be used to represent changes in preferences, changes in the exponents do not indicate a change in preferences for the utility function with risk. To get a change in preferences, we need an entirely different utility function. It is beyond the scope of this book, but there is a great deal of research on choosing with random outcomes. The field of behavioral economics was born with the discovery of paradoxes, violations of transitivity and other inconsistencies, when people made choices involving randomness. Our Cobb-Douglas utility function can be written as an expected utility function by simply taking the natural log: $\ln U=\pi \ln C_b + (1-\pi)\ln C_g$ This function reflects risk-averse preferences. It is a starting point for modeling attitudes and feelings toward risk and randomness. STEP Proceed to the Preferences sheet to see an implementation of the Cobb-Douglas utility function. The sheet tries to give a new way of understanding how constrained utility maximization works. It shows consumption in the bad and good states of nature, $25,000 and $35,000, respectively, without insurance. This is the initial endowment point. With $\pi = 1\%$, we can compute the level of utility for the initial endowment combination of consumption in the bad and good states of nature. This is shown in cells D13 and E13. We can also compute the MRS at the initial endowment, displayed in cells G13 and H13. The Dead and Live utility and MRS are the same because we are at the initial endowment. The Dead cells are numbers. They will not change when we change the cells in column B. The Live cells contain formulas. They will update when you change the values of $C_b$, $C_g$, and $\pi$. STEP Ponder and answer the question in cell A6. Click on the when you are ready. Do the same for B10. The Live utility and MRS cells change when you change cells B13 and B14. As you moved down from the initial endowment, utility rose and the MRS fell. It got closer to the slope of the budget line, which means we are closer to the optimal solution. We are ready to find the initial optimal solution. STEP Proceed to the OptimalChoice sheet. The OptimalChoice sheet reproduces the Constraint sheet, but it adds the indifference map to the chart and displays the slope of the budget line and the MRS at the bottom of the chart. It also displays the utility in cell B20 from the chosen consumption in the two states of nature.
It is really hard to see what is happening with the indifference curve at the initial endowment and the slope of the budget line.

STEP Zoom in: double-click the y axis and make the minimum bound 34800 and the maximum bound 35200.

You can now see clearly that when MRS $>$ slope of the budget line, the budget line cuts the indifference curve. By moving down the budget line, you can reach higher levels of satisfaction.

STEP Enter 5000 in cell B13 to see where the agent stands when buying $5000 of insurance.

The chart shows movement down the budget line to a higher level of utility. We are closer to the optimal solution, but not there yet because MRS is not equal to the slope of the budget line.

STEP Run Solver to find the optimal solution.

The Solver dialog box is notable for the fact that there are no constraints. The way we implemented the problem in Excel enabled us to directly maximize the utility cell by choosing a single variable, the amount of insurance purchased. We can still use, however, the canonical Theory of Consumer Behavior graph to show the result. At the optimal solution, the consumer decides to buy $10,000 of insurance. In the bad state, if the accident occurs, the agent is fully covered, so is consumption $35,000? No, because the agent has to pay $100 for the insurance, so consumption would be $34,900 in the bad state. In the good state, where there is no accident, consumption is also $34,900. This is surprising. Insurance has removed the effect of risk. Consumption is the same in both states. This is an extreme example of diversification. Diversification is a strategy to lower risk by spreading your wealth over different states of nature. By moving $100 from the good state of nature (buying insurance), the agent has a guaranteed level of utility regardless of whether the accident happens. Without insurance, the expected return is $34,900 since 99% × $35,000 + 1% × $25,000 = $34,900. But the agent has to put up with the risk of every 1 in 100 times getting $25,000. By diversifying, the expected return is the same, $34,900, with absolutely no risk. Such a perfect result (the complete elimination of risk) relies on the fact that the two states of nature are perfectly correlated. In the real world, when states of nature are not perfectly correlated (such as the stock market), diversification can lower risk while maintaining the same expected return, but it cannot completely eliminate it. We know that people buy insurance because it increases satisfaction. This application models choosing the amount of insurance that maximizes utility subject to the budget constraint. Next, we use the model to derive a demand curve for insurance.

Comparative Statics

The procedure is straightforward: we vary the insurance premium (the price of insurance), $\gamma$, ceteris paribus, and track the optimal amount of insurance purchased (K) to derive a demand curve for insurance. We use numerical methods and leave the analytical work for the exercises.

STEP In the OptimalChoice sheet, change $\gamma$ to $1.30 per $100 of insurance. What happens?

The budget line (displayed in red on your screen) gets steeper. The agent needs to re-optimize.

STEP Run Solver to find the new optimal solution.

If you did not zoom in on the y axis as instructed earlier, it is hard to see on the chart, but the cells below the chart confirm that the MRS equals the slope of the budget line when the agent buys $1847 of insurance.
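Both optima can be reproduced with a short optimization outside of Excel. This is a sketch, not the workbook's Solver: it assumes the text's parameter values and, as the sheet does, searches over K directly.

from scipy.optimize import minimize_scalar

initial_assets, potential_loss, pi = 35_000, 10_000, 0.01

def k_star(gamma):
    # maximize U = C_b**pi * C_g**(1 - pi) by choosing K alone
    def neg_utility(K):
        c_b = initial_assets - potential_loss + K - gamma * K
        c_g = initial_assets - gamma * K
        return -(c_b ** pi * c_g ** (1 - pi))
    res = minimize_scalar(neg_utility, bounds=(0, 20_000), method="bounded")
    return res.x

print(k_star(0.010))   # about 10,000: full coverage at the fair premium
print(k_star(0.013))   # about 1,847: far less coverage at $1.30 per $100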
We can conclude that demand for insurance is downward sloping: when the premium rises from $1.00 to $1.30, the amount of insurance purchased falls from $10,000 to $1847. That is extremely responsive.

STEP Compute the price elasticity of demand. Proceed to the CSgamma sheet to check your answer.

Notice that Excel tries to help when you enter the formula by formatting the result as dollars. This is incorrect. Elasticity is unitless. The CSgamma sheet shows that the CSWiz add-in was used to explore the effect of the insurance premium on the amount of insurance purchased. Gamma was incremented by 0.1 (10 cents) with 10 shocks. Optimal K, $\gamma K$, $C_b$, and $C_g$ were tracked as $\gamma$ changed. The sheet includes a chart of $K \mbox{*} = f(\gamma)$, the demand curve for insurance. Notice the curious behavior of the model as $\gamma$ rises: at $1.40, optimal K becomes negative. This is an Endowment Model. When premium prices get high enough, the agent switches from buying insurance to selling insurance! If this option is not allowed, you can impose the constraint in Excel that K be greater than or equal to zero. Then, with high premiums, the consumer is at a corner solution and buys no insurance.

Modeling Insurance via the Endowment Model

Insurance is another application of an Endowment Model in the Theory of Consumer Behavior. The usual ideas were applied: the budget constraint, preferences, and MRS equals slope of budget line at the optimal solution. In addition, the usual recipe of the economic approach, finding the initial optimum and then comparative statics, was followed. But this application does have its own twists and novelties. We used a Cobb-Douglas functional form to model satisfaction where the exponents reflect the probabilities of the states of nature. We also used Excel's Solver without a budget constraint because of the way we implemented the problem in Excel. To be clear, this problem can be solved via the Lagrangean method (see the first exercise question) and we could have implemented a "max U subject to a constraint" model in Excel. We would get, of course, the same answer.

Exercises

1. Use analytical methods to derive a general reduced form solution for $K \mbox{*}$. Show your work. Although you can use the Lagrangean method, it is easier to maximize the utility directly, substituting in the values for each state of nature. $\max\limits_{K}U=C_b^{\pi} C_g^{1-\pi }$ The key is that consumption in the good and bad states of nature depends on K: $C_b = InitialAssets - PotentialLoss + K - \gamma K$ $C_g = InitialAssets - \gamma K$ We can simply substitute these equations into the utility function and maximize this: $\max\limits_{K}U=[InitialAssets - PotentialLoss + K - \gamma K]^\pi [InitialAssets - \gamma K]^{1-\pi}$
2. Compare the analytical versus numerical approaches by evaluating your answer to question 1 at the initial parameter values in the OptimalChoice sheet. (Click the button if needed.) Do you find that $K \mbox{*} = \$10,000$?
3. Use your reduced form for K* to find the probability of loss elasticity of insurance demand at $\pi$ = 1%. Show your work. If you cannot find the reduced form, use
4. Use the Comparative Statics Wizard to find the probability of loss elasticity of insurance demand from $\pi = 1\%$ to 1.1%. Take a picture of your results, including the elasticity calculation.
5. Compare your answers in question 3 and 4. Do these elasticities differ? Why or why not?
textbooks/socialsci/Economics/Intermediate_Microeconomics_with_Excel_(Barreto)/05%3A_Endowment_Models/5.04%3A_An_Economic_Analysis_of_Insurance.txt
In finance, a portfolio means the total holdings of stocks, bonds, and other securities of an individual (or other entity, such as a trust or foundation). Because the investor can decide which securities to include in her portfolio, in other words, because choices are made, we can apply the method of economics. Optimal Portfolio Theory is the name given to the application of the Theory of Consumer Behavior to analyze decisions about which assets to hold. An important stop on our journey is shown in Figure 6.1, the initial solution to the constrained optimization problem. There are some strange features in Figure 6.1 and you are not expected to understand it right away. Perhaps the weirdest thing is that the budget constraint and indifference curves are upward sloping. Because risk (on the x axis) is a bad (not a good), the agent substitutes more of the bad for more of the good (return, on the y axis) on an indifference curve. There are also, however, elements that are familiar and comfortable in Figure 6.1. There are exogenous (green) and endogenous (blue) variables with a goal. There is a constraint and a few curves with a tangency highlighted that is obviously the optimal solution. And we can see the usual MRS = slope condition below the chart. Of course, Figure 6.1 is just the initial optimal solution. There is more to do than simply finding the initial solution. That is why Figure 6.1 is an important stop on our journey, but there is more traveling to do. We want to explore how the optimal solution changes as one of the exogenous variables changes, ceteris paribus. This is called comparative statics analysis. The procedure that defines the Theory of Consumer Behavior is clear: constraint, preferences, find initial solution, then comparative statics to make statements about how a shock variable affects an optimal choice variable. We will do an elasticity computation and interpretation of the shock. The short way of saying all of this is to just say that we are going to do an economic analysis of portfolio choice. But since we will be talking about returns from assets, volatility, and the stock market, let's look at some data to make sure we understand some basic facts.

Stock Market Returns

STEP Open the Excel workbook RiskReturn.xls and read the Intro sheet, then go to the Data sheet.

The sheet has returns from the S&P 500 index, a group of 500 large companies, downloaded from www.moneychimp.com/features/market_cagr.htm. These data are used to show that returns are quite volatile. The sheet also explains the difference between the arithmetic and geometric mean.

STEP Read the explanation in the Data sheet, scroll down to see the data (all the way down to 1871), and then click the button. This reveals more material. Keep reading and clicking the buttons until you get to the end and then be sure to click the button.

Of utmost importance is that you understand the volatility in the S&P 500 returns. They swing wildly and unexpectedly, from incredible spurts of 50% to staggering losses of almost negative 50%.

STEP Look in columns A and B of the Data sheet at the 1930s, during the Great Depression. Scroll slowly back up, looking at the data.

The volatility in the stock market, measured by the standard deviation, SD, of almost 20%, is unwelcome and unsatisfying. The fear of financial disaster and the risk of losing money lowers utility. Then why do people put their money in assets like the S&P 500? Because the overall annual return is high, much higher than safer, less volatile assets.
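The arithmetic versus geometric mean distinction in the Data sheet matters most when returns are volatile. Here is a minimal Python sketch with two made-up annual returns, +50% followed by $-50\%$, chosen only to make the gap obvious.

returns = [0.50, -0.50]                 # hypothetical annual returns

arithmetic = sum(returns) / len(returns)          # 0.0, i.e., 0% per year

growth = 1.0
for r in returns:
    growth *= 1 + r          # $1 grows to $1.50, then shrinks to $0.75
geometric = growth ** (1 / len(returns)) - 1      # about -13.4% per year

print(arithmetic, geometric)

The arithmetic mean says you broke even; the geometric mean correctly reports that a quarter of your wealth is gone, which is why overall annual returns are measured with the geometric mean.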
For the S&P 500, the overall annual return (as you now know, measured by the geometric mean, GM, or compound annual growth rate, CAGR) is about 9% per year. The stock market's 9% annual return is much higher than that available from a safe, stable asset that produces consistent annual returns like US Treasury Bills. Cell H10 in the More sheet shows that the SD is a mere three percentage points. The variability arises because the yield changes over time, but once you buy a US Treasury note for a particular length of time, you can be quite sure that you will be paid. But right below the SD we see that the overall annual return is one-third of the stock market's return. The key point is that financial markets offer the investor a menu of options, from low risk, low return to high risk, high return, and the investor chooses. All we need to do is model that choice as an optimization problem.

Optimal Portfolio Theory

The Compare, Mix, and Constraint sheets in RiskReturn.xls demonstrate that an investor can mix two assets, a risk-free and a risky asset, to create a portfolio that has a particular combination of risk and return. The investor is not free to pick any combination of risk and return. They must stay within the constraint imposed by the market. The idea is that you have a fixed amount of money, say $10,000, to allocate across two assets. The risk-free asset, say a US Treasury Bill, has a certain (practically speaking) rate of return, say 5% per year, which is unrealistically high for the current climate. Thus, you are sure to get 5% of $10,000, or $500, along with your initial investment of $10,000 at the end of the year. Each year, a $10,000 investment is guaranteed to produce $500 of return. The risky asset, say a mutual fund of stocks, has a greater return, but also volatility in the actual realized return. We will suppose that the actual return will be drawn from a normal distribution centered on 12%, with a spread of 20%. Both of these values are a little higher than the historical experience of the S&P 500 (in the Data sheet). Our parameter values mean that the typical realized value in our hypothetical world will be around 12% $\pm$ 20 percentage points. It also means you will actually lose money (suffering a negative return) about a quarter of the time. But this is way too abstract. To understand the meaning of these parameters, let's work on a concrete problem with actual numbers and a clear display of what is going on.

STEP Go to the Compare sheet.

The bell-shaped curve is the normal distribution from which each year's return will be drawn. The center and spread are controlled in cells A2 and C2. The sheet allows you to run the two investments against each other and shows how volatility impacts the annual returns.

STEP Click the button.

For the risk-free asset, cells I3 and L3 show 5% and $500. In other words, if you place $10,000 in the risk-free asset, these are the returns on that investment. The risky asset is different. Cells J4 and M4 show a number that is taken from the normal distribution on the left of your screen, centered on 12 with an SD of 20. Thus, the number in J4 is likely to be around 12, but could easily be in the range $-8$ to 32 ($\pm 1$ SD from the average) and roughly 95% of the time will be between $-28$ and 52 ($\pm 2$ SDs from 12).

STEP Click the button a few times.

You can clearly see what is happening here. The return from the risk-free asset is always the same, but the risky asset bounces around.
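Repeated clicks of the button are repeated draws from the normal distribution, and that is easy to mimic. A sketch assuming the text's parameters (risk-free 5%; risky returns centered on 12 with an SD of 20):

import numpy as np

rng = np.random.default_rng(0)    # seeded so the sketch is reproducible
years = 1_000

risk_free = np.full(years, 5.0)                       # always 5%
risky = rng.normal(loc=12.0, scale=20.0, size=years)  # bounces around

print(risky.mean())          # close to 12 over the long haul
print((risky < 0).mean())    # money is lost in roughly a quarter of years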
Once you have more than one year of returns, the display shows more information in columns P:S. You can see the arithmetic mean of the returns, SD, the exact geometric mean, and its approximation.

STEP Click the button many times, at least 20.

Notice what is happening to the average of the returns of the risky asset as you keep adding years: the average return is converging to 12% (the average return from the normal distribution in A2). In other words, over the long haul, the risky asset will outperform the risk-free asset. However, in any one year, the risky asset can do pretty badly. Look at your screen to confirm that this is true. You will see some whopper losses (and gains), just like the real-world S&P 500 data.

STEP Click the button and set the dispersion to 6% (in C2). Repeatedly (many times) click the button.

The SD of the normal distribution controls the variability. The lower SD makes the normal distribution much more spiked. In other words, the draws from the distribution are much more concentrated at the average and it is much less likely that you will see values far from the center of the distribution. As you get one yearly return after another (keep drawing more returns), it is easy to see that the returns are much closer to 12%. You will rarely lose money with an average of 12% and an SD of 6%. In finance, risk is denoted by the Greek letter sigma, $\sigma$. The SD and $\sigma$ are the same thing. Both represent risk as volatility and bounce in returns, including the possibility of negative returns. Risk is bad and undesirable. The lower the risk, the better. What determines the amount of risk in the risky asset? That depends on the asset. We have seen that the S&P 500 has a lot of volatility. From 1871 to 2019, it has experienced an overall annual return of about 9% with an SD of 18%. The More sheet showed that other assets have different volatility. So, the investor is given the average and SD parameters of various assets and chooses what to invest in. Although we ran risk-free and risky assets in the Compare sheet, in fact, the choice is not simply between a risk-free and a risky asset. You can combine the two in varying proportions. For example, you could split your investment and put $5000 in the risk-free and $5000 in the risky asset. In this case, your return would be halfway between the risk-free and risky assets: $\frac{r_f + r_m}{2} = 8.5\%$ Although the return is lower than using the risky asset alone, your risk, the variability in returns, would be cut in half also.

STEP Proceed to the Mix sheet to see this idea in action.

The Mix sheet is the same as the Compare sheet, except it has a scroll bar in H1 to control the allocation of your $10,000 across the two assets.

STEP After you set the scroll bar value (any value will do; pick the one you think makes the most sense for you), click the button many times.

You should be able to see that the average return for your mix (or portfolio) converges on a return that is in between the risk-free and risky assets. In other words, you can choose the return and risk that you get. You must, however, trade them off: more return requires accepting more risk.

STEP Experiment. Use the button to try different mixes and parameter values (yellow-backgrounded cells A2, C2, and F2). You can copy the Mix sheet (right-click the sheet tab, select Move or Copy, and check Create a Copy) if you want to compare different scenarios. The more you experiment, the more you learn.
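The halfway claim is easy to verify. A sketch assuming the parameters used throughout ($r_f = 5\%$, $r_m = 12\%$, $\sigma = 20\%$) and a 50/50 split:

r_f, r_m, sigma = 5.0, 12.0, 20.0
mix = 0.5                     # share of the $10,000 in the risky asset

portfolio_return = (1 - mix) * r_f + mix * r_m    # 8.5%
portfolio_sd = mix * sigma    # 10%: the risk-free asset contributes no SD

print(portfolio_return, portfolio_sd)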
Your work in the Compare and Mix sheets makes understanding the constraint much easier because you have seen that there are two assets that can be mixed to form a portfolio with a continuous range of risk and return possibilities. This constitutes the constraint for the investor. He or she is free to choose combinations of risk and return, trading higher risk for greater return.

STEP Proceed to the Constraint sheet.

There are two endogenous variables, YourRisk and YourReturn, in cells B14 and B15. These are the risk and return you have chosen, in other words, a single point on the budget line. However, we can create a single variable, YourMix (just like in the Mix sheet), that controls the proportion of your investment in the two assets and the values of risk and return you select. Clearly, you can mix the risk-free and risky assets in any combination from 0 to 100%. Zero means you buy just the risk-free asset and 100% means you buy only the stock market. Do not confuse the exogenous variable Market Risk with the endogenous variable YourRisk. The riskiness of the risky asset, sigma, is exogenous to the agent. But the agent determines how much risk to take and, therefore, the chosen amount of risk is endogenous.

STEP Change B13 to 20%, 50%, and 90%.

As you change B13, the red dot moves on the constraint. You can put the red dot wherever you like along the line. At 50%, you are setting YourRisk to 10% (this is the variability in the 50/50 portfolio) and YourReturn to 8.5% (halfway between $r_f$ and $r_m$). The equation of the budget line (derived in the Constraint sheet) is $YourReturn = r_f + \frac{r_m - r_f}{\sigma}YourRisk$ Clearly, if you choose a risk of zero, then your return is the risk-free return. This is the y intercept. As you accept more risk, your return grows with a slope given by $\frac{r_m - r_f}{\sigma}$ Notice that combinations under the budget constraint are feasible, but will not be selected because more return can always be obtained at the same risk by going straight up. Points to the northwest of the line are more desirable, but are unattainable. Which mix is the best, the optimal choice? We cannot answer this question with the constraint alone. It tells us only the choices we can make. To answer the question, we need to model preferences. But before we leave the constraint, let's explore the effect of a change in sigma, Market Risk. This will be our shock variable when we do comparative statics analysis. Remember when you lowered the SD to 6% and that made the variability in the risky asset go way down? That was a welcome shock. What would happen to the constraint if we applied that shock? Before we do it, ponder the question. Do you have an answer? Let's see how you did.

STEP Change Market Risk, cell B10, to 6.

The budget line rotates up (counterclockwise) around the y intercept. This gives the investor access to higher returns with the same risk or the same return with less risk. Mathematically, it also makes sense since we lowered the denominator in the slope, so the slope term increased, making the line steeper.

STEP Proceed to the Preferences sheet to see how we handle risk as a bad.

Our usual Cobb-Douglas functional form can be modified to reflect a bad with a simple tweak: $U(YourRisk,YourReturn)=(30-YourRisk)^aYourReturn^b$ The clever trick here is subtracting a variable from a constant, which has been chosen to be bigger than the possible values of the variable.
By having a constant, 30, which is a bigger number than the relevant range for Risk (from zero to 20), as we increase the chosen amount of YourRisk, $30 - YourRisk$ falls. This gives us a bad because utility falls as YourRisk rises (for $YourRisk < 30$). YourReturn is a good: as YourReturn rises, so does utility. The chart shows three representative, upward sloping indifference curves. The investor gets equal satisfaction from the combinations of risk and return on a single indifference curve. If the investor takes on more risk, she must be given more return to compensate.

STEP The agent is free to choose any combination of risk and return that is on the budget line. Change B12 to 50.

Figure 6.2 shows the result. In addition to the three original indifference curves with a black dot, three new curves are displayed along with a red dot. The black dot is the initial 75% mix choice and it produced Dead Utility of 153.75 and a Dead MRS of about 0.6833. The red dot is live in the sense that it depends on the value of B10. The chart displays the indifference curve that goes through the mix value in B10, along with an indifference curve above it and another below it. A mix of 50% risky is better than 75% for this investor because utility went up. The red dot is on a higher indifference curve. Notice also that the MRS fell, getting closer to the slope of the budget line. That means the investor is getting closer to the optimal solution.

STEP Change B12 to 90.

Now the reverse is true. The red dot is on a lower indifference curve and the MRS is farther away from the slope.

STEP Change the exponent on YourReturn in B19 to 4 and click the button.

The indifference curves are now much flatter. What does this mean?

STEP Change B12 to 50 and 90.

We are getting different results than before. What is going on? If $b > a$, the investor cares more about return than risk. The flat indifference curves (with low MRS) mean that they are willing to accept a lot of risk for a little more return. These preferences mean that this investor will find an optimal solution with a high risk, high return combination.

STEP Change B19 to 0.4 and click the button. Explore the satisfaction produced by mixes of 50% and 90%. What do you learn?

With a low b (lower than a), this investor is more concerned with risk. They are conservative and their optimal solution will lie at a low mix value. In fact, these preferences produce a corner solution, with the investor putting all $10,000 into the risk-free asset. Preferences are not right or wrong. If you are young and saving for retirement, it makes sense that $a < b$, but even then, if a person does not like risk, that is not a defect. An aggressive investor is not in any sense better than a conservative investor. Some people like risk and others do not in the same way that some people like broccoli or the color blue and others do not. Preferences are not set in stone. They can be affected by the environment. A short time horizon, such as needing funds for college in a year, will rotate the indifference map, reflecting an investor who is more conservative. Likewise, retired people typically become more conservative and less willing to accept risk. With the constraint and preferences modeled, we are ready to find the optimal solution.

STEP Proceed to the OptimalChoice sheet to see the numerical method in action.

The OptimalChoice sheet opens with an inefficient solution. The MRS is greater than the slope of the budget line so the indifference curve cuts the line.
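The Dead and Live numbers quoted above can be reproduced with a few lines of Python. This sketch assumes $a = b = 1$ (which matches the 153.75 utility and 0.6833 MRS reported for the 75% mix) and the constraint parameters $r_f = 5$, $r_m = 12$, and $\sigma = 20$:

r_f, r_m, sigma = 5.0, 12.0, 20.0
a, b = 1.0, 1.0
slope = (r_m - r_f) / sigma              # 0.35

for mix in (0.75, 0.50, 0.90):
    x1 = mix * sigma                     # YourRisk
    x2 = r_f + slope * x1                # YourReturn on the budget line
    U = (30 - x1) ** a * x2 ** b
    MRS = (a * x2) / (b * (30 - x1))     # ratio of marginal utilities
    print(mix, round(U, 2), round(MRS, 4))

At the 75% mix, the MRS (about 0.6833) exceeds the 0.35 slope, the kind of inefficiency just described.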
The agent should move down the line, accepting less return for less risk. This increases satisfaction. But how far down to travel?

STEP Run Solver to find the answer to this question.

At the optimal solution, the MRS equals the slope of the budget line and the agent is on the highest attainable indifference curve. For this agent (with these attitudes toward risk and return) and the given market trade-off between risk and return (captured by the equation of the budget constraint), the optimal solution is found with a mix of about 39% of funds invested in the risky asset. Thus, the optimal risk to accept is $7 \frac{6}{7}$ and the optimal return is $7 \frac{3}{4}$. Via analytical methods, we can use this Lagrangean to find optimal YourRisk ($x_1$) and YourReturn ($x_2$): $\max\limits_{x_1,x_2,\lambda} L=(30-x_1)^ax_2^b+\lambda (x_2 - r_f - \frac{r_m-r_f}{\sigma}x_1)$ Try doing this problem and if you get stuck, the solution for a similar problem in the Q&A sheet is in the Answers folder.

Comparative Statics

As usual, there are a number of comparative statics exercises to consider and they can be done via numerical or analytical methods. Let's explore the effect of an increase in sigma, the amount of risk the market forces you to bear in return for better performance.

STEP In the OptimalChoice sheet, increase $\sigma$ from 20 to 25. What happens?

Figure 6.3 and your screen show a new, red budget line that has rotated clockwise and down. The flatter slope is bad for the investor because consumption possibilities have been reduced. The market says that for a given amount of return, you must accept more risk. How will the investor respond to this shock?

STEP Run Solver to find out.

You will see that the agent chooses less risk and less return. What elasticity is under consideration here? There are several. There is the sigma elasticity of YourRisk, the sigma elasticity of YourReturn, and the sigma elasticity of YourMix.

STEP Try your hand at computing the sigma elasticity of YourRisk from $\sigma = 20\%$ to 25%. Check your answer in the CSsigma sheet.

Of course, these elasticities can also be computed at a point, using the derivative. One of the exercises asks you to do exactly that. Because the change in sigma is a change in the slope of the budget line, we can use the Slutsky decomposition approach to break down the total effect into income and substitution effects. This work is left for you as an exercise.

Asset Allocation is an Optimization Problem

Optimal Portfolio Theory is yet another application of the Theory of Consumer Behavior. The twist here is that one of the choices, risk, is a bad. The agent cannot ignore risk. She is forced to accept more risk to secure greater return. The core concepts of the Theory of Consumer Behavior remain easily visible: a budget constraint describing consumption possibilities, preferences translated into an indifference map, maximization of utility given a budget constraint, and MRS equals slope of budget line at the optimal solution. Perhaps most importantly, once we cast the problem as a choice, how to allocate assets among stocks, bonds, and other financial instruments, we are firmly in the land of Economics. This particular optimization problem is different from previous applications in that individuals are keenly interested in getting the optimal solution right. There is often a lot of money at stake and mistakes can prove costly (for example, with a retirement portfolio).
As economists, we remain interested in comparative statics. Changing preferences are an important shock variable in this application. We do not shake our heads at the conservative investor who finds an optimal solution (given conservative preferences) at a low risk, low return point.

Exercises

1. Use the equation that follows to solve for YourRisk* ($x_1$) and YourReturn* ($x_2$) in terms of the exogenous variables. Show your work. $\max\limits_{x_1,x_2,\lambda} L=(30-x_1)^ax_2^b+\lambda (x_2 - r_f - \frac{r_m-r_f}{\sigma}x_1) \nonumber$
2. Use your reduced form solution to find the sigma elasticity of YourRisk at $\sigma = 20\%$ (and the values of the other exogenous variables from the initial position of the OptimalChoice sheet; click the button if needed). Show your work.
3. Use Word's Drawing Tools to draw a well-labeled graph that depicts the total, income, and substitution effects for YourRisk. Make the substitution effect greater than an opposing income effect.
4. Compute the total, income, and substitution effects for YourRisk for the change in sigma from 20% to 25%. Show your work and describe your procedure.
textbooks/socialsci/Economics/Intermediate_Microeconomics_with_Excel_(Barreto)/06%3A_Bads/6.01%3A_Risk_Versus_Return.txt
Cars are much, much safer today than in the past. Everyone knows that seat belts, airbags, and anti-lock brakes have made cars safer. The future holds great promise: guidance and avoidance systems, fly-by-wire technology that will eliminate steering columns, and much more, culminating in self-driving vehicles that communicate with each other. But cars remain dangerous, both to vehicle occupants and others, such as cyclists and pedestrians. The United States uses the Fatal Accident Reporting System (FARS) to gather information about every motor vehicle crash in which someone dies. Such an event requires sending detailed information to FARS. Police record many variables, including time, weather conditions, demographic data, and whether drugs or alcohol were involved.

STEP To see the data, open the Excel workbook SafetyRegulation.xls and read the Intro sheet, then go to the Data sheet.

You can see that 36,650 people died in 2018 in a traffic accident. About half of the fatalities were drivers, almost 5,000 were motorcyclists, and 7,354 were non-motorists. While FARS has data on the total number of deaths back to 1994 (36,254), simply comparing total fatalities over time is not a good way to measure driving safety. Under Other National Statistics, the data show that, year after year, there are many more people driving cars many more miles. So, we need to adjust the total number of fatalities to account for these increases. We need a fatality rate, not the total number of fatalities. By dividing total deaths by the number of miles traveled, we get a measure of fatalities per mile traveled. This results in a tiny number so, to make it easier to read, the fatality rate is reported per 100 million miles traveled. Adjusting with miles traveled is not the only way to create a fatality rate. The Data sheet shows rates based on population, registered vehicles, and licensed drivers. They all tell the same story. Figure 6.4 shows the United States traffic fatality rate. The number of fatalities per 100 million miles traveled has fallen from 1.73 in 1994 to 1.17 in 2017, which is about a 30% decrease during this time period. That is welcome news. Less encouraging in Figure 6.4 is the leveling off since 2009 and the increase from 2014 to 2016. Distracted driving because of phone use and texting are suspected contributors. The data in FARS only track fatalities and, thus, say nothing about nonfatal accidents. It turns out we are doing better here also: injury rates and severity of injury have also declined. So, all is well? Actually, not exactly. Although it may seem greedy, fatalities and injuries should have fallen by a lot more. We are doing better because fatal accident and injury rates have fallen, but we should be doing much, much better. After all, the car you drive today is much, much safer than a car from 20 or 30 years ago. If the vehicle you drive today is much safer than vehicles from 20 or 30 years ago, then fatal accident and injury rates should have fallen more to reflect these improvements. So, what is going on? Economics can help answer this question. We will apply the remarkably flexible Theory of Consumer Behavior to driving a car. Any problem that can be framed as a choice given a set of exogenous variables can be analyzed via the economic approach. There are certainly choices to be made while driving: what route to take, how fast to drive, and what car to drive are three of many choices drivers make. We will focus on a subset of choices that involve how carefully to drive.
Theoretical Intuition

The key article that spawned a great deal of further work in this area was written in 1975 by University of Chicago economist Sam Peltzman. The abstract for "The Effects of Automobile Safety Regulation" (p. 677) says:

Technological studies imply that annual highway deaths would be 20 percent greater without legally mandated installation of various safety devices on automobiles. However, this literature ignores offsetting effects of nonregulatory demand for safety and driver response to the devices. This article indicates that these offsets are virtually complete, so that regulation has not decreased highway deaths. Time-series (but not cross-section) data imply some saving of auto occupants' lives at the expense of more pedestrian deaths and more nonfatal accidents, a pattern consistent with optimal driver response to regulation.

This requires some translation. By technological studies, Peltzman is referring to estimates by engineers that are based on extrapolation. Cars with seat belts, airbags, anti-lock brakes, and so on are assumed to be driven in exactly the same way as cars without these safety features. This will give maximum bang for our safety buck. Economics, however, tells us that we won't get this maximum return on improved safety features because there is a driver response to being in a safer car. By offsetting effects, Peltzman means that the gains from the safety devices are countered, offset, by more aggressive driving. Peltzman's key insight, which separates the economist's approach from the engineer's, is to incorporate driver response. He says on page 681:

The typical driver may thus be thought of as facing a choice, not unlike that between leisure and money income, involving the probability of death from accident and what for convenience I will call "driving intensity." More speed, thrills, etc., can be obtained only by forgoing some safety.

This claim sounds rather outrageous at first. Do I suddenly turn into an Indy 500 race car driver upon hearing that my car has airbags? No, but consider some practical examples in your own life:

• Do you drive differently in the rain or snow than on a clear day?
• Do speed bumps, if you can't swerve around them, lead you to reduce your speed?
• Would you drive faster on a road in Montana with no cars for miles around versus on the Dan Ryan Expressway in Chicago? In which case, Montana or Chicago (presuming you are actually moving on the Dan Ryan), would you pay more attention to the road and your driving?
• If your car had some magic repulsion system that prevented you from hitting another car (we almost have this), would you drive faster and more aggressively?

Economists believe that agents change their behavior to find a new optimal solution when conditions change. In fact, many believe this is the hallmark of economics as a discipline. Many non-economists either do not believe this or are not aware of how this affects us in many different ways. If you do not believe that safer cars lead to more aggressive driving, consider the converse: Do more dangerous cars lead to more careful driving? Here is how Steven Landsburg puts it:

If the seat belts were removed from your car, wouldn't you be more cautious in driving? Carrying this observation to the extreme, Armen Alchian of the University of California at Los Angeles has suggested a way to bring about a major reduction in the accident rate: Require every car to have a spear mounted on the steering wheel, pointing directly at the driver's heart.
Alchian confidently predicts that we would see a lot less tailgating. (Landsburg, p. 5)

The idea at work here is only obvious once you are made aware of it. Consider the tax on cars over $30,000 passed by Congress in 1990. By adding a 10% tax to such luxury cars, staffers computed that the government would earn 10% of the sales revenue (price × quantity) generated by the number of luxury cars sold the year before the tax was imposed. They were sadly mistaken. Why? People bought fewer luxury cars! This is a response to a changed environment. You cannot take for granted that everyone will keep doing the same thing when there is a shock. This idea has far-reaching application. Consider, for example, its relevance to the field of macroeconomics. Robert Lucas won the Nobel Prize in Economics in 1995. His citation reads, "for having developed and applied the hypothesis of rational expectations, and thereby having transformed macroeconomic analysis and deepened our understanding of economic policy." (See www.nobelprize.org/prizes/economic-sciences/1995/press-release/) What exactly did Lucas do to win the Nobel? One key contribution was pointing out that if policy makers fail to take into account how people will respond to a proposed new policy, then the projections of what will happen will be wrong. This is called the Lucas Critique. The Lucas Critique is exactly what is happening in the case of safety features on cars. Economists argue that you should not assume that drivers are going to continue to behave in exactly the same way before and after the advent of automobile safety improvements. What we need is a model of how drivers decide how to drive. The Theory of Consumer Behavior gives us that model. You know what will happen next: we will figure out the constraint. And after that? Preferences. That will be followed by the initial solution and, then, comparative statics. We will find the effect of safer cars on accident risk. This is the economic approach.

The Initial Solution

The driver chooses how intensively to drive, which means how aggressively to drive. Faster starts, not coming to a complete stop, changing lanes, and passing slower cars are all more intensive types of driving, as are searching for a song or talking on your phone while driving. More intensive driving saves time and it is more fun. Driving intensity is a good and more is better. Unfortunately, it isn't free. As you drive more intensively, your chances of having an accident rise. No one wants to crash, damaging property and injuring themselves or others. Your accident risk, the probability that you have an accident, is a function of how you drive. The driver chooses a combination of two variables, Driving Intensity and Accident Risk, that maximizes utility, subject to the constraint. The equation of the constraint ties the two choice variables together in a simple way. $\text{Driving Intensity} = \text{Safety Features} \times \text{Accident Risk}$ Safety Features represents the exogenous variable, safety technology, and provides a relative price at which the driver can trade risk for intensity. On the Initial line in Figure 6.5, the driver is forced to accept a great deal of additional Accident Risk for a little more Driving Intensity because the line is so flat. When cars get safer, the constraint line gets steeper, rotating counterclockwise from the origin, as shown in Figure 6.5. There are two ways to understand the improvement made available by better safety technology.
The horizontal, dashed arrow shows that you can get the same Driving Intensity at a much lower Accident Risk. You can also read the graph vertically. For a given Accident Risk, a safer car gives you a lot more Driving Intensity (follow the vertical, solid arrow). Figure 6.5 shows that safer technology can be interpreted as a decrease in the price of Driving Intensity. It affects the graph just like a decrease in $p_2$ in the Standard Model. The constraint is only half of the story. We need preferences to find out how a driver will decide to maximize satisfaction. We use a Cobb-Douglas functional form to model the driver's preferences for Accident Risk ($x_1$) and Driving Intensity ($x_2$), subtracting Accident Risk from a constant so that increases in $x_1$ lead to less utility. $U(x_1,x_2)=(1-x_1)^cx_2^d$ Risk is measured between zero and 100 percent so $0 \le x_1 \le 1$. As $x_1$ increases in this interval, utility falls. The indifference curves will be upward sloping because $x_1$, Accident Risk, is a bad. We can solve this model via numerical and analytical methods. We begin with Excel's Solver.

STEP Proceed to the OptimalChoice sheet.

The sheet shows the goal, endogenous variables, and exogenous variables. Initially, the driver is at 25%,0.25, which is a point on the budget line (because the constraint cell shows zero). We will use % notation for Accident Risk because it is a probability. The unrealistically high chances of an accident were chosen to maximize visibility on the graph. We use decimal points (such as 0.5) for the driving intensity variable, which we interpret as an index number on a scale from 0 to 1. We know the opening point is feasible, but is it an optimal solution? In previous Excel files, the graph is immediately displayed so you can instantly see if there is a tangency. The missing graph gives you a chance to exercise your analytical powers. Can you create a mental image of the chart even though it is not there? Remember, comparing the slope of the budget line to the MRS at any point tells us what is going on. The slope is simply the Safety Features exogenous variable, which is $+1$. So now the graph looks like Figure 6.5 with a 45 degree line from the origin. But what about the indifference curves? The MRS is minus the ratio of marginal utilities. With $c = d = 1$, we have $MRS=-\dfrac{\dfrac{d U}{d x_{1}}}{\dfrac{d U}{d x_{2}}}=-\dfrac{-x_{2}}{1-x_{1}}=\dfrac{x_{2}}{1-x_{1}}$ We evaluate this expression at the chosen point, 25%,0.25, and get $\dfrac{x_{2}}{1-x_{1}}=\dfrac{[0.25]}{1-[25 \%]}=\dfrac{1}{3}$ We immediately know the driver is not optimizing. In addition, we know he can increase satisfaction by taking more risk and more intensity, traveling up the budget line because the indifference curve is flatter ($\dfrac{1}{3}$) than the budget line ($+1$) at the opening point of 25%,0.25. Do you have a picture in your mind's eye of this situation? Think about it. Remember, the MRS is smaller than the slope so the indifference curve has to be flatter where it cuts the line.

STEP When you are ready (after you have formed the mental picture of the situation), click the button to see what is going on at the 25%,0.25 point.

The canonical graph (with a bad) appears and the cells below the chart show the slope and MRS at the chosen point.

STEP Next, run Excel's Solver to find the optimal solution.

With $c = d = 1$ and a Safety Features value of 1, it is not surprising that the optimal solution is at 50%,0.50. Of course, at this point, the slope = MRS.
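Here is a numerical check, a sketch that assumes $c = d = 1$ and substitutes the constraint into the utility function; the S = 2 case previews the comparative statics experiment coming next.

from scipy.optimize import minimize_scalar

def optimal_risk(S):
    # along the constraint, x2 = S * x1, so maximize (1 - x1) * S * x1
    res = minimize_scalar(lambda x1: -((1 - x1) * S * x1),
                          bounds=(0, 1), method="bounded")
    return res.x

print(optimal_risk(1.0))   # about 0.5: the 50%, 0.50 solution just found
print(optimal_risk(2.0))   # still about 0.5, a hint of what is coming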
To implement the analytical approach, the Lagrangean looks like this: $\max _{x_{1}, x_{2}, \lambda} L=\left(1-x_{1}\right) x_{2}+\lambda\left(x_{2}-S x_{1}\right)$ An exercise asks you to find the reduced form solution.

Comparative Statics

Suppose we get safer cars so the terms of trade between Driving Intensity and Accident Risk improve. What happens to the optimal solution?

STEP Change cell B16 to 2.

How does the engineer view the problem? To her, the driver keeps acting the same way, driving just like before. There will be a great gain in safety with much lower risk of an accident. This is shown by the left-pointing arrow in Figure 6.6. Intensity stays the same and risk falls by a great deal. For the engineer, because Driving Intensity remains constant, if it was 0.5, then improving Safety Features to 2 makes the accident risk fall to 25%. We simply travel horizontally along a given driving intensity to the new constraint. The economist doesn't see it this way at all. She sees Driving Intensity as a choice variable and as the solution to an optimization problem. Change the parameters and you change the optimizing agent's behavior. It is clear from Figure 6.6 that the driver is not optimizing because the slope does not equal the MRS.

STEP With new safety technology rotating the constraint line, we must run Solver to find the new optimal solution.

The result is quite surprising. The Accident Risk has remained exactly the same! What is going on? In Peltzman's language, this is completely offsetting behavior. The optimal response to the safer car is to drive much more aggressively and this has completely offset the gain from the improved safety equipment. How can this be? By decomposing the zero total effect on Accident Risk into its income and substitution effects, we can better understand this curious result. Figure 6.7 shows what is happening. The improved safety features lower the price of driving intensity, so the driver buys more of it. On the y axis, the substitution and income effects work together to increase the driver's speed, lane changes, and other ways to drive more intensively. On the x axis, which measures risk taken while driving, the effects oppose each other, canceling each other out and leaving no gain in accident safety. As driving intensity gets cheaper, the substitution effect (the move from A to B in Figure 6.7) leads the driver to choose more intensity and pay for it with more risk. The income effect leads the driver to buy yet more intensity and (because risk is a normal bad) less risk. The end result, for this utility function, is completely offsetting behavior. Of course, this is not necessarily what we would see in the real world. We do not know how many drivers are represented by these preferences. The income effect for risk could outweigh the substitution effect, leaving point C to the left of A in Figure 6.7. Theory alone cannot answer the question of what we will see in the real world. Empirical work in this area does confirm that offsetting behavior exists, but there is disagreement as to its extent.

An Economic Analysis of Driving

Choices abound when it comes to cars and driving. Should I take the highway or stay on a surface street? Change the oil now or wait a while longer? Pass this slow car or just take it easy and get there a few minutes later? Because there are choices, we can apply economics. This chapter focused on applying the Theory of Consumer Behavior to the choice of how intensively to drive.
The agent is forced to trade off a bad (the risk of having an accident) for getting there faster and greater driving enjoyment. Yes, teenagers make different choices than older drivers and everyone drives differently on a congested, icy road than on a sunny day with no traffic, but our comparative statics question focused on how improved automobile technology impacts the optimal way to drive. Offsetting behavior is an application of the Lucas Critique: do not extrapolate. Instead, we should recognize that agents change their behavior when the environment changes. Theory cannot tell us how much offsetting behavior we will get. Only data and econometric analysis can tell us that. Economists believe that we have not had as great a reduction in automobile fatalities and injuries as our much, much safer cars would enable because drivers have chosen to maximize satisfaction by trading some safety for driving intensity. Offsetting behavior explains why we aren't doing much, much better in traffic fatalities. But do not despair: we are maximizing satisfaction given our new technology.

Exercises

1. Use the equation that follows to solve for $x_1 \mbox{*}$ and $x_2 \mbox{*}$ in terms of S (safety features). Show your work. $\max _{x_{1}, x_{2}, \lambda} L=\left(1-x_{1}\right) x_{2}+\lambda\left(x_{2}-S x_{1}\right) \nonumber$
2. Use your reduced form solution to find the S elasticity of $x_1 \mbox{*}$ at $S=1$. Show your work.
3. If the utility function was such that Driving Intensity was a Giffen good, describe where point C would be located on Figure 6.7.
4. If the utility function was such that Driving Intensity was a Giffen good, would this raise or lower traffic fatalities? Explain.
textbooks/socialsci/Economics/Intermediate_Microeconomics_with_Excel_(Barreto)/06%3A_Bads/6.02%3A_Automobile_Safety_Regulation.txt
We began the Theory of Consumer Behavior with the Standard Model where cash income (m) is given. The Endowment Model replaced given cash income with an initial endowment of two goods so the budget constraint became $p_1x_1 + p_2x_2 = p_1\omega _1 + p_2\omega _2$. We then focused on choices with bads: risky assets and accidents. The application in this section is another example using a bad. As always, our eventual goal is comparative statics and elasticity. In this case, we will derive a supply curve for labor and concentrate on the wage elasticity of labor supply. An innovation in this section is that the accompanying Excel workbook is less finished than usual. This enables you to practice implementing the model in Excel.

Setting Up the Problem

Instead of a mere consumer, the agent in this application is a consumer and worker. Although an initial amount of non-labor income is assumed, total income can be increased by working. More hours at work means more income and greater consumption of goods and services. Consumption is good, but work is bad. Therein lies the problem. Our consumer/worker can buy a single good, G, representing all consumer goods, at price p. Utility increases as she consumes more G. The 24 hours in a day are divided into two types: work and leisure. The number of hours spent working in one day, H, is chosen by the agent. Earned income is simply $wH$, where w is the wage rate in $/hr. Although work generates income, our agent does not like to work. H is a bad in the utility function. With this background, we are ready to organize the information into the three areas that comprise an optimization problem:

1. Goal: maximize utility, which is a function of goods consumed, G, and work, H, where H is a bad.
2. Endogenous variables: G, the amount of goods consumed, and H, the number of hours worked.
3. Exogenous variables: p, the price of the composite good; w, the wage rate; m, unearned, non-labor income; and parameters in the utility function.

The solution to this constrained optimization problem is depicted on a graph with a budget constraint and set of indifference curves. We consider each of these elements separately and then combine them.

Budget Constraint

The budget constraint is $m + wH \ge pG$. This equation says that total income is composed of unearned income (m) and earned income ($wH$). The inequality means that the consumer/worker cannot spend more on goods and services ($pG$) than the total income available. Because no time elapses in this optimization problem, there is no reason for the agent to save (i.e., spend less than available) and we can make the constraint a strict equality, $m + wH = pG$. This allows us to use the Lagrangean method to solve the problem analytically. In terms of a graph, it is easy to see that we can write the constraint as the equation of a line (with G on the y axis and H on the x axis) by dividing by p: $m + wH = pG$ $G = \frac{m}{p}+\frac{wH}{p}$ Suppose w = $10/hr, m = $40, and p = $1/unit. What would the constraint look like?

STEP Open the Excel workbook LaborSupply.xls and read the Intro sheet, then go to the YourConstraint sheet.

Your task is to fill in the G column and create a chart of the constraint. There are three steps.

STEP Click on B12 and enter a formula equal to the equation for G.

The cells w, p, and m are not named so you should use absolute references ($ in front of column letters and row numbers) to enable easy filling down of the formula.
When finished, the formula in B12 should look like this: =$B$4/$B$3 + ($B$2/$B$3)*A12.

STEP The next step is to fill down the formula.

STEP Finally, create a chart with H and G as the source data. Be sure to label the axes of your chart. The chart is based on hour intervals of work, but fractions of hours are possible. Thus, your chart should be a scatter chart with points connected by lines.

STEP Click the button to see a finished version of the budget constraint.

The agent is free to choose any point on the constraint. The y intercept, 40 (equal to $\frac{m}{p}$), yields a small value of consumption, but the agent does not have to work. Movement up the line yields more G, but requires more H. Points to the northwest of the line are unattainable. For example, the consumer/worker cannot afford the 10,200 combination. Working 10 hours adds $100 to the $40 non-labor income. This is not enough to buy $200 worth of goods. What shock would enable our consumer/worker to buy the 10,200 combination? There are three possibilities, one for each exogenous variable in the constraint.

STEP From the Constraint sheet (click the button from the YourConstraint sheet if needed), change the wage to 16 in B2.

The constraint rotates up, counterclockwise, with a steeper slope and the same intercept, and the combination 10,200 is now feasible, which is easily confirmed by looking at the chart and row 22. Changes in wages, ceteris paribus, rotate the constraint around the unearned income intercept.

STEP Return the wage to 10 in B2 (the constraint returns to its initial position when you hit the Enter key) and set p (in B3) to 0.7.

Instead of raising the wage, we have made the composite good cheaper. As with a wage increase, this is welcome news since there are more consumption possibilities. The constraint appears to simply rotate up again, but look more carefully at the chart and underlying data. The slope is steeper, but the intercept has also changed. The $40 of unearned income now buys a little more than 57 units of G. As before, it is easy to see that the combination 10,200 is now feasible. Changes in price (p), ceteris paribus, rotate and shift the constraint.

STEP Return the price to 1 in B3 (the constraint returns to its initial position when you hit the Enter key) and set m (in B4) to 100.

This time, the constraint shifts vertically up. With $100 of unearned and $100 of earned income (from working 10 hours), the combination 10,200 is now feasible. Changes in unearned income (m), ceteris paribus, shift the constraint. Changes in w, p, and m affect the constraint. The initially unattainable combination of 10,200 can be made feasible by appropriately changing any one of these three exogenous variables.

Preferences

In previous applications with bads, we used a Cobb-Douglas utility function and subtracted the bad from a constant. The same approach is adopted here. Because the time period under consideration is a day, which has 24 hours, preferences can be represented by $U(H, G) = (24 - H)^cG^d$. With $H = 0$, the agent gets the maximum value from the first term of the utility function, but remember that earned income will then be low and, therefore, G will be small. Like the budget constraint, we need a visual representation of the utility function.

STEP Proceed to the YourIndiffCurve sheet to implement the utility function in Excel.

The sheet is unfinished. You need to fill in column B and draw a graph of the indifference curve.
The indifference curve is initially based on $c = d = 1$ and a level of utility of 1960. To fill in column B, you need to solve for the value of G that yields a utility level of 1960, given H. In other words, rewrite the utility function in terms of G, like this: $G = \left(\frac{U}{(24-H)^c}\right)^{\frac{1}{d}}$

STEP Use the expression above to enter a formula in B12 that computes the value of G necessary to produce a utility of 1960 when H = 2.

Your formula should look like this: = ($B$5/((24 - A12)^$B$3))^(1/$B$4). It evaluates to a value of $G = 89.09$. This result makes sense because when H = 2, then $24 - 2 = 22$ and 22 × 89.09 (since $c=d=1$) equals a utility value of 1960. Notice again the use of absolute references.

STEP Fill down the formula and draw a chart with H and G as the source data. Label the axes.

Your chart is a graph of a single indifference curve. In fact, the entire quadrant is full of these upward sloping indifference curves and utility increases as you move in a northwesterly direction (taking less of the bad, H, and more of the good, G). This is the usual indifference map when we have a bad on the x axis. Click the button to check your work or if you need help. Finally, remember that changes in the exponents make the indifference curves flatter or steeper. A Q&A question explores this point.

Finding the Initial Optimal Solution

Having modeled the constraint and preferences, we are ready to find the initial solution. The numerical approach is covered here; the analytical method is an exercise question.

STEP Proceed to the YourOptimalChoice sheet.

It is blank! You need to implement the problem in this sheet and run Solver to find the initial solution. Organize the problem into the usual components: goal (maximize utility), endogenous variables (H and G), exogenous variables ($w, p, m, c$, and d), and a cell for the constraint. The utility function is $U(H, G) = (24 - H)^cG^d$. The wage rate is $10/hr, the price of G is $1/unit, unearned income is $40, and $c = d = 1$. Click the button once you are finished or if you get stuck and need help. Figure 6.8 shows the canonical graph of the initial optimal solution for the consumer/worker's constrained utility maximization problem. This consumer/worker maximizes utility by working 10 hours, thereby earning $100 and then buying 140 units of G. There is no better solution. Traveling up or down the budget constraint is guaranteed to lower utility because the indifference curve is just touching the constraint at 10,140. The mathematical way of saying this is that the MRS = $\frac{w}{p}$ at 10,140.

Comparative Statics: Deriving Labor Supply

How does $H \mbox{*}$ respond as the wage rate changes, ceteris paribus? This comparative statics question yields the labor supply curve. We concentrate on the numerical approach and leave the analytical method for an exercise question.

STEP Proceed to the OptimalChoice sheet (in the YourOptimalChoice sheet, click the button if needed). Use the Comparative Statics Wizard to pick a few points off of the labor supply curve. Make the size of the change in the wage rate 10 and apply the default five shocks. Use the CSWiz data to compute the wage elasticity of hours worked from w = $10/hr to $20/hr. Create a graph of the supply and inverse supply of labor curves.

STEP Proceed to the CS1 sheet and scroll down (if needed) to check your work.

Notice the labor supply and inverse labor supply curves (scroll down if needed). The shape of the curve is intriguing. As wage rises, optimal H seems to level off: it continues to increase, but ever more slowly.
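The CSWiz results can be mimicked with a short script. This sketch assumes $c = d = 1$, m = 40, and p = 1, and substitutes the constraint into the utility function so only H is chosen:

from scipy.optimize import minimize_scalar

m, p = 40.0, 1.0

def optimal_hours(w):
    # maximize U = (24 - H) * G with all income spent: G = (m + w*H) / p
    res = minimize_scalar(lambda H: -((24 - H) * (m + w * H) / p),
                          bounds=(0, 24), method="bounded")
    return res.x

for w in (10, 20, 30, 40, 50):
    print(w, round(optimal_hours(w), 2))   # 10, 11, 11.33, 11.5, 11.6

The printed pairs are points on the labor supply curve, and they level off just as the chart does.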
Notice also that the computed wage elasticity of labor supply from w = 10 to 20 in E14 is quite small at 0.1. This means that hours worked is unresponsive to changes in wages. Labor supply has been extensively studied and extremely small elasticities with respect to wage are commonly found (see McClelland and Mok (2012) for a review of the literature). Income and substitution effects explain this result.

STEP Return to the OptimalChoice sheet and click the button, then change the wage rate (in B16) from 10 to 20.

The budget constraint rotates up (counterclockwise) in the chart: a welcome change in consumption possibilities. The initial optimal solution, (10, 140), is no longer optimal. The consumer/worker needs to re-optimize.

STEP Run Solver (with w = 20).

The new optimal solution is at H = 11. A 100% increase in the wage (from 10 to 20) has produced a total effect of a 1 hour, or 10%, increase in hours worked. We can decompose this total effect into income and substitution effects by shifting down the budget line to cancel out the increased purchasing power of the wage increase. In other words, we need to draw in an imaginary, dashed line that goes through the initial solution, with a steeper slope caused by the higher wage.

We can use a modified version of the Income Adjuster Equation to determine the amount of income we need to take away. Recall that we determine how much income to change via $\Delta m = x_1\Delta p_1$. In the labor supply model, $x_1$ is obviously H, and the price is now the wage, but we also need a sign change. An increase in the wage increases consumption possibilities in the labor supply model so we need a minus sign to show that wage increases must be offset by income decreases. Below is our modified Income Adjuster Equation with values substituted in: $\Delta m = (H \mbox{*})(-\Delta w) = (10)(-10) = -100.$ This says that we must lower unearned income by $100 to cancel out the increased purchasing power from the $10/hr wage increase.

STEP Confirm that w = 20 (in B16) and change m to $-60$ (in B17).

Notice that the budget line goes through the initial combination, (10, 140). The line is not dashed, but it should be. Remember that this budget line does not actually exist. No one is going to take $100 from the agent. We are doing this to decompose the total effect of the wage increase into the income and substitution effects.

STEP Run Solver with w = 20 and m = $-60$.

$H \mbox{*} = 13.5$ hours of work and Figure 6.9 shows the three effects. The substitution effect is $+3.5$, the movement from H = 10 (the initial optimal solution) to 13.5 (the optimal solution with the higher wage, but lower m). It is the horizontal movement from point A to B. The income effect is $-2.5$, the movement from H = 13.5 (point B) to H = 11 (point C). The negative sign is important. It says that when income rises, the agent buys less of the bad. The total effect is, of course, the observed movement from point A to point C, a 1-hour increase in hours worked. This is what would actually be observed as the wage rose from $10/hr to $20/hr.

Figure 6.9 makes clear why the response of hours worked to a wage increase is inelastic: the income and substitution effects work against each other. The fact that goods are cheaper relative to an hour of work drives the agent to work and consume more (this is the substitution effect, from A to B). But the increase in purchasing power encourages the agent to work less (from B to C, the income effect).
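Putting the pieces of Figure 6.9 together gives a compact summary of the decomposition, using the numbers from this section: $substitution\ effect + income\ effect = (+3.5) + (-2.5) = +1 = total\ effect.$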
The total effect on hours worked is small when the two effects are added together. In fact, the income and substitution effects can explain an even more curious phenomenon that has been observed in the real world: hours worked actually falling as wage rises.

Figure 6.10 shows the underlying graph and derived labor supply curve for an unknown utility function. Unlike the labor supply derived from the Cobb-Douglas utility function, which was always positively sloped, the labor supply curve in Figure 6.10 is said to be backward bending. At low wages, increases in wage lead to more hours worked (such as from point 1 to 2), but the supply curve becomes negatively sloped when wages rise from point 2 to 3.

We have already seen that the small wage elasticity from point 1 to 2 is caused by the income effect's working against the substitution effect. The same explanation underlies the negative response in hours worked as wages rise from point 2 to 3. In this case, not only does the income effect oppose the substitution effect, it actually swamps it.

Figure 6.11 shows what happens when we are on the backward bending portion of the labor supply curve. The substitution effect always induces more hours worked as wages rise. This is the movement from A to B. The income effect, however, counters some of this increase in hours worked. We can afford to work less (from B to C) because the wage is higher. When we are on the backward bending portion of the labor supply curve, the income effect actually overcomes the substitution effect so that the total effect (A to C) is a reduction in hours worked as the wage rises. In Figure 6.11, any point C to the left of A yields a point on the backward bending portion of the labor supply curve.

"Wage rises and I work less" sounds just about as weird as "price rises and I buy more." Is this Giffen behavior? No, because the wage change is not an own price effect. Figure 6.12 shows $p_1$ and $p_2$ changes in the Standard Model where two goods are purchased given fixed income. On the left, the change in $p_1$ produces an own effect on $x_1$ and a cross effect on $x_2$. If $x_1$ rises as $p_1$ rises, then $x_1$ is Giffen. If $x_2$ rises as $p_1$ rises (notice the cross effect), however, that does not make $x_2$ a Giffen good. We use the cross effect to say that the goods are substitutes (instead of complements). To determine whether $x_2$ is Giffen, we have to use the graph on the right of Figure 6.12. If $x_2$ rises as $p_2$ rises (notice the own effect), then $x_2$ is Giffen. In other words, we need an own price change to determine Giffenness.

Figure 6.12 makes clear that a change in the wage in the labor supply optimization problem is like a change in the price of $x_2$ in the Standard Model. The wage change is like the graph on the right, with an upward sloping budget constraint. The rotation is around a fixed value: the x intercept in the Standard Model and unearned income in the labor supply model. Thus, the change in wage is an own price effect for G (on the y axis) and a cross price effect for H (on the x axis). Because a change in the wage exerts a cross effect on hours worked, we cannot say anything about Giffenness for hours worked. We could, however, say that G was Giffen if it fell when wage rose. That would really be weird. Look at the figures of income and substitution effects in this chapter and you will never find a final point C that lies below an initial point A.
In fact, leisure (work's counterpart) is usually treated as a normal good: higher income leads to more leisure (and less work).

Deriving the Labor Supply Curve

Labor Economics is a major field within Economics. As a course, it is usually offered as an upper-level elective, with Intermediate Microeconomics as a prerequisite. Labor supply and demand are fundamental concepts. The former is based on a model in which work is a bad (the opposite of leisure, which is a good) and a consumer/worker maximizes satisfaction subject to a budget constraint. By changing the wage, ceteris paribus, we can derive a labor supply curve.

Economists are well aware that labor supply is often quite insensitive to changes in wages. This is explained by the opposing substitution and income effects. The backward bending portion of the labor supply curve is observed when the income effect swamps the substitution effect. This is not Giffen behavior, however, because we are dealing with a cross (not own) price effect.

Exercises

1. Use the analytical method to derive reduced form expressions for $H \mbox{*}$ and $G \mbox{*}$. Show all of your work.
2. Do your results for $H \mbox{*}$ and $G \mbox{*}$ agree with the numerical approach in the text? Is this surprising?
3. Using the Comparative Statics Wizard, the wage elasticity of labor supply from $10/hr to $20/hr is 0.1. Use your reduced form solution for $H \mbox{*}$ to find the wage elasticity of labor supply at w = $10/hr. Show your work.
4. Does your point wage elasticity from the previous question equal 0.1 (the wage elasticity based on a $10 wage increase)? Why or why not?
5. Whether the labor supply curve is upward sloping or backward bending has nothing to do with the Giffenness of work. If labor supply is positively sloped, G and H are substitutes or complements, but which one? Draw a graph that helps you explain your answer.
Fixed Sample Search

The Theory of Consumer Behavior is based on the idea that buyers choose how much to buy based on preferences, income, and given prices. We know, however, that buyers do not face a single price: there is a distribution of prices and sellers change their prices frequently. You would think consumers would be unable to choose in such an environment. After all, how can they know the budget constraint without prices? The answer is that they search or, in other words, they go shopping, and then use the lowest prices found to solve their constrained utility maximization problem.

Search Theory is an application of the economic approach to the problem of how long to shop in a world of many prices. Search is a productive activity because it enables one to find lower prices, but it is costly. One can search too little, ending up paying a high price, or search too much: spending hours to find a price that is a few pennies lower does not make much sense.

This chapter introduces the consumer's search optimization problem and is based on the idea that consumers decide in advance how many price quotes to obtain, according to an optimal search rule. This type of search procedure is known as a fixed sample search.

Describing the Search Optimization Problem

We assume that consumers do not know the prices charged by each firm. We simplify the problem by assuming that the product in different stores is identical (i.e., homogeneous) so the consumer just wants to buy at the lowest price. Unfortunately, finding that lowest price is costly so the buyer has to decide how long to search.

STEP Open the FixedSampleSearch.xls workbook and read the Intro sheet, then proceed to the Setup sheet.

The first task is to create the distribution of prices faced by the consumer. We assume that prices remain fixed during the search process.

STEP Click the button. You will be asked a series of questions that will establish the prices charged by all of the sellers. This is the population. The idea is that the consumer will sample (draw) from the population. This is shopping.

STEP Hit OK to accept the default of 1000 when asked for the number of stores selling the product (no comma separator when entering numbers in Excel). Choose Uniform for the distribution and then press OK to accept 5 when prompted for the sample size (the number of prices the consumer will obtain). Accept the default values of 0 and 1 for the minimum and maximum prices.

After you hit Enter, you will see a column of red numbers in column A that represent the prices charged by each of the 1,000 stores selling the product. The consumer knows that stores charge different prices, but cannot immediately see each individual store price, nor the lowest and highest price stores in cells B2 and B3.

STEP Scroll down to see the prices charged at each store and confirm that the minimum price store, displayed in cell F2, is correct.

It is difficult to see by simply scrolling down and looking at the prices, but the uniform distribution you used means that prices are scattered equally from zero to one. The normal distribution, on the other hand, would concentrate prices near the average, with fewer low and high prices (like a bell-shaped curve). The log-normal is the most realistic of the three: prices have a long right-hand tail (with a few stores charging very high prices). The primary advantage of the uniform distribution is that it is the easiest to work with analytically. Figure 7.1 shows a histogram of 1,000 prices from U[0,1].
This notation means that we include the endpoints, so we have a uniform distribution with a zero minimum and a maximum of one (giving an average of 0.5 and an SD of 0.2887). The prices are not exactly evenly distributed on the interval from zero to one. They are drawn from a uniform distribution on the interval 0 to 1, but each realization of 1,000 prices deviates from a purely rectangular distribution due to randomness in sampling from the uniform distribution. The more stores you include in the population, the closer Figure 7.1 will get to a smooth, rectangular distribution. You can see a histogram of your population prices by scrolling over to column AA of the Setup sheet.

Consumers know the distribution of prices, but they do not know which firm is charging which price, so they cannot immediately go to the firm that has the lowest price. Instead, the fixed sample search model says that the consumer chooses a number of prices to sample (which you set as 5) and then chooses the lowest of the observed prices.

STEP Click the button. A price will appear in the sample column, and a pop-up box tells you where that price came from. Hit OK each time the display comes up. You will hit OK five times because you chose to sample from five stores.

The consumer chooses among the 1,000 stores randomly and ends up with five observed prices. Column L reports the sample average price, the SD of the sampled prices, and the minimum price in the sample (in cell L7). The consumer will purchase the product at the minimum price observed in the sample.

Why doesn't the consumer visit every store and then pick the lowest price? Because it is costly to obtain price information, as shown in cell L11. Each shopping trip (to collect a price) costs 4 cents. To sample all 1,000 stores would cost the consumer an exorbitant $40. On average, the consumer would pay $0.54 (the average of the price distribution plus the cost of obtaining one price) by buying the product at the very first store visited. Clearly, it is better to buy immediately, n = 1, than to sample every store, n = 1,000, but what about other fixed sample sizes? How much will the consumer pay, on average, when sampling five stores?

STEP Hit the button repeatedly to draw more samples of size five. Keep your eye on the total price paid in cell L22.

Every time you get a new sample, you get a new total price (composed of the minimum price in the sample plus 20 cents). There is no doubt about it: the total price the consumer ends up paying is a random variable. This makes the problem difficult because we need to figure out what the consumer can expect to pay usually or typically. We want to know the average total price. The next section shows how.

Monte Carlo Simulation

The plan is to alter the spreadsheet so a new sample can be drawn simply by recalculating the sheet, which is done by hitting the F9 key. We can then install the Monte Carlo simulation add-in and use it to repeatedly draw new samples, tracking the lowest price in each sample.

STEP Select cell range J2:J6. You should have five cells highlighted. In the formula bar, enter the following formula: =DRAWSAMPLEARRAY() and then press Ctrl + Shift + Enter (hold down and continue holding down the Ctrl key, then hold down and continue holding down the Shift key, and then hit the Enter key). Your sample of five prices will appear in the sample column.

After you select the cells, do not simply hit the Enter key. This will put the formula only in the first cell.
You want the formula in all five cells that you selected. You have to press Ctrl + Shift + Enter simultaneously. You have used an array function (built into the workbook) that spans the five cells you selected. You cannot individually edit the cells. If you mistakenly try to do so and get stuck, hit the esc (escape) key to return to the spreadsheet.

When using this array function, it may display #VALUE. Simply hit the F9 key when this happens to refresh the function. If that does not work, recreate the population. When using the DRAWSAMPLEARRAY() function, you must be sure to set the number of draws in cell C15 to correspond to the number of cells selected and used by the function. If there is a discrepancy, a warning will be displayed.

STEP Hit F9 a few times and keep your eye on cells L7, the minimum price, and L22, the total price paid.

These cells update each time you hit F9. A new sample of five prices is drawn and the minimum price and total price paid are recalculated for the new sample. The DRAWSAMPLEARRAY() function enables Excel to display the minimum (best) price random variable, but we need to figure out the average minimum price when five price quotes are obtained. This can be done by repeatedly resampling and tracking each outcome. This is called Monte Carlo simulation.

STEP Install the Monte Carlo simulation Excel add-in, MCSim.xla, available freely from www3.wabash.edu/econometrics and the MicroExcel archive (in the same folder as the Excel workbook for this section). Full documentation is available at this web site.

This powerful add-in enables sophisticated simulations with the click of a button. Remember that installing an add-in requires use of the Add-ins Manager. Do not simply open the MCSim.xla file. Once installed, you can use the add-in to determine the average minimum price and total price paid for the product when five prices are sampled.

STEP Run the Monte Carlo simulation add-in on cells L7 and L22 with 10,000 repetitions. Your MCSim add-in dialog box should look like Figure 7.2. Click the button to run the simulation.

Your simulation results will look something like Figure 7.3, but your results will be slightly different. The average of the minimum price distribution should be near 0.17 (1/6). Thus, the consumer will usually pay around $0.37 (adding the 20 cents in search cost) for the product. The total price paid is a shifted version of the best price.

So now we know that the consumer can expect to pay about $0.37 when searching five stores. This is better than buying at the first store visited, which was $0.54. Compared to buying at the first store, the expected marginal gain of shopping at five stores, in terms of a lower expected minimum price, is $0.50 - $0.17 = $0.33. The additional cost of searching for five prices instead of one is $0.16. That the additional benefit exceeds the additional cost is another way to know that five stores is better than one store.

But we want to know more than just that searching five stores is better than buying at the first store; we want to find the best sample size: the one that gives the lowest total price paid.

STEP Hit the button. Change the number of draws in cell C15 to 10. Select cell range J2:J11 and then type in the formula bar: =DRAWSAMPLEARRAY(). Then press the Ctrl + Shift + Enter combination to input the array formula. Your sample of 10 prices will appear in column J. Hit F9 a few times and watch what happens to cell L7, the minimum price.
It bounces, but with 10 prices instead of five, it bounces around a different, lower mean.

STEP To find the typical price the consumer can expect to pay, run a Monte Carlo simulation of the minimum and total price when 10 stores are visited.

Comparative Statics

Here is a sketch of where the reduced form expression comes from, assuming prices are distributed U[0,1]: the average minimum of n sampled prices is $\frac{1}{n+1}$ (which is why the simulation with n = 5 averaged about 0.17), so a consumer buying q units of the product expects to pay $\frac{q}{n+1} + cn$ in total. Treating n as continuous and minimizing this expression gives $n \mbox{*} = \sqrt{\frac{q}{c}} - 1$. With q = 1 and c = 0.04, $n \mbox{*} = \sqrt{25} - 1 = 4$.

The reduced form expression makes comparative statics analysis straightforward. It is obvious that higher c, search cost, leads to lower optimal sample size, as shown in Figure 7.5. Search cost is not the same for each consumer. Time is an important element of search cost. Those with more valuable time and, therefore, higher search cost will optimize by obtaining fewer price quotes.

The availability of information is another component of search cost. Informational advertising is how firms let consumers know where they are and what prices are being charged. We can model this type of advertising as a decrease in search costs: today, all the consumer has to do is go online to see what prices are being offered. Search costs are still positive (consumers do not know, for example, whether all firms advertise or just some), but lower than without advertising. Consumers obtain the product for a lower total price when advertising lowers search costs.

If we allow for multiple purchases, that is, a value of $q > 1$, then the returns to search increase and, other things equal, the optimal number of searches increases. The effect of increasing q on the relationship between the cost of search and the optimal number of searches is shown in Figure 7.6. For example, the driver of an 18-wheel truck that carries two 200-gallon diesel tanks is going to search more than someone looking to fill her car with gas. But this example leads to the next chapter, where we introduce a different search model.

Results of Fixed Sample Search

Incomplete price information leads to an entirely new optimization problem. Because consumers will not search every store, since that is too expensive, we see price dispersion. This is a major result of search theory and it deserves further explanation.

You would think that competition would tend to make prices of the same product equal. This is known as the Law of One Price. But it only applies to a world where consumers can costlessly gather prices. In other words, the Law of One Price will fail to hold whenever it is costly to collect price data. This is true in the real world, where some consumers will end up paying higher prices than others because the minimum price in their particular information set is different from the minimum price in another consumer's set. Because lower search costs induce more search, a reduction in search costs would have the effect of reducing (but not eliminating) price dispersion. Because optimizing consumers will choose not to canvass every store for prices as long as search is costly, price dispersion will exist. This is the key result of the fixed sample search model.

Economists have been interested in search theory for decades. The internet promised a big decrease in search cost and it may well have delivered that, but more recently, technology has really upended search theory. Today, your online search behavior is monitored and your clicks influence the prices you see. The next generation of search models does not treat the population of prices as given and does not allow the consumer to randomly sample without changing the price distribution. Consumers still have an optimization problem to solve, but so do firms.
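Before turning to the exercises, here is a quick numerical cross-check of the reduced form sketched above. It is not part of the workbook; it assumes Python with NumPy, and the function name expected_total is made up for this illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def expected_total(n, c=0.04, reps=100_000):
        """Monte Carlo estimate of E[min of n prices] + c*n for prices ~ U[0,1]."""
        prices = rng.uniform(0, 1, size=(reps, n))
        return prices.min(axis=1).mean() + c * n

    for n in (1, 4, 5, 10):
        print(n, round(expected_total(n), 3))
    # roughly 0.540, 0.360, 0.367, 0.491: n = 4 edges out n = 5

This mirrors what the MCSim add-in does inside Excel, just with the sample size varied in a loop.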
Exercises

Suppose the price distribution of 1,000 firms is uniform, with an average price of $50 and an SD of $28.87. Search cost, c, is $1 per price.

1. On what interval (from the minimum to the maximum) are prices equally likely to fall?
2. Implement this problem in the Setup sheet and run a Monte Carlo simulation with a sample size of 20. Take a picture of your results (like Figure 7.3) and paste it in a Word document. What is good about obtaining 20 prices? What is bad?
3. Use the equation for the average minimum price as a function of n for this distribution, $AverageP_{min}=\frac{100}{n+1}$, to find the optimal sample size. Show your work.
4. Find the c elasticity of n at $q = c = 1$. Show your work.
Sequential Search

We introduced Search Theory with a Fixed Sample Search Model. A consumer samples from the population of stores and gets a list of n prices for a product, then chooses the minimum price. The bigger n, the lower the minimum price in the list, but the cost of obtaining the price quotes rises as n rises. The consumer has to decide how many prices to obtain.

This section explores the properties of a different situation that is known as the Sequential Search Model. Unlike fixed sample search, where the consumer obtains a set of price quotes and then picks the lowest price, sequential search proceeds one at a time. The consumer samples from the population and gets a single price, then decides whether or not to accept it. If she rejects it, she cannot go back. As the epigraph shows, the sequential search model is easily applied to job offers, but it will be applied in this chapter to another common search problem: buying gas.

Setting Up the Model

Imagine you are driving down the road and you need fuel. As you drive, there are gas stations (say N = 100) to the left and right (taking a left does not bother you too much) and you can easily read the price per gallon as you drive up to each station. If you drive past a station, turning around is out of the question (there is traffic and you have a weird phobia about U-turns). There is a lowest price station and the stations can be ranked from 1 (lowest, best price) to 100 (highest, worst price). You do not know the prices coming up because the stations are randomly distributed on the road. The lowest price station might be 18th or 72nd or even the very first one. Figure 7.7 sums it all up.

Suppose you focus on the following question: How do you maximize the chances of finding the cheapest station? You might argue that you should drive by all of the stations, and then just pick the best one. This is a terrible idea because you cannot go back (remember, no U-turns). Once you pass a station, you cannot return to it. So, this strategy will only work if the cheapest station is the very last one. The chances of that are 1 in 100.

A strategy for choosing a station goes like this: Pick some number $K < N$ where you reject (drive by) stations 1 to K, then choose the first station that has a price lower than the lowest of the K stations that you rejected.

Perhaps K = 50 is the right answer? That is, drive by stations 1 to 50, then look at the next (51st) station and if it is better than the lowest of the 50 you drove by, pull in. If not, pass it up and consider the 52nd station. If it is cheaper than the previous 51 (or 1 to 50 since we know the 51st station isn't cheaper than the cheapest of the first 50), get gas there. Continue this process until you get gas somewhere, pulling into the last (100th) station if you get to it (it will have a sign that says, "Last chance gas station").

This strategy will fail if the lowest price is in the group of the K stations you drove by, so you might want to choose K to be small. But if you choose K too small, you will get only a few prices and the first station with a price lower than the lowest of the K stations is unlikely to give you the lowest price. So, K = 3 is probably not going to work well: you are unlikely to see a super low price in a set of just three, so the first station that beats the best of those three is unlikely to be the cheapest on the road. For example, say the first three stations are ranked 41, 27, and 90. Then as soon as you see a station better than 27, you will pull in there.
That might be station 1, but with 26 possibilities, that's not likely. On the other hand, a high value of K, say 98, suffers from the fact that the lowest price station is probably in that group and you've already rejected it! Yes, this problem is certainly tricky.

The Sequential Search Model can be used for much more than buying gas: it has extremely wide applicability and, in math, it is known as optimal stopping. In hiring, it is called the secretary problem. A firm picks the first K applicants, interviews and rejects them, then picks the next applicant that is better than the best of the K applicants. It also applies to many other areas, including marriage: search online for Kepler optimal stopping to see how the famous astronomer chose his spouse.

STEP Open the Excel workbook SequentialSearch.xls and read the Intro sheet, then proceed to the Setup sheet.

Column A has the 100 stations ranked from 1 to 100. The lowest priced station is 1, and the highest priced station is 100.

STEP Click the button. It shuffles the stations, randomly distributing them along the road you are traveling in column D. Cell B7 reports where the lowest priced station (#1) is located. Columns C and D report the location of each station. Column D changes every time you click the button because the stations are shuffled.

Cell F2 sets the value of K. This is the choice variable in this problem. Our goal is to determine the value of K that maximizes the probability that we get the lowest priced station. On opening, K = 10. We pass up stations 1 to 10, then take the next station that is better than the best of the 10 stations we rejected.

STEP Click the button. This reshuffles the stations and draws a border in column D for the cell at the Kth station. Cell F5 reports the best of the K stations (that were rejected). Cell F7 displays the station you ended up at.

STEP Scroll down to see why you ended up at that station and read the text on the sheet.

Cell F7 always displays the first station that is better (lower) than the best of the K stations in cell F5.

STEP Repeatedly click the button. After every click, see how you did.

Is 10 a good choice for K? The definition of a good choice in this case is one that has a high probability of giving us the cheapest station. Our goal is to maximize the chances of getting the cheapest station. We could have a different objective, for example, minimize the average price paid, but this would be a different optimization problem. For the classic version of the optimal stopping problem, we count success only when we find the cheapest station.

STEP Change K to 60 (in F2) and repeatedly click the button. Is 60 better than 10?

This is difficult to answer with the Setup sheet. You would have to repeatedly hit the button and keep track of the percentage of the time that you got the cheapest station. That would require a lot of patience and time tediously clicking and recording the outcome. Fortunately, there is a better way.

Solving the Problem via Monte Carlo Simulation

The Setup sheet is a good way to understand the problem, but it is not helpful for figuring out the optimal value of K. We need a way to quickly, repeatedly sample and record the result. That is what the MCSim sheet does.

STEP Proceed to the MCSim sheet and look it over. With N = 100 (we can change this parameter later), we set the value of K (in cell D7) and run a Monte Carlo simulation to get the approximate chances of getting the best station (reported in cell H7).
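The same experiment is easy to replicate outside of Excel. Here is a minimal sketch in Python (standard library only); the function name simulate is made up for this illustration, and the workbook's own code may differ in details.

    import random

    def simulate(N=100, K=10, reps=100_000):
        """Estimate the probability that the reject-K-then-pick rule finds the best station."""
        wins = 0
        for _ in range(reps):
            stations = list(range(1, N + 1))  # 1 = cheapest
            random.shuffle(stations)
            best_seen = min(stations[:K])     # best of the K rejected stations
            chosen = stations[-1]             # forced to take the last station if needed
            for rank in stations[K:]:
                if rank < best_seen:          # first station that beats the rejected group
                    chosen = rank
                    break
            wins += (chosen == 1)
        return wins / reps

    print(simulate(K=10))  # roughly 0.23 for K = 10 with N = 100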
Unlike the MCSim add-in used in the previous section, the Monte Carlo simulation in this workbook's MCSim sheet is hard wired, so it is extremely fast.

STEP With N = 100 and K = 10, click the button.

The default number of repetitions is 50,000, which seems high, but a computer can do hundreds of thousands of repetitions in a matter of seconds. Figure 7.8 shows results. Choosing K = 10 gives us the best station about 23.4% of the time. Your results will be slightly different.

Notice that we are using Monte Carlo simulation to approximate the exact answer. Monte Carlo simulation cannot give us the exact answer. By increasing the number of repetitions, we improve the approximation, getting closer and closer, but we can never get the exact truth with simulation. The answer it gives depends on the actual outcomes in that particular run. The only way simulation would give the exact answer is if it were based on an infinite number of repetitions.

Can we do better than getting the best station about 23% of the time? We can answer this question by exploring how the chances of getting the lowest price vary with K. By changing the value of K and running a Monte Carlo simulation, we can evaluate the performance of different values of K.

STEP Explore different values of K and fill in the table in cells J3:M10. As soon as you do the first entry in the table, K = 20, you see that it beats K = 10.

STEP Use the data in the filled in table to create a chart of the chances of getting the lowest price station as a function of K. Use the button under the table to check your work.

What do you conclude from this analysis? One problem with Monte Carlo simulation is the variability in the results. Each run gives different answers since each run is an approximation to the exact answer based on the outcomes realized. Thus, it seems pretty clear that the optimal value of K is between 30 and 40, but using simulation to find the exact answer is difficult.

Figure 7.9 displays results of a series of Monte Carlo experiments. Notice that we doubled the number of repetitions to increase the resolution. The best value of K appears to be 36, but the noisiness in the simulation results makes it impossible to determine the answer. With Monte Carlo simulation, we can continue to increase the number of repetitions to improve the approximation.

STEP Proceed to the Answers sheet to see more simulation results.

The Answers sheet shows that even 1,000,000 repetitions are not enough to definitively give us the correct answer. Simulation is having a difficult time distinguishing between a stopping K value of 36 or 37.

An Exact Solution

This problem can be solved analytically. The solution is implemented in Excel. For the details, see the Ferguson citation at the end of this chapter.

STEP Proceed to the Analytical sheet to see the exact probability of getting the cheapest station for a given K-sized sample from N stations from 5 to 100.

For example, cell G10 displays 32.74%. This means you have a 32.74% probability of getting the cheapest station out of 10 stations if you drive by the first six stations and then choose the next station that has a price lower than the cheapest of the K stations you drove by. For N = 10, is K = 6 the best solution? No. The probability of choosing the cheapest station rises if you choose K = 5. The 3 and 4 choices are close, but clearly, optimal K = 3 (with a 39.87% likelihood of getting the cheapest station) is the best choice.

In the example we have been working on, we had N = 100.
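For reference, the exact probability the Analytical sheet tabulates is the standard secretary-problem result (see the Ferguson citation in the References section): the chance that the reject-K-then-pick rule finds the cheapest of N stations is $P(K, N) = \frac{K}{N}\sum_{j=K}^{N-1}\frac{1}{j}.$ You can check it against the numbers above: with N = 10, K = 6 gives $\frac{6}{10}(\frac{1}{6} + \frac{1}{7} + \frac{1}{8} + \frac{1}{9}) \approx 32.74\%$, while K = 3 gives approximately 39.87%.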
Monte Carlo simulations showed optimal K around 36 or 37, but we were having trouble locating the exact right answer.

STEP Scroll down to see the probabilities for N = 100. Click on cells AL100 and AM100 to see the exact values.

The display has been rounded to two decimal (percentage) places, but the computation is precise to more decimal places. K* = 37 just barely beats out K = 36. The fact that they give almost exactly the same chances of getting the lowest price explains why we were having so much trouble zooming in on the right answer with Monte Carlo simulation.

It can be shown (see the Ferguson source in the References section) that optimal K is $\frac{N}{e}$, giving a probability of finding the cheapest station of $\frac{1}{e}$. For N = 100, $\frac{N}{e} \approx 36.7879$. If K were a continuous endogenous variable, $\frac{N}{e}$ would be the optimal solution. But it is not, so the exact, correct answer is to pass on the first 37 stations and then take the first one with a lower price than the lowest price of stations 1 to 37. It is a mystery why the transcendental number e, the base of natural logarithms, plays a role in the solution.

Figure 7.10 shows that as N rises, so does optimal K. What elasticity is under consideration here? The answer is the N elasticity of K. From N = 50 to 100 is a 100% increase. What happens to optimal K? It goes from 18 to 37, a little more than 100%, so the elasticity is slightly over one. If you use the continuous version of K, then optimal K exactly doubles and the N elasticity of K is exactly one.

Sequential Search Lessons

Unlike the Fixed Sample Search Model (where you obtain a set of prices and choose the best one), the Sequential Search Model says that you draw sample observations one after the other. This could apply to a decision to choose a gas station. As you drive down the road, you decide whether to turn in and get gas at Station X or pass up that station and proceed to Station Y. Faced with price dispersion, a driver deciding where to get gas can be modeled as solving a Sequential Search Model. Although there can be other objectives (such as getting the lowest average price), the goal could be to maximize the chances of getting the lowest price. We found that as N rises, so does optimal K. The more stations, the more driving you should do before picking a station.

Like the Fixed Sample Search Model, the Sequential Search Model does not have any interaction between firms and consumers. Price dispersion is given and the model is used to analyze how consumers react in the given environment. In the pre-internet and smartphone days, deciding where to get gas was quite the challenge. A driver passing signs with prices (like Figure 7.7) was a pretty accurate representation of the environment. There was no Google maps or apps displaying prices all around you. Notice, however, that the Law of One Price does not yet apply to gas prices.

Ferguson points out that our Sequential Search Model (which mathematicians call the secretary problem) is part of a class of finite-horizon problems. "There is a large literature on this problem, and one book, Problems of Best Selection (in Russian) by Berezovskiy and Gnedin (1984) devoted solely to it" (Ferguson, Chapter 2). Fixed Sample and Sequential Search Models are merely the tip of the iceberg. There is a vast literature and many applications in the economics of search, economics of information, and economics of uncertainty.

Exercises
1. Use the results in the Analytical sheet to compute the N elasticity of K* from N = 10 to 11. Show your work.
2. Use the results in the Analytical sheet to draw a chart of K* as a function of N. Copy and paste your graph in a Word document.
3. Run a Monte Carlo simulation that supports one of the N-K* combinations in the Analytical sheet. Take a picture of your simulation results and paste it in a Word document.
4. Explain why the Monte Carlo simulation was unable to exactly replicate the percentage of times the lowest priced station was found.
Behavioral Economics

The field of Behavioral Economics (and Behavioral Finance) is a growing research area that focuses on how decisions are actually made. It is closely tied to psychology and neuroscience. Behavioral economists reject the idea of utility maximization as an assumed black box. Both experimental methods and sophisticated procedures (such as MRI brain scans) are used to examine how real-world problems are actually solved. A number of results have emerged that challenge the conventional wisdom in mainstream economics.

One area of long-standing interest in psychology involves repeated choice problems. This chapter focuses on a particular kind of repeated choice in which the satisfaction obtained currently depends on past decisions. This is called distributed choice. Suppose you are deciding whether to watch TV or play a video game. You face this choice repeatedly. The satisfaction from watching TV or playing a video game depends on how often that choice has been made before. What is the best combination of TV and video games over a period of time and, more importantly, how well do people handle this kind of repeated choice?

Instead of explaining why the repeated choice optimization problem is difficult and presenting results from human trials, it is more fun (and you will learn more) to let you first participate in an experiment.

The Choice Game

STEP Open the Excel workbook Melioration.xls and read the Intro sheet, then go to the Choice Game sheet to play this simple game.

Your goal is to click the A or B buttons as many times as possible in 10 minutes. When you make a choice, by clicking on one of the buttons, you are forced to wait. Waiting is costly because you cannot click (make another choice) while waiting.

STEP Click the option button (near the top left corner of the screen) to see how the game works. You get up to 100 practice trials. In practice mode, time is not kept. You can take as long as you want between button clicks. Practice now.

There is definitely something going on that you are trying to figure out and there is an optimal strategy. You can click the same button over and over or switch back and forth.

Are you ready to play? Unlike practice, when you play, a timer will be running. You will not use the buttons on the sheet like you did in practice mode. The buttons will be on a dialog box, right next to each other. You will have 10 minutes to make as many choices as possible. The time remaining will be displayed as you play. Ten minutes might be too long for you to play, so click the button if you want to stop playing. As long as you start play and make a few choices, you will be able to continue working and learning about melioration.

STEP Click the option button. Good luck!

After you finish the game, a message box displays your score and a Results sheet shows a record of your picks. It reports results based on a full ten minutes of play, so if you stopped prematurely, you can ignore your results.

Let's deconstruct this game and see how it works. Figure 8.1 shows the first 10 choices made by a player. The player started with A, then switched to B with his 7th choice, but switched back to A, then ended with B.

STEP You can see the full record of yet another player by clicking the button (near cell G9 in the Results sheet, which was revealed when you finished playing the choice game).

This player tried streaks of A and B. Notice how the pause times changed.
These results sheets also compare the number of choices made to the maximum possible and compute a score as a percentage of the maximum. Let's find out how the maximum can be attained and why people are usually so bad at playing this game.

Actual Results

Experimental trials with this game were conducted by Herrnstein and Prelec (1991) and you can compare how you did to the average result (and to the player in the MoreResults sheet).

STEP Click the button in the MoreResults sheet.

The Data sheet shows how 17 subjects played the choice game that you just played. Each dot in the chart, reproduced in Figure 8.2, shows the fraction of times that a player chose A (on the x axis) and the corresponding average delay endured by that player (on the y axis). The player with the shortest delay, the first one in the table, also has the most choices (number of choices = 600/average delay) and is the winner in this set of players. How did you do?

STEP To add your result to the chart in Excel, copy your results from cells J2 and K2 of the Results sheet, select cell A23 in the Data sheet, and Paste Special (Values) (or simply type in the two numbers).

A red dot will appear in the chart. This shows how you did. Did you beat the best player out of the 17 in the chart? We know you could have because even the best player in that group of 17 failed to optimize. The explanation for this failure requires that we understand the delay function for each choice.

The heart of the choice game is the wait time between choices. The duration of the pause is a function of the 10 previous choices. For choice A, the wait time, in seconds, is 2 + 0.4 x Number of A Choices in the last 10 choices. So, if the last 10 choices had been B, then A would have a very short and satisfying pause time of just 2 seconds. As you click on A, however, the pause time for choice A rises by 0.4 seconds until it hits a maximum of 6 seconds. Choice B's wait time is determined by 8 - 0.4 x Number of B Choices in the last 10 choices. As you click on B, the duration of the pause gets lower and lower until reaching a minimum of 4 seconds.

STEP Confirm that the wait times were determined as described by returning to the three results sheets and examining the pause times in columns B and C.

You can see that the first clicks of A and B had pause times of 2 and 8 seconds, respectively. You can also check that each pause time follows the functions described above. The MoreResults sheet with the streaky A and B strategy makes it easy to see the mechanics of the choice game.

Choice A exhibits increasing marginal cost: every time you click on A, you are penalized and forced to wait longer. Choice B rewards you with a decrease in wait time when it is clicked, but the wait time starts very high so you have to be persistent and stick to it. Plus, choice A is always 2 seconds lower than choice B so you are constantly being lured toward choice A.

Most people play this game by being attracted to A's short wait time, until it gets unbearable and they switch to B. But they can't stay with B long because it is painful to wait at first and they do not have the patience and self-discipline to stick with B. Sound familiar? B could be exercise or dieting or studying: you know you should and it gets easier if you stick to it, but it can be hard to start.

Now that you know the rules of the game, how do you actually optimize with this game? Simple: start with choice B and never deviate.
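To double-check this claim outside of Excel, the delay rules are easy to re-implement. Below is a minimal Python sketch; the rule is reconstructed from the numbers reported in this chapter (first clicks of 2 and 8 seconds, limits of 6 and 4 seconds), and the function name simulate is made up for this illustration.

    def simulate(pick, seconds=600):
        """Count choices made in `seconds`, where pick(history) returns 'A' or 'B'."""
        t, n, history = 0.0, 0, []
        while True:
            c = pick(history)
            last10 = history[-10:]                   # the 10 previous choices
            if c == 'A':
                delay = 2 + 0.4 * last10.count('A')  # rises toward 6 seconds
            else:
                delay = 8 - 0.4 * last10.count('B')  # falls toward 4 seconds
            if t + delay > seconds:
                break
            t += delay
            n += 1
            history.append(c)
        return n

    print(simulate(lambda h: 'A'))  # 103 choices
    print(simulate(lambda h: 'B'))  # 144 choices

The all-B player makes 144 choices, beating the all-A player's 103; the Solution sheet, described next, reports the same totals.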
STEP To see this optimal strategy in action, go to the Solution sheet by clicking the button in the Data sheet (below the chart).

Column B shows what happens when you exclusively choose A. It starts well, but you end up with many 6 second pauses.

STEP Scroll down to see that you make 103 choices in 600 seconds, yielding an average delay of 5.8 seconds.

This is a poor outcome. Column F displays what happens when B is exclusively chosen. The first few wait times are long, but each choice of B lowers the wait time until the minimum, 4 seconds, is reached.

STEP Scroll down to see that clicking choice B every time lets you make 144 choices (with an average delay of 4.167 seconds).

The strategy of choosing B exclusively cannot be beaten (except for an endgame correction, which is one of the exercise questions). If the player switches from B to A, the temporary gain is swamped by higher wait times when the inevitable switch back to B occurs. To be sure that this point is clear, consider switching after having reached the 4 second minimum pause time for choice B. What would happen?

STEP Change cell K15 (in the Solution sheet) to A.

Five consecutive A choices are made and each one has a pause time less than or equal to four seconds, as shown in column L. Thus, we have saved time. But when we switch back to B (since we know A's pause time will continue to rise and we can get to 4 seconds with B), we have to suffer higher pause times. The trade-off is not worth it. We end up making fewer choices (142 instead of 144) and suffering a longer average delay.

The Solution sheet makes clear the following key point: The optimal strategy is to choose B exclusively and never deviate. If you failed to do this, do not worry; you have plenty of company. Very few humans figure this out.

Melioration Explained

Herrnstein and Prelec (1991) designed the experiment to test for the presence of something called melioration (pronounced mee-lee-uh-RAY-shun). To meliorate (or ameliorate) means to make better or more tolerable. Melioration says that we are drawn to choices that immediately reduce pain or give immediate satisfaction. We do a poor job of maximizing when there is a trade-off between short- and long-run returns. We are shortsighted and look to make immediate improvements. In fact, melioration has been found in other animals besides humans. The attraction of switching to A and having the pause time fall is melioration at work. The immediate pain of waiting is lessened and, thus, players are drawn toward choice A.

In addition to the actual choices from the 17 players, Figure 8.3 shows wait times for choices A and B given the proportion of A choices in the previous 10. It is easy to see, once again, that the optimal solution is to choose B exclusively because that lets you travel down the solid line to the intercept at 4 seconds. If you ever jump on the A train, you are swept upwards toward a 6-second wait time.

Figure 8.3 shows that if the last 10 choices were B and then A was chosen, the player would immediately gain a reduction in wait time from 4 to 2 seconds (jumping from the higher to the lower line). For a few choices, the player would be better off, but after the 5th consecutive A choice, the wait time would be greater than 4 seconds. The player would be forced to endure longer wait times than would have been obtained by sticking with B. Furthermore, it is hard to switch back to B because the wait time immediately jumps by 2 seconds.
The player will have to suffer through the ride down the B line, with choice A promising a 2-second decrease with every click. The immediate attraction of the 2-second decrease is the core of the melioration process that guides subjects to choose A.

Figure 8.3 makes clear that the 17 human subjects who played the choice game failed to optimize. The fraction of choices allocated to A should be zero, but most players do not do this. This raises the question: So what?

Herrnstein and Prelec (1991) argue that the lack of optimization is a big deal. For them, choice is often not a single, isolated decision, but a series of many decisions, distributed over time. Frequency of athletic exercise, buying lottery tickets, choices of restaurants, and rate of work in freelance occupations are some of the examples offered. For all of these distributed choice problems, melioration is common and this means people systematically fail to optimize. "This would imply that preferences as revealed by the marketplace may be a distortion of the true underlying preferences" (Herrnstein and Prelec, 1991, p. 137).

Melioration helps explain complaints about one's own behavior (such as exercising too little), which is part of a growing literature on self-control. It also may contribute to the study of impulsiveness and addiction. Of course, this presumes that the laboratory findings carry over to real-world settings. This is often an Achilles' heel of experimental economics. Results are often criticized as having little external validity because they are based on fake scenarios played by college students. Herrnstein and Prelec (1991) acknowledge that little money was at stake (they paid their players based on performance), but they rely on two other motivating factors. "First, delays are genuinely annoying and the difference between two and four seconds is not trivial, as any computer user will appreciate. Second, the 'puzzle' nature of the experiment presents a challenge that is presumably satisfying to solve" (Herrnstein and Prelec, 1991, p. 144).

Others have tried to nail down exactly what causes melioration and how it can be overcome. Neth, Sims, and Gray (2005, p. 357) were surprised: We hypothesized that frequent and informative feedback about optimal performance might be the key to enable people to overcome the documented tendency to meliorate when choices are rewarded probabilistically. Much to our surprise, this intuition turned out to be mistaken. Instead of maximizing, 19 out of 22 participants demonstrated a clear bias towards melioration, regardless of feedback condition.

The Future of Behavioral Economics

With faculty, courses, conferences, and specialized journals, there is no doubt that Behavioral Economics is here to stay. In 2002, the Nobel Prize in Economic Sciences was awarded to Daniel Kahneman and Vernon Smith for work incorporating psychology and laboratory methods in the study of decision making. Richard Thaler won the Nobel in 2017 for his contributions to behavioral economics. Unlike conventional economics, which simply assumes optimizing behavior and rationality, behavioral economists seek to determine under what conditions agents struggle to optimize. They work with psychologists and neuroscientists to devise tests and laboratory experiments. The key result is that they find persistently sub-optimizing behavior. Melioration is but one simple example of work in this area.
Melioration means that decision makers fail to optimize because they focus on the small (immediate, single choice) instead of the large (future, many choices). This can be applied any time that incremental steps lead to an undesirable place: A person does not normally make a once-and-for-all decision to become an exercise junkie, a miser, a glutton, a profligate, or a gambler; rather, he slips into the pattern through a myriad of innocent, or almost innocent choices, each of which carries little weight. Indeed, he may be the last one to recognize "how far he has slipped," and may take corrective action only when prompted by others. (Herrnstein and Prelec, 1991, p. 149)

According to the behavioral economists, the list of examples where humans struggle to optimize is actually quite long. Evaluating probabilities (such as risk), choice over time, and misperception of reality are all areas being actively studied. It remains unclear whether the results being generated by behavioral economists are merely a series of peculiar puzzles that will extend the boundaries of economics or more serious anomalies that will one day bring down the paradigm of rationality and optimizing behavior that is the hallmark of modern, mainstream economics.

Exercises

If you did the Q&A problems and changed the parameters, set them back to the original values (2 and 0.4 for A and 8 and $-0.4$ for B).

1. With your observation included, copy and paste the chart titled Actual Trial Results in a Word document. Comment briefly on how you did.
2. What endgame correction could be implemented to increase the total number of choices? What is the true, exact maximum number of choices? Explain. Herrnstein and Prelec (1991), p. 142, point out that, "In fact, the subjects showed no evidence of having been influenced by the endgame contingency."
3. With columns Q:U in the Solution sheet, use Solver to find the optimal solution to the choice game. Notice how the choice variables have been constrained. How does Solver do? Explain.
4. Training someone to touch type does not guarantee continued touch typing in the workplace. How would melioration explain this result?

References

The epigraph is from a course available freely at ocw.mit.edu. The course description in the epigraph was from the Spring 2004 version of Behavioral Economics and Finance (see ocw.mit.edu/courses/economics/14-127-behavioral-economics-and-finance-spring-2004/). The readings for this course include introductory and more advanced work.

The repeated choice problem in this chapter is based on two papers: (1) Richard J. Herrnstein and Drazen Prelec, "Melioration: A Theory of Distributed Choice," The Journal of Economic Perspectives, Vol. 5, No. 3 (Summer, 1991), pp. 137–156, www.jstor.org/stable/1942800 and (2) Herrnstein and Prelec's "Melioration," pages 235–263 in Choice Over Time, edited by George Loewenstein and Jon Elster (1992).

Herrnstein, a psychologist, teamed up with Charles Murray, a political scientist, to write a controversial book titled The Bell Curve: Intelligence and Class Structure in American Life (1994). The book argued that nature (IQ) is more important than nurture (socioeconomic status) in explaining a wide range of outcomes.

Another paper specifically focused on melioration is Hansjörg Neth, Chris R. Sims, and Wayne D.
Gray, "Melioration Despite More Information: The Role of Feedback Frequency in Stable Suboptimal Performance," Proceedings of the Human Factors and Ergonomics Society 49th Annual Meeting, 2005, doi.org/10.1177/154193120504900330.

There are many books on behavioral economics and finance. A classic is from Nobel Prize winner Richard Thaler, The Winner's Curse: Paradoxes and Anomalies of Economic Life (1994). This is a good place to start learning about behavioral economics, and many other good reads are available.
This chapter is different. It does not have steps that you follow as you work in Excel. It does not have any exercise questions. There is an Excel file that you will open and work on, but it is entirely self-contained. Just open the file and start reading. Before you begin, however, consider a little of the science behind learning. Once we know how we learn, then we can optimize!

The Neuroscience of Learning

Suppose you want to improve your free throw shooting and you really care about this, so you decide to practice for one hour per day for two weeks. Most people think that standing at the free throw line and shooting free throws would be the best use of your time, but this is wrong. A much better use of your one hour per day is to shoot from all over the court: spend 10 minutes in one spot, then move to another spot, varying distance from say 10 to 20 feet (the free throw line is 15 feet from the basket). This is interleaved practice and it also works for learning and studying.

Interleaved practice is counter-intuitive and paradoxical. Many coaches refuse to believe it, but careful controlled experiments in a variety of applications reveal it is a fundamental principle (Brown, et al., 2014). It works for physical skills (don't throw 100 curve balls, interleave with other pitches), memorization (don't repeat one thing, interleave items), and higher learning: reflect on how this book has repeated concepts like elasticity in a variety of applications.

In addition to interleaving, below is a list of best-practice learning strategies that you can apply to every course you take:

1. Interleaved Practice (switching)
2. Spaced Practice (avoid cramming)
3. Elaboration (invent your own how and why questions)
4. Concrete Examples (the more specific, the better)
5. Dual Coding (words and visuals)
6. Retrieval Practice (repeatedly recall what you know)

Unbeknownst to you, this book has been using all of these strategies to help you learn. More information on these six science-based ways to learn more efficiently is easy to find online.

And one more thing that you believe about learning that is wrong: you think your ability to learn economics (or math or music) is preordained. Your brain either has a knack for economics or it does not and, if not, you cannot learn economics (or math or music). This is wrong. Neuroscience makes clear that your brain is plastic. It is moldable and flexible. You have already learned a great deal of economics, math, and Excel. Yes, some details are fuzzy and you have not mastered every single thing, but keep trying. As you see more examples and applications, it gets easier to grasp and your understanding deepens.

Rational Addiction

As you work on the Excel file, you will be reviewing concepts and getting more comfortable with Solver, charts, and Excel itself. This will reinforce basic material that you already know, but you will also be exposed to some new ideas as you continue to master the economic way of thinking. This application is controversial and generates passionate debate. Non-economists, especially, find it outrageous. After you finish, you can make up your own mind on what you think about it. Open RationalAddiction.xlsm to begin.
textbooks/socialsci/Economics/Intermediate_Microeconomics_with_Excel_(Barreto)/09%3A_Rational_Addiction.txt
The production function is the backbone of the Theory of the Firm. It describes the current state of technology and how input can be transformed into output. The production function can be displayed in a variety of ways, including product curves and isoquants. In every optimization problem faced by the firm, the production function is included.

Key Definitions and Assumptions

Inputs, also known as factors of production, are used to make output, sometimes called product. As shown in Figure 10.1, the firm is a highly abstract entity (a black box) that transforms inputs into output. The specific details of how the firm is organized and how it actually combines the inputs to make goods and services are ignored by the theory, hidden in the black box.

Inputs are often broken down into large categories, such as land, labor, raw materials, and capital. We will simplify even further by collapsing everything that is not labor into the capital category. Labor, L, is human toil and effort. It is measured in units of time, usually hours. Capital has a confusing history in economics. As a factor of production, capital, K, means things that produce other things, such as machinery, tools, or equipment. That is different from financial or venture capital, which is a fund of money. The title of Karl Marx’s famous book, Das Kapital, uses capital in the sense of wealth, denominated in money. The Theory of the Firm’s K is measured in numbers of machines.

Like labor, capital is rented. The firm does not own any of its machines or buildings. This is extremely unrealistic, but it allows us to avoid complicated issues involving depreciation, financing of machinery purchases (debt versus equity, for example), and so on. Another extreme simplifying assumption is that there is no time involved. Like the consumer maximizing utility subject to a budget constraint, the firm exists only for a nanosecond. It makes decisions about how much to produce to maximize profits with no worries about inventories or the trajectory of future sales. It produces the output in an instant. We avoid complications arising from the production of more than one good or service by assuming that the firm produces only one product. That makes revenues simply price times quantity sold of the one product.

Without going into detail again about unrealistic assumptions, it seems helpful to point out that we are not trying to build an accurate model of a real-world firm. Our primary goal is to derive a supply curve. We want to know how a firm responds to a change in price, ceteris paribus. By assuming away many real-world complications, we can model the firm’s maximization problem, solve it, and do comparative statics to get the supply curve.

Mathematical Representation

Just like the Theory of Consumer Behavior, which uses a utility function to model tastes and preferences, the Theory of the Firm uses a production function to capture the ability of firms to transform inputs into outputs. Unlike utility, production is objective and observable. We can count how much output is made from a given number of hours of labor and machines.

The production set describes all of the technologically feasible outputs from a given amount of inputs. The production function describes the maximum output possible from a given amount of inputs. Notice how the production function assumes the inputs are being used in the best way possible. The most abstract, general notation for a production function is $y = f(L, K)$. The $f()$ represents the technology available to the firm.
A specific, concrete example of a production function is the Cobb-Douglas functional form: $y=AL^\alpha K^\beta$. Let’s see what it looks like in Excel.

STEP Open the Excel workbook ProductionFunction.xls, read the Intro sheet, then go to the Technology sheet to see an example of the production function.

In Figure 10.2, the production set is the surface of the 3D object and everything inside; the production function is just the surface. The production function implicitly includes an already solved engineering optimization problem: it gives the maximum output from any given combination of inputs. In other words, we are assuming that the inputs are organized in their most productive configuration and nothing is wasted.

Notice that the Cobb-Douglas function on the Technology sheet has been set up so it can be controlled by a single parameter, $\alpha$ (alpha), by making the exponents $\alpha$ and ($1 - \alpha$). Use the scroll bar to change alpha and notice how the shape of the production function surface changes. Alpha is a parameter that takes values between zero and one.

STEP Click the button to return the sheet to its default, initial position.

Product Curves

In addition to the 3D view, the production function can be displayed in other ways. To graph the production function in two dimensions, we need to suppress an axis. If we keep output and suppress one of the input axes, we get a total product curve. If we suppress output and keep the two inputs, we get an isoquant. Product and output mean the same thing. The total product curve is the number of units of output produced as one input is varied, holding the other constant.

STEP Click the buttons to see the product curves for labor and capital.

In addition to the total product curves, there are average and marginal product curves. The average product is simply output per unit of input. Thus, the average product of labor is Y/L and the average product of capital is Y/K. The marginal product curves tell us the additional output that is produced as input is increased, holding the other input constant. Marginal product can be computed based on finite-size changes in an input or via the derivative. Via calculus, the marginal product is simply the derivative of the production function with respect to the input. For the Cobb-Douglas function in the Technology sheet, the marginal products are found by taking the partial derivatives with respect to L and K: $MP_L = \frac{\partial Y}{\partial L}=(1 - \alpha)AK^\alpha L^{(1-\alpha)-1}=(1 - \alpha)AK^\alpha L^{-\alpha}$ $MP_K = \frac{\partial Y}{\partial K}=\alpha AK^{\alpha -1}L^{1-\alpha}$

STEP Scroll down and click on cell C52 to see that the marginal product is computed via the change in output from an increase of 2 hours of labor, with $K=4$. This computes the marginal product of labor as the rise over the run from $L=0$ to $L=2$ on the total product curve.

STEP Click the button and then click on cell C58 to reveal the marginal product computed via the derivative. Since the total product is a curve, the slope of the tangent line at $L=2$ is not the same as the rise over the run from one point to another.

STEP Now look at the total, marginal, and average product curves.

Notice how the product curves are drawn based on a given amount of capital. If the amount of capital changes, then the product curves shift. Marginal and average product can be graphed together because they share a common y axis scale, output per unit of input.
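To see these calculations outside of Excel, here is a minimal Python sketch of the same arithmetic. The values A = 1 and $\alpha = 0.25$ are illustrative assumptions (the workbook's defaults may differ); K = 4 and the 2-hour change in L come from the steps above.

```python
# Sketch of the Technology sheet's product-curve arithmetic.
# A = 1 and alpha = 0.25 are assumed values; K = 4 and dL = 2 mirror the STEPs.
A, alpha, K = 1.0, 0.25, 4.0

def y(L):
    """Cobb-Douglas total product: Y = A * K^alpha * L^(1-alpha)."""
    return A * K**alpha * L**(1 - alpha)

def mpl_derivative(L):
    """Marginal product of labor via the partial derivative."""
    return (1 - alpha) * A * K**alpha * L**(-alpha)

dL = 2.0
mpl_chord = (y(dL) - y(0.0)) / dL      # rise over run, as in cell C52

print(f"MPL as rise over run (L=0 to 2): {mpl_chord:.3f}")
print(f"MPL as derivative at L=2:        {mpl_derivative(2.0):.3f}")
print(f"APL at L=2 (Y/L):                {y(2.0) / 2.0:.3f}")
```

The two MPL numbers differ because total product is concave: the chord from $L=0$ to $L=2$ is steeper than the tangent line at $L=2$, which is exactly the contrast between cells C52 and C58.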
The total product curve can never be graphed with the marginal and average product curves because the total product curve uses output as its y axis scale. The graphs demonstrate that when total product increases at a decreasing rate, marginal product is decreasing.

When total output increases at a decreasing rate as more input is applied, ceteris paribus, we are obeying the Law of Diminishing Returns. As long as alpha is between zero and one, our Cobb-Douglas production function exhibits diminishing returns. The Law of Diminishing Returns does not deny that there can be ranges of input use where output increases at an increasing rate. It says that, eventually, continued application of more input along with a fixed factor of production must lead to diminishing returns in the sense that output will increase, but not as fast as before. Thus, the Law of Diminishing Returns is simply a statement that marginal productivity must, eventually, be falling.

As with utility, the Cobb-Douglas functional form is convenient, but there are many, many other functional forms available.

STEP Proceed to the Polynomial sheet to see a different functional form.

The charts are strikingly different from before. Unlike the Cobb-Douglas functional form, which always shows diminishing returns, the polynomial production function exhibits all three phases of returns: increasing, diminishing, and negative returns. At low levels of labor use, output is increasing at an increasing rate, so the total product curve is curved upward and marginal product is increasing. In this range, output rockets upward, growing faster and faster. When the marginal product curve reaches its peak, the total product curve is at an inflection point. From here, additional labor leads to increases in output, but at a decreasing rate, leveling off as L increases. We say that diminishing returns have set in. The Polynomial sheet is color coded so it is easy to see where the total product curve changes character. Cells with yellow backgrounds signal the range of labor use where diminishing returns apply.

As more and more labor is used, total product reaches its maximum point (where marginal product is zero). Beyond this point, we are in a range of negative returns. This is a theoretical possibility, but not a practical one. No profit-maximizing firm would ever operate in this region because you can get the same amount of output with fewer workers.

It is worth remembering that the Law of Diminishing Returns does not say that we always have diminishing returns for every level of labor use. Instead, the law says that, eventually, diminishing returns will set in. It is also important to understand the difference between diminishing and negative returns. The former says output is rising, but slower and slower, while the latter says output is actually falling.

Notice the relationship between the marginal and average product curves. It is no coincidence that the marginal product curve intersects the average product curve at the maximum value of the average product. There is a guaranteed relationship between marginal and average curves: whenever the marginal is greater than the average, the average must be rising, and whenever the marginal is less than the average, the average must be falling. Thus, the only time the two curves meet is when the marginal and average are equal.

STEP Change the parameter for the b coefficient from 30 to 40.
Notice that the S shape becomes much more linear. The range of increasing returns is larger and we do not hit negative returns over the observed range of L from 0 to 25.

STEP Set the parameter for the b coefficient to 80.

Over the observed range of L from 0 to 25, we see only increasing returns.

STEP Change the $\Delta L$ parameter from 1 to 2.

This makes L go up by two and the range goes from 0 to 50. Diminishing returns do kick in; it just takes more labor for the Law of Diminishing Returns to be observed when the b coefficient is set to 80.

Diminishing versus Decreasing Returns

One extremely confusing thing about the Law of Diminishing Returns has to do with another concept called returns to scale. Unlike the Law of Diminishing Returns, which is based on applying more and more of a particular input while holding other inputs constant, returns to scale focuses on the effect on output of changing all of the inputs by the same proportion.

There is no law for returns to scale. A production process may exhibit increasing, decreasing, or constant returns to scale, across all values of input use. For example, the Cobb-Douglas function in the Technology sheet has constant returns to scale because if you double L and K, you are guaranteed to double output. You can see this is true by comparing the points 2,2 and 4,4 in the table in the Technology sheet.

A more complete demonstration uses a little algebra. We begin with the production function: $AK^\alpha L^{1-\alpha}$ Next, we double both L and K: $A(2K)^\alpha (2L)^{1-\alpha}$ We expand the terms with exponents: $A(2^\alpha) (K^\alpha) (2^{1-\alpha})(L^{1-\alpha})$ We collect the "2" terms: $A(2^{\alpha + (1-\alpha)}) (K^\alpha) (L^{1-\alpha})$ The exponents on 2 sum to one ($\alpha + (1-\alpha) = 1$), so we get: $2AK^\alpha L^{1-\alpha}$ Thus, we have shown that doubling the inputs from any input levels leads to doubling the output, and this is called constant returns to scale. If the exponents in the Cobb-Douglas function do not sum to 1, then the function does not exhibit this property. The Cobb-Douglas function in the Technology sheet obeys the Law of Diminishing Returns for each input (with $0 < \alpha <1$), yet it has constant returns to scale.

Do diminishing returns imply decreasing returns to scale? No, absolutely not. The two concepts are independent. They ask different questions. The Law of Diminishing Returns is about what happens to output when a single input is increased, ceteris paribus, and decreasing returns to scale says that output will less than double when all inputs are doubled.
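Before turning to isoquants, here is a quick numerical check of the constant-returns algebra above. This is a minimal Python sketch with assumed values A = 1 and $\alpha = 0.25$; any A > 0 and $0 < \alpha < 1$ gives the same result.

```python
# Numerical check of constant returns to scale for Y = A * K^alpha * L^(1-alpha).
# A = 1 and alpha = 0.25 are assumed, illustrative values.
A, alpha = 1.0, 0.25

def y(L, K):
    return A * K**alpha * L**(1 - alpha)

for L, K in [(2, 2), (4, 4), (10, 5)]:
    ratio = y(2 * L, 2 * K) / y(L, K)
    print(f"L={L}, K={K}: doubling both inputs multiplies output by {ratio:.4f}")
```

Every ratio is exactly 2 because the exponents sum to one. If you change the exponents so they sum to 1.2, the ratio becomes $2^{1.2} \approx 2.30$ (increasing returns to scale); a sum below one pushes the ratio under 2 (decreasing returns to scale).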
Isoquants

In addition to product curves, another way to represent the production function uses the isoquant. The prefix iso, meaning equal or the same (as in isosceles triangle), is combined with quant (referring to the quantity of output) to convey the idea that the isoquant displays the combinations of L and K that yield the same output.

STEP Return to the top of the Technology sheet and click the button (near cell H28) to see the isoquant map, as displayed in Figure 10.3.

An isoquant is simply a 2D, top down view of the 3D surface. Unlike the product curves, which give a view from the side, the isoquant shows L and K on the x and y axes, respectively, and suppresses output. Notice that Excel cannot correctly draw the isoquant map, putting garbled characters in the bottom left-hand corner of the chart and producing a squiggly, jagged display at the bottom.

You might be thinking that it looks a lot like an indifference map. There are definitely strong parallels between isoquants and indifference curves. Both are top-down views of a 3D object and, therefore, both are level curves or contour plots. Both are used to find and display the solution to an optimization problem. However, there is one critical difference: unlike an indifference curve, each isoquant is, in principle, directly observable, and the isoquants can be compared on a cardinal scale. With indifference curves, the utility function is a convenient fiction and the numerical values merely reflect rankings. No one cares that a particular indifference curve yields 28 utils of satisfaction. This is not the case for isoquants because the suppressed axis, output, is measurable. You can certainly say that one isoquant gives twice the output as another or that one isoquant gives 17 more units of output than another.

One way in which indifference curves and isoquants are the same is that we can compute the slope between two points or the instantaneous rate of change at a point on an isoquant. To avoid confusion with MRS, we call this slope the technical rate of substitution, TRS. With labor on the x axis and capital on the y axis, the TRS tells us how much capital we can save if one more unit of labor is used to produce the same level of output. From one point to another, the TRS can be computed as the rise over the run, $\frac{\Delta K}{\Delta L}$. At a point, we compute the TRS as minus the ratio of the marginal products of L and K from the production function: $TRS=-\frac{MP_L}{MP_K}=-\frac{\frac{\partial f(L,K)}{\partial L}}{\frac{\partial f(L,K)}{\partial K}}$

Whereas MRS is universally used for the slope of an indifference curve, MRTS (marginal rate of technical substitution) is sometimes used for the slope of the isoquant. MRTS and TRS are perfect synonyms. We will use TRS.

The TRS (like the MRS) is a number that expresses the substitutability of labor for capital at a point on an isoquant. So, the TRS of two different L and K combinations on the same isoquant might be $-100$ and $-2$. The TRS = $-100$ value says that the firm can replace 100 units of capital with 1 unit of labor and still produce the same output. The isoquant would be steep at this point. If a point has a TRS = $-2$, 1 unit of labor can replace 2 units of capital to get the same output. The isoquant at this point would be much flatter than the point with the TRS = $-100$. Just like the MRS, the TRS tells us how steep the isoquant is at a point. The steeper the isoquant, the more capital can be replaced by labor and still produce the same output.
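To make the TRS concrete, the following Python sketch evaluates it at two points on the same isoquant. The exponents (0.75 and 0.25) and the output level q = 100 are assumed purely for illustration.

```python
# TRS at a point for Y = L^alpha * K^beta (assumed alpha = 0.75, beta = 0.25),
# evaluated at two points on the q = 100 isoquant.
alpha, beta, q = 0.75, 0.25, 100.0

def k_on_isoquant(L):
    """Invert q = L^alpha * K^beta to find the K that keeps output at q."""
    return (q / L**alpha) ** (1 / beta)

def trs(L, K):
    mpl = alpha * L**(alpha - 1) * K**beta
    mpk = beta * L**alpha * K**(beta - 1)
    return -mpl / mpk  # reduces to -(alpha/beta)*(K/L) for Cobb-Douglas

for L in (50, 200):
    K = k_on_isoquant(L)
    print(f"L = {L:5.0f}, K = {K:7.2f}, TRS = {trs(L, K):8.3f}")
```

Rolling down the isoquant from $L=50$ to $L=200$, the TRS moves from about $-48$ to about $-0.19$: the isoquant starts steep (labor can replace a lot of capital) and flattens out, just as described above.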
Technological Progress

Over time, technology (our ability to transform inputs into output) improves. Electric power and computers are examples of technological progress that enables more output to be produced from the same input. There are two kinds of technological change. The Cobb-Douglas functional form can be used to illustrate each type.

Suppose increased education improves the productivity of labor. This would be modeled as an increase in the exponent for labor in the Cobb-Douglas production function. Small changes, say from 0.75 to 0.751, lead to large responses (e.g., in output or labor use) because we are working with an exponent. This is known as labor-augmenting technological change.

We could also have a situation where the coefficient A in the function $AK^\alpha L^\beta$ increased over time. As A rises, the same number of inputs can make more output. This technological progress is said to be neutral (in terms of the utilization of L and K) because TRS does not depend on A. We can show this by walking through the steps needed to find the TRS. First, we compute the marginal products of L and K from the function $Y=AL^\alpha K^\beta$: $MP_L = \frac{\partial Y}{\partial L}= \alpha A L^{\alpha-1}K^\beta$ $MP_K = \frac{\partial Y}{\partial K}=\beta AL^\alpha K^{\beta -1}$ The TRS is minus the ratio of the marginal products: $TRS=-\frac{MP_L}{MP_K}=-\frac{\alpha A L^{\alpha-1}K^\beta}{\beta AL^\alpha K^{\beta -1}}=-\frac{\alpha K}{\beta L}$ The A terms cancel out, which means that the ratio of the marginal productivities of each input depends only on each input’s exponent and the amount of the input used.

The Firm as a Production Function

The production function is the starting point for the Theory of the Firm. As with utility, many, many functional forms can be used to represent real-world production processes. Economists represent the production function not as a 3D object, but in two dimensions. We get product curves (total, marginal, and average product curves) by focusing on output as a function of a single input, holding all other inputs constant. An isoquant suppresses the output and shows the different combinations of L and K that produce a given level of output. The TRS is similar to the MRS, and it will play an important role in understanding the firm’s cost-minimizing input choice.

Remember to keep straight the difference between the Law of Diminishing Returns and the idea of returns to scale. The former applies more and more of a single input, holding all other inputs constant; the latter reports what happens to output when all inputs are changed by the same proportion. Those are two different things.

Exercises

1. Starting from a blank workbook, with K = 100, draw total, marginal, and average product curves for L = 1 to 100 by 1 for the Cobb-Douglas production function, $Q=L^\alpha K^\beta$, where $\alpha = 3/4$ and $\beta = 1/2$. Use the derivative to compute the marginal product of labor. Hint: Label cells in a row in columns A, B, C, and D as L, Q, MPL, and APL. For L, create a list of numbers from 1 to 100. For the other three columns, enter the appropriate formula and fill down. For MPL, do not use the change in Q divided by the change in L; instead enter a formula for the derivative for the MPL at a point.
2. For what range of L does the Cobb-Douglas function in question 1 exhibit the Law of Diminishing Returns? Put your answer in a text box in your workbook.
3. Determine whether this function has increasing, decreasing, or constant returns to scale. Use the workbook for computations and include your answer in a text box.
4. From your work in question 3 and the comment in the text that you cannot have constant returns to scale "if the exponents in the Cobb-Douglas function do not sum to 1," provide a rule to determine the returns to scale for a Cobb-Douglas functional form.
5. Is it possible for a production function to exhibit the Law of Diminishing Returns and increasing returns to scale at the same time? If so, give an example. Put your answer in a text box in your workbook.
6. Draw an isoquant for 50 units of output for the Cobb-Douglas function in question 1. Hint: Use algebra to find an equation that tells you the K needed to produce 50 units given L. Create a column for K that uses this equation based on L ranging from 20 to 40 by 1 and then create a chart of the L and K data.
7. Compute the TRS of the Cobb-Douglas function at L = 23, K = 312.5. Show your work on the spreadsheet.
textbooks/socialsci/Economics/Intermediate_Microeconomics_with_Excel_(Barreto)/10%3A_Production_Function.txt
Input cost minimization is one of the three optimization problems faced by the firm. It revolves around the question of choosing the best combination of inputs, L and K, to produce a given level of output, q. The best combination is defined as the cheapest one. The idea is that many combinations of L and K can produce a given q. We want to know the amounts of labor and capital that should be used to produce a given amount of output as cheaply as possible. Of course, we answer this question by setting up and solving an optimization problem; then we do comparative statics. Because there is a constraint (we must produce the given q), we will use the Lagrangean method.

Setting up the Problem

The economic approach organizes optimization problems by answering three questions:

1. What is the goal?
2. What are the choice variables?
3. What are the given variables?

The goal is to minimize total cost, TC, which is simply the sum of the amount paid to the workers, wL, and the amount spent on renting machines, rK. The endogenous variables are L and K. Labor is measured in hours and capital is the number of machines. The firm can decide to produce the given output by being labor intensive (using lots of labor and little capital), or using roughly equal amounts of both, or by renting a lot of machinery and using little labor.

The exogenous variables are the input prices: the wage rate (w) and the rental price of capital (r). The wage rate, or wage for short, is measured in $/hour and the rental price of capital in $/machine. We assume that the firm is a price taker in the markets for labor and capital so it can rent as much L and K as it wants at the given w and r. The amount to produce, q, is also an exogenous variable in this problem. We are not considering how much should be produced, but what is the best way to produce any given amount of output. Finally, the firm’s technology, the production function, $f(L, K)$, is also given.

Because the firm has to produce a given amount of output, we know this is a constrained optimization problem. Our work in the Theory of Consumer Behavior has made us expert at solving this kind of problem. As you will see, the analysis is similar, but there are some striking differences. One thing that does not change is our framework. We first explore the constraint to determine our options, then focus on the goal (to minimize TC), and, finally, we combine the two to find the initial optimal solution.

The Constraint

The menu of options available to the firm is given by the isoquant. It serves as the constraint because the firm is free to choose L and K on the condition that it must produce the assigned level of output. Mathematically, the equation for the constraint is simply the production function, $q = f(L, K)$.

STEP Open the Excel workbook InputCostMin.xls, read the Intro sheet, then go to the Isoquant sheet to see the isoquant displayed in Figure 11.1.

Just as the budget constraint in the Theory of Consumer Behavior gives us consumption possibilities, the isoquant gives the firm its feasible input options. All combinations below and left of the isoquant are ruled out. For example, there is no way to produce 100 units of output, holding quality and everything else constant, with the L,K combination of 100,20. The technology is simply not advanced or powerful enough to make 100 units of output with 100 hours of work and 20 machines. The points above and to the right of the isoquant are feasible, but they are clearly wasteful.
In other words, the firm could produce 100 units of output with an L,K combination of 250,50, but the isoquant tells the firm it does not need that much labor and capital to make 100 units. At 250,50, it could travel straight down to K = 10 and still produce q = 100, or travel straight left (on the horizontal line at K = 50) until it hit the isoquant and use a lot less labor. The firm could also travel in a diagonal, southwest direction until it hit the isoquant to economize on both inputs.

Points off the isoquant to the northeast (such as 250,50) are said to be technically inefficient. The inefficient part tells us that the firm is not minimizing its total cost at that point; technical describes the fact that the firm is not organizing its inputs so as to maximize output. In other words, the firm is not correctly solving the engineering optimization problem represented by the production function. Making 100 units of output with 250 hours of labor and 50 machines means that you are not getting the most out of your labor and capital. Economists call this situation technically inefficient.

Since the firm cannot choose a combination below the isoquant and it is wasteful to choose a combination above the isoquant, we know the answer has to lie on the isoquant.

STEP Use the scroll bar next to cell B11 to see the input mixes the firm might choose.

As you change cell B11, the cell below changes also. It has a formula that computes the amount of K needed to produce the required output when you choose a value for L. The idea is quite clear: the firm will roll around the isoquant in search of the best combination. Rolling is a good word choice and image to remember: the firm is free to choose a point high up or roll down to the bottom right. Because we do not have the input prices, we cannot find the optimal solution with the isoquant alone.

STEP Change the exogenous variables to see how the isoquant is affected.

Increases in A, c, and d pull the isoquant down. That makes sense given that these shocks are all productivity enhancing and the firm will need less L and K to make the given q = 100. Lowering q has the same effect, but this is not a productivity shock. You are simply telling the firm it does not have to produce as much as before, so it makes sense that it can use less labor and capital.

Notice how the constraint for this input cost minimization problem is a curve, not a line like it was for the utility maximization problem. Mathematically, that does not matter much, but it will impact the graph we draw to show the initial solution.

Goal

With the constraint in hand, we are ready to model the goal. In this problem, the goal is represented by a series of isocost (equal cost) lines. Total cost is $TC = wL+ rK$. If we solve this equation for K (in order to graph it in L-K space), we get the equation of a line: $TC = wL + rK$ $rK = TC - wL$ $K = \frac{TC}{r} - \frac{w}{r}L$ The K (or y axis) intercept is $\frac{TC}{r}$ and the slope is $- \frac{w}{r}$.

Isocosts are a little tricky at first because you are used to seeing a linear constraint and a set of indifference curves. Input cost minimization has a curved constraint and a set of linear isocosts. In the equation of the line above, TC can take on any value. Thus, there is an isocost for TC = $500 and another for TC = $500.01 and an isocost for every single dollar amount. Every L,K point is on an isocost, and the L,K points that have the same TC are on the same isocost.
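Both building blocks are easy to replicate in a few lines of Python. The sketch below assumes the Cobb-Douglas form $q = AL^cK^d$ with A = 1, c = 0.75, d = 0.2 (the values of the OptimalChoice problem solved later in this section) and input prices w = 2 and r = 3; the Isoquant sheet's own defaults may differ.

```python
# Sketch of the constraint (isoquant) and goal (isocost), assuming
# q = A * L^c * K^d with A = 1, c = 0.75, d = 0.2, and w = 2, r = 3.
A, c, d = 1.0, 0.75, 0.2
q = 100.0        # required output
w, r = 2.0, 3.0  # input prices

def k_needed(L):
    """The constraint: K on the isoquant, given L (the idea behind cell B11)."""
    return (q / (A * L**c)) ** (1 / d)

def isocost_k(L, TC):
    """The goal: K on the TC isocost line, K = TC/r - (w/r)*L."""
    return TC / r - (w / r) * L

L = 190.0
K = k_needed(L)
print(f"Making q = {q:.0f} with L = {L:.0f} requires K = {K:.2f}")
print(f"Total cost of that input mix: ${w * L + r * K:.2f}")
print(f"Slope of every isocost line: {-w / r:.3f}")
```

Rolling around the isoquant amounts to trying different L values in k_needed and watching the implied total cost; below, Solver does that search for us.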
STEP Proceed to the Isocost sheet to see how the isocost lines are used to find the optimal solution.

Each point on a particular isocost line has the exact same total cost. So, the point 190,40 on Figure 11.2 (and on your screen) has a cost of $500 (since 2 x 190 + 3 x 40 = 500).

STEP Click the button to see how the firm’s cost minimization goal is represented on this graph.

The firm can move to a new point by choosing a different combination of L and K. If the new point has the same TC of $500 as the initial point, then it will be on the same $500 isocost.

STEP Increase L by 30 and decrease K by 20 so you will be at another point on the same isocost line of $500.

Now you know that all points on the TC = $500 isocost line share the same total cost of $500. It is also obvious that the slope of each isocost line is $- \frac{2}{3}$ since $w=2$ and $r=3$. Because the firm can choose the input mix, it can choose any combination of L and K, provided that the chosen combination can produce the given amount of output. The firm wants to hire as few inputs as it can (to save on costs), but it has to meet the production target. How can it solve this problem?

The Initial Optimal Solution

We have the constraint (the isoquant) and the goal (get to the lowest isocost possible), so now we combine the two to find the optimal solution.

STEP Proceed to the OptimalChoice sheet.

The starting position shows an L,K combination that costs $482.81. You can confirm this number both in cell B7 and on the chart (the middle label for the middle line). The idea is to be on the lowest isocost line (i.e., the one with the smallest intercept) that is just touching the isoquant because that means the firm will be minimizing the total cost of producing the given level of output.

Clearly, the starting position is not optimal. You can see that the isocost is intersecting the isoquant. This information is also revealed by the slope and TRS information below the chart. The TRS, which is the slope of the isoquant at a point, is greater (in absolute value) than the slope of the isocost line at that point.

At the opening position, the firm is said to suffer from allocative inefficiency because it is on the isoquant, but it fails to choose the cost-minimizing input mix. Because it is on the isoquant, we know it is not technically inefficient: it is using the opening combination of L and K to get the maximum output. The problem is that it is using the wrong combination of inputs in the sense that there is a cheaper way to produce the given output.

We know there are two ways to solve optimization problems: analytically and numerically. Because we have Excel and the problem implemented on the sheet, we begin with the numerical approach.

STEP Run Solver.

The optimal solution is depicted by the canonical graph displayed in Figure 11.3. Solver’s answer, which is correct, has the firm choose an L,K combination whose isocost just touches the isoquant. There is no cheaper combination that can produce 100 units with the existing technology (given by the production function). If the firm went to an isocost that was one cent lower, it could not rent enough L and K to make 100 units of output.

We can confirm Solver’s result by applying the Lagrangean method to solve this constrained optimization problem. We start by writing down the problem, using the parameter values from the OptimalChoice sheet: $\min\limits_{L,K}TC=2L+3K \quad \textrm{s.t. } 100 = L^{0.75}K^{0.2}$
The first step is to rewrite the constraint so that it is equal to zero: $100 - L^{0.75}K^{0.2}=0$

The second step is to form the Lagrangean by adding lambda, $\lambda$, times the rewritten constraint to the original objective function. We use an extra-large L for the Lagrangean function that is not at all related to the L for labor. $\min\limits_{L,K, \lambda}{\large\textit{L}}=2L+3K + \lambda (100- L^{0.75}K^{0.2})$

The third step to finding the optimal solution is to take the derivative of the Lagrangean with respect to each endogenous variable and set each derivative to zero (giving us the first-order conditions): $\frac{\partial {\large\textit{L}}}{\partial L}=2 - 0.75\lambda L^{-0.25}K^{0.2}=0$ $\frac{\partial {\large\textit{L}}}{\partial K}=3 - 0.2\lambda L^{0.75}K^{-0.8}=0$ $\frac{\partial {\large\textit{L}}}{\partial \lambda}=100- L^{0.75}K^{0.2}=0$

The fourth, and last, step is to solve this system of equations for $L\mbox{*}$, $K\mbox{*}$, and $\lambda \mbox{*}$. The system of three equations contains the answer, that is, the values of L and K that minimize TC. Our task is to use the equations to find these values that satisfy the three equations. There are many ways to solve the system, but we will use the same approach that we used in the Theory of Consumer Behavior. We will reduce the system from 3 to 2 to 1 equation and unknown.

We move the terms with lambda in the first two equations to the right-hand side and then divide the first equation by the second. The Cobb-Douglas production function is easy to work with because, when you apply the $\frac{x^a}{x^b}=x^{a-b}$ rule, the exponents on L and K reduce to $-1$ and $1$, respectively: $\frac{2}{3}=\frac{0.75\lambda L^{-0.25}K^{0.2}}{0.2\lambda L^{0.75}K^{-0.8}}=\frac{3.75K}{L}$ This strategy cancels the lambdas and gives an expression for $L = f (K)$, which, in conjunction with the third first-order condition, reduces the system to two equations with two unknowns: $L=5.625K$ and $100 = L^{0.75}K^{0.2}$

We substitute the expression for L into the constraint (the third first-order condition) and solve for $K\mbox{*}$: $100 = (5.625K)^{0.75}K^{0.2} \Rightarrow K^{0.95}=\frac{100}{5.625^{0.75}} \Rightarrow K\mbox{*} = 32.588$ Then, substituting $K\mbox{*}$ back into the expression for $L = f (K)$, we get $L\mbox{*}$: $L=5.625K=5.625[32.588] \Rightarrow L\mbox{*}=183.31$

Substituting $L\mbox{*}$ and $K\mbox{*}$ into the original objective function, we can compute the minimum cost of producing 100 units: $TC = 2L + 3K = 2[183.31]+3[32.588] \Rightarrow TC\mbox{*}=\$464.38$

The analytical solution agrees with Solver’s answer.

The work we did in dividing the first equation by the second yields an equimarginal condition that is similar to the MRS = $\frac{p_1}{p_2}$ rule from constrained utility maximization. At the optimal solution, we have $\frac{2}{3}=\frac{3.75K}{L}$ The left-hand side is the input price ratio and the right-hand side is the TRS. Thus, at the optimal solution the input price ratio must equal the TRS. This is a mathematical statement of the tangency we see in Figure 11.3.

If this equimarginal condition is not met, but the firm is on the isoquant (i.e., it is technically efficient), then we have allocative inefficiency. If $|TRS| > \frac{w}{r}$, then the isocost is cutting the isoquant and the firm can lower total costs by rolling down the isoquant. The reverse, of course, applies if $|TRS| < \frac{w}{r}$.

STEP If you have not done so already, double-click inside the box around cell J25 and use the scroll bar to show how the isocost and isoquant graph matches up with the TRS = $\frac{w}{r}$ equimarginal condition.
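As a cross-check on both Solver and the Lagrangean algebra, the same problem can be handed to a numerical optimizer in Python. This is a minimal sketch using scipy's minimize (with an equality constraint, it applies SLSQP); the workbook itself, of course, uses Excel's Solver.

```python
# Minimize TC = 2L + 3K subject to 100 = L^0.75 * K^0.2,
# replicating what Solver does on the OptimalChoice sheet.
from scipy.optimize import minimize

def total_cost(x):
    L, K = x
    return 2 * L + 3 * K

constraint = {"type": "eq", "fun": lambda x: 100 - x[0]**0.75 * x[1]**0.2}

result = minimize(total_cost, x0=[100.0, 100.0],
                  bounds=[(1e-6, None), (1e-6, None)],
                  constraints=[constraint])
L_star, K_star = result.x
print(f"L* = {L_star:.2f}, K* = {K_star:.2f}, TC* = ${result.fun:.2f}")
# Expected: L* = 183.31, K* = 32.59, TC* = $464.38
```

Like Solver, this is a numerical method, so it returns a very close approximation to the exact answer rather than an algebraic expression such as $L = 5.625K$.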
Comparing Consumer and Firm

Figure 11.3 bears a striking resemblance to the canonical graph used in the Theory of Consumer Behavior, and the analytical work also contains strong similarities, but there are some critical differences between the consumer and firm optimization problems. Figure 11.4 presents a side-by-side comparison to highlight the contrasts between them. It makes sense to use the knowledge and skills learned from the Theory of Consumer Behavior, but do not fall into a false sense of security. The input cost minimization problem has its own characteristics and terminology.

Cost Minimization is One of Three Problems

The Theory of the Firm is actually a set of three interrelated optimization problems. The initial solution to the firm’s cost minimization problem focuses attention on the cheapest combination of inputs to produce a given level of output. We can apply the same techniques we used to solve the consumer’s utility maximization problem. The canonical graph is similar to the standard graph from the Theory of Consumer Behavior, but as Figure 11.4 shows, there are substantial differences between utility maximization and cost minimization.

One important similarity is the continued use of the comparison of a price ratio to the slope of a curve to determine whether the optimal solution has been found. In the case of the constrained cost minimization problem, the firm will choose that combination of inputs where TRS = $\frac{w}{r}$. If this condition is not met, the direction of the inequality (> or <) tells us which way the firm should move to find the minimum total cost.

Now that we understand the firm’s cost minimization problem and have found the initial solution, we are prepared to take the next step: comparative statics analysis. The economic approach is unrelenting and monotonous. We apply the same framework to every problem. Through practice and repetition, you will learn to think like an economist.

Exercises

1. The Q&A sheet asks you to change r to 30 and use Solver to find the initial solution. Find the initial solution to this same problem via analytical methods and compare the two results. Are they the same? Show your work.
2. The fixed proportions production function, $q = \min\{\alpha L, \beta K\}$, is analogous to the perfect complements utility functional form. Suppose $\alpha = \beta = 1$, w = 10, r = 50, and q = 100. Find $L\mbox{*}$, $K\mbox{*}$, and $TC\mbox{*}$. Show your work. Use Word’s Drawing Tools to draw a graph of the optimal solution.
3. Given the quasilinear production function, $q = \sqrt{L}+K$, and input prices r = 2 and w = 5, find the cheapest way to produce 1000 units of output. Use analytical methods and show your work.
4. Set up the problem in question 3 in Excel and use Solver to find the optimal solution. Take a screen shot of the solution on your spreadsheet and paste it into a Word document.
5. Can isoquants intersect? Explain why or why not.
textbooks/socialsci/Economics/Intermediate_Microeconomics_with_Excel_(Barreto)/11%3A_Input_Cost_Minimization/11.01%3A_Initial_Solution.txt
This chapter departs from the usual presentation style employed in this book. There is no Excel workbook associated with this application. Instead, you will be given the opportunity to answer questions and the answers are provided at the end of the chapter. Each question is highlighted by the usual Step marker. Try to work out each question on your own before looking at the answers.

There are four goals:

1. To understand cost minimization with isoquants and isocosts.
2. To provide an example of how theory can be applied to real-world problems.
3. To illustrate how economics can help us understand what we observe.
4. To see that economics has wide and varied application.

The inspiration and source of this application of cost minimization is Edward Ames and Nathan Rosenberg, “The Enfield Arsenal in Theory and History,” The Economic Journal (Vol. 78, No. 312, December, 1968), pp. 827–842, www.jstor.org/stable/2229180. Ames and Rosenberg were economic historians, and that immediately leads to a puzzler: how are economic historians different from regular historians? The answer has to do with the economic approach. Once trained as an economist, the methods and way of thinking can be applied to events and outcomes from the past. This is what Ames and Rosenberg did with the Enfield Arsenal. But before we get to that, we need to understand what rifling is all about.

Rifling

Rifles are a relatively recent innovation in firearms. Figure 11.5 shows an early version of the famous Enfield rifle with labels for the three main parts: the lock, stock, and barrel. It is the barrel that distinguishes rifles from smooth-bore muskets. The barrel of a rifle has a striated pattern that spins the bullet, increasing velocity and accuracy compared with a ball from a musket.

STEP Watch this short video on rifling from The Story of the Gun: vimeo.com/25200729.

But the Enfield rifle was important not because it rifled, but because of how it was made.

The American System of Manufacturing

Ames and Rosenberg (p. 827) explain what the Enfield Arsenal was in the introduction to their paper:

This paper analyses a particular historical event, the establishment of the Enfield Arsenal, in the context of the literature cited. The British Government committed itself to the construction of the Enfield Arsenal in 1854 because it wished to be able to make large numbers of rifles for an impending war with Russia (now known as the Crimean War). The event is important because it marked the beginning of the movement of mass-production techniques from the United States to Europe. Technical changes in gunmaking in the nineteenth century were a major source of new machine techniques; and industrialisation in the nineteenth century is overwhelmingly the history of the spread of machine making and machine using.

So an arsenal is an armory, a warehouse of guns and ammunition. Enfield is a place in England, and the Enfield Arsenal is literally a building constructed by the British government in 1854 that would be used to store rifles made with mass-production techniques. The Enfield Arsenal was special because it was the first time the British would use mass-production techniques to make weapons.

Up to this point, the British had made guns the old-fashioned way: by hand, in the small shops of thousands of skilled artisans in the area around Birmingham. The stock was carefully carved by an experienced craftsman who fitted the stock with the lock and barrel. It was like a tailor making a bespoke suit: each rifle was one of a kind. A work of art.
Ames and Rosenberg (p. 832, footnotes omitted) point out that making the stock by hand was especially slow and expensive to do:

The gunstock was one of the most serious bottlenecks in firearms production. In England, at the time of the Parliamentary hearings, out of about 7,300 workmen in the Birmingham gun trade, the number of men employed in making gunstocks totalled perhaps as many as 2,000. Its highly irregular shape for long seemed to defy mechanical assistance, and the hand-shaping of the stock was a very tedious operation. Furthermore, the fitting and recessing of the stock so that it would properly accommodate the lock and barrel were extremely time-consuming processes, the proper performance of which required considerable experience. With Birmingham methods, it required 75 men to produce 100 stocks per day. Using the early (1818) version of the Blanchard lathe, 17 men could produce 100 stocks per day.

This quotation requires some explanation. First, the reason for the Parliamentary hearings was that British politicians were angry with the Birmingham gunsmiths for not adopting fast, efficient mass-production techniques. There was an investigation and testimony was given. How could upstart Americans have better technology than the British, a nation that dominated the entire globe? It was a national embarrassment!

Second, the quotation mentions the Blanchard lathe. This is a machine that cuts and shapes wood (and other materials like metal), but it is easier to understand if you see it.

STEP Watch this video, vimeo.com/25200825, to understand how a lathe works and how the production of precision parts makes Diderot’s dream come true.

The video explained that the new country of the United States of America needed weapons, so the Springfield Armory was built in 1794 in Springfield, Massachusetts. At first, stocks were made by hand, just like in Birmingham. They were then individually fitted to each rifle. But in 1818, the Blanchard lathe burst on the scene. The narrator, echoing the British Parliamentary hearings, says, "Prior to the Blanchard Lathe, it took one to two days to make a rifle stock by hand. Now, a twelve-year-old boy could turn out a dozen stocks in a single day."

The Blanchard lathe enabled a reorganization of the production process. In factories in the northeast, the United States began to use mass-production techniques to make rifles and pistols (and then bicycles, sewing machines, typewriters, and so on). This is the American system of manufacturing. A key element is that a machine can make a precision part, so many almost identical parts can be made and then the product is assembled. The video points out that the history of gun-making is closely tied to the rise of mass-production techniques and precision manufacturing.

In the video, William Ruger cites an idea from French philosopher Denis Diderot (1713–1784). Ruger says Diderot’s theory at that time was that "It would be possible to make all of the individual parts alike and then at the last minute assemble them, rather than fitting them together as you went, which was the customary thing up to that time."

Adam Smith (1723–1790) was a contemporary of Diderot. For Smith, the division of labor explained the explosion in productivity that he saw all around him as the Industrial Revolution began. Breaking production into a series of steps and then assembling the parts enables many more units of output to be produced. This is called the division of labor.
Smith emphasized several reasons for the greater productivity enabled by the specialization of labor:

1. Practice makes perfect: focusing on a single task makes you very good at it.
2. Saves time: no need to set things up when you move to a new task.
3. Innovation: adjustments are made by workers who are expert in a particular task.

Machines such as the Blanchard lathe feed into the division of labor by enabling much finer specialization. For rifles, production with a lathe meant that they were no longer one of a kind. They were all alike and could be easily connected to the lock and barrel to make a rifle. By applying Diderot’s theory of assembling perfectly fitted parts and Smith’s division of labor, the Springfield Armory was able to enjoy a huge increase in productivity compared with Birmingham methods.

So now you know exactly what a lathe is and how mass production played a key role in the exponential increases in productivity during the Industrial Revolution, but there is one more important advantage to mass production. Let’s see if you can figure it out.

STEP What are the tremendous advantages of interchangeable parts in a rifle (or anything else for that matter) for the end user?

The answer is at the end of this section, but take a few minutes to think about the question. What advantage would soldiers using rifles that were all alike have over enemies using individually made rifles?

Two Big Questions

The key date in this story is 1854. Until this time, the British used Birmingham methods, which means an experienced craftsman made each entire gun by hand. They shaped the stock, then attached it to the lock and the barrel. Each part was slightly different and could not be easily replaced if damaged. Beginning in 1854, rifles produced for the Enfield Arsenal, however, were made with interchangeable parts (including stocks made on lathes) that could be put together in an assembly line. Once in use, broken parts could be removed and new ones snapped on.

Ames and Rosenberg (pp. 839–840, footnotes omitted) sum up the situation:

As of 1785, neither the British nor the Americans could make guns with interchangeable parts. As of 1815, Americans could make guns with interchangeable metal parts, but could not make interchangeable gunstocks. As of 1820, they could make interchangeable gunstocks. At any date, presumably, they could use not only current methods but earlier methods which these had displaced.

The United States had been mass-producing guns with interchangeable parts since 1815. The British waited until 1854 to use the superior, mass-production techniques. This gives rise to two big questions:

1. Why did the British wait so long to use mass-production techniques to make rifles with interchangeable parts?
2. Why did the British switch to mass-production techniques in 1854?

1. Why Did the British Wait so Long?

A possible answer to the puzzle of why British gunsmiths did not adopt the new technology is lack of knowledge: perhaps the British did not know about the Blanchard lathe, and that is why they did not use it.

STEP Is lack of knowledge about American technology a good answer? Why or why not?

Another possible answer is poor management. Maybe British rifle manufacturers were lazy, stupid, and careless? The right answer (adopt mass-production techniques) was staring them in the face and they ignored it.

STEP Is managerial failure a good answer? Why or why not?

Economic historians give a third answer to why the British did not adopt the Blanchard lathe. They use the economic way of thinking.
They look for differences in the environment that would lead to different optimal solutions. In other words, Ames and Rosenberg stop searching for why the British made a mistake and accept the fact that their refusal to adopt mass-production techniques was actually smart and correct. They look for reasons that justify the British decision to reject the Blanchard lathe.

This is crazy, right? It is obvious that mass production is better. Well, it turns out that there are two critical differences between the United States and Britain in the first half of the 19th century that play an important role in deciding how to make rifles.

First, the two countries had quite different labor forces. The British had a cohort of skilled rifle craftsmen and the United States did not. As the Parliamentary hearings noted, there were several thousand skilled craftsmen in Birmingham making stocks and rifles. The United States was a young country with mostly unskilled, male workers. Few skilled craftsmen would emigrate to the United States since they had good, high-paying jobs at home. These supply and demand differences meant that, in the United States, wages for skilled craftsmen were much higher than in Britain, and wages for unskilled labor were lower.

Second, wood was plentiful and cheap in the United States, but it was much more expensive in Britain. Ames and Rosenberg offer the following footnote (p. 831) to help explain why wood plays a critical role:

Report of the Small Arms Committee, op. cit., Q. 7273-81 and Q. 7520-7521; G. L. Molesworth, "On the Conversion of Wood by Machinery," Proceedings of the Institution of Civil Engineers, Vol. XVII, pp. 22, 45-6. In the discussion which followed Mr. Moleworth’s paper Mr. Worssam, a prominent English dealer in woodworking machinery, made some interesting comparative observations which were summarised as follows: "He had seen American machines in operation, and he found that, although they might be adapted for the description of work required in that country, they were not so suitable for English work, in which latter high finish and economy of material were of greatest importance. In America the saws were much thicker than those used in the English saw-mills, so that they consumed more power, wasted more material, and did not cut so clean, or so true, though there was less care required in working them" (ibid., pp. 45-6).

A key point in this long quotation is that American saws (and, of course, lathes) "wasted more material." A British skilled craftsman making stocks from lumber would be careful to "economize" on the material. In America, a 12-year-old boy working with a lathe (a dangerous job) would not care at all about wasting wood. The different endowments of wood in the two countries meant that the Blanchard lathe was much more expensive to operate in Britain than in America.

Now that we know how the United States and Britain differed with respect to (1) wages for skilled and unskilled labor and (2) operating costs for the Blanchard lathe, we are ready to make the case for the economic explanation for why the British waited so long to adopt mass-production techniques. As is typical in economics, the exposition will rely on graphs. But instead of just reading the explanation, you will try to do it yourself first. The idea is to apply the input cost minimization problem to this scenario. You can, of course, simply jump to the end of the section to see the answers, but you will learn much more if you try to do it yourself first.
Follow the instructions and hints offered below and see how close you get. Make sure you understand where you made a mistake or in what ways you were confused.

STEP Draw graphs that show how the different resource endowments and input prices affected the optimal input mix. Use the detailed instructions that follow as a guide. How do the graphs explain why the British waited so long to adopt mass-production techniques?

We will use two sets of two graphs. The first set of two graphs will be for the labor force difference between the United States and Britain. The second set shows the effect of the different endowments of wood.

Begin by drawing a graph representing the British situation in 1820 with respect to using skilled and unskilled labor to make, for example, an order of 1,000 rifles. It should have skilled labor on the y axis and unskilled labor on the x axis. Draw in an isoquant (representing the combinations of skilled and unskilled labor that would make the requested 1,000 rifles). Draw another graph, next to the first one, that is exactly the same. Your second graph represents the United States’ options for making 1,000 rifles in 1820. The fact that both isoquants are the same means that the two countries had access to the same technology and are making the same product.

Next, you need to draw the isocost lines. This is where the difference in labor force comes into play. We know the British have skilled labor and the United States does not: immigrants to the United States were not typically experienced, educated workers, but young, unskilled males. That means the price of skilled labor is much higher in the United States. How is that reflected in the isocosts for your two graphs?

The second set of graphs uses L and K as the inputs. As before, draw a pair of graphs side by side, one for the British and the other for the United States, with machinery on the y axis and labor on the x axis. Include the isoquants. Once again, the isoquants are the same, meaning that the British were aware of and could have used American methods. The key to the economic explanation for why the British did not do what the Americans were doing lies in the isocosts. Remember that early versions of the Blanchard lathe used a lot of wood, and this increases the price of machinery. If r is much higher in Britain than in the United States, how does this affect the isocosts?

Take a moment to look at your two sets of graphs. How can they be used to explain why the British rejected mass production before 1854? Proceed to the end of this section to check your graphs and answers.

2. Why Did the British Switch in 1854?

The second big question revolves around the British decision to switch in 1854 and mass produce the Enfield rifle. Why did they do this? Why did they abandon their decades-old system of production centered in Birmingham, with a network of many small artisans and smiths that made firearms to individual order or in small batches?

Our first possible answer matches up with the lack of knowledge answer to the first big question. Maybe, in 1854, the British heard that mass-production techniques utilizing the Blanchard lathe were available and immediately moved to adopt the new production methods?

STEP Is sudden awareness of new American technology a good answer? Why or why not?

The second possible answer, like before, relies on management. Maybe they wised up? What if British firearms manufacturers recovered from their slumber and moved quickly to modernize their industry?
STEP Is managerial improvement a good answer for the switch? Why or why not?

You probably got the first two right, but the third one is harder. It might be easy in general terms, but getting the details can be complicated. The third answer is based on economic reasoning. This means that when we see changes in behavior, we look for changes in the environment. We do not search for events or causes that changed a mistake into the right answer. Instead, we accept that the answer to not use mass production was correct for, say, 1830, but the new optimal solution, in 1854, was to switch to the American system.

This is a key aspect of the economic approach, and it can be challenging to grasp. Our instinct when we see something change is to think of correction or improvement. Economists do not think this way. We see optimization everywhere, so if something changes, it was optimal before and it has moved to a new optimal solution because of an exogenous shock. The search is on for shocks that switch the correct answer from “reject” to “accept” interchangeable parts.

There are two ways in which Britain before 1854 differed from Britain after 1854, and these two ways impacted wages and the operating cost of machinery. These changes act as shocks on the input cost minimization problem and produce a new optimal solution. We first have to figure out the shocks, then we can see how they affect the optimal solution.

STEP Answer these two questions:

1. What happened to the British labor force?
2. What happened to the Blanchard lathe?

You may not be an expert on British labor in the 19th century or know anything about the Blanchard lathe, but you can think about what might have happened. Try to come up with a hypothesis. Think of recent changes in the labor force that you have heard about, especially those driven by technology (e.g., driverless cars and trucks). Think about how machines, computers, and technology in general have changed over time.

After checking your answers at the end of this section, you will know what happened and be ready to draw graphs that illustrate the economic historian’s explanation for the British switch in production technique.

STEP Draw graphs that show how the changes mentioned affected the optimal input mix. How do the graphs explain why the British switched to mass-production techniques in 1854?

Draw two pairs of graphs just like before (unskilled and skilled labor on one and machinery and labor on the other), but this time we are comparing environments before and after 1854 in Britain (the United States has nothing to do with this). First, compare the optimal mix of unskilled and skilled labor for Britain in 1820 versus 1854. Remember that the skilled craftsmen died and were not replaced, so the skilled wage rate rose. How does this make the 1854 graph different from the 1820 graph? In the second set of two graphs, with machinery and labor on the axes, we know that machinery got better and better (wasting less and less wood) over time, so r fell. What will this shock do to the isocost lines?

Check out the suggested answers at the end when you are finished. Take the time to debug any mistakes. Make sure you understand how the isocost lines shift and how the comparison of two graphs yields answers to the questions.

Evaluating this Application

At the beginning, we had four goals:

1. To understand cost minimization with isoquants and isocosts.
2. To provide an example of how theory can be applied to real-world problems.
To illustrate how economics can help us understand what we observe. 4. To see that economics has wide and varied application. You decide to what extent the goals were met. At the very least, you learned a little about American manufacturing in the 19th century and rifles (including where the phrase “lock, stock, and barrel” comes from). The application should help you understand the conventional isoquant–isocost graph and the firm’s input cost minimization problem. Remember that the higher the price of the input on the x axis or the lower the price of the input on the y axis, the steeper the isocosts. But the real deep learning and big picture idea concerns how economists view the world. This is called economic reasoning or the economic approach. We did "an economic analysis of the Enfield Arsenal." The idea is that economics is not a discipline organized around content (the stock market or money, for example), but a way of thinking. Economists often interpret observed behaviors as optimal solutions to optimization problems and they see change as driven by a shock that takes us from one optimal solution to another. Thinking like an economist is difficult and sometimes counter-intuitive, but it can provide an interesting perspective on the world. Certainly, Ames and Rosenberg gave us a novel view of the issues surrounding the Enfield Arsenal. Exercises 1. Explain why the endowment of wood affects the price of machinery used in producing rifles in the 19th century. 2. What could have caused the British to switch to mass-production techniques before 1854? Give a concrete example. 3. If the British had used the Blanchard lathe in 1820, then that would have been allocatively inefficient. Draw a graph that shows this and explain what it means. 4. Ames and Rosenberg (p. 836) include additional differences between America and Britain, such as the fact that the British consumer liked fancier gunstocks: American machine processes could not produce guns of the kind favoured by English civilians. The Blanchard lathe produced stocks of a standard size, whereas English buyers did not want standard gunstocks. The English methods were suited to catering to the idiosyncratic needs of individual users. How would this information change the comparison of the isoquant–isocost graph in the two countries? Appendix: Suggested Answers STEP What are the tremendous advantages of interchangeable parts in a rifle (or anything else for that matter) for the end user? Fixing broken rifles! You can quickly repair a mass-produced rifle if one of its pieces (lock, stock, or barrel) breaks. A rifle built by hand is useless once one of its individual parts fails. You would need a skilled craftsman to fix it. On a battlefield, you could cobble together parts from different broken units to create operating weapons. And anyone could do this; they would not have to be skilled craftsmen. In general, with precision parts, if the product breaks, you can buy a replacement part to repair the product. With bespoke items, you need an expert to adjust and refit parts to repair it. STEP Is lack of knowledge about American technology a good answer? Why or why not? This is a ridiculous explanation. Granted, there is an ocean, but given the common language and communication, this answer makes no sense. In fact, there is lots of evidence that the British knew all about the American methods. They simply chose not to use them. STEP Is managerial failure a good answer? Why or why not? Like lack of knowledge, this is not a very satisfying answer.
There is no reason to believe these specific people were especially poor managers. Economists are wary of this type of answer. Self-interested agents who respond to incentives are unlikely to make bad decisions, especially with great sums of money and lives at stake. There is a subtle point to be made here that separates economists and non-economists. The latter are much more likely to accept mistake and stupidity as explanations for an observed decision or behavior that turned out badly. Economists tend to stick with rational, optimizing agents and explain bad choices as the result of a lack of information or differing objectives. STEP Draw graphs that show how the different resource endowments and input prices affected the optimal input mix. Use the detailed instructions that follow as a guide. How do the graphs explain why the British waited so long to adopt mass-production techniques? The isoquant is exactly the same in each graph in Figure 11.6. US skilled labor wages were very high because there were few experienced craftsmen migrating to the United States, but lots of young, unskilled workers. The slope of each isocost is the input price ratio, $-\frac{w_{Unskilled}}{w_{Skilled}}$. Thus, the US isocost lines are flatter than Britain’s. This leads to a different cost-minimizing input mix. The price of machinery includes the cost of wood used, just as a car’s operating cost includes the cost of gasoline. The early versions of the Blanchard lathe were quite wasteful, but this did not matter in the heavily forested United States. In Britain, however, wood was expensive. The British Isles were mostly deforested by then. This makes the isocost lines steeper in Figure 11.7 for Britain. Once again, factor prices help determine the input mix. So how do these graphs explain how economists view this historical episode? Varying resource endowments mean that each country faces its own set of input prices, which in turn lead to different cost-minimizing solutions. For the United States, unskilled labor with the Blanchard lathe was the cheapest way to make rifles. Not so for the British. At that time and place, with the skilled craftsmen and lack of cheap wood, rejecting mass production was the optimal decision. In fact, the economic approach says something even more outlandish. Had the British used mass production for rifles before 1854, that would have been a mistake! Take the US tangency point and transfer it to the British graph in Figures 11.6 and 11.7. Producing with the US input mix is allocatively inefficient for Britain; that is, the British would not be minimizing cost. Economists have no problem with agents making different choices. This does not mean that one is right and the other is wrong. All it means is that they face different prices. They are both optimizing. That is a difficult idea to wrap your head around. Ponder it. STEP Is sudden awareness of new American technology a good answer? Why or why not? This answer makes little sense. American and British people and entrepreneurs moved freely across the Atlantic and were well aware of production methods in each country. The claim that a new technique was suddenly made known to the British is absurd. STEP Is managerial improvement a good answer for the switch? Why or why not? This answer is pretty silly. To be credible, it requires an explanation for the sudden change from stupid, lazy, and careless producers of firearms to smart, energetic, and focused ones.
There is no evidence of an explosion in managerial aptitude or a burst in managerial education. For this argument to be convincing, we would need a lot more evidence on British management prowess and how it changed over time. STEP Answer these two questions: 1. What happened to the British labor force? 2. What happened to the Blanchard lathe? The British labor force underwent a profound structural adjustment. The skilled craftsmen in the Birmingham gun trade died off and were not replaced. No skilled gunstock maker would suggest that his son follow him into the trade. They could see the writing on the wall: the machines were taking over. As the supply of these workers dwindled, the wages of skilled rifle artisans in Birmingham rose. Perhaps more important is the second shock. The Blanchard lathe was continually improved over time; more modern versions of the lathe wasted a lot less wood. Today, a lathe uses a laser sight to precisely cut the wood. No human could possibly compete with it. As the lathe wasted less wood, the operating cost of machinery fell. This is a nice example of how the price of an input can represent more than simply the out-of-pocket cost paid for the input. In this example, the price of a lathe is not simply the price paid for the machine itself; it includes the price of the wood used. So, the shocks to the input cost minimization problem are that the skilled labor wage rose relative to the unskilled wage and r fell relative to w. Notice how we first figure out what happened and then we model it. That is, we incorporate the story into one of the variables. In this case, the changing labor force increases the wage of skilled labor and the improving Blanchard lathe decreases r. STEP Draw graphs that show how the changes mentioned affected the optimal input mix. How do the graphs explain why the British switched to mass-production techniques in 1854? A high price of skilled labor makes the isocost lines flat (the slope falls in absolute value because the denominator increases). This leads to a more unskilled-labor-intensive optimal input mix. As skilled craftsmen disappeared and their wages rose, there was greater incentive to use unskilled labor. Notice how the comparison in Figure 11.8 is across time periods. The price of machinery fell and fell as machines got better and better, making the isocost lines steeper and steeper (r is in the denominator) as shown in Figure 11.9, and leading to the adoption of mass-production techniques in Britain: the Enfield Arsenal was born. Notice how the Britain in 1854 graphs in Figures 11.8 and 11.9 are the same as the US graphs in Figures 11.6 and 11.7. This shows that when Britain faced the same input prices as the United States, it made the same, optimal decisions.
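To make the slope logic in these answers concrete, here is a small worked example; the wage numbers are purely illustrative and are not from Ames and Rosenberg. Suppose unskilled labor costs the same in both countries but skilled labor is much pricier in the United States:

$\text{slope}_{Britain} = -\frac{w_{Unskilled}}{w_{Skilled}} = -\frac{1}{2} \qquad \text{slope}_{US} = -\frac{w_{Unskilled}}{w_{Skilled}} = -\frac{1}{5}$

The flatter US isocost is tangent to the common isoquant farther down and to the right, so the cost-minimizing US mix uses more unskilled labor and less skilled labor, exactly as in Figure 11.6.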
We have solved the input cost minimization problem so the next task is comparative statics analysis. We will focus on shocking q (the quantity the firm must produce) and track minimum total cost. The relationship between $TC\mbox{*}$ and q is called the cost function. The novelty here is that we are not interested in how the optimal values of the endogenous variables, L and K, vary as we shock q. Instead, we focus on the objective function, minimum total cost, and how it changes as q changes. Another important aspect of comparative statics analysis for the input cost minimization problem is that, unlike utility in the Theory of Consumer Behavior, total cost can be cardinally measured. We can compare the total costs of different firms and perform arithmetic on total cost. If the minimum TC is $40 at q = 10 and it rises to $45 at q = 11, we can say TC increased by $5. Because TC is cardinal, we will be able to interpret and use the Lagrangean multiplier. As usual, we will explore both ways to do comparative statics: • Numerical methods using a computer: Excel’s Solver and the Comparative Statics Wizard. • Analytical methods using algebra and calculus: conventional paper and pencil. Numerical Methods to Derive the Cost Function STEP Open the Excel workbook DerivingCostFunction.xls, read the Intro sheet, and proceed to the OptimalChoice sheet. The organization is the same as in the InputCostMin.xls workbook. The cost-minimizing way of producing 100 units of output is to use about 183.3 hours of labor with 32.6 machines, which costs $464.38. There is no other combination of L and K that makes 100 units at a lower cost. What happens if the firm needs to produce more, say, 110 units of output? STEP Change cell B18 to 110. The chart updates, showing a new (red) isoquant. The initial combination is not a viable option because it cannot produce 110 units. The firm has to re-optimize. STEP Run Solver to find the new optimal solution. The cost-minimizing amounts of labor and capital increase to produce the higher output required and the minimum total cost is now $513.39. We are looking for the minimum total cost. We want to know the cheapest way of producing any given output. This is called the cost function. We can show the comparative statics analysis on the isoquant-isocost graph or on a presentation graph where we plot $TC\mbox{*}=f(q)$, ceteris paribus. If we connected the points of tangency of isoquants and isocosts, we would get the least cost expansion path (LCEP). Our work thus far has revealed two points on the LCEP and cost function: when q = 100, TC = $464.38, and when q = 110, TC = $513.39. Let’s use the Comparative Statics Wizard to get more data so we can draw the LCEP and cost function and understand how they are related. STEP Return cell B18 to 100, then run the Comparative Statics Wizard, applying 10 q shocks in increments of 10. The CS1 sheet shows what your results should look like. The CS1 sheet includes two graphs, the isoquant-isocost graph with the least cost expansion path and the cost function, as shown in Figure 11.10. Figure 11.10 should remind you of other graphs we have drawn, such as Engel and demand curves. On the left, using the display of the optimal solution to the input cost minimization problem, we show how different q produce a set of tangency points that comprise the LCEP. On the right in Figure 11.10, we show only the minimum cost of producing each level of q, and hide everything else. This allows us to highlight the relationship between TC and q.
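The numerical approach is not limited to Excel. As a supplement, here is a minimal sketch in Python that replicates what Solver and the Comparative Statics Wizard are doing, assuming the workbook's setup of $TC = 2L + 3K$ and $q = L^{0.75}K^{0.2}$ (stated in the analytical section below); the printed values should be close to the CS1 sheet, with minimum TC of about $464 at q = 100 and $513 at q = 110.

```python
from scipy.optimize import minimize

w, r = 2, 3          # input prices (w for L, r for K)
c, d = 0.75, 0.2     # Cobb-Douglas exponents: q = L^c * K^d

def min_TC(q):
    """Cheapest way to produce q units: the same problem Solver handles."""
    con = {"type": "eq", "fun": lambda x: x[0]**c * x[1]**d - q}
    res = minimize(lambda x: w*x[0] + r*x[1], x0=[150.0, 30.0],
                   bounds=[(0.1, None), (0.1, None)], constraints=[con])
    return res.fun

# Trace out points on the cost function, TC* = f(q)
for q in range(100, 201, 10):
    print(f"q = {q:3d}   TC* = {min_TC(q):8.2f}")
```

The slope between successive points is not constant (it creeps up from about 4.9 toward just over 5.0), which previews the nonlinearity discussed next.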
The two graphs in Figure 11.10 make clear that the source of the cost function is the optimal solution of the cost minimization problem as q varies. Just like demand curves do not come out of thin air, but are derived from utility maximization, cost functions are derived from input cost minimization. We are interested in the shape of the cost function. It looks like a line, but is it really linear? To find out, we can see if it has a constant slope. If the slope is changing, we know the function is not linear. STEP In your CS sheet, find the slope at different points on the function by computing the change in TC divided by the change in q. Click the button (near cell C9 in the CS1 sheet) if you are stuck or to check your work. It is clear that the slope changes as output changes. This means that the cost function is nonlinear. Analytical Methods to Derive the Cost Function We can use the Lagrangean method to find $TC\mbox{*} = f(q)$. We will leave q as a letter instead of a number so that the reduced-form solution will include q. Then we can plug in any value of q to find minimum cost for that q and easily draw a graph of the cost function. The solution closely follows the work we did at the beginning of this chapter, but we proceed step-by-step to practice and reinforce the Lagrangean method. The problem is $\min\limits_{L,K}TC=2L+3K \textrm{ s.t. } q = L^{0.75}K^{0.2}$ The first step is to rewrite the constraint so that it is equal to zero. $q - L^{0.75}K^{0.2}=0$ The second step is to form the Lagrangean by adding lambda, $\lambda$, times the rewritten constraint to the original objective function. We use an extra-large L for the Lagrangean function that is not at all related to the L for labor. $\min\limits_{L,K, \lambda}{\large\textit{L}}=2L+3K + \lambda (q- L^{0.75}K^{0.2})$ The third step to finding the optimal solution is to take the derivative of the Lagrangean with respect to each endogenous variable and set each derivative to zero (giving us the first-order conditions). The fourth, and last, step is to solve this system of equations for $L\mbox{*}$, $K\mbox{*}$, and $\lambda \mbox{*}$. We move the terms with lambda in the first two equations to the right-hand side and then divide the first equation by the second. The exponents cancel nicely (see section 11.1) and we get $L = 5.625K$. This is not a reduced-form solution because L is not a function of exogenous variables alone. We substitute this expression for L into the third first-order condition to get optimal K and then optimal L, as shown below. Finally, we substitute the optimal solutions for $L\mbox{*}$ and $K\mbox{*}$ into the original objective function. This expression is the total cost function. It gives the cheapest cost of producing any given amount of output. If q = 100, TC = $464.38. Not surprisingly, this agrees with our results using numerical methods. Notice also that the cost function is clearly nonlinear. It is increasing at an increasing rate because the exponent on q is greater than one ($\frac{1}{0.95} \approx 1.05$). The derivative of TC with respect to q, the slope, is not constant because it depends on q. If the exponent were exactly 1, the slope would be constant and TC would be a line. The fact that this exponent is only slightly greater than one explains why TC looks almost linear in Figure 11.10.
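The equations referred to above as "shown below" do not appear in this version of the text, so here is a reconstruction of that algebra; it follows directly from the first-order conditions and $L = 5.625K$:

$q = (5.625K)^{0.75}K^{0.2} = 5.625^{0.75}K^{0.95} \implies K\mbox{*} = \left(\frac{q}{5.625^{0.75}}\right)^{\frac{1}{0.95}}, \qquad L\mbox{*} = 5.625K\mbox{*}$

$TC\mbox{*} = 2L\mbox{*} + 3K\mbox{*} = 14.25\left(\frac{q}{5.625^{0.75}}\right)^{\frac{1}{0.95}} \approx 3.64\,q^{1.0526}$

Evaluating at q = 100 gives TC of about $464, matching Solver up to rounding.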
Interpreting Points Off the Cost Function When we derived the demand curve from the "maximize utility subject to a budget constraint" optimization problem, we explored what it meant to be off the demand curve (see Figure 4.12). We learned that points to the left or right of the inverse demand curve (with price on the y axis) mean that the consumer is not optimizing, i.e., the consumer is not choosing a point of tangency between the indifference curve and budget constraint. We can conduct the same kind of inquiry here, asking this question: What does it mean to be off the cost function? Unlike the inverse demand curve, where the exogenous variable is on the y axis, the cost function is graphed according to the usual mathematical convention, with the exogenous variable, output, on the x axis. Thus, points off the curve are interpreted vertically above or below the cost function. What does it mean if a point is above the cost curve? Figure 11.11 helps us answer this question. On the left is the familiar isoquant/isocost graph. The cheapest way to produce $q_0$ units of output is with the L and K combination at the point labeled $TC\mbox{*}$. The graph on the right of Figure 11.11 shows that $TC\mbox{*}$ is the point on the cost function at an output of $q_0$. Point Z, a point above the cost function, reveals that the firm is producing the level of output $q_0$ at a total cost above the minimum total cost. This means that the firm is choosing an input mix that is not cost minimizing. Point Z on the graph on the left of Figure 11.11 must lie on an isocost above the tangent isocost. We do not know exactly where point Z is on the graph on the left (so we do not know if there is technical or allocative inefficiency), but we do know it has to be somewhere on the isocost labeled TCz that has a total cost the same as the cost of producing point Z (on the graph on the right). Point Y on the right side of Figure 11.11 is below the cost function. How can this point be generated by the graph on the left? It cannot. There is an isocost with a total cost equal to that at point Y, but it is below the isoquant and, therefore, unattainable. In other words, point Y does not actually exist. The firm cannot produce $q_0$ units of output at any cost less than $TC\mbox{*}$. Another way of thinking about TC geometrically is that there are points above TC, but only empty space below it. Sure, on a printed page, chalkboard, or computer screen, there is white space above and below TC and you can write on it (just like point Y in Figure 11.11), but this is misleading. In fact, below TC there is nothing, total void. If you tried to put a point there, your hand would go through the paper! This has implications beyond pure theory. The fact that there are no points below the cost function means that we should never fit a line through a cloud of points to estimate a cost function. Instead of a least squares approach to estimating a cost function, estimation techniques in the stochastic frontier literature are based on fitting a curve around the observed points, as in Figure 11.12. Shifts in the Cost Function You learned in Introductory Economics that price causes a movement along a demand curve, but other shocks (like increasing income) change demand, causing the entire curve to shift. The same thing happens with the cost function. Changing q leads to moving along the TC function, but other exogenous variables cause shifts in the cost function. STEP Proceed to the CostFn sheet.
The sheet displays a cost function charted from the data above it. The data in columns L and M are actually formulas for the reduced-form expressions for $L\mbox{*}$ and $K\mbox{*}$. Column N has the minimum total cost for the benchmark problem and will not change because the cells are merely numbers (so it is labeled "Dead (Initial)"). Column O, however, has the reduced-form expression for $TC\mbox{*}$ and will update if any of the underlying parameters are changed (hence the "Live" label). STEP Click on a few cells in columns L, M, N, and O to see the formulas and values. The general versions of the reduced forms for the Cobb-Douglas production function are provided and entered in cells. The expressions look daunting (and they are tedious to derive), but the derivation is straightforward: leave every exogenous variable as a letter and find the optimal solution for L, K, $\lambda$, and total cost. Initially, N and O are the same because the exogenous variable values have not been changed yet. Let’s do that now. STEP Change cell B20, the exponent on L, to 0.8. Your screen looks like Figure 11.13. The increase in labor productivity has shifted down the total cost curve. This makes sense. The increase in c has made it cheaper to produce any given output. You can experiment with other shocks to the cost function. Change input prices, input exponents, or A to see how the cost function shifts. Click the button or ctrl-z (undo) after every trial. Connect what you see on the screen with the shock you applied. Changes in q have no visible effect because you simply move along the cost function. Interpreting $\lambda \mbox{*}$ We end this chapter by showing that the Lagrangean multiplier, $\lambda \mbox{*}$, has a useful interpretation in the input cost minimization problem. We will see that $\lambda \mbox{*}$ gives an easier way to derive a cost function than solving the constrained cost minimization problem with q as a letter and finding $TC\mbox{*} = f(q)$. The cost function shortcut uses the fact that $\lambda \mbox{*}$ gives the instantaneous rate of change in the optimum value of the objective function as the constraint varies. Thus, $\lambda \mbox{*}$ signals how relaxing the constraint would impact the goal. For utility maximization, we could relax the constraint by increasing income. The budget constraint in the Lagrangean is $m - p_1x_1 - p_2x_2 = 0$ so as m rises, the consumer will be able to reach greater maximum utility. The Lagrangean multiplier tells us how much more utility is gained as income increases. Unfortunately, utility is ordinal so $\lambda \mbox{*}$ does not have a useful interpretation in the Theory of Consumer Behavior. Things are different in the constrained input cost minimization problem. The objective function in this case is minimum total cost and is measured on a cardinal scale. We can directly observe minimum total cost and meaningfully compare how it changes within a firm and across firms. This means we can apply the interpretation of $\lambda \mbox{*}$ to input cost minimization. The constraint in the Lagrangean is $q - f(L,K)$. If we vary the constraint by having the firm produce one more unit of output, we know total cost would rise as we moved to a higher isoquant. The value of $\lambda \mbox{*}$ tells us by how much minimum total cost would rise. For example, at q = 100 in DerivingCostFunction.xls, $\lambda \mbox{*}$ is about $4.89.
You can confirm this by numerical methods (using Excel’s Solver and getting the Sensitivity Report) or by analytical methods, solving for $\lambda \mbox{*}$ from the three first-order conditions. Either way, you will get (almost exactly) the same answer. But what does this tell us? The $4.89 value means that if we increase output by an infinitesimally small amount, minimum total cost will rise at a rate of $4.89 per unit of output. Let’s use Excel to work on this. STEP Click the button in the CostFn sheet and take a look at the highlighted cell with a yellow background (P8). Click on it and read the formula. The value of P8 is $4.99. That is close to the value of $\lambda \mbox{*}$ of $4.89, but not quite exactly the same. What is going on? STEP Go to the CS1 sheet and take a look at the highlighted, yellow-backgrounded cell (E15) (click the button if needed). Its value is $4.90. This is much closer to $\lambda \mbox{*}$’s value of $4.89. Why? Because the change in q is much smaller in the CS1 sheet than in the CostFn sheet. As the change in q approaches zero, the change in $TC\mbox{*}$ divided by the change in q will approach $\lambda \mbox{*}$. STEP Return to the CostFn sheet and change cell K8 from 200 to 110. This replicates the CS1 sheet value for $\lambda \mbox{*}$. Next, set K8 to 101. What do you see? With K8 set to 101 so that $\Delta q = 1$, $\frac{\Delta TC}{\Delta q} = \$4.89$, the value of $\lambda \mbox{*}$. Well, actually, not exactly $4.89. If we displayed more decimal places in P8 and computed the value of $\lambda \mbox{*}$ to more decimal places, the two would not agree. But they would get closer the smaller we made $\Delta q$. Of course, this is nothing more than a demonstration of the idea of the derivative. If you are puzzled as to how $\frac{\Delta TC}{\Delta q}$ can be that close to $\lambda \mbox{*}$ in the CS1 sheet (a one cent difference seems pretty small), given that the change in q is 10 units (which is hardly infinitesimally small), the answer lies in the total cost function: it simply is not very curvy. Because $TC \mbox{*}$ follows almost (but not quite) a straight line, computing the slope from q = 100 to q = 110 is close to the slope of the tangent line at q = 100. The purpose of the work above was to convince you that $\lambda \mbox{*} = \frac{dTC}{dq}$. The Lagrangean multiplier gives the instantaneous rate of change in minimum total cost with respect to output. STEP You can confirm the claim that $\lambda \mbox{*} = \frac{dTC}{dq}$ by changing the parameters in the CostFn sheet and keeping your eye on the rose-backgrounded cell H31. It computes the difference between $\lambda \mbox{*}$ in H13 and $\frac{dTC}{dq}$ in H30. The difference is always zero because these two things, $\lambda \mbox{*}$ and $\frac{dTC}{dq}$, are equivalent. You might ask, "So what?" In other words, what can we do with the knowledge that $\lambda \mbox{*} = \frac{dTC}{dq}$? A lot. For one thing, we can easily derive the cost function. After all, the rate of change in total cost as output changes is marginal cost (MC). Thus, $\lambda \mbox{*} = \frac{dTC}{dq} = MC(q)$. This means we can easily get the total cost function by simply integrating $\lambda \mbox{*}$ with respect to q. Furthermore, as we will see when we solve the output profit maximization problem, we usually want marginal revenue and marginal cost, so knowing that $\lambda \mbox{*} = \frac{dTC}{dq}$ can be a real shortcut. If we have $\lambda \mbox{*}$, then we do not have to derive $TC\mbox{*} = f(q)$ and then take the derivative to get MC.
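The convergence you just watched in Excel is easy to re-create in code. This sketch uses the reduced-form cost function derived earlier ($TC\mbox{*} \approx 3.64q^{1.0526}$, an approximation of the workbook's formula) and shows the discrete slope approaching $\lambda \mbox{*}$ as the step shrinks; the first two printed slopes, roughly 4.99 and 4.90, mirror cells P8 and E15.

```python
A = 14.25 / 5.625**(0.75/0.95)     # constant in TC* = A * q^k
k = 1/0.95                         # exponent on q

def TC(q):
    """Reduced-form (minimum) total cost."""
    return A * q**k

lam = A * k * 100**(k - 1)         # lambda* = dTC/dq at q = 100
print(f"lambda* = {lam:.4f}")      # about 4.89

for dq in [100, 10, 1, 0.01]:      # shrinking changes in q
    slope = (TC(100 + dq) - TC(100)) / dq
    print(f"dq = {dq:>6}: slope = {slope:.4f}")
```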
The Cost Function has Parents This section included some complicated ideas, but we end by prioritizing things. There is no doubt that the most important idea is that the cost function has a source and does not appear from nowhere. This is captured by Figure 11.10: the cost function is derived by doing comparative statics analysis on the input cost minimization problem. Although we are often interested in the response of an endogenous variable to a shock, comparative statics in the input cost minimization problem is focused on how the objective function, minimum total cost, is affected by shocking q. Minimum total cost as a function of q is the cost function. By explaining what it means to be above or below the cost function in terms of the isoquant–isocost graph, we emphasized the idea that the cost function shows the cheapest way to produce any given output. A good way to remember this is to ponder the striking fact that there is no space below the cost function, meaning that it is impossible to produce the given output any cheaper than the cheapest way possible. Changes in other parameters besides output cause the entire cost function to shift because minimum total cost depends on all of the exogenous variables. If q changes, we move along the cost function; other shocks shift TC. Finally, we explained a mathematically sophisticated idea: $\lambda \mbox{*}$ provides information on the rate of change of the optimum value of the objective function as the constraint is relaxed. This interpretation of the Lagrangean multiplier holds for every constrained optimization problem. We did not apply this interpretation in the Theory of Consumer Behavior because utility (the objective function) cannot be cardinally measured. In the old days, when utility was believed to be cardinally measured in utils, $\lambda \mbox{*}$ was the marginal utility of money. $\lambda \mbox{*}$ would tell you the rate of change in maximum utility if you gave the consumer an infinitesimal increase in income. Since total cost is directly observable and cardinally measurable, $\lambda \mbox{*}$ can be correctly interpreted as marginal cost, $\frac{dTC}{dq}$. This gives a shortcut to the cost function and MC. Exercises 1. With the production function, $q = L^{0.75} K^{0.5}$, and exogenous variables w = 2, r = 3, use Excel to create a graph of the cost function for the same q values as the one in the CS1 sheet. Copy and paste your graph in a Word document. 2. How is the cost function you just derived different from the one in the CS1 sheet? Which variable is responsible for generating this difference? 3. From the cost functions in the CS1 sheet and question 1, what can you deduce about cost functions derived from Cobb-Douglas production functions? 4. If someone solves an input cost minimization problem and finds that $\lambda \mbox{*}$ = 50, what does this mean?
In the next chapter, we will work on the firm’s second optimization problem: maximize profits by choosing the amount of output to produce. Because profits are revenues minus costs, the cost function plays an important role in the firm’s profit maximization problem. This section is devoted to the terminology of cost curves and an exploration of their geometric properties. Derived from the cost function, a variety of cost curves are used to solve and display the firm’s profit-maximization problem. This section defines and derives them. A basic idea that is easy to forget is that there are many shapes of cost functions. Our work on deriving the cost function used a Cobb-Douglas production function and that gives rise to a particularly shaped cost function. A different production function would give a different cost function. A key idea is that $q=f(L,K)$ determines the shape of $TC \mbox{*}=f(q)$. Names and Acronyms You know that if we track $TC \mbox{*}$, minimum total cost, as a function of q, we derive the cost function. Since we will be using other measures of costs, to avoid confusion, we refer to the cost function as the total cost (TC) function. The total cost function has units of dollars ($) on the y axis. We can divide total costs into two parts, total variable costs, TVC, and total fixed costs, TFC. $TC(q) = TVC(q) + TFC$ If the firm is in the short run, it has at least one fixed factor of production (usually K) and the total fixed costs are the dollar value spent on the fixed inputs (rK). Notice that the total fixed costs do not vary with output. TFC is a constant and does not change as output changes so there is no "(q)" in the TFC function like there is on TVC and TC. The total variable costs are the costs of the factors that the firm is free to adjust or vary (hence the name "variable costs"), usually L. As output rises, firms need more inputs to produce the increased output so total variable costs rise. In the long run, defined as a planning horizon in which there are no fixed factors, there are no fixed costs (TFC = 0) and, therefore, $TC(q) = TVC(q)$. In other words, the total cost and total variable cost functions are identical. In addition to total costs, the firm has average, or per unit, costs associated with each level of output. Average total cost, ATC (also known as AC), is the total cost divided by the output level. $ATC(q)=\frac{TC(q)}{q}$ Average variable cost, AVC, is total variable cost divided by output. $AVC(q)=\frac{TVC(q)}{q}$ Average fixed cost, AFC, is total fixed cost divided by output. $AFC(q)=\frac{TFC}{q}$ Notice that AFC(q) is a function of q even though TFC is not, because AFC is TFC divided by q. Since the numerator is a constant, AFC(q) is a rectangular hyperbola ($y = c/x$) and is guaranteed to fall as q rises. This can be confirmed by a simple example. Say TFC = $100. For very small q, such as 0.0001, AFC is extremely large. But AFC falls really fast as q rises from zero (and AFC is undefined at q = 0). At q = 1, AFC is $100; at q = 2, AFC is $50, and so forth. The larger the value of q, the closer AFC gets to zero (i.e., it approaches the x axis). It is easy to show that the average total cost must equal the sum of the average variable and average fixed costs: $TC(q) = TVC(q) + TFC$ $\frac{TC(q)}{q} = \frac{TVC(q)}{q} + \frac{TFC}{q}$ $ATC(q) = AVC(q) + AFC(q)$ We often omit AFC(q) from the graphical display of the firm’s cost structure (see Figure 11.14) because we know that $AFC(q) = ATC(q) - AVC(q)$.
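These definitions are easy to verify numerically. The sketch below uses hypothetical cubic cost coefficients (made up for illustration, not taken from any workbook sheet) and checks both that $ATC(q) = AVC(q) + AFC(q)$ at every q and that AFC falls toward zero as q grows.

```python
a, b, c, d = 0.05, -0.9, 10, 50    # hypothetical coefficients; d is TFC

def TC(q):  return a*q**3 + b*q**2 + c*q + d
def TVC(q): return a*q**3 + b*q**2 + c*q
TFC = d

for q in [1, 2, 5, 10, 20, 40]:
    ATC, AVC, AFC = TC(q)/q, TVC(q)/q, TFC/q
    assert abs(ATC - (AVC + AFC)) < 1e-9          # the identity holds
    print(f"q = {q:2d}: ATC = {ATC:6.2f}  AVC = {AVC:6.2f}  AFC = {AFC:6.2f}")
```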
Thus, average fixed cost can be easily determined by simply measuring the vertical distance between ATC and AVC at a given q. The facts that $AFC(q) = ATC(q) - AVC(q)$ and AFC goes to zero as q rises mean that AVC must approach ATC as q rises. Always draw AVC getting closer to ATC as q increases past minimum AVC. Figure 11.14 obeys this condition. Unlike the total curves, which share the same y axis units of dollars, the average costs are a rate, dollars per unit of output. You cannot plot total and average cost curves on the same graph because the y axes are different. Another cost concept that we get from the total cost function is marginal cost (MC). Like average costs, MC is a rate and it comes in $/unit. Marginal cost is often graphed together with the average curves (as shown in Figure 11.14). Marginal means additional in economics. Marginal cost tells you the additional cost of producing more output. If the change in output is discrete, then we are measuring marginal cost from one point to another on the cost curve and the equation looks like this: $MC(q)=\frac{\Delta TC(q)}{\Delta q}$ If, on the other hand, we treat the change in output as infinitesimally small, then we use the derivative and we have: $MC(q)=\frac{dTC(q)}{dq}$ Because TFC does not vary with q, marginal cost also can be found by taking the derivative of TVC(q) with respect to q. Average cost and marginal cost are used to refer to entire functions (see Figure 11.14), but also to specific values. For example, if ATC = $10/unit and MC = $3/unit at q = 5, this means that it costs $10 per unit to make the five units and, thus, the firm had $50 of total costs to make five units. The MC tells us that the 5th unit costs an additional $3, so the total cost went from $47 for 4 units to $50 for 5 units. The Geometry of Cost Curves The average and marginal curves are connected to each other and must be drawn according to strict requirements. Whenever a marginal curve is above an average curve, the average curve must be rising. Conversely, whenever a marginal is below an average, the average must be falling. For example, consider the average score on an exam. After the first 10 students are graded, there is an average score. The 11th student is now graded. Suppose she gets a score above the average. Hers is the marginal score and we know it is above the average so it has to pull the average up. Suppose the next student did poorly. His marginal score is below the average and it pulls the average down. So, we know that whenever a marginal score is below the average, the average must be falling and whenever a marginal score is above the average, the average must be rising. The only time the average stays the same is when the marginal score is exactly equal to the average score. This relationship between the average and marginal means that the marginal cost curve must intersect the average variable and average total cost curves at their respective minimums, as shown in Figure 11.14. From q = 0 to the intersection of MC with ATC, MC is below the ATC and the ATC falls. To the right of the intersection of MC with ATC, MC is above the ATC so the ATC is pulled up. MC and AVC curves share the same relationship. Figure 11.14 also shows a property that was highlighted earlier: the gap between ATC and AVC must fall as q rises. You will understand these abstract ideas better by exploring concrete examples. Three cost functional forms will be examined: 1. Cobb-Douglas Cost Curves 2. Canonical Cost Curves 3.
Quadratic Cost Curves Instead of memorizing specific facts or points, look for the pattern and repeated connections. Focus on the relationship between the total and average and marginal curves. STEP Open the Excel workbook CostCurves.xls and read the Intro sheet, then go to the CobbDouglas sheet to see the first example. 1. Cobb-Douglas Cost Curves The CobbDouglas sheet is the CostFn sheet from the DerivingCostFunction.xls workbook with the ATC and MC curves plotted below the TC curve. Column I has a formula for the TC curve using $L \mbox{*}$ and $K \mbox{*}$, from which we can compute ATC and MC in columns J and K. Click on an MC cell, for example, cell K4, to see that the cell formula is actually for $\lambda \mbox{*}$. We are using the shortcut that $\lambda \mbox{*}$ = MC. With L and K both endogenous, there are no fixed factors of production. This means we are in the long run and there are no fixed costs. Thus, TC = TVC and ATC = AVC. It is immediately obvious that the marginal and average curves do not look at all like the conventional family of cost curves as shown in Figure 11.14. In fact, a Cobb-Douglas production function cannot give U-shaped average and marginal cost curves as in Figure 11.14. Remember that there are many functional forms for cost curves (total, average, and marginal) and the shape depends on the production function. In other words, the production function is expressed in the cost structure of a firm. STEP Set the exponent on capital, d, to 2 to replicate Figure 11.15. Because average cost is falling as q rises in Figure 11.15 (and on your computer screen), it means that total cost is increasing less than linearly as output rises. The total cost graph on your screen confirms that this is the case. It costs $33 to make 200 units, but only $43 to make 400 units. Double output again to 800. How much does it cost? Cell I9 tells you: $55. This is puzzling. If input prices remain constant, how can we double output and not at least double costs? The answer lies in the production function. You changed the exponent on capital, d, from 0.2 to 2. Now the sum of the exponents, $c + d$, is greater than 1. For the Cobb-Douglas production function, this means that we are operating under increasing returns to scale. This means that if we double the inputs, we get more than double the output. Or, put another way, we can double the output by using less than double the inputs. This firm can make 400 units cheaper per unit than 200 units. It can make 800 units even cheaper per unit because it is taking advantage of the increasing returns to scale. Increasing returns are a big problem in the eyes of some economists because they lead to a paradox: one firm should make all of the output. There are situations in which increasing returns seem to be justified, such as the case of natural monopolies, in which a single firm provides the output for an entire industry because the production function exhibits increasing returns to scale. The classic examples are utility companies, e.g., electric, water, and natural gas companies. Often, these firms are nationalized or heavily regulated. We can emphasize the crucial connection between the production function and the cost function via the isoquant map. STEP Scroll down to row 100 or so in the CobbDouglas sheet. The three isoquants are based on a Cobb-Douglas production function with parameter values from the top of the sheet, except for d, which can be manipulated from the Set d radio buttons (above the chart).
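Before examining the chart, it is worth pinning down the arithmetic behind the $33/$43/$55 pattern. For a Cobb-Douglas production function, the long-run cost function takes the power form (a standard result, consistent with the reduced forms in the CostFn sheet)

$TC\mbox{*} = A\,q^{\frac{1}{c+d}}$

where A depends on input prices and technology. With c = 0.75 and d = 2, doubling output multiplies cost by $2^{\frac{1}{2.75}} \approx 1.29$, and indeed $33 \times 1.29 \approx 43$ and $43 \times 1.29 \approx 55$.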
The three red points are the cost-minimizing input combinations for three different output levels: 100, 120, and 140. Above the graph, the value of the sum of the exponents, initially 0.95, is displayed. A description of the shape of the total cost function, which depends on the value of c + d, and a small picture of that shape are shown. Figure 11.16 has the initial display. The spacing between the points is critical. The distance from A to B is a little less than that from B to C. This means that as output is increased from 120 to 140, the firm needs a bigger increase in inputs than when q rose from 100 to 120. As output continues rising by 20 units, the next isoquant we have to reach is getting farther and farther away, requiring progressively more inputs, and progressively higher costs. This is why TC is increasing at an increasing rate. STEP Click on the d = 0.25 option. The isoquants shift in because it takes fewer inputs to make the three levels of output depicted. The distance between the isoquants has decreased and TC is linear. Most importantly, the distance between the points is identical. With $c + d = 1$, the spacing of the isoquants is constant. As q increases by 20, the next isoquant is the same distance away and the firm increases its input use and costs by a constant amount. This is why the TC function is a line, increasing at a constant rate. STEP Click on the d = 0.3 option. Once again, the chart refreshes and the isoquants shift in. Now the distance between the isoquants is decreasing. As q rises, the isoquants get closer together and the total cost function is increasing at a decreasing rate. STEP Click on the d = 0.35 option. This produces even stronger increasing returns and a TC function that bends faster than d = 0.3. The fundamental point is that the distance between the isoquants reflects the production function. There are three cases: 1. If the distance is increasing as constant increases in quantity are applied, the total cost function will increase at an increasing rate. 2. If the distance remains constant, the cost function will be linear. 3. If the distance gets smaller as output rises, the firm has costs that rise at a decreasing rate. This holds for all production functions and, in the case of Cobb-Douglas, it is easy to see what is going on because the value of c + d immediately reveals the returns to scale and the spacing between the isoquants. But while Cobb-Douglas easily displays each of the three cases (depending on the value of c + d), a single Cobb-Douglas function cannot display all three at once. A Cobb-Douglas production function can generate a TC function that is increasing at an increasing or constant or decreasing rate, but not all three. The shape of the cost function is dependent on the production technology. Repeatedly cycle through the radio buttons, keeping your eye on the isoquants, the distance between the points, and the resulting total cost function. Your task is to understand and cement the relationship between the production and cost functions. An accordion is a good metaphor for what is going on. When scrunched up, the isoquants are being squeezed together, which gives increasing returns to scale and TC increasing at a decreasing rate. When the accordion is expanded and the isoquants are far apart, we have decreasing returns to scale and TC rising at an increasing rate. Do not be confused.
The reason why increasing (decreasing) returns to scale leads to TC rising at a decreasing (increasing) rate (they are opposites) is that productivity (returns to scale) and costs are opposites. Increased productivity enables slower increases in the costs of production. Production increasing at an increasing rate and costs increasing at a decreasing rate are two sides of the same coin. 2. Canonical Cost Curves STEP Proceed to the Cubic sheet. This sheet displays the canonical cost structure, in other words, the most commonly used cost function. It produces the familiar U-shaped family of average and marginal costs (which Cobb-Douglas cannot). The canonical cost curves graph can be generated by a cost function with a cubic polynomial functional form. $TC(q) = aq^3 + b q^2 + c q + d$ The d coefficient (not to be confused with the d exponent in the Cobb-Douglas production function) represents the fixed cost. If $d > 0$, then there are fixed costs and we know the firm is in the short run. Once we have the cost function, the top curve on the top graph in the Cubic sheet, we can apply the cost definitions (from the beginning of this section) to get all of the other cost curves. The other total curves are: $TVC(q) = aq^3 + b q^2 + c q$ $TFC = d$ STEP Click on each of the three curves in the top graph of the Cubic sheet to see the data that are being plotted. Now turn your attention to the bottom graph. The curves in the bottom graph are all derived from the top graph. Notice that the y axis label is different: the totals in the top graph have units of $, while the average and marginal curves have a y scale of $/unit (of output). STEP Click on each of the three curves in the bottom graph to see the data that are being plotted. Custom formatting has been applied to the numbers in the average and marginal cost cells to display “$/unit” in each cell. It is easy to forget that "$" alone is not the unit of the average and marginal cost curves; their unit is $/unit of output. The average total and average variable costs are easy to compute: simply divide the total by q. You can confirm that column E’s formula does exactly this. There is no ATC value for $q=0$ because dividing by zero is undefined. We can also divide the equation itself by q to get an average. This is done for AVC. The formula in cell F2 is "= a_*(A6^2) + b_*A6 + c_" because dividing $TVC(q) = aq^3 + b q^2 + c q$ by q yields $aq^2 + b q + c$. Notice that AVC for $q=0$ does exist. Marginal cost is more difficult to understand than average cost. Marginal cost is defined as the additional cost of producing more output. "More" can be an arbitrary, finite amount (such as 1 unit or 10 units) or an infinitesimally small change in the number of units. If we use an arbitrary, finite increase in q, then we compute MC as $\frac{\Delta TC}{\Delta q}$. We can also compute MC for an infinitesimally small change, using the derivative, $\frac{dTC}{dq}$. These two computations will be exactly the same only if MC is a line. The two approaches are applied in columns G and H. The derivative of TC with respect to q is: $TC(q) = aq^3 + b q^2 + c q + d$ $\frac{dTC}{dq} = 3aq^2 + 2b q + c$ Notice how we apply the usual derivative rule, bringing the exponent down and subtracting one from the exponent for each term. The d coefficient, TFC, disappears because it does not have q in it (or, if you prefer, think of d as $dq^0$). The expression for MC is entered in column G. Column H has MC for a discrete-size change. You can vary the size of the change in q by adjusting the step size in cell B3.
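The STEP below has you shrink the step size in Excel; the same experiment can be run in a few lines of code. The coefficients here are hypothetical (the Cubic sheet's actual values are not reproduced in the text), but the convergence of the discrete slope in column H toward the derivative in column G works the same way.

```python
a, b, c, d = 0.05, -0.9, 10, 50        # hypothetical cubic coefficients

def TC(q): return a*q**3 + b*q**2 + c*q + d

q = 10
mc_exact = 3*a*q**2 + 2*b*q + c        # column G: the derivative formula
print(f"dTC/dq at q = {q}: {mc_exact:.4f}")

for step in [1, 0.1, 0.01, 0.001]:     # shrinking the step size in cell B3
    mc_discrete = (TC(q + step) - TC(q)) / step
    print(f"step = {step:>6}: discrete MC = {mc_discrete:.4f}")
```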
STEP Make the step size smaller and smaller. Try 0.1, 0.01, and 0.001. As you make the step size smaller, the values in column H get closer to those in column G. This, once again, demonstrates the concept of the derivative. Another way to get the cost function is to use the neat result from the Lagrangean method. We can simply use $\lambda \mbox{*} = MC$ and we have the MC curve. No delta-size change or derivative required. If what we really wanted was the total cost function, then we would have to integrate the $\lambda \mbox{*}$ function with respect to q. The constant of integration is the fixed cost, which would be zero in the long run. The family of cost curves in the Intro and Cubic sheets (and in Figure 11.14) are the canonical cost curves displayed in countless economics textbooks. You might wonder, if not Cobb-Douglas, then what production function could produce such a cost function? That is not an easy question to answer. In fact, the functional form for technology that would give rise to the canonical cost curves is quite complicated and it is not worth the effort to painstakingly derive the usual U-shaped average and marginal cost curves from first principles. It is sufficient to know that a production function underlies the polynomial TC function and its resulting U-shaped average and marginal cost curves. We also want to keep in mind that if input prices rise, the cost curves shift up and, if technology improves, they shift down. 3. Quadratic Cost Curves STEP Proceed to the Quadratic sheet to see a final example of cost curves. It is immediately clear that the quadratic functional form is a special case of the cubic cost function, with coefficients a and c equal to zero. Look at the top chart and connect the shapes of the TC, TVC, and TFC functions to the functional form $TC(q)=bq^2+d$. Given the coefficient values in the sheet, this gives $TC(q)=q^2+1$, $TVC(q)=q^2$, and $TFC=1$. The bottom chart does not look familiar, but it obeys the definitions of average and marginal cost explained earlier in this section. ATC is $TC(q)$ divided by q: $ATC(q)=q+\frac{1}{q}$. Similarly, AVC is $TVC(q)/q$, which is q (a ray out of the origin). MC is the derivative of TC with respect to q, which is 2q. Although these are not the usual U-shaped curves, MC (which is linear here) still intersects AVC and ATC at their minimums. When MC is below ATC, ATC is falling, but beyond the point at which MC intersects ATC (at the minimum ATC), MC is above ATC and ATC is rising. As q increases, AVC converges to ATC, which implies that AFC goes to zero. The shapes of the cost curves are not the usual U-shaped average and marginal curves, but this is another of the many possible cost structures that could be derived from a firm’s input cost minimization problem. The Role of Cost Curves in the Theory of the Firm Cost curves are not particularly exciting, but they are an important geometric tool. When combined with a firm’s revenue structure, the family of cost curves is used to find the profit-maximizing level of output and maximum profits. Cost curves can come in many forms and shapes, but they all share the basic idea that they are derived by minimizing the total cost of producing output, where output is generated by the firm’s production function. Different production functions give rise to different cost functions. The shape of the cost function, rising at an increasing, constant, or decreasing rate, is determined by the production function.
With increasing returns to scale, for example, a firm can more than double output when it doubles its input use. That means, on the cost side, that doubling output will less than double total cost. Returns to scale can be spotted by the spacing between the isoquants. With increasing returns to scale, for example, the gaps between the isoquants get smaller as output rises. No matter the production function, it is always true that for output levels at which marginal cost is below an average cost, the average must be falling, and when MC is above AVC or ATC, that average must be rising. It is also true that, in the short run (when there are fixed costs), AVC approaches ATC as output rises. Lastly, consider the message conveyed by Figure 11.17. The arrows show the progression: average and marginal curves come from the total cost function, which comes from the input cost minimization problem (with the production function expressed in the isoquants). Economists use graphs to communicate. It may seem like graphs are conjured out of thin air, but this is false. All graphs have a genealogy and a story to tell. When you know where graphs come from, that helps in reading them correctly. Exercises 1. A Cobb-Douglas production function with increasing returns to scale yields a total cost function that increases at a decreasing rate. Use Word’s Drawing Tools to draw the underlying isoquant map for such a production function. A commonly used specification for production functions in empirical work is the translog functional form. There are several versions. When applied to the cost function, you get a result like this: $\ln TC = \alpha_0 + \alpha_1 \ln Q + \alpha_2 \ln w + \alpha_3 \ln r + \alpha_4 \ln Q \ln w + \alpha_5 \ln Q \ln r + \alpha_6 \ln w \ln r$ Notice that the function is a modification of the log version of a Cobb-Douglas function. In addition to the individual log terms there are combinations of the three variables, called interaction terms. Click the button at the bottom of the Q&A sheet in the CostCurves.xls workbook to reveal a sheet with translog cost function parameters. Use this sheet to answer the following questions. 1. Enter a formula in cell B18 for the TC of producing 100 units of output, given the alpha coefficient and input price values in cells B5:B13. Fill your formula down and then create a chart of the total cost function (with appropriate axes labels and a title). Copy and paste your chart in a Word document. Hints: $TC = e^{\ln TC}$ and the exponentiation operator in Excel is EXP(). "=EXP(number)" in Excel returns e raised to the power of that number. 2. Compute MC via the change in output from 100 to 110 in cell C19. Report your result. 3. Compute MC via the derivative at Q = 100 in cell D18. Report your result. Hint: $\frac{d}{dx}(e^{f(x)})=e^{f(x)}\frac{d}{dx}(f(x))$ 4. Compare your results for MC in questions 2 and 3: are your answers the same or different? Explain.
• 12.1: Initial Solution With a total cost function, TC(q), and its associated average and marginal cost curves, we are ready to solve the firm’s output profit maximization problem. The firm chooses the amount of output that maximizes profit, defined as total revenue minus total cost. This is the second of three optimization problems that make up the Theory of the Firm. • 12.2: Deriving the Supply Curve The most important comparative statics analysis of the firm’s output profit maximization problem is based on tracking q* (quantity supplied) as price changes, ceteris paribus. This gives us the firm’s supply curve. • 12.3: Diffusion and Technical Change 12: Output Profit Maximization With a total cost function, $TC(q)$, and its associated average and marginal cost curves, we are ready to solve the firm’s output profit maximization problem. The firm chooses the amount of output that maximizes profit, defined as total revenue minus total cost. This is the second of three optimization problems that make up the Theory of the Firm. All firms face this profit maximization problem, but this chapter works with a perfectly competitive (PC) firm in the short run (SR). There are, of course, many other market structures and types of firms, but perfect competition is the first step from which more sophisticated scenarios arise. The firm’s market structure tells us the environment in which it operates. Its market structure determines the firm’s revenue function. A PC firm is the simplest case because it takes price as given. Thus, revenues are simply price times quantity and the revenue function is linear. Remember that we are not trying to describe the actual operation of a business. In fact, a truly perfectly competitive firm does not exist in the real world. The concept is an abstraction that enables derivation of the supply curve. This is our goal. Remember also that the short run is defined by the fact that at least one input (usually K) is fixed. In the long run, the firm is free to choose how much to use of every factor. K is fixed not because it is immovable (like a pizza oven or a building), but because the firm has contracted to rent a certain amount. It cannot increase or decrease the amount of K in the short run. Profit maximization and its graphs may be familiar from introductory economics. This experience will help you, but do not be complacent. Keep your eye on how the economic way of thinking is being applied in this case and make connections with other optimization problems we have explored. Perfectly Competitive Market Structure A perfectly competitive firm sells a product provided by countless other firms selling that homogeneous (which means identical) product to perfectly informed consumers. Because the product is homogeneous, there are no quality differences or other reasons for consumers to care about who they buy from. Because consumers are perfectly informed, they know the price of every seller. Thus, the PC firm’s market structure is one of intense price competition. Every firm sells the product at the exact same price because if anyone tried to sell at even a tiny bit higher than the market price, no one would buy from them. The shorthand term for this environment is price taking. The PC firm must take the price and cannot choose its price: price is exogenous to the firm. In addition to price taking, the market structure of the PC firm is characterized by an assumption about the movement of other firms into and out of the industry: free entry and exit.
Firms can enter or leave the market, selling the same good as everyone else, at any time. These two ideas, price taking and free entry, distinguish the PC firm from its polar opposite, monopoly. A monopolist chooses price and has a barrier to entry. Between these two extremes are many other market structures in which real-world firms actually exist.

The PC firm’s market structure means that an individual PC firm does not worry about what other firms are doing. Each firm simply chooses its own output to maximize profit and does not watch the other firms to gain a strategic advantage. In this sense, there is no rivalry in perfect competition.

Setting Up the Problem

As usual, we organize the optimization problem into three parts:

1. Goal: maximize profits ($\pi$, Greek letter pi), which equal total revenues (TR) minus total costs (TC).
2. Endogenous variable: output (q).
3. Exogenous variables: price of the product (P), input prices (the wage rate (w) and the rental rate of capital (r)), and technology (parameters in the production function).

Unlike the consumer’s utility maximization and the firm’s input cost minimization problems, this profit maximization problem is unconstrained. The firm does not have a restriction, like a budget constraint or isoquant, that limits its choice of output to a particular range. It can choose any non-negative level of output. This greatly simplifies the optimization problem. For the analytical method, it means we do not need the Lagrangean method. All we need to do is take a single derivative and set it equal to zero.

Finding the Initial Solution

Suppose the cost function is:

$TC(q) = aq^3 + b q^2 + c q + d$

Then we can form the PC firm’s profit function and optimization problem like this:

$\max\limits_{q} \pi=TR-TC$

$\max\limits_{q} \pi=Pq-(aq^3 + b q^2 + c q + d)$

As usual, we have two ways to solve this optimization problem: numerically and analytically.

STEP Open the Excel workbook OutputProfitMaxPCSR.xls and look over the Intro sheet. The Intro sheet is not meant to be immediately understood. It offers highlights of material that will be explained and prints as one landscaped page. It provides a compact summary of the optimal solution of the output profit maximization problem for a perfectly competitive firm in the short run.

STEP Proceed to the OptimalChoice sheet to find the initial solution. The sheet is organized into the components of an optimization problem, with goal, endogenous, and exogenous variable cells.

Initially, the firm is producing nine units of output and making $11.74 of profit. Is this the highest profit it can possibly make? No. The sheet reveals the information needed to give this answer. By comparing marginal revenue (MR) and marginal cost (MC), we immediately know that the firm would make a mistake (we would say it is inefficient) if it produced just nine units.

The MC of the ninth unit is $3.52 as shown in cell B22, but what about MR? Perhaps you remember from introductory economics that $P=MR$ for perfectly competitive firms. We can see that the additional revenue produced by the last unit, $7 (the price), is greater than the additional cost, $3.52 (cell B22). Thus, the firm should produce more. How much exactly should the firm produce?

STEP Run Solver to find out.

Look carefully at B22. At the optimal solution, $q \mbox{*} \approx 13.09$, MC = $7 per unit. $P = MC$, a special case of $MR=MC$ for a PC firm, is the equimarginal condition in this problem, analogous to $MRS = \frac{p_1}{p_2}$ and $TRS = \frac{w}{r}$.
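The optimal solution is easy to verify outside of Excel. Below is a minimal Python sketch, assuming the cubic cost parameters used in this workbook (a = 0.04, b = -0.9, c = 10, d = 5, values that appear again in the next section) and P = 7; it applies the quadratic formula to the first-order condition.

```python
import math

# Assumed workbook parameters: TC = a*q^3 + b*q^2 + c*q + d, with P given.
a, b, c, d = 0.04, -0.9, 10, 5
P = 7  # the PC firm takes this price as given

def profit(q):
    return P * q - (a * q**3 + b * q**2 + c * q + d)

# First-order condition: d(pi)/dq = P - (3a*q^2 + 2b*q + c) = 0.
# This is a quadratic in q; the "+" root is the profit-maximizing one.
disc = (2 * b)**2 - 4 * (3 * a) * (c - P)
q_star = (-2 * b + math.sqrt(disc)) / (2 * (3 * a))

print(f"q* = {q_star:.2f}")                   # ~13.09, agreeing with Solver
print(f"max profit = {profit(q_star):.2f}")   # ~20.23
```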
When the equimarginal condition is met, the firm is guaranteed to be maximizing profits.

To find the optimal solution via the analytical method, we take the derivative of the profit function with respect to q, set it equal to zero, and solve for $q \mbox{*}$. Our cubic cost function introduces the complication that the solution has two roots, so we have to use the quadratic formula.

STEP Click the button to see how to solve this problem with calculus.

Cell AC17’s formula has the root that maximizes profits (the other root minimizes profits; more on this in the next section). As usual, Solver and calculus agree (not exactly, but they give effectively the same answer).

Representing the Optimal Solution with Graphs

Since this is an unconstrained optimization problem (unlike utility maximization and input cost minimization), the graphical display of the optimal solution is different. The firm’s output profit maximization problem is usually represented by a graph that depicts the family of cost curves along with marginal and average revenue. Figure 12.1 and the Intro sheet show this canonical graph for a perfectly competitive firm (signaled by the fact that firm demand is horizontal, so marginal revenue equals demand).

Figure 12.1 is the usual display of the optimal solution, but it is actually part of a much larger graphical display.

STEP Proceed to the Graphs sheet to see how Figure 12.1 fits into the bigger picture, also shown in Figure 12.2. Zoom out to see all four graphs.

Each of the four graphs in Figure 12.2 and on your screen can be used to show the firm’s optimization problem and its solution. We will walk through each one.

1. The top left graph plots total revenue and total cost. TR is linear because the firm’s market structure is perfect competition, hence, it is a price taker. The cubic total function produces the shape of TC. The firm wants to choose q to maximize the difference between revenues and costs.
2. The top right graph shows the profit function, which is $TR - TC$. The firm wants to choose q so that it is at the highest point on the profit hill.
3. The bottom right graph displays marginal profit, which can be expressed as the derivative of the profit function with respect to q. The firm can find the maximum profit by choosing q so that marginal profit is zero. This is the first-order condition from the analytical solution.
4. Finally, the bottom left graph is the usual display. The firm chooses q where MR (which equals P given that the firm is a price taker) equals MC. Profits can be calculated as the area of the rectangle $(AR - ATC)q$.

To be clear, all four graphs in Figure 12.2 show the same optimal q and maximum profits, but the graph that is most often used is the bottom left. It highlights the comparison of MR and MC, and the family of cost curves provides information about the firm’s cost structure. We can also find profits as the area of the rectangle (with blue top and dashed line bottom).

STEP Move the output with the slider control (in the middle of the four charts) to the left and right of $q \mbox{*}$ to see how the profit rectangle changes.

Only when q is such that $MR = MC$ do you get the maximum area of the profit rectangle. Moving left from optimal q, you can make the rectangle taller, but you must make it shorter to do this and you end up with less area. You can make the rectangle longer by moving right from optimal q, but ATC rises and the rectangle gets thinner, so once again the area falls.
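A quick numerical check of this rectangle logic is below: a minimal sketch, again assuming the cubic cost parameters from before, that computes the area $(AR - ATC)q$ at $q \mbox{*}$ and on either side of it.

```python
# Same assumed parameters as before: TC = 0.04q^3 - 0.9q^2 + 10q + 5, P = 7.
a, b, c, d = 0.04, -0.9, 10, 5
P = 7

def atc(q):
    return (a * q**3 + b * q**2 + c * q + d) / q

def rectangle_area(q):
    return (P - atc(q)) * q   # AR = P for a price taker

for q in (11.0, 13.09, 15.0):
    print(f"q = {q:5.2f} -> (AR - ATC)q = {rectangle_area(q):5.2f}")
# ~17.66 at q = 11, ~20.23 at q* = 13.09, ~17.50 at q = 15:
# the rectangle's area peaks at q* and shrinks on either side.
```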
The intersection of MR and MC immediately reveals the optimal q. Profits at any q are also easily seen as the area of a rectangle, length times width, with units in dollars. Because the y axis is a rate, $/unit, and the x axis is in units of the product, multiplying the two leaves dollars. In other words, say the product is milk in gallons. Then price, average total, and average variable cost are all in $/gallon. Suppose that at a price of $2/gallon, MR = MC at an output of 7,000 gallons and ATC = $1.50/gallon at this output. Clearly, profits are ($2/gallon - $1.50/gallon) x 7,000 gallons, which equals $3,500.

We can compute profits from the profit rectangle at any level of output. The height of the rectangle is always average revenue (which equals price) minus average total cost. This vertical distance is average profit. When multiplied by the level of output, we get profits, in dollars, at that level of output.

The bottom left graph has another advantage over the other graphs. It can be used to explain a curious and puzzling feature of a firm’s short run profit maximization problem. The story revolves around a firm with negative profits and what it should do in this situation.

The Shutdown Rule

The firm has an option when maximum profits are negative: it can simply shut down, close its doors, hire no workers, and produce nothing. The Shutdown Rule says the firm will maximize profits by producing nothing ($q \mbox{*} = 0$) when $P < AVC$.

The key to whether the firm shuts down or continues production in the face of negative profits lies in its fixed costs. If the firm can do better by shutting down and paying its fixed costs instead of producing and choosing the level of output where $MR = MC$, then it should produce nothing.

Continuing production in the face of negative profits versus shutting down are actually the last two of four possible profit positions for the firm.

1. Excess Profits: $\pi \mbox{*} > 0 \text{ and } P > ATC$
2. Normal Profits: $\pi \mbox{*} = 0 \text{ and } P = ATC$
3. Negative Profits, Continuing Production: $\pi \mbox{*} < 0 \text{ and } P \geq AVC$
4. Shutdown: $\pi \mbox{*} < 0 \text{ and } P < AVC$

Case 1, excess profits, occurs whenever maximum profits are positive. The example we have been working on is this case. With P = 7, we know that $q \mbox{*} = 13.09$ and $\pi \mbox{*} = \$20.23$.

STEP In the Graphs sheet, click on the pull down menu (over cell R5) and select the Zero Profits option.

Your screen now looks like Figure 12.3. Notice that the price ($5.373) in the bottom left chart just touches the minimum of the average total cost curve. The profit rectangle has zero area because it has zero height. The best the firm can do is zero profits: all other choices of q lead to lower (negative) profits. In the top left graph, you can see that TR just touches TC. In the top right graph, the top of the profit hill just touches the x axis. These charts confirm what the bottom left chart tells us: with P = $5.373, $q \mbox{*}$ yields $\pi \mbox{*} = 0$.

The third and fourth profit cases are the flip side of the first two in the sense that price is so low that profits are now negative. This means firms will leave in the long run, but another question arises: should the firm shut down immediately or continue production?

STEP Click on the pull down menu (over cell R5) and select the Neg Profits, Cont Prod option.

With the Neg Profits, Cont Prod option selected, P = 5.10. The firm produces $q \mbox{*}$ = 11.43 and suffers negative maximum profits of $-\$3.16$.
Notice that price is below ATC in the bottom left graph, so that the profit rectangle, (AR - ATC)q, will be a negative number. (The area is not negative, but it is interpreted as a negative amount since revenues are below costs.) In the top left graph, the TR line is below the TC curve. In the top right graph, the profit function is below the x axis. There is a maximum, or top of the hill, but it is negative, like a mountain under water.

Keep your eye on the top right graph, reproduced as Figure 12.4. Notice that the top of the profit function is higher than the intercept (where q = 0). It is better for the firm to continue production, even though it is earning negative profits of $-\$3.16$ at the optimal output level, because it would make an even lower negative profit of $-\$5$ (the fixed cost) if it shut down.

The canonical graph of profit maximization can be used to determine whether the firm should produce or shut down by comparing price to average variable cost. The Shutdown Rule is easy: hire no labor and produce nothing if $P < AVC$.

STEP Look at the bottom left graph on your screen. It confirms that the Shutdown Rule works. Profits are negative because price is below average total cost, but the firm will continue production because $P > AVC$.

When the relationship between P and AVC is such that price is greater than average variable cost, it means that the top of the profit function is higher than the y intercept, as in Figure 12.4.

STEP Click on the pull down menu (over cell R5) and select the Neg Profits, Shutdown option.

Figure 12.5 displays the top right graph. In this case, the top of the profit function is below the y intercept. In other words, the maximum profit if the firm produces, $-\$9.81$, is worse than the negative profit incurred if the firm shuts down, $-\$5$. The firm optimizes by choosing $q \mbox{*}$ = 0, that is, shutting down.

STEP Look at the bottom left graph on your screen. Once again, we have confirmation of the Shutdown Rule. With P = 4.5, $P < AVC$ and the firm should shut down.

STEP Carefully watch the canonical (bottom left) and profit function (top right) graphs as you change the price (with the pull down menu over cell R5).

As long as $P > AVC$, the top of the profit hill is above the y intercept. If $P = AVC$, the two are exactly equal and the firm is indifferent between producing and shutting down. $P < AVC$ is the magic cutoff point. When this happens, the top of the hill is below the y intercept (which is the negative profit suffered if the firm produces nothing). Thus, the firm’s best choice is to produce nothing.

Here is why the rule works. Multiply the Shutdown Rule by q to get:

$\begin{gathered} (P<AVC)q \\ Pq<AVC \cdot q \\ TR<TVC \end{gathered}$

$TR<TVC$ is a restatement of the Shutdown Rule: produce nothing if total revenue cannot cover total variable costs. This makes sense. Why produce if you can’t even pay for the variable expenses? You are better off not producing at all.

If total revenue is less than total cost, then profits are negative. However, the firm can be in a situation where $TR < TC$, but $TR > TVC$. If so, then production makes sense because you will be able to offset some of the fixed costs you have to pay no matter what you do. Profits are negative, but it is better to produce than not produce because variable costs are covered and fixed costs are at least partially defrayed.

STEP For a summary of the four cases and what the Shutdown Rule is doing, click the button (over cell AC5).
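The shutdown logic is easy to express in code. Here is a minimal sketch, again assuming the workbook's cubic cost parameters, that compares the interior solution's profit to the shutdown profit of $-d$ (minus total fixed cost) at the two prices used above.

```python
import math

# Assumed workbook parameters: TC = 0.04q^3 - 0.9q^2 + 10q + 5.
a, b, c, d = 0.04, -0.9, 10, 5

def profit(P, q):
    return P * q - (a * q**3 + b * q**2 + c * q + d)

def interior_q(P):
    # Larger root of P = MC(q): the profit-maximizing MR = MC intersection.
    disc = (2 * b)**2 - 4 * (3 * a) * (c - P)
    return (-2 * b + math.sqrt(disc)) / (2 * (3 * a))

def best_choice(P):
    q = interior_q(P)
    # Produce only if the interior solution beats shutting down (profit -d).
    return (q, profit(P, q)) if profit(P, q) >= -d else (0.0, -d)

for P in (5.10, 4.50):
    q, pi = best_choice(P)
    print(f"P = {P:.2f}: q* = {q:5.2f}, profit = {pi:6.2f}")
# P = 5.10: q* = 11.43, profit = -3.16 (produce despite losses)
# P = 4.50: q* =  0.00, profit = -5.00 (shut down; producing would yield -9.81)
```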
What’s Normal about Zero Profits?

In economics, zero profits are called normal profits. This is confusing. Zero sounds bad, not normal. There is a logical explanation, but it requires a clear separation of accounting versus economic profits. They differ because economists include opportunity costs when calculating economic profits.

• Accounting profits = revenues - explicit costs
• Economic profits = revenues - explicit costs - opportunity costs

In economics, without an adjective, "profits" means economic profits. So, when profits are zero, that means economic profits are zero. Economic profits have had an extra item subtracted, the opportunity costs of using firm resources to make this particular product.

An accountant would subtract explicit (out-of-pocket) costs (wages, rent, etc.) from revenues and, if this number is positive, announce that the firm is making money. The economist would then also subtract the profits that could be made in the next best alternative industry the firm could be in. If economic profits are zero, it means the opportunity costs are exactly equal to the accounting profit and the firm cannot do better by switching to its next best alternative.

Although this may seem needlessly contorted at first, there is a nice interpretation of economic profits: if positive, the firm will stay in the industry and new firms will enter in the long run; if negative, the firm will exit in the long run; and if zero, there will be neither exit nor entry in the long run. It is in this sense of equilibrium that we say zero profits are normal. With $\pi = 0$, there is stability and no tendency to change in the movement of firms.

The distinction between economic and accounting profits also explains why positive profits are excess profits. It is not meant as a pejorative term, but to indicate that the firm is earning greater profits than are needed to keep producing that product in the long run. Excess profits also mean that others are attracted and will enter that industry. Economists are not concerned with how much money the firm made, but with profits as a signal to entry and exit. Defining economic profits as accounting profits minus opportunity costs gives us a profit measure that tells us whether the firm will stay or leave in the long run.

Shutdown Rule and Corner Solution

The Shutdown Rule is usually covered in introductory economics. Memorization is often all that is achieved. We can do better by properly situating the Shutdown Rule in the landscape of mathematical and economic concepts: it is a corner solution.

Recall that, in the Theory of Consumer Behavior, there are situations in which the MRS does not equal the price ratio, yet the solution is optimal. This is a corner solution. Food stamps are an example. The fact that food stamps can only be used to buy food creates a horizontal segment on the budget constraint so that a consumer might not be able to make MRS = $\frac{p_1}{p_2}$. At the kink in the constraint, the consumer is optimizing even though the equimarginal condition is not met.

Corner solutions are a general phenomenon. They can be seen whenever a restriction or border blocks further improvement in the objective function. Consider Figure 12.6, which sketches a maximization problem to highlight the difference between an interior and a corner solution. In panel B, the agent cannot choose negative values of the x variable and, therefore, the function is cut off by the y axis.
In panel B, although the marginal condition is not met, we have an optimal solution, defined as doing the best we can without violating any constraints.

Shutting down is another example of a corner solution because, once again, the equimarginal condition is not met at $q=0$, yet producing nothing is the optimal solution. Shutting down is an unusual example of a corner solution because there is a place where the marginal condition is met (there is an output where $MR = MC$), but it is not optimal. The profit function twists in such a way (see Figure 12.5) that profit is decreasing as output rises from zero. This means that profits would go up if we were able to produce negative output. Since we are not allowed to choose $q < 0$, we have a corner solution.

How can we know if we should choose q at $MR = MC$, the interior solution, or shut down, the corner solution? The only way is to compare the profit positions at the two quantities. The good news is that no checking is required for cases 1 and 2. As long as profits are non-negative, there is no way that a profit of minus total fixed cost can be better than the interior solution of q where $MR = MC$. But whenever $MR = MC$ yields negative maximum profits, comparing those negative profits to $-TFC$ is necessary. Or, you could just use the Shutdown Rule and see if $P < AVC$, which will give the same, correct answer.

The complexity of the firm’s profit maximization problem in the short run, with its shutdown possibility, should increase your sensitivity to lurking problems with analytical and numerical methods. We know neither is perfect, so there may be glitches in applying these methods to the firm’s profit maximization problem. The Q&A sheet provides an example. Be sure to look carefully at questions 2 and 3.

Finding and Displaying the Initial Solution

The output profit maximization problem for a PC firm in the short run is a single-variable (q) unconstrained problem. It can be solved with numerical and analytical methods. The equimarginal rule applied is that $MR = MC$ and, since price taking behavior means that $P=MR$ for a PC firm, the equimarginal rule is often shown as $P=MC$.

The firm’s profit maximization problem contains a complication in the short run. If maximum profits are negative, it is possible that the firm is better off not producing anything. A shortcut to determine whether or not to produce when $\pi \mbox{*}<0$ is the Shutdown Rule, $P < AVC$.

The initial optimal solution is displayed by a canonical graph that superimposes the firm’s revenue side (average and marginal revenue) over its cost structure (average and marginal costs). Optimal output is easily found where MR intersects MC (as long as $P > AVC$) and maximum profit is displayed as the area of the appropriate rectangle. The ability to instantly show the optimal solution, maximum profits, and whether or not to shut down explains the popularity of this graph.

You can think of the firm as walking through a series of three steps when solving its profit maximization problem:

1. Choose q where $MR = MC$ in the canonical graph.
2. Compute profits at $q \mbox{*}$ via $(AR - ATC)q$ (the profit rectangle).
3. If profits are negative, shut down if $P < AVC$.

The PC firm’s profit maximization is simpler in the long run. If $\pi < 0$, firms exit the industry; $\pi > 0$ (also known as excess profits) leads to entry. Thus, in long run equilibrium (a state never actually attained), $P = ATC$ and $\pi = 0$ for all firms. This is why zero economic profits are called normal profits.
Exercises

1. Use Excel’s Solver to find the optimal output and profit for a firm with cost function $TC = 2q^2 + 10q + 50$ and $P = 40$. Take a screen shot of your optimal solution (including output and profits) and paste it in a Word document.
2. Use analytical methods to solve the problem in the previous question.
3. For what price range will the firm in question 1 shut down? Explain.
4. If fixed costs are higher, will this influence the firm’s shutdown decision? Explain.
12.2: Deriving the Supply Curve

The most important comparative statics analysis of the firm’s output profit maximization problem is based on tracking $q \mbox{*}$ (quantity supplied) as price changes, ceteris paribus. This gives us the firm’s supply curve. An important thing to remember is that the supply curve has two parts:

1. MC when $P >$ min AVC
2. Zero otherwise (Shutdown Rule)

As usual, we have numerical and analytical methods at our disposal for the comparative statics analysis that generates the supply curve. Before we begin, we show how Solver can be modified to deal with the shutdown possibility and revisit the fact that it is not a silver bullet.

Solver Issues

STEP Open the Excel workbook DerivingSupply.xls, read the Intro sheet, then go to the OptimalChoice sheet to see an implementation of a PC firm’s profit maximization problem in the short run.

The sheet looks like the OptimalChoice sheet in the OutputProfitMaxPCSR.xls workbook (from the previous section), but it has a few additional cells. The IF statements in cells C4 and C8 of the OptimalChoice sheet are a convenient way to incorporate the firm’s shutdown option.

STEP Click on C8 to reveal its formula: =IF(max_profit >= -d, q, 0). We will use this cell as the correct optimal solution in all cases, including the shutdown case.

It is easy to see that Solver has been run because at $q \approx 10$ in cell B8, $MR = MC$ since $P=4$ and cell B18 reports $MC=4$. This q, however, is not the optimal solution because cell B4 shows that $\pi = -15$ (using the common convention that "()" denotes negative numbers). This firm would be better off not producing at all and suffering a loss equal to total fixed cost, $\pi = -5$. The Shutdown Rule says the same thing since $P < AVC$ (cell B15 is $5).

While Solver’s answer is wrong (because it found the top of the profit hill, which is lower than the y intercept at $-TFC$), we can add a step to Solver where we check for exactly this situation. This is what cells C8 and C4 do. The expression $max\_profit \geq -d$ is used to test if Solver’s answer (the interior solution) has higher profits than negative total fixed costs (the corner solution). If true, it keeps Solver’s solution; if false, the optimal solution is zero (shut down).

Solver will find the best of the positive levels of output in cell B8 and the IF statement in cell C8 checks to make sure that the best solution (of the q $>$ 0) is better than shutting down and producing nothing (q = 0). With P = 4, the best of all of the positive levels of output, q = 10, provides a profit of minus $15. Cells C4 and C8 show that producing nothing yields a higher profit (and smaller loss) of minus $5 and is the correct optimal solution.

While this is an improvement over manually checking Solver’s answer, there is another potential problem with Solver in this application.

STEP To see the problem, set P (cell B12) to 7 and run Solver.

The optimal q is approximately 13.09 and the firm is enjoying excess profits. Cells B4 = C4 and B8 = C8 because Solver’s answer gives profits greater than minus TFC. All is well.

STEP Now set cell B8 = 1. Run Solver from this initial value. Solver’s result is disastrous! What happened?

STEP Click the button to see why starting from q = 1 leads Solver astray.

The explanation on the sheet makes it clear that the initial or starting value can play a critical role when numerical methods are utilized.
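To see concretely why the starting value matters, here is a minimal sketch (assuming the same cubic cost parameters, TC = 0.04q^3 - 0.9q^2 + 10q + 5, with P = 7) that finds both roots of the first-order condition and classifies each with the second derivative of profit.

```python
import math

# Assumed parameters: TC = 0.04q^3 - 0.9q^2 + 10q + 5, P = 7.
a, b, c, P = 0.04, -0.9, 10, 7

# Marginal profit is zero where P = MC, i.e., 3a*q^2 + 2b*q + (c - P) = 0.
disc = (2 * b)**2 - 4 * (3 * a) * (c - P)
q_hi = (-2 * b + math.sqrt(disc)) / (2 * (3 * a))   # ~13.09
q_lo = (-2 * b - math.sqrt(disc)) / (2 * (3 * a))   # ~1.91

# Second derivative of profit is -(6a*q + 2b); negative means a local max.
for q in (q_lo, q_hi):
    curvature = -(6 * a * q + 2 * b)
    kind = "local max" if curvature < 0 else "local min"
    print(f"q = {q:5.2f}: second derivative = {curvature:+.2f} -> {kind}")

# Starting at q = 1 puts a hill-climbing algorithm to the LEFT of the
# profit minimum at q_lo (~1.91), where profit rises as q falls, so the
# search heads toward zero and beyond instead of toward q_hi (~13.09).
```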
This profit maximization problem has a sufficiently complicated surface that a numerical algorithm, such as Solver, cannot easily distinguish between local and global optimal solutions. There is no simple fix. The lesson is that you have to know the optimization problem you are dealing with and be careful interpreting the answers provided by a numerical algorithm.

The explanation of Solver’s failure involves the minimum point of the profit function and this provides an opportunity to explain the two roots in the quadratic formula. A picture, in this case, really is worth a thousand words.

STEP Click the button.

Cell Z17 has the other root from the quadratic formula (computed by adding instead of subtracting the square root term). Both roots are places where the profit function is flat (in the top right graph on the sheet). Notice how the dashed lines from the max and min profit points lead to points where marginal profit ($m \pi$) is zero. These are the two roots in the quadratic formula. The two roots can also be seen in the canonical, bottom left graph as the two points where MR and MC intersect.

Of course, we only care about the root that maximizes profits. One way to ensure that $MR=MC$ yields a profit max is to make sure that $MC < MR$ to the left of the intersection. In other words, MC cuts MR from below.

Numerical Methods to Derive the Supply Curve

STEP Set cell B8 back to 10 and P = 4 so Solver will converge to the local max at q = 10 (where $\pi = -15$).

STEP Run the Comparative Statics Wizard from $P = 4$ with 0.05 sized shocks 100 times. Track the C4 and C8 cells as endogenous variables. You can safely ignore the warning: you are using the CSWiz to keep track of these cells, but will not include them as changing cells in the Solver dialog box.

Your results will look like those in the CS1 sheet. Notice that at low prices, the firm is producing nothing. This is the part of the supply curve where the firm shuts down to maximize profits.

The supply curve and inverse supply curve can be graphed with the CSWiz data, as shown in Figure 12.7 and the CS1 sheet. Of course, the tail of the curve runs at zero quantity supplied for every price below the shutdown price. Just as with the demand curve, $q=f(P)$ is the supply curve and flipping the axes, $P=f^{-1}(q)$, gives the inverse supply curve.

Figure 12.7 applies our usual graphical exposition. The leftmost chart is the underlying graph from which the other charts are produced. We shock P and track $q \mbox{*}$. This gives the supply curve. Unlike the demand curve, however, notice that the supply curve follows MC as long as P is not below AVC. The discontinuity is at the minimum AVC. Row 32 of the CS1 sheet shows the break occurs for this cost function between $4.90 and $4.95. Prices below this minimum AVC value result in no quantity supplied since the firm shuts down.

Analytical methods can be used to find the discontinuity. First, we obtain an expression for AVC.

$\begin{gathered} TC=0.04q^3 - 0.9 q^2 + 10 q + 5\\ TVC=0.04q^3 - 0.9 q^2 + 10 q\\ AVC=\frac{TVC}{q}=0.04q^2 - 0.9 q + 10 \end{gathered}$

Then we take the derivative of AVC with respect to q and set it equal to zero to find its minimum point.

$\begin{gathered} \min\limits_{q} AVC=0.04q^2 - 0.9 q + 10\\ \frac{dAVC}{dq}=0.08q - 0.9 = 0\\ q \mbox{*} = \frac{0.9}{0.08}=11.25 \end{gathered}$

By plugging this minimum value of output into the AVC function, we know the price at which the discontinuity kicks in.
$AVC[q=11.25]=0.04[11.25]^2 - 0.9 [11.25] + 10=4.9375$

In the CS1 sheet, the discontinuity occurs when price rises from $4.90 to $4.95. Our analytical work tells us that the discontinuity is exactly at $4.9375. Any price below this yields optimal q of zero.

Notice how we used the derivative to find the value of q at which the rate of change for the AVC curve was zero. This is the bottom of the U-shaped AVC curve and prices below this AVC result in shutting down. The lesson is that the derivative is a tool that has a variety of uses.

The CS1 sheet also computes the price elasticity of supply in column E.

STEP Scroll down to see a comparison of slope and elasticities via the $\Delta$ and derivative approaches.

In this case, the two approaches are not exactly the same because $q \mbox{*}$ is non-linear in P. The sheet has all of the details in case you want to refresh your understanding of this concept.

Analytical Methods to Derive the Supply Curve

For the analytical approach, we use a different cost function to give us more practice.

$TC(q)=q^2+20$

With this quadratic cost function, we can set up and solve the PC firm’s profit maximization problem. Because it is a perfectly competitive firm, we know price is given and, thus, $TR = Pq$. Therefore, the optimization problem is:

$\max\limits_{q} \pi=Pq-(q^2+20)$

We proceed by taking the derivative with respect to q and setting it to zero, then solving this first-order condition for optimal q.

$\frac{d \pi}{dq}=P-2q=0$

$q \mbox{*}=\frac{1}{2}P$

This is the supply function. It gives the quantity supplied by a firm at every given price. For example, with $P = 20$, $q \mbox{*}$ = 10. The inverse supply curve is found by expressing the equation as $P=f(q)$.

$P=2q \mbox{*}$

The supply function tells us that $q \mbox{*}$ increases by half a unit for every one-unit increase in P. The size of the change in P does not matter since $\frac{dq}{dP}$ is constant. The price elasticity of supply is $+1$.

$\frac{dq}{dP} = \frac{1}{2}, \qquad \frac{dq}{dP}\frac{P}{q} = \frac{1}{2}\cdot\frac{P}{\frac{1}{2}P}=1$

We can compute the price elasticity of supply from one point to another. We know that at $P=20$, $q \mbox{*} = 10$. If $P=30$, $q \mbox{*} = 15$. A 50% rise in price led to a 50% increase in quantity supplied, so the price elasticity of supply is $+1$. The result is the same as the derivative approach because $q \mbox{*}$ is linear in P.

A PC firm with a quadratic cost function will not shut down at any positive price. By constructing its family of cost curves and graph of the optimal solution, we can see why. We begin with the cost curves. We know $TVC = q^2$ and $TFC = 20$. Then we can find the average and marginal curves.

$\begin{gathered} ATC(q)=\frac{TC}{q}=\frac{q^2+20}{q}=q+\frac{20}{q}\\ AVC(q)=\frac{TVC}{q}=\frac{q^2}{q}=q\\ MC(q)=\frac{dTC}{dq}=\frac{d(q^2+20)}{dq}=2q \end{gathered}$

STEP Proceed to the Graphs sheet to see the four graph display of the optimal solution for this problem.

If $P = 20$, then $q \mbox{*} = 10$ and $\pi \mbox{*} = \$80$. It is also obvious that there is no positive price at which this firm will shut down because AVC is simply a ray with slope $+1$ out of the origin: at the firm's chosen output, price can never fall below AVC. Notice also how there is only one point where $MR=MC$, unlike the two intersections we saw with the cubic cost function. The quadratic cost function cannot produce the S-shape TC needed for the profit function to have a minimum profit at the bottom of a U-shape. The profit function in the top right graph has a single top of the hill (where $m \pi = 0$).
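The two-part structure of the supply curve is easy to consolidate into a single function. Below is a minimal sketch for the cubic cost case (assuming the same parameters as above), returning the MC-based quantity above the shutdown price of $4.9375 and zero below it; for the quadratic cost function the analogous function would simply return P/2 at every positive price.

```python
import math

# Assumed cubic cost: TC = 0.04q^3 - 0.9q^2 + 10q + 5.
a, b, c = 0.04, -0.9, 10
P_SHUTDOWN = 4.9375  # minimum AVC, derived above

def quantity_supplied(P):
    if P < P_SHUTDOWN:
        return 0.0  # Shutdown Rule: P < AVC, so produce nothing
    # Larger root of P = MC(q), where MC cuts MR from below.
    disc = (2 * b)**2 - 4 * (3 * a) * (c - P)
    return (-2 * b + math.sqrt(disc)) / (2 * (3 * a))

for P in (4.90, 4.95, 7.00):
    print(f"P = {P:.2f} -> quantity supplied = {quantity_supplied(P):.2f}")
# P = 4.90 -> 0.00 and P = 4.95 -> 11.26: the discontinuity seen in the
# CS1 sheet; P = 7.00 -> 13.09, the optimal output from the previous section.
```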
Points Off the Supply Curve

As we did with the demand curve (see Figure 4.12), we can explore the meaning of being off the supply curve. The interpretation is quite similar.

STEP Return to the CS1 sheet and manipulate the point off the supply and inverse supply curves with the scroll bar in column E.

Figure 12.8 shows what is on your screen, but in Excel you can move the red dot. As you do, the chosen q and the profit for that quantity are displayed. Profits are maximized when you are on the supply curve.

It is clear that the supply curve, like the demand curve, has a hidden third dimension: profit for supply and utility for demand. The rightmost panel shows the mountain and how you approach the top at the optimal solution. The ridgeline connecting the mountain tops is the supply curve. Like the demand curve, points off the supply curve are associated with lower values of the objective function.

Notice how the point off the curve moves in a vertical fashion in the supply curve graph and horizontally on the inverse supply curve graph. This happens because price is constant (at P = 6.25). With the price on the x axis, points can be above or below the supply curve. Points off the inverse supply curve are to the right or left because P is on the y axis. Finally, on the inverse supply curve, the inefficiency of being off the curve is obvious because output levels off the inverse supply curve mean the firm is not choosing a point where $MR (= P) = MC$.

The Supply Curve has Parents

Like demand and cost curves, supply is derived from an optimization problem. Knowing where key relationships come from separates introductory from more advanced economics and is an important aspect of mastering the economic way of thinking.

The supply curve is a comparative statics analysis of the effects on optimal quantity as price changes, ceteris paribus. Unlike the demand curve, the supply curve has a discontinuity because the firm will shut down if price falls below AVC. The supply curve depends critically on the firm’s cost function. The inverse supply curve is simply MC above AVC and zero otherwise. The firm will choose that level of output where $MR (=P) = MC$ as long as $P > AVC$.

Like the demand curve, points off the supply curve are interpreted as inefficient solutions to the optimization problem. Although possible, no optimizing agent would choose a point off the supply (or demand) curve.

Exercises

1. What happens to the short run supply curve if wages rise? Explain. Use Word’s Drawing Tools to create a graph depicting your answer.
2. What happens to the inverse short run supply curve if wages rise? Explain. Use Word’s Drawing Tools to create a graph depicting your answer.
3. What happens to the short run supply curve if the rental rate of capital increases? Explain.
4. What happens to the short run supply curve if the price (P) increases? Explain.
5. Suppose a firm is off its short run supply curve, but at a point where $MR = MC$. Use Word’s Drawing Tools to draw the profit function for this situation and label a point Z that meets the supposed conditions.
12.3: Diffusion and Technical Change

The Theory of the Firm is a highly abstracted model of a real-world firm, yet there are fundamental ideas that can be applied to observed firm behavior. This section does exactly that, applying the Shutdown Rule to explain differing rates of diffusion of new technology.

The Shutdown Rule, $P < AVC$, says that firms will not produce when price is below average variable cost because profits are maximized (and losses minimized) by shutting down instead of producing at the best of the positive output choices (at $MR = MC$).

Diffusion of new technology is the process by which new methods of production are adopted by firms. The speed of diffusion is critical: the faster firms upgrade and modernize, the richer the society. We will see that some industries have fast and others slow diffusion, with the Shutdown Rule playing a key role.

Setting the Table

Consider two thoughts that are both wrong:

1. Always upgrade to have the best equipment or to use "best practice" techniques.
2. Never throw working machinery away or abandon a process that can produce output.

The first statement is wrong because firms would always be replacing almost new machinery, tools, and plant to have the very latest equipment. The second statement is the polar opposite of the first: now you keep using ancient machinery that was long ago superseded by better technology just because it is still functioning. There has to be a middle ground between these two extremes and a logical way to determine when to replace equipment.

Consider these two words that are accepted as synonymous in common usage, but are different in the language of the specialized literature of diffusion:

1. Outmoded: machinery that is not the best at the time, but is still used.
2. Obsolete: machinery that is scrapped (thrown away) yet still functions.

Your phone is outmoded if it is not the latest, greatest available version. When you replace your phone with a new one, the old one becomes obsolete. At any point in time, a few people have the newest, fanciest model; the rest have versions of outmoded models still in use; and there are many obsolete models that are no longer being used. As time goes by, the newest model becomes outmoded and, eventually, obsolete. The distinction between outmoded and obsolete sharpens our focus on this question: when does machinery go from being outmoded to obsolete?

Another important idea is labor productivity: the ability of labor to make output. This is measured in two ways, output per hour or labor required to produce one unit of output. The output per hour version is simply the average product of labor, $\frac{q}{L}$. The bigger this ratio, the more productive labor is. You can take the reciprocal and ask, “How much labor is needed to make one unit of output?” This measure, called the unit labor requirement, gets smaller as labor productivity improves.

There are two ways of increasing labor productivity:

1. Better labor: increasing education.
2. Better machinery: technical (or technological) change.

Most people only think of the first way. More educated and skilled labor obviously will be more effective in translating labor input into output. But holding labor quality constant, if workers have better technology, such as computers or power tools, then labor productivity rises. So, if you want to increase ditch digging productivity, you can improve the worker (think ditch digging classes) or you can improve the technology. A worker with a shovel digs a ditch a lot faster than one without.
But the explosion in productivity and output really occurs when you give the worker a backhoe. But here’s the curious thing: after backhoes are invented and brought online, if you look at the entire industry of ditch digging, you will see many different methods being used. Not everyone will instantly adopt the backhoe.

The question we are interested in boils down to explaining the rate of diffusion: how rapidly do the latest, best machinery and methods spread? The mere existence of a new machine (e.g., a backhoe) is not enough to spur economy-wide increases in labor productivity. If the machine is not adopted rapidly, it will have little effect on the economy. We want fast diffusion so new methods spread quickly. This will boost productivity and economic growth.

The rate of diffusion is like adding a drop of red dye in a bucket of water. How rapidly will the water turn red? What factors affect the rate of diffusion? If we stir, the rate of diffusion rockets; how can we "stir" the economy to speed up diffusion?

It turns out that the rate of diffusion of technical change in an economy varies across industries and depends on specific characteristics. We are not searching for an unknown constant, but for the factors that explain wide variation in rates of diffusion: sometimes backhoes are rapidly adopted and other times not.

The rate of diffusion depends on whether machinery is determined to be outmoded versus obsolete. If machines are scrapped and replaced with the latest technology fairly quickly, then the rate of diffusion of technical change will be fast. If old technology is kept online and in production for a long time, then the rate of diffusion of technical change will be slow.

Before we see how the Shutdown Rule plays a critical role in deciding whether machinery is outmoded or obsolete, we review data used by W. E. G. Salter (1960) to support the claim that the rate of diffusion varies across industries. We also introduce a new graph that captures the idea of a distribution of methods or vintages of machinery.

On the Variation of Methods Used Across Industries

Salter presents data on a variety of goods. He focuses on the methods of production used at any point in time. It is quite obvious that there is always a mix of technologies being used. As new plants come online and new machinery is installed, older plants with older machinery remain in operation.

For example, Salter’s Table 5, reproduced as Figure 12.9, shows a mix of technologies used in pig-iron production. Notice that the labor productivity of the best-practice plants (the latest technology) rises from 1911 to 1926. The industry average, however, lags behind because the latest technology is not immediately adopted by every manufacturer. The machine charged and cast method (the rightmost column) is the best technology, but even by 1926, 30.6% of the firms are not using it. These firms remain in operation with older technology. This slow diffusion hampers industry-wide labor productivity.

Figure 12.10 (Salter’s Table 6) focuses on the production of five-cent cigars. Salter keeps constant the quality and type of cigar, the five-cent variety, to focus on an apples-to-apples comparison of production methods. Because the measure of productivity is the labor required to make 1,000 five-cent cigars, the lower the hours required, the greater the labor productivity. The two-operator machine is the best practice, but three other methods are also used.
Once again, the point is that a mix of methods is used, and the combination determines industry-wide productivity.

Figure 12.11 offers a final example of Salter’s point that an economy’s labor productivity depends on the technology actually being utilized to make output. The Range of all plants column shows substantial variation from the best-practice firms to the least productive methods still being used. Notice that lower numbers are higher productivity because, as the title says, we are measuring "labour content per unit of output." For bricks, with 17 plants in operation, the middle 50% range is from a best 0.93 hours to make 1,000 bricks to 1.75 hours. That is a huge difference and it is just the middle 50%. Take a moment to look at the ranges of the other products in Figure 12.11.

The Ratio of range to mean columns measure the rate of diffusion. If somehow every plant adopted the best-practice method, this ratio would be zero. Thus, houses and men’s shoes are industries with much faster diffusion than the others.

Pig-iron, five-cent cigars, and the products in Figure 12.11 are examples of a widespread phenomenon that was of great interest to Salter. The rate of diffusion of new technology is neither constant nor instantaneously fast. Salter wanted to know what diffusion depends on in the hope of manipulating it. After all, if there is a policy or lever we can pull to speed up diffusion, we would improve productivity and increase output.

A Graph is Born

Salter used an uncommon graph, an ordered histogram, to show how an industry incorporated various technologies in production. Figure 12.12 (Salter’s original Fig. 5) uses rectangles to indicate each method or vintage of machinery. We call this a Salter graph.

The greater the base of each rectangle in Figure 12.12, the greater the share of the industry’s output for that particular technology. So, in the middle of the graph, the wider rectangle has a bigger share of the output than the narrower one right next to it. The lengths of the bases have to add up to 100% of the industry output. The height of each rectangle tells you how much labor is needed to make one unit with that technology. The lower the height (because the y axis shows the labor required to make one unit of output), the greater the labor productivity for that technology.

The Salter graph has to have a stair-step structure because the rectangles are ordered according to when they came online. The oldest technology is to the right and the newest is to the left. The leftmost rectangle is the best-practice technology at that time and all of the other rectangles are at different stages of outmodedness.

The Salter graph in Figure 12.12 is actually a single frame of a motion picture. As time goes by, and new techniques are invented and brought online, some of the rightmost rectangles will “fall over” and be replaced by a new shorter rectangle coming in from the left. Figure 12.13 shows a possibility for the next frame in the movie.

The base of the rectangle of the newest technology in Figure 12.13 equals the sum of the widths of the three rectangles representing obsolete technologies, which fall off the graph because they are no longer used. The wider the base of the newest technology, the better in terms of fast diffusion of technological change and rapid increases in industry-wide productivity.
If a new technology swept through an industry like wildfire, the Salter graph would show it as having a very long base, indicating it was producing a large share of industry output. Another, less favorable possibility is that the newest technology has a small width. This would mean that few firms have adopted the best-practice method and industry-wide productivity will not improve by much. The industry will remain dominated by outmoded methods.

Consider the two Salter graphs in Figure 12.14 (Salter’s original Fig. 12). They are enhanced by a strip in the middle, the height of which represents the industry average productivity. We would much prefer the industry on the left in Figure 12.14 because it has a lower industry average unit labor requirement, which means it has higher productivity. This is a result of much more rapid diffusion of newer, higher productivity technology.

The industry average shaded bar is a weighted average of all of the technologies in existence at any point in time. This statistic is the correct way to add up the rectangles with differing widths into a single measure of industry productivity. To understand how to do this, we turn to a concrete example in Excel.

STEP Open the Excel workbook DiffusionTechChange.xls, read the Intro sheet, then go to the IndustryAverage sheet to see how a weighted average is computed and how the Salter graph works.

Cells C9 and C10 show how two technologies contribute to the industry output. Initially, Methods A and B each produce 50% of the total output. Because A (the superior, best-practice technology) requires only 1 hour of labor to make a unit of output, whereas B (an outmoded technology) requires 2 hours, the industry average productivity is 1.5 hours per unit of output.

STEP Click on the scroll bar a few times to increase A’s share of total output to 90%.

Notice how the Salter graph changes as you manipulate the scroll bar. The Salter graph now shows A’s share as a much wider rectangle (indicating much faster diffusion) and the red, industry (weighted) average rectangle is much shorter. Although the simple average does not change, the weighted average falls because more of the output is being generated by the more productive A technology. The weighted average computation (implemented in the formula for cell M10) is:

$WeightedAverage=\frac{Output_A}{TotalOutput}UnitLReq_A+\frac{Output_B}{TotalOutput}UnitLReq_B$

STEP Click on the scroll bar to decrease A’s share of total output to 10%.

This time, the industry (weighted) average is 1.9 because only 10% of the output is produced with the best-practice technology. This would be an example of slow diffusion. Weighting each technology’s unit labor requirement by its share of total output is a good way to show how the rate of diffusion affects industry-wide productivity.
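The IndustryAverage sheet's calculation is simple enough to sketch in a few lines of code (a minimal sketch using the sheet's two-method example, with unit labor requirements of 1 and 2 hours):

```python
# Weighted-average unit labor requirement for a two-method industry:
# method A (best practice) needs 1 hour/unit, method B (outmoded) needs 2.
def industry_average(share_a, req_a=1.0, req_b=2.0):
    return share_a * req_a + (1 - share_a) * req_b

for share_a in (0.5, 0.9, 0.1):
    print(f"A's share = {share_a:.0%} -> industry average = "
          f"{industry_average(share_a):.1f} hours/unit")
# 50% -> 1.5; 90% -> 1.1 (fast diffusion); 10% -> 1.9 (slow diffusion)
```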
Having seen data that there is substantial variation in the rate of diffusion and that a Salter graph displays this variation, we are ready to explain why we see industries with mixes of technologies. We answer two questions:

1. Why is a machine that works sometimes kept (so it is outmoded) and other times scrapped (so it is obsolete)?
2. What determines the rate of diffusion of technical change?

1. Outmoded versus Obsolete?

We assume that new technologies are being constantly generated in all industries, but some are adopted more quickly. Why is that? Why are some factories and technologies quickly replaced while others remain online?

Salter’s work pointed to an easily overlooked element: the cost structure of the firms in an industry.

STEP Proceed to the Output sheet.

The opening situation is depicted in Figure 12.15. The graph shows two firms, one that is labor intensive and the other capital intensive. The capital intensive firm has a larger gap between ATC and AVC because it has higher fixed (capital) costs. The much lower AVC curve will prove to be critical.

Both firms in Figure 12.15 are earning small but positive economic profits. As time goes by, however, new technologies are introduced and incorporated in newly built factories with shiny, modern equipment. The products from firms with the newest factories with their best-practice methods (the leftmost rectangle in a Salter graph) can be made more cheaply, so competitive pressure drives the price down.

STEP Click on the scroll bar to lower the price.

Since you know the Shutdown Rule, it is easy to see that the L-intensive firm will shut down first. As soon as you make $P < AVC$, the factory is obsolete and taken offline. The factory on the left will survive as an outmoded technology that remains in operation for much longer. You will have to keep driving the price down for much longer to see it shut its doors.

All firms use the same Shutdown Rule, but differing cost structures are what make some factories stay in production while others close down. So, to directly answer the question of why a machine that works is sometimes kept (so it is outmoded) and other times scrapped (so it is obsolete): because the Shutdown Rule, $P < AVC$, determines the difference between outmoded and obsolete technology. Old plants that are kept online, using outmoded machines, operate in an environment in which profits may be negative, but $P > AVC$. These plants will remain in operation as long as revenues cover variable costs. Once $P < AVC$, we know the machines will be scrapped and become obsolete as the factory is closed down.

2. What Does the Rate of Diffusion Depend On?

Figure 12.15 shows that the firm’s cost structure is one of the factors which determine the rate of diffusion of technical change. Industries with capital intensive production and low variable costs will have slow rates of diffusion because plants and technologies will remain online until $P < AVC$. Steel is a good example of such an industry. Old factories remain in production alongside modern mini-mills. The Salter graph looks like the right panel in Figure 12.14 and the cost structure is given by the left panel in Figure 12.15.

On the other hand, industries in which labor is dominant and fixed costs are low will see rapid rates of diffusion of new methods. Legal services are a good example. Cost curves look like the right panel in Figure 12.15, so when new computers and information systems (such as LexisNexis) are developed, they are rapidly adopted and old ways are discarded. Thus, the Salter graph looks like the left panel in Figure 12.14.

Another factor affecting the rate of diffusion is the speed at which price falls. Competition among firms can be intense or muted. If, for example, the government protects an industry from foreign competition with trade barriers, preventing price from falling, the rate of diffusion of new technology and growth of labor productivity are retarded. This has certainly played a role in the rate of diffusion in the steel industry.

So, what determines the rate of diffusion of technical change? There are three factors:
1. New ideas and inventions from research and development (R&D): This is the creativity of the society. Curiosity and willingness to experiment produce a stream of better methods. The faster the flow, the better.
2. The cost structure of the firm: Capital intensive industry with high fixed and low variable costs retards diffusion of new technology. The new ideas are there, but the old ways stay online.
3. The speed at which price falls: If it is slow, we get slow diffusion. We want to encourage competition so price puts pressure on outmoded methods and drives them to be obsolete.

The first factor is the obvious one that everyone thinks of when explaining why technology affects labor productivity and economic growth. Innovation is the implementation of invention: new ideas are the raw material that expands the production function. But Salter identified another crucial factor: even if new technology exists, it will be mixed with existing technology and the rate at which it is adopted will depend on the Shutdown Rule. Highly capital intensive industries with low AVC will feel the drag of old technology for a long time because the gap between ATC and AVC will be great. Old methods will stay outmoded as long as $P > AVC$.

The Shutdown Rule compares average variable cost to price. Both matter. Low AVC will keep old methods around, but so will a slow decline in P. Although economists usually defend free trade policies on the basis of comparative advantage, this analysis points to another reason for allowing foreign competition in domestic markets. As price is pushed down, firms are forced to modernize, taking old methods offline and investing in the newest technology. Steel tariffs are an example.

You might be confused about the claim that competition makes price fall as time goes by. It seems like inflation, prices rising, is the usual state of affairs. The explanation lies in the difference between real and nominal price. In nominal terms, also known as current prices, the price of a light bulb is definitely higher today than 10 years ago and much higher than 100 years ago. But in this application, the correct price to consider is the real price, in terms of actual input use. In real terms, the price of lighting is incredibly lower today.

Figure 12.16, created by Nobel Prize winner William Nordhaus, tells an amazing story. In terms of the number of hours of work needed to buy 1,000 lumen hours, the price of light went from incredibly expensive for thousands of years to a free fall since the 1800s. In terms of input use, as technology improves, costs and, therefore, the price of the output fall over time. Nordhaus argues that "price indexes can capture the small, run-of-the-mill changes in economic activity, but revolutionary jumps in technology are simply ignored by the indexes" (Bresnahan and Gordon, eds., 1997, p. 55). Thus, the real price of lighting, in terms of the labor used, keeps falling and falling as time goes by.

It Is Diffusion, not Discovery, that Really Matters

Wilfred Edward Graham Salter was an Australian economist born in 1929. His promising career was tragically cut short when he died in 1963 after battling heart disease. His dissertation, finished in 1960, was published by Cambridge University Press as Productivity and Technical Change and was met with wide acclaim. Salter was amazed by the ability of markets to incorporate new technology to increase output per person.
He realized that scientific knowledge, technology "on the shelf," is not the only or even the most important driver of rapid growth. The new technology has to be implemented, actually used in production, and the faster it is adopted, the faster the economy grows. Salter’s primary contribution was in showing that the rate of diffusion varies tremendously and depends on the cost structures of firms. Industries with high fixed and low variable costs have large $ATC - AVC$ gaps that imply long time spans for outmoded technology.

We want nimble, adaptive firms and startups that challenge established titans. Replacing old with new machinery is necessary for rising productivity. Economies with ossified, rigid institutions are stagnant. There was a silver lining after Germany and Japan’s factories were destroyed during World War II: the latest, greatest technology could be used to make all of an industry’s output, and productivity increased rapidly.

Exercises

1. Sometimes a best practice investment is quickly leapfrogged by newer technology. Google "fiber optic overinvestment" to see an example. Briefly describe what happened and cite at least one web source.
2. Automobile emissions requirements are stricter in Japan than in the United States (where many areas have no vehicle inspection at all). In both countries, newer cars pass inspection (if required) easily, but older cars are more likely to fail inspection and be removed from the operating car fleet. Draw hypothetical Salter graphs, with emissions on the y axis, for the car fleets of Japan and the United States that reflect the stricter emissions standards in Japan.
3. What happens to a late model year Toyota or Honda that has failed an emissions inspection in Japan and, therefore, cannot be used there? Google "Japan used engines" to find out. What effect does this have on the United States Salter graph that you drew above?
4. The National Highway Traffic Safety Administration maintains a database of car characteristics by model year, including miles per gallon (MPG) performance. These data cannot be used to show a Salter graph (with MPG on the y axis) of the US car fleet. Why not? What additional information is needed?
textbooks/socialsci/Economics/Intermediate_Microeconomics_with_Excel_(Barreto)/12%3A_Output_Profit_Maximization/12.03%3A_Diffusion_and_Technical_Change.txt
Recall that the firm’s backbone is the production function. Inputs, or factors of production, typically labor (L) and capital (K), are used to make output, or product (q). In previous chapters, we explored the firm’s input cost minimization and output profit maximization problems. This chapter returns to the input side and works on the firm’s third optimization problem: input profit maximization.

We continue working with a perfectly competitive (PC) firm, but we extend the assumption of perfect competition to input markets. Thus, not only is the firm one of many sellers of a perfectly homogeneous product with free entry and exit, it is also one of many buyers of labor and capital. Our firm is an output and input price taker. This means that our PC firm only chooses the amount of input to hire, not how much to pay for it. If it has market power, then the firm not only determines how much to hire, but also gets to choose the input price. In this case, we say the firm has monopsony power. While you have surely heard of monopoly, monopsony may be new to you. They are similar in that one is selling (monopoly) and the other buying (monopsony), and in both cases price (output or input) is no longer exogenous. A classic example is the only hospital in a small town hiring nurses. Another example is a big box retailer. Walmart is such a big buyer that it has monopsony power. It can negotiate with suppliers and extract cheaper prices from them. Notice that a firm can have both monopoly and monopsony power. In a Labor Economics course, you study how firms can take advantage of the ability to set input prices to make greater profits. We assume this possibility away and stay with a PC firm that takes the wage rate (w) and rental rate of capital (r) as given. Our PC firm is such a small buyer that it can hire as much L and K as it wants at the going w and r.

Setting Up the Problem

There are three parts to every optimization problem. Here is the framework for a PC firm.

1. Goal: Maximize profits ($\pi$), which equal total revenues minus total costs. To distinguish the input from the output side, we use the terms total revenue product (TRP) and total factor cost (TFacC). The idea is that labor and capital are used to make product that is sold, so price times the number of units produced is the TRP.
2. Endogenous variables: labor and capital, in the long run; only L in the short run.
3. Exogenous variables: price (of the product, P), input prices (the wage rate and the rental rate of capital), and technology (parameters in the production function).

As usual, we will work with a Cobb-Douglas production function, with $\alpha >0$, $\beta >0$, and $\alpha + \beta < 1$. $q=AK^\alpha L^\beta$ Revenues are the output price multiplied by the output produced, $TR=Pq$. We substitute the production function for q in TR to get total revenue product: $TRP=PAK^\alpha L^\beta$ The units of TRP are dollars (just like total revenue). The "revenue product" language indicates that we are considering the amount of revenue ($) produced by the inputs. The costs are simply the amounts spent on labor and capital, $wL + rK$. These are called total factor costs. The firm chooses L and K to maximize profits. $\max\limits_{L,K} \pi = PAK^\alpha L^\beta - (wL+rK)$

Finding the Initial Solution

First the problem is solved using numerical methods, and then the analytical approach is used.
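Before turning to the spreadsheet, here is a minimal numerical sketch of the same problem in Python (an illustration, not part of the workbook); the parameter values are those of the bakery example described below (P = 2, A = 30, alpha = 0.2, beta = 0.75, w = 20, r = 50), and the solver settings are just reasonable defaults:

from scipy.optimize import minimize

# Parameter values from the bakery example in InputProfitMax.xls
P, A, alpha, beta = 2.0, 30.0, 0.2, 0.75  # output price and technology
w, r = 20.0, 50.0                          # wage and rental rate of capital

def neg_profit(x):
    # Solver maximizes profit; scipy minimizes, so return -profit
    L, K = x
    return -(P * A * K**alpha * L**beta - (w * L + r * K))

# Start from the sheet's opening values: L = 500 hours, K = 100 machines
res = minimize(neg_profit, x0=[500.0, 100.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 10_000})
L_star, K_star = res.x
print(L_star, K_star, -res.fun)  # roughly 1431 hours, 153 machines, profit near $1,908

This mirrors what Excel's Solver does: search over L and K until no small change improves profit.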
STEP Open the Excel workbook InputProfitMax.xls and read the Intro sheet, then go to the TwoVar sheet to see the problem implemented in Excel.

The sheet is named TwoVar because both inputs are choice variables, which means this is a long run profit maximization problem. As usual, the sheet is organized into the color-coded components of an optimization problem, with goal, endogenous, and exogenous cells.

STEP Read the description of the firm, a bakery, and scroll down to the endogenous variables. On opening, the sheet has 500 hours of labor hired and 100 units of capital rented, yielding a profit of $936.

Is this the best this firm can do? Cells B48 and B49 show the marginal revenue product of labor and marginal factor cost. By hiring one more hour of labor, revenues would rise by more than costs, so profits would increase. Clearly, therefore, this bakery is not optimizing.

STEP Run Solver to find the initial solution. Your screen should look like Figure 13.1.

The firm hires roughly 1,431 hours of labor and rents 153 machines (but click on cells B34 and B35 to see more decimal places). This yields a maximum possible profit of just over $1,900. Notice that the marginal revenue product and marginal factor cost cells are now exactly equal at $20/hour. This is no coincidence. The equimarginal condition for input profit maximization is that $MRP=MFC$. Since the firm is an input price taker, $MFC=w$ (just like $P=MR$ for a PC firm), so it is also true that $MRP=w$ at the optimal solution.

Finally, notice the breakdown of the firm’s revenues in rows 44 to 46. Labor’s share (wL), capital’s share (rK), and profits (whatever is left) add up to 100%. L and K’s shares, 75% and 20%, equal $\beta$ and $\alpha$. Is that a coincidence? No, that’s a property of the Cobb-Douglas functional form. The exponent tells you the share of revenues that factor will receive. To see why, note that at the optimal solution $w = \beta PAK^\alpha L^{\beta-1}$, so labor’s share is $wL/TRP = \beta PAK^\alpha L^\beta / (PAK^\alpha L^\beta) = \beta$; the same argument applies to capital and $\alpha$.

We can also solve this problem via the analytical approach. We know the objective function and can substitute in each of the parameter values. $\max\limits_{L,K} \pi = PAK^\alpha L^\beta - (wL+rK)$ $\max\limits_{L,K} \pi = 2 \cdot 30K^{0.2} L^{0.75} - (20L+50K)$ Next, we take derivatives with respect to L and K, set them equal to zero, and use algebra to solve the two equation system of first-order conditions: $\frac{\partial \pi}{\partial L} = 0.75 \cdot 60K^{0.2}L^{-0.25} - 20 = 0$ $\frac{\partial \pi}{\partial K} = 0.2 \cdot 60K^{-0.8}L^{0.75} - 50 = 0$ We can move the 20 and 50 to the right hand side and this immediately reveals the equimarginal conditions: $MRP_L = w$ and $MRP_K = r$. We solve the first equation for L and substitute it into the second equation to solve for optimal K. We use the rule that $(x^a)^b = x^{ab}$ to solve for L. Substitute the expression for L into the second first-order condition. Compute optimal L from the expression for L. $L \mbox{*}=2.25^4K^{0.8}=2.25^4[152.6842]^{0.8}=1431.414$ Compute maximum profits. $\pi \mbox{*}=2 \cdot 30 \cdot [152.6842]^{0.2} \cdot [1431.414]^{0.75}-20 \cdot [1431.414]-50 \cdot [152.6842]=\$1908.55$ This analytical solution is extremely close to Excel’s solution. Practically speaking, as we would expect, the two solutions are the same.

The Short Run

A slightly different version of the firm’s input profit maximization problem involves the short run when capital is not variable. By putting a bar over K, we highlight that capital is fixed. $\max\limits_{L} \pi = PA\bar{K}^\alpha L^\beta - (wL+r\bar{K})$ We do the analytical solution first this time and in general form. There is only one derivative (since there is only one choice variable) and one first-order condition.
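Although the full derivation appears in the next section, here is a sketch of that single first-order condition and the reduced form it implies (same notation as the text):

$\frac{d\pi}{dL} = \beta PA\bar{K}^\alpha L^{\beta-1} - w = 0 \implies L \mbox{*}=\left(\frac{w}{\beta PA\bar{K}^\alpha}\right)^{\frac{1}{\beta-1}}$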
STEP To see the numerical version of this problem, proceed to the OneVar sheet.

Notice that there is only one endogenous variable, L. Capital has been moved to the exogenous list because we are in the short run. Notice also that there are two graphs. Each one can be used to represent the initial solution. Below the graphs, you can see that the marginal revenue product of labor does not equal the wage. As you know, this means you need to run Solver because the firm is not optimizing.

STEP Run Solver to find the initial solution. Your screen should look like Figure 13.2.

The bottom graph shows that the optimal labor use can be found where the marginal revenue product of labor (the curve) equals the wage (at $20/hr). This is the canonical graph for the input side profit maximization problem. Like $MR=MC$ on the output side, the intersection of the two marginal relationships instantly reveals the optimal solution.

The top graph is a different way of viewing the exact same problem. It is using the production function as a constraint (the TRP curve) and three representative isoprofit lines are displayed. Each isoprofit line shows the combinations of L and q that give the same profit. The firm is trying to get on the highest isoprofit (to the northwest) while meeting the constraint. It can roll on the TRP curve (like it rolled on the isoquant) until it hits an isoprofit line that is tangent to the TRP. The constrained optimization problem can be written like this: $\max\limits_{L,q} \pi = Pq - wL-r\bar{K}$ s.t. $q=A\bar{K}^\alpha L^\beta$ The Lagrangean method could be applied to solve this problem. Naturally, the exact same solution is obtained if we use the Lagrangean or the more common approach of directly substituting the constraint (the production function) into the revenue function.

Suppose we wanted to check if the analytical and numerical results are the same. We need to evaluate the expression for optimal L at the parameter values in the OneVar sheet. The expression is complicated enough that entering it in a cell as you would write it is a bad idea. The parentheses are likely to cause confusion. It is better to create houses for each part and then fill them in. Here’s how.

STEP Watch this short video on how to enter a complicated formula in Excel: vimeo.com/415967747.

Entering parentheses as pairs is a good habit to develop when working in a spreadsheet. It is easy to make an order of operations mistake or get mismatched parentheses if you try to enter the formula like you would on a piece of paper.

STEP Enter the formula in cell M28 (just like in the video) to practice building houses in formulas in Excel. In so doing, you confirm that the analytical and numerical methods yield substantially the same answer.

Another Short Run Production Function

A Cobb-Douglas production function has many advantages, including that the sum of exponents reveals whether returns to scale are increasing, constant, or decreasing if the sum is greater than, equal to, or less than one. However, once the exponents are set, the function can only exhibit those returns to scale. Likewise, in the short run, with K fixed, our Cobb-Douglas functional form showed the Law of Diminishing Returns because $\beta = 0.75$. A more flexible functional form would enable production to have increasing and diminishing returns as more labor is added. Like the cubic polynomial we used for the total cost function, a cubic functional form can give us an S-shaped TRP curve.
$TRP=aL^3+bL^2+cL$

STEP Proceed to the Graphs sheet to see this functional form implemented in a set of four graphs that can be used to represent the firm’s input profit maximization problem (Figure 13.3).

It is striking that these graphs mirror the four graphs we used to describe the firm’s output side profit maximization problem. The two top graphs show total revenue and total cost on the top left, along with total profits on the top right. The bottom graphs display a series of marginal and average curves on the bottom left (with this cubic TRP, $MRP = \frac{dTRP}{dL} = 3aL^2+2bL+c$ and $ARP = \frac{TRP}{L} = aL^2+bL+c$) and marginal profit on the bottom right.

If you look carefully, you will notice that things are switched around a bit. Instead of total cost being a curve (as it is on the output side), it is a straight line because total factor cost on the input side in the short run is $wL+ r\bar{K}$. On the other hand, total revenue product (so named to distinguish it from total revenue on the output side) is a curve (instead of a straight line). Unlike the canonical output side profit maximization graph with U-shaped MC, ATC, and AVC curves and a horizontal $P = MR$ line, the bottom left graph has a horizontal MFC line, and the MRP and ARP functions are curves turned upside down relative to their output side counterparts.

But there are also key similarities. The equimarginal rule is in play: $MFC=MRP$ reveals the labor use that maximizes profits. Also, a rectangle of $(ARP-AFC)L$ gives an area that is equal to profits. The length of the profit rectangle ranges from zero to the chosen amount of labor hired. The height is the difference between average revenue product, ARP, and average factor cost, AFC. The area of this rectangle is profit because $ARP - AFC$ is profit per hour, so multiplying by L, measured in hours, yields profits. Another way to think about this is that multiplying L by ARP yields total revenues (since $L \cdot TRP/L=TRP$) and multiplying L by AFC gives total costs (since $L \cdot TFacC/L=TFacC$). Subtracting the total cost rectangle from the total revenue rectangle leaves the profit rectangle.

Another similarity between output and input profit maximization is that the firm has the same four profit positions.

STEP In the Graphs sheet, click on the pull down menu (near cell P4) and cycle through all of the profit positions.

As with the output side, the shock is output price. As it falls, so do maximum profits. The Neg Profits, Cont Prod and Neg Profits, Shutdown options show that the firm will shut down when $w>ARP$. This is analogous to the $P < AVC$ Shutdown Rule. Keep your eye on the total profits in the top right graph to see that the story is the same: the firm is deciding whether the negative profit at the best of the positive levels of L is better than hiring no L at all.

The connection between input and output is simple. The firm shuts down when $w > ARP$, which we can multiply by L to give $wL > TRP$. But wL and TRP are TVC and TR on the output side. Divide both by q and we get $AVC>P$, which is the same as $P<AVC$, the usual output side Shutdown Rule. In addition, the $wL > TRP$ version of the Shutdown Rule supports the claim that revenues must cover variable costs for a firm to produce.

Input Profit Maximization Highlights

At this point, you might be suffering from repetitive stress syndrome: we seem to be going over and over the same ideas. That is an important level to attain in mastering the economic way of thinking. The body of knowledge in economics is grounded in a core methodology of optimization and comparative statics. The framework is used over and over and over again.
Like every optimization problem, the input side profit maximization problem can be organized into a goal, endogenous, and exogenous variables. This problem has a canonical graph (with MFC and MRP as the key elements) and an equimarginal rule, $MFC = MRP$.

Because the firm is an input price taker, $MFC=w$. This means that every additional hour of labor adds w to total cost. If the firm were a monopsony, this would not be true and the optimization problem would be more complicated.

Finally, because the input profit maximization problem is the flip side of the output side profit maximization problem, it should not be surprising that we can represent the initial solution with a set of four graphs. The parallelism carries through all the way to the Shutdown Rule, where $w>ARP$ is equivalent to $P < AVC$. We will stress the connections between the input and output sides again in the next chapter.

Exercises

1. Use the TwoVar sheet to compute the long run beta elasticity of $L \mbox{*}$ from beta = 0.75 to beta = 0.74. Show your work.
2. In the Q&A sheet, question 4 asks you to find the short run beta elasticity of $L \mbox{*}$ from beta = 0.75 to beta = 0.74. The InputProfitMaxA.doc file in the Answers folder shows that the answer is about 28. Explain why the short run elasticity (which is admittedly quite large) is much smaller than the long run elasticity that you computed in the previous question.
3. Use Excel to set up and solve (with Solver, of course) the constrained version of the input profit maximization problem in the OneVar sheet. Take a screenshot of your solution (including the constraint cell) and paste it in your Word document.
4. In the Graphs sheet, select the Neg Profits, Shutdown case. Does the top, right graph support the $w>ARP$ Shutdown Rule? Explain.
textbooks/socialsci/Economics/Intermediate_Microeconomics_with_Excel_(Barreto)/13%3A_Input_Profit_Maximization/13.01%3A_Initial_Solution.txt
A profit-maximizing firm with Cobb-Douglas technology and given prices in all markets (P, w, and r) in the short run can be modeled as solving the following optimization problem: $\max\limits_{L} \pi = PA\bar{K}^\alpha L^\beta - (wL+r\bar{K})$ The previous section found the initial solution for this problem. This section is devoted to comparative statics analysis. How will this firm respond to a change in one of its exogenous variables, ceteris paribus? Although there are several exogenous variables from which to choose, the responsiveness of optimal L to a change in the wage is of utmost importance. This comparative statics analysis will give us the short run demand for labor.

After deriving the demand for labor in the short run, we will examine the long run demand for labor. A comparison of short and long run wage elasticities of labor reveals that labor demand is more responsive in the long run. We then explore how changes in P affect $L \mbox{*}$.

Demand for Labor in the Short Run

We begin with numerical methods for a comparative statics analysis of a change in the wage (also called the wage rate), which is measured in $/hr.

STEP Open the Excel workbook DerivingDemandL.xls and read the Intro sheet, then go to the OneVar sheet.

The layout is the same as the InputProfitMax.xls workbook in the previous section. It is clear from the graphs and the equivalence of wage and MRP below the graphs that the firm is at its optimal solution. The yellow-backgrounded cell, the wage rate, is the shock variable on which we will focus.

STEP Change the wage in the OneVar sheet to $19/hr from the initial value of $20/hr.

It is difficult to see anything in the top graph; however, the isoprofit line is no longer tangent to the TRP. The bottom graph clearly shows that the red diamond (at L = 1431 hours) has a marginal revenue product greater than the marginal factor cost (equal to the wage). Cells H40 and I40 show that the wage is less than MRP.

STEP Since the firm is no longer optimizing, run Solver to find the new optimal solution.

You will find that, to maximize profits, the firm will hire 1757 hours when the wage falls to $19/hr, ceteris paribus. At this level of labor use, the marginal revenue product once again equals the marginal factor cost.

Although we have only two data points, it should be clear that the firm will hire that amount of labor where the marginal revenue product equals the wage, in the short run. This means that the marginal revenue product curve is the firm’s (inverse) demand for labor curve. Quote the firm a wage and it will look to its MRP curve to decide how much labor to hire. We have two points on the demand for labor curve: at w = $20/hr, $L \mbox{*}$ = 1431 hours, and at w = $19/hr, $L \mbox{*}$ = 1757 hours. Can we pick more points off of the demand for labor curve?

STEP Set the initial wage back to $20/hr and use the Comparative Statics Wizard to apply five $1/hr decreases in the wage. Create charts of the demand for labor and the inverse demand for labor.

Your results should look like those in the CS1 sheet. The CSWiz output makes common sense. As the wage drops, the firm hires more labor. Look also at the objective function: as wage falls, maximum profits are rising. The key idea here is that firm hiring decisions are driven by profit maximization. The reason why L increases as w falls is that this response is profit maximizing. Like demand curves in the Theory of Consumer Behavior, the price (the wage in this case) can be placed on the x or y axis.
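As an aside, here is a hedged Python sketch of the same point-picking exercise (an illustration, not part of the workbook), using the reduced form $L \mbox{*}$ expression derived below and assuming the InputProfitMax.xls parameters (P = 2, A = 30, alpha = 0.2, beta = 0.75, fixed capital of about 152.68 machines):

# Pick points off the short run demand for labor: the L* where MRP = w
P, A, alpha, beta = 2.0, 30.0, 0.2, 0.75
Kbar = 152.6842  # capital fixed at its long run optimal value

for w in [20, 19, 18, 17, 16, 15]:
    L_star = (w / (beta * P * A * Kbar**alpha)) ** (1 / (beta - 1))
    print(w, round(L_star))  # w=20 gives ~1431 hours; w=19 gives ~1757 hours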
The two displays use the same information and convey the same message. We can also derive the short run demand for labor via analytical methods. This problem was presented in the previous section. For your convenience, it is repeated below. We need to leave w as a variable, but for maximum generality we solve for $L \mbox{*}$ as a function of all parameters. $\max\limits_{L} \pi = PA\bar{K}^\alpha L^\beta - (wL+r\bar{K})$ We take the derivative with respect to L, set it equal to zero, and solve for $L \mbox{*}$. This expression is the demand curve for labor. If we substitute in values for all exogenous variables except w, we can plot $L \mbox{*}$ as a function of w, ceteris paribus.

Do the numerical methods based on the CSWiz add-in agree with the analytical derivation of the demand for labor?

STEP In the CS1 sheet, click on cell C16. This is Solver’s answer for $L \mbox{*}$ when the wage is $20/hr. Do not be misled by all of the decimal places. That is false precision.

STEP Click on cell E26. It displays $L \mbox{*}$ when the wage is $20/hr based on the reduced-form solution.

Do not be misled by the number displayed in cell E26. This is Excel’s display for the formula entered into that cell. Excel’s memory has a different number.

STEP Widen column E to see more decimal places.

We proceed slowly because things can get confusing here. Consider this hierarchy of truth:

1. Solver is giving a number close to the exact right answer in cell C16.
2. Excel is representing the exact right answer as a decimal in cell E26.
3. The exact right answer is $\left(\frac{w}{\beta PA\bar{K}^\alpha}\right)^{\frac{1}{\beta -1}}$ evaluated at w = $20/hr, along with the other parameter values.

STEP To see that E26 is not the exact answer, make column E very wide, then select cell E26 and click Excel’s Increase Decimal button repeatedly.

You will see that, eventually, Excel will start reporting zeroes. Excel has finite memory and, therefore, it cannot compute an infinite number of decimal places for the exact answer. The decimal representation of the exact answer stored in Excel’s memory is not the exact answer. To be clear, Excel can display the exact answer if it is an integer or fraction that can be represented with finite memory. For example, $\frac{x}{7}$, evaluated at $x=14$, is 2 so, no problem for Excel. If 2 is the answer, Excel has it exactly right. Evaluating at $x=1$ means there is no decimal representation with a finite number of digits. Excel cannot display the exact answer in this case. Enter $=1/7$ in a cell, widen the column, and click the Increase Decimal button repeatedly to see that Excel eventually starts showing zeroes. Thus, neither E26 nor C16 is the exact answer. They are both so close to the answer, however, that we can say they "substantially agree" and are correct.

We can also use the analytical approach to reinforce the idea that the short-run (inverse) demand for labor is the marginal revenue product of labor. The first-order condition gives the equimarginal rule: $\frac{d \pi}{dL} = \beta PA\bar{K}^\alpha L^{\beta-1} - w = 0 \implies \beta PA\bar{K}^\alpha L^{\beta-1} = w$ The term on the left is the MRP. Evaluating the $\beta PA\bar{K}^\alpha$ portion at the initial values gives 123.0187 (as shown in cell K26 of the CS1 sheet). Thus, $MRP=123.0187L^{\beta-1}$ and at $\beta = 0.75$, $MRP=123.0187L^{-0.25}$.

The CS1 sheet has an inverse demand for labor chart. Is the relationship in this chart the same as the MRP function that we just found? Let’s find out.
By finding the function that fits the data in the inverse demand for labor chart, we can compare this relationship to the MRP function.

STEP Right-click on the series in the inverse demand for labor chart and select the Add Trendline option. Select the Power fit, scroll down and check the Display equation on chart option. Click OK. Move the equation (if needed) and increase the font size to see it better. Scroll right to see what your chart should look like.

The answer is clear: The fitted curve that reveals the function for the inverse demand curve for labor is the marginal revenue product of labor curve. The fitted curve’s coefficient and exponent are almost exactly that of the MRP.

Next, we turn our attention to the wage elasticity of labor demand. We can compute the elasticity at a point or from one point to another. We do the former below and leave the latter as an exercise question. Elasticity at a point begins by finding the derivative of the reduced-form expression. We substitute in the known value $\beta PA\bar{K}^\alpha=123.0187$ in the denominator and $\beta =0.75$ in the exponent. $L \mbox{*}=(\frac{w}{\beta PA\bar{K}^\alpha})^{\frac{1}{\beta-1}}= (\frac{w}{123.0187})^{\frac{1}{0.75-1}}=(\frac{w}{123.0187})^{-4}$ To take the derivative with respect to w, we isolate w. $L \mbox{*}=(\frac{w}{123.0187})^{-4}=\frac{w^{-4}}{123.0187^{-4}}=(\frac{1}{123.0187^{-4}})w^{-4}$ Now we can apply our usual derivative rule, moving the exponent to the front and subtracting one from it. $\frac{dL \mbox{*}}{dw}=-4(\frac{1}{123.0187^{-4}})w^{-5}$ This expression is merely the slope or instantaneous rate of change of optimal labor hired as a function of the wage. To find the elasticity, we must multiply the derivative by the ratio w/L. $\frac{dL \mbox{*}}{dw}\frac{w}{L}=-4(\frac{1}{123.0187^{-4}})w^{-5}\frac{w}{L}$ But we have an expression for L, so we substitute it in. $\frac{dL \mbox{*}}{dw}\frac{w}{L}=-4(\frac{1}{123.0187^{-4}})w^{-5}\frac{w}{(\frac{1}{123.0187^{-4}})w^{-4}}$ The $123.0187^{-4}$ terms cancel. And $w^{-5}$ times w in the numerator is $w^{-4}$, so that cancels with $w^{-4}$ in the denominator. We are left with this. $\frac{dL \mbox{*}}{dw}\frac{w}{L}=-4$ As has happened before (remember the price and income and cross price elasticities of demand?), the Cobb-Douglas functional form produces a constant wage elasticity of short run labor demand.

This elasticity value says that labor demand is extremely responsive to changes in the wage. We would not expect to find such a large wage elasticity of short-run labor demand in the real world. For a Cobb-Douglas production function, the elasticity is driven by the value of beta. If we had left $\beta$ in the expression for optimal L instead of using 0.75 (see the first two exercise questions), we would get this expression for the wage elasticity of labor demand: $\frac{dL \mbox{*}}{dw}\frac{w}{L}=\frac{1}{\beta-1}$ If we compute the elasticity from one point to another, say from a wage of $20/hr to $19/hr (see exercise question 3), we will get a different answer than $-4$. That makes sense since we know that $L \mbox{*}$ is non linear in w. As the change in the wage approaches zero, the elasticity computed from one point to another approaches $-4$.
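As a cross-check on this algebra (an illustration, not part of the workbook), a computer algebra system reproduces the constant elasticity; here the symbol c stands in for the constant $\beta PA\bar{K}^\alpha$:

import sympy as sp

w, beta, c = sp.symbols("w beta c", positive=True)  # c = beta*P*A*Kbar**alpha
L_star = (w / c) ** (1 / (beta - 1))                # reduced form labor demand
elasticity = sp.simplify(sp.diff(L_star, w) * w / L_star)
print(elasticity)  # simplifies to 1/(beta - 1), which is -4 when beta = 0.75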
Demand for Labor in the Long Run

If we relax the assumption that capital is fixed, we change the firm’s planning horizon from short to long run. The TwoVar sheet implements the firm’s long run input profit maximization problem. There are two endogenous variables, labor and capital, and no fixed factors of production.

STEP To derive the firm’s long run demand for labor, use the Comparative Statics Wizard from the TwoVar sheet. As you did in the short run analysis, apply $1/hr decreases in the wage.

Your results should show labor use rising as wage falls, just as in the short run. But what about the elasticity: is it the same in the short and long run?

STEP Use your CSWiz results to compute the wage elasticity of labor demand from a wage of $20/hr to $19/hr. Is it close to $-4$, the point elasticity at w = $20/hr?

The CSCompared sheet is similar, but not the same as your results. It shocks wage by $1/hr increments in the short and long run. The difference in the elasticity is dramatic: labor demand is incredibly responsive in the long run compared to the short run. The elasticity almost triples, from $-3.5$ to almost $-11$. You should find the same result with your CSWiz data for a wage decrease: the long run elasticity is much higher (in absolute value) than in the short run. What is going on?

Figure 13.4 provides an answer to this question. The movement from point A to B is the short run response for a $1/hr wage increase. As the short run results in the CSCompared sheet show, when the wage rises from $20/hr to $21/hr, $L \mbox{*}$ falls from roughly 1,431 hours to 1,178 hours. In the short run, capital stays fixed and the firm moves along its marginal revenue product curve (which, as we already know, is the firm’s short run demand for labor) as the wage changes. The $K=153$ in the parentheses signals that this is the value of K for this MRP schedule.

In the long run, however, the adjustment is different. The data in the CSCompared sheet show clearly that the firm will change both labor and capital as the wage rises. Notice that capital falls from 153 machines to 73 machines as the wage rises from $20/hr to $21/hr. This change in capital shifts labor’s marginal revenue product curve. As shown in Figure 13.4, the firm’s long run response to the change in the wage is from A to C, not simply A to B. It decreases labor use as it moves along the initial MRP and then again when MRP shifts as K falls. This is the reason why the wage elasticity of labor demand is more responsive in the long run.

Figure 13.5 shows the firm’s long run demand for labor and that it is no longer the MRP curve. Because capital falls as wage rises, leading to a further decrease in labor hired, the firm is much more responsive to changes in the wage. It is clear that the inverse labor demand curve shown in Figure 13.5 is flatter in the long run than the MRP curve (which is the short run inverse demand for labor). A wage decrease would stimulate more labor hired in the long than short run because K would rise in the long run.

The Shutdown Rule and the Demand Curve for Labor

Recall that, on the output side, the supply curve is the MC curve when $P > AVC$. If $P < AVC$ where $MR = MC$, then the firm ignores this marginal signal (which is the top of a local profit hill) and shuts down ($q = 0$). The supply curve has a tail where the quantity supplied is zero when the price falls below average variable cost. There is a similar tail, with $L=0$, on the demand curve for labor.
The previous section showed that if $w>ARP$, the firm will shut down, hiring no labor and producing no output.

STEP Proceed to the Graphs sheet to quickly review this concept.

Use the pull down menu to change the firm’s output price and place the firm in any of the four profit positions. Select Neg Profits, Shutdown to see that the firm will shut down when P is so low that it shifts ARP down so much that $w>ARP$. This is analogous to the $P < AVC$ Shutdown Rule.

The Shutdown Rule means that we have to change our definition of the demand curve for labor to get it exactly right. In the short run, the inverse demand curve is the MRP curve, as long as $w < ARP$; when $w > ARP$, the quantity of labor demanded is zero, as shown in Figure 13.6.

The Shutdown Rule is usually presented from the output side as $P < AVC$. This version of the rule is perfectly compatible with the input side version of the Shutdown Rule, $w > ARP$. Either wage increases or output price decreases can trigger a shutdown. In Figure 13.6, it is easy to see what is happening when the wage increases: the horizontal MFC line shifts up and, once it rises above ARP, the firm shuts down. What is happening on the output side? Remember that as wage rises, cost curves on the output side shift up. At the precise point at which a higher wage triggers the decision to not hire any labor, the AVC curve will have shifted above P and the firm will decide to not produce any output.

The same story is at work when P falls. On the output side, it is easy to see that when the horizontal $P=MR$ line falls below AVC, the firm shuts down. What is happening on the input side? As P falls, the MRP and ARP curves in Figure 13.6 shift down. At the precise moment when P falls below AVC and the firm decides to produce no output, the ARP shifts below the horizontal wage line in Figure 13.6 and the firm will decide to hire no labor.

Demand for Labor Depends on P

Another comparative statics analysis for input profit maximization revolves around the effect that P has on $L \mbox{*}$. This shows how the demand for labor is a derived demand from the desirability of the product. In other words, the stronger the demand for the product, the greater the demand for labor.

Suppose demand for bread rises in our Excel workbook. This increases P, ceteris paribus. What happens to L? We explain the short run response here and leave the long run for exercise questions 4 and 5.

STEP Return to the OneVar sheet. Return the wage to $20/hr. Run Solver.

Instead of simply changing P and running Solver again, we want to see what effect P has on the graphs that show the initial solution.

STEP Change P to $2.10 and look carefully at the charts.

It is difficult to see that the TRP curve has changed so that it is no longer tangent to the isoprofit line, but the bottom chart clearly shows that the initial solution is no longer optimal. What happened? From our analytical work, we know that $MRP = \beta PA\bar{K}^\alpha L^{\beta-1}$, so it is clear that an increase in P will shift the MRP curve up. That is what you are seeing in the bottom graph on the OneVar sheet. Return P to $2/unit to see that MFC stays constant (w remains unchanged), but MRP is moving.

STEP With P = $2.10, run Solver. What happens to $L \mbox{*}$?

Not surprisingly, the firm wants to hire more labor. The reason is that the MRP curve shifts and a new solution is found where the new $MRP = w$. Labor cost and productivity are unchanged, but the demand for labor is affected by consumers’ desire for the product (expressed through P).
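A hedged numerical sketch of this P shock (again assuming the same parameter values as before, with the wage back at $20/hr), using the short run reduced form:

# Effect of a product price increase on short run labor demand
w, A, alpha, beta = 20.0, 30.0, 0.2, 0.75
Kbar = 152.6842

for P in [2.00, 2.10]:
    L_star = (w / (beta * P * A * Kbar**alpha)) ** (1 / (beta - 1))
    print(P, round(L_star))  # P=2.00 gives ~1431 hours; P=2.10 gives ~1740 hours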
We say that the demand for labor is a derived demand: the firm’s need for labor (and other inputs) comes from the fact that it has customers who want its product. Figure 13.7 shows what happens as you increase the product price. If the demand for a firm’s output is high, the price will be high, and this will induce an increased demand (shift) for labor.

It is easy to see that labor is a derived demand by considering professional sports. Pro athletes in major sports make a lot of money because they are in high demand. Sports teams know that the price of the good they produce (including broadcast and streaming revenue) is high. The output side is most definitely reflected in the input side via the product price.

Marginal Productivity Theory of Distribution

The input side profit maximization problem can be used to examine the distribution of firm revenues. The basic idea is that shares are a function of an input’s productivity: The more productive the input, the greater its share.

STEP From the TwoVar sheet, run a comparative statics experiment that changes the exponent on labor from 0.75 to 0.755 (5 shocks of 0.001). In the endogenous variables input box, be sure to track not only L and K, but also the shares received in cells C44:C46.

Check your results with the CS3 sheet. The CS2 sheet has the outcome of a change in alpha, the exponent on capital. It explains how "large" shocks of, say, 0.1 will cause catastrophic failure as $\alpha + \beta$ approaches $+1$. This is why the change in beta is so small: to stay away from the singularity.

By increasing the exponent on labor in the Cobb-Douglas production function, labor’s productivity rises. In other words, labor can make more output, ceteris paribus, as the exponent on labor increases. The firm maximizes profit by using more labor and labor’s share of firm revenues rises. The CSWiz data show that we can immediately determine the percentage share of revenues gained by each input by the input’s exponent in the production function.

Although a different production function may not have this simple short-cut to determine the percentage share of revenues accruing to each input, it remains true that an input’s share will depend on its marginal productivity. Whereas algebraic convenience and simplicity are often invoked as a rationale for utilizing the Cobb-Douglas functional form, in the case of factor shares, a strong empirical regularity supports the use of $AK^\alpha L^\beta$. About 2/3 of national income has gone to labor and 1/3 to capital. "In fact, the long-term stability of factor shares has become enshrined as one of the 'stylized facts' of growth" (Gollin, 2002, pp. 458–459). More recent measurements of factor shares show that capital is gaining a greater share, and this is an active, exciting area of research.

Labor Demand Highlights

The most important comparative statics exercise on the input side is to derive the demand for inputs. This chapter focused on labor demand and showed that the short run demand for labor is the marginal revenue product of labor curve. In the long run, however, the demand for labor is not the MRP curve because $K \mbox{*}$ changes as w changes. For this same reason, labor demand is more responsive to changes in the wage in the long run.

Whether in the long or short run, the demand curve for labor is subject to the same Shutdown Rule qualification as the supply curve for output. If the wage is higher than the ARP at the point at which $MRP = MFC$, the firm will hire no labor.
This coincides perfectly with the firm’s decision to shut down on the output side, producing no output.

In addition to changes in the wage, this chapter explored the effects of a change in product price. As P increases, $L \mbox{*}$ rises. In terms of the canonical graph, an increase in P shifts the MRP and leads to a new optimal solution. This leads economists to think of and say that labor demand is a derived demand because the price of the product influences how much labor the firm wants.

This section ended by pointing out that an input’s productivity determines its share of firm revenues. As productivity rises, so does the percentage share accruing to that input. Productivity is a key variable in determining input use and distribution of revenues.

Exercises

1. Derive the wage elasticity of short run labor demand for the general case where $L \mbox{*}=(\frac{w}{\beta PA\bar{K}^\alpha})^{\frac{1}{\beta-1}}$. Show your work, using Word’s Equation Editor.
2. Does your result from the previous question agree with the $-4$ value obtained in the text?
3. Compute the wage elasticity of short run labor demand (using the parameter values in the OneVar sheet) from w = $20/hr to $19/hr. Show your work.
4. Use the Comparative Statics Wizard to analyze the effect of an increase in the product price in the long run. Compute the P elasticity of $L \mbox{*}$ from $P = 2.00$ to $P = 2.10$. Copy and paste your results in a Word document.
5. Is $L \mbox{*}$ more responsive to changes in P in the short run or long run? Explain why.
textbooks/socialsci/Economics/Intermediate_Microeconomics_with_Excel_(Barreto)/13%3A_Input_Profit_Maximization/13.02%3A_Deriving_Demand_for_Labor.txt
We have considered three separate optimization problems in our study of the perfectly competitive (PC) firm. Figures 14.1, 14.2, and 14.3 provide a snapshot of the initial solution and the key comparative statics analysis from each of the three optimization problems. This chapter ties things together with the fundamental point that these three problems are tightly integrated and are actually different views of the same firm and same optimal solution. Change an exogenous variable and all three optimization problems are affected. The new optimal solutions and comparative statics results are consistent; i.e., they tell you the same thing and are never contradictory.

Figure 14.1 shows the input side cost minimization problem. Quantity is exogenous in this problem and the firm looks for the input mix that minimizes the total cost of producing the given q. The right panel in Figure 14.1 shows the cost function that comes from tracking minimum total cost as q varies, ceteris paribus.

Figure 14.2 shows output side profit maximization. The PC firm (since $P=MR$ is a horizontal line) gets average and marginal cost curves from the cost function and finds the quantity that maximizes profit. The right panel in Figure 14.2 shows where supply curves come from: shock P, ceteris paribus, and track optimal q.

Figure 14.3 returns to the input side, but this time the firm solves a profit maximization problem, choosing how much labor to hire. The right panel in Figure 14.3 shows how changing w, ceteris paribus, produces the demand curve for labor.

These three optimization problems share a common methodology. In each case, we set up and solve the problem, then do comparative statics analysis. There are other shocks that can be explored, but the one shown here is the most important. But there is one last crucial concept that is the focus of this chapter: these three problems do not exist in isolation; instead, they are woven together to comprise the Theory of the Firm. The relationships among the three exhibit a consistency that can be demonstrated with Excel.

Perfect Competition in the Long Run

STEP Open the Excel workbook Consistency.xls and read the Intro sheet; then proceed to the TheoryoftheFirmLongRun sheet. Use the button to fit the graphs on your screen so that all of them can be seen simultaneously.

The first and most important point is that all three optimization problems, in unison, comprise the Theory of the Firm. Perhaps because they see it in introductory economics, many students think of the output profit maximization graph as the firm. The display in Consistency.xls gives a strong visual presentation and constant reminder that the firm has three facets.

Gray-backgrounded cells are dead (click on one to see that it has a number, not a formula). They serve as benchmarks for comparisons when we do comparative statics.

The output and input profit maximization graphs do not have the usual U-shaped curves because the production function is Cobb-Douglas. This functional form cannot generate conventional U-shaped MC and AC curves (or upside down U-shaped MRP and ARP). There is no separate AVC curve because we are in the long run, so $AC = AVC$.

STEP Compare the initial solutions for each of the three problems. There are several ways in which they agree.

1. $L \mbox{*}$ and $K \mbox{*}$ are the same in the Input Profit Max (left) and Input Cost Min (middle) graphs.
2. If you use these amounts of L and K, you will produce 636 units of output, as shown in the Output Profit Max (right) graph.
3.
$\pi \mbox{*}$ is the same in the Input and Output Profit Max graphs. There is no profit in the Input Cost Min graph because there is no output price (P) and, therefore, no revenue in that optimization problem.
4. Total cost from each side is exactly the same. You can find TC from the Input Profit Max side by creating a cell that computes $wL \mbox{*} + rK \mbox{*}$. This will equal $36,262. From the Output Profit Max side, calculate TC by subtracting profit from revenue, $Pq$. Again, you get $36,262.

We can also see consistency in the ways in which the three optimization problems respond to shocks. As you would expect, the comparative statics results are identical.

STEP Wage increase of 1%. Change cell B2 to 20.2. Use the button if needed to see more clearly how the graphs have changed.

Figure 14.4 shows the results. On the Input Profit Max graph, we see that optimal labor use has fallen by 14.7% as the wage rose by 1% (so the wage elasticity of labor from wage = $20/hr to $20.20/hr is $-14.7$). Labor demand collapsed because the horizontal wage line shifted up and because the MRP schedule shifted left. The latter effect is due to the fact that optimal K fell. Where does such a large elasticity come from? With the Cobb-Douglas exponents used here (0.2 on capital and c = 0.75 on labor), taking the ratio of the two input side first-order conditions gives $K = \frac{0.2}{0.75}\frac{w}{r}L$; substituting this into the labor first-order condition yields a long run labor demand function whose point wage elasticity is $\frac{1-0.2}{0.2+0.75-1} = \frac{0.8}{-0.05} = -16$. A 1% wage increase then implies a change in labor of $1.01^{-16}-1 \approx -14.7\%$, exactly the movement shown in Figure 14.4.

On the Input Cost Min graph, we see that the firm is minimizing the cost of producing a lower level of output. In other words, we are on a new isoquant. Notice that the changes in $L \mbox{*}$ and $K \mbox{*}$ are consistent with the decreases reported from the Input Profit Max results.

The wage increase in the Output Profit Max graph is felt via the shifting up of the cost curves. The firm decreases $q \mbox{*}$ because MC shifted up and therefore the intersection of MR and MC occurs to the left of the initial solution.

Figure 14.4 and your screen show how the Theory of the Firm reacts in a consistent manner to a wage shock. Is this true of other shocks? Yes. Here is another example.

STEP Click the button, and then implement a labor productivity increase to 0.751 by changing cell F2.

Figure 14.5 shows the dramatic results of this shock. Input use and output produced have increased by about 18% in response to this tiny change in c. As with the wage shock, comparison of the effects of the change in c on the three optimization problems shows consistency. The two input side problems show that input use is the same and the inputs used will make the desired output on the output side. Profits on the input and output sides are the same. The productivity increase has shifted MRP up and cost curves down.

Other shocks are explored in Q&A and exercise questions. In every case, changing an exogenous variable, ceteris paribus, produces effects felt throughout the three optimization problems and the results are always consistent.

Perfect Competition in the Short Run

STEP Go to the TheoryoftheFirmShortRun sheet to explore the comparative statics properties of the three optimization problems in the short run.

This sheet has several differences compared to the previous overall view of the firm in the long run.

• There is an additional exogenous variable, K, because we are in the short run. Its value is set to the long run optimal solution for the initial values of the other parameters.
• There is a missing graph in the input profit max problem. With K fixed, we no longer need to depict its optimal solution.
• There is a straight, horizontal line in the isoquant side graph. With K fixed, the firm will not be able to roll around the isoquant to find the cost-minimizing input mix. It must use the given amount of K.
• There is an extra cost curve in the output profit max graph. Having K fixed means there is a fixed cost, so we now have separate average total and average variable costs.

STEP Compare the initial solutions for each of the three problems. As expected, they agree in input use, output produced, and profits generated.

As before, we can change the light-green-backgrounded exogenous variable cells in row 2 and follow the results in the graphs.

STEP Apply a wage increase of 1%. Change cell B2 to 20.2. Use the button if needed to see more clearly how the graphs have changed.

Figure 14.6 shows the results of this shock. The usual consistency properties are readily apparent. We observe the same change in $L \mbox{*}$, $q \mbox{*}$, and $\pi \mbox{*}$ across the board. Notice that the input profit max problem does not show a shift in MRP because K is fixed.

If we compare the short run (Figure 14.6) to the long run (Figure 14.4), we see that the responsiveness of the changes in endogenous variables is greater in the long run. Labor and output fall by more in the long run. Profits, however, fall by less in the long run.

STEP Click the button, then implement a labor productivity increase to 0.751 by changing cell F2. Figure 14.7 displays the results.

Figure 14.7 shows consistency in the results and, once again, the long run changes are more responsive than in the short run. L and K rise by more and the increase in profits is higher in the long run.

Long versus Short Run

When we compared the short and long run results for shocks in w and c, the long run exhibited greater responsiveness in labor and output. Is there a general principle at work? Yes. The general law is that long run responses are always at least as or more elastic than in the short run. This is known as the Le Chatelier Principle. Le Chatelier’s idea, which he originally applied to the concept of equilibrium in chemical reactions, was introduced to economics by Nobel laureate Paul Samuelson in 1947.

The Le Chatelier principle explains how a system that is in equilibrium will react to a perturbation. It predicts that the system will respond in a manner that will counteract the perturbation. Samuelson, following the methods of the hard sciences, has transported this principle of chemist Henri-Louis Le Chatelier to economics, to study the response of agents to price changes given some additional constraints. In his extension of this principle, Samuelson uses the metaphor of squeezing a balloon to further explain the concept. If you squeeze a balloon, its volume will decrease more if you keep its temperature constant than it will if you let the squeezing warm it up. This principle is now considered as a standard tool for comparative static analysis in economic theory. (Szenberg, et al., 2005, p. 51, footnote omitted)

In the context of the short and long run responses to shocks by a firm, the Le Chatelier Principle says that long run effects are greater because there are fewer constraints. When the wage rises, a firm in the short run is stuck with its given quantity of K. In the long run, however, it will be able to adjust both L and K and it is this additional freedom of movement that guarantees at least as great or a greater response in input use and output produced. For increasing c, the Le Chatelier Principle is reflected in the fact that labor demand is much more responsive in the long run than the short run.
In the long run, the firm is able to take greater advantage of the labor productivity shock by renting more machines and hiring even more labor. This is, of course, reflected in the greater profits obtained in the long run in response to the increased c.

A Holistic View of the Firm

Figures 14.1, 14.2, and 14.3 are fundamental graphs for the Theory of the Firm. They represent the three optimization problems that, in unison, comprise the theory. The firm is not merely its output side representation, but includes all three optimization problems, as shown in the Consistency.xls workbook. The input cost min (isoquants and isocosts that can be used to derive the cost function), output profit max (horizontal P with the family of cost curves that yield a supply curve), and input profit max graphs (horizontal w with MRP generating a demand curve for an input) are all intertwined. Not only do they all yield consistent answers for the initial solution, they all provide consistent comparative statics responses.

If we compare short and long run effects of shocks, we see that the firm responds more energetically in the long run. The wage elasticity of labor is greater (in absolute value) in the long run and, via consistency, so is the wage elasticity of output. Similarly, the c elasticities of labor and output are also greater in the long run. Both of these results are examples of the Le Chatelier Principle: With fewer constraints, responsiveness increases. Since the short run prevents K from varying, the firm is less able to adjust to a shock. It can only vary L and, thus, its adjustment is more restricted and inelastic.

Exercises

1. What happens in the long run when price increases by 1%? Implement the shock and take a picture of the results, then paste it in a Word document. Comment on the changes in optimal labor, capital, output, and profits.
2. Compute the long run output price elasticity of labor demand. Show your work.
3. Apply the same 1% price increase in the short run. Take a picture of the results, then paste it in your Word document. Comment on the changes in optimal labor, capital, output, and profits.
4. Compute the short run output price elasticity of labor demand. Show your work.
5. Compare the price elasticities of labor demand in the long (question 2) and short run (question 4). Is the Le Chatelier Principle at work here? Explain why or why not.
6. With output price 1% higher, increase the wage by 1% in the long and short run. Do these two shocks cancel each other out in either case? Explain.
textbooks/socialsci/Economics/Intermediate_Microeconomics_with_Excel_(Barreto)/14%3A_Consistency.txt
Like the perfectly competitive firm, a monopolist has three interrelated optimization problems. Attention is focused on the output profit max problem because that is where the essential difference lies between a perfectly competitive (PC) firm and a monopoly. We know that, via consistency, monopoly power manifests itself on the input side also. A monopoly will produce less than a PC firm and, in turn, hire less labor and capital.

Unlike a PC firm, a monopoly chooses output and the price at which to sell the product. This makes the monopoly problem harder to solve. Fortunately, your experience with optimization, comparative statics, and graphical displays gives you the background needed to understand and master monopoly.

Definition and Issues

A monopoly is defined as a firm that is the sole seller of a product with no close substitutes. The definition is inherently vague because there is no clear demarcation for what constitutes a close substitute. Consider this example: In the old days, a local cable provider might have an exclusive agreement to provide cable TV in a community. One could argue that the cable provider was a monopoly because it was the sole seller of cable TV. But what are the substitutes for cable TV? Years ago, cable TV was the only way to access subscription channels such as ESPN and HBO. Commercial broadcasts (with national broadcasters such as ABC, NBC, and CBS and local channels) were a poor substitute for cable TV. In this environment, cable TV would be a good example of a monopoly.

Today, however, cable TV has strong competition from satellite services and streaming services from the web. Even if a firm had an exclusive franchise to deliver cable TV in a community, there are many ways to get essentially the same package of channels. Today, cable TV is not a monopoly. Of course, cable TV is not a good example of perfect competition either. The cable company does not accept price as a given variable. It is in the middle, somewhere between perfect competition and monopoly. Markets served by a few firms are called oligopolies. Add more firms and you eventually get monopolistic competition. The study of how firms behave under a variety of market structures is part of the subdiscipline of economics called Industrial Organization. Figure 15.1 sums things up.

Barrier to Entry

To remain a monopoly, the firm must have a barrier to entry to prevent other firms from selling its product. In the cable TV example, the barrier to entry was provided by the exclusive agreement with the community. Such governmental restriction is a common form of a barrier to entry. Another way to erect a barrier to entry is control over a needed input. ALCOA (the Aluminum Company of America) had a monopoly in aluminum in the early 20th century because it owned virtually all bauxite reserves. If a product requires entry on a large scale, like automobile manufacturing, this is considered a barrier to entry. To compete against established car companies, a firm must produce not only cars, but also many spare parts, and figure out how to sell the product.

Like the concept of a close substitute, a barrier to entry is not a simple yes or no issue. Barriers can be weak or strong and they can change over time. Cable TV’s barrier was eroded not by changes in legal rules, but by technological change: the advent of satellite TV and the web.

Monopoly’s Revenue Function

We know that the firm’s market structure impacts its revenue function. The simplest case is a perfectly (or purely) competitive firm.
It takes price as given and, therefore, revenues are simply price times quantity. For a perfect competitor, even though market demand is downward sloping, the firm’s own individual demand curve is perfectly elastic at the given, market price. Because the PC firm can sell as much as it wants at the given price, selling one more unit of output makes total revenue (TR) increase by the price of the product. Marginal revenue (MR) is defined as the change in TR when one more unit is sold. Thus, for a PC firm, $MR = P$.

This is not true for a monopoly. A critical implication of monopoly power is that MR diverges from the demand curve. But this is too abstract. We can use Excel to make these concepts clearer.

STEP Open the Excel workbook Monopoly.xls and read the Intro sheet, then go to the Revenue sheet to see how monopoly power affects the firm’s revenue function.

The sheet opens with a perfectly competitive revenue structure. Total revenue is a linear function of output and, therefore, $P = MR$ with a horizontal line in the bottom graph. A graph with a linear TR and corresponding horizontal MR means it is a PC firm. Unlike a PC firm, a monopoly faces the market’s downward sloping demand curve. We can model a linear inverse demand curve simply as $P = p_0 - p_1q$. Because the slope parameter, $p_1$, in cell T2 is initially zero, TR is linear and MR is horizontal.

STEP To show how monopoly power affects the firm’s revenue function, click on the Price Slope scroll bar.

Notice that as you increase the slope parameter, MR diverges more from D. The smaller (in absolute value) the price elasticity of demand, the greater the divergence of MR from D and the stronger the monopoly power. We will see that the monopolist uses the divergence of MR from D to extract higher profits than would be possible if there were other sellers of the product.

When drawing MR and D in the case of a linear inverse demand curve, keep in mind these two basic rules:

1. MR and D have the same intercept.
2. MR bisects the y axis and D.

We can derive these properties easily. With our inverse D curve, $P = p_0 - p_1q$, we can do the following: $TR=Pq$ $TR = (p_0 - p_1q)q$ $TR = p_0q - p_1q^2$ $MR = \frac{dTR}{dq} = p_0 - 2p_1q$ Clearly, both D and MR share the same intercept, $p_0$. Because the slope of MR is $-2p_1$, it is twice the slope of D, which is simply $-p_1$. Thus, when you draw a linear inverse demand curve and then prepare to draw the corresponding MR curve, remember the two rules: (1) the intercept is the same and (2) MR has twice the slope, so at every y axis value, MR is halfway between the y axis and the D curve.

Figure 15.2, with an inverse demand curve slope of $-1$, shows the monopoly’s revenue function. Unlike the PC firm, TR is a curve and MR diverges from D. MR bisects the y axis and D. The dashed line at $20/unit, for example, shows the distance from the y axis to MR is 10, the same as from MR to D. Notice that where MR = 0 at q = 20, TR is at its maximum. At this quantity, the price elasticity of demand is exactly $-1$.

Figure 15.2 shows that MR can be negative. This can happen because there are two opposing forces at work. Increasing quantity increases TR, since $TR= Pq$. However, the only way to sell that extra product is to lower the price (by traveling down the demand curve), which pushes TR down.
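A quick symbolic check of the two drawing rules (an illustration, not part of the workbook):

import sympy as sp

q, p0, p1 = sp.symbols("q p0 p1", positive=True)
TR = (p0 - p1 * q) * q     # total revenue from the linear inverse demand curve
MR = sp.expand(sp.diff(TR, q))
print(MR)                  # p0 - 2*p1*q: same intercept as D, twice the slope

Because MR falls twice as fast as D, the revenue gain from an extra unit shrinks quickly as output rises.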
Eventually, however, with a linear demand curve, the monopolist will reach a point at which the increase in revenue for selling one more unit is negative. In the range of output ($q > 20$ in Figure 15.2) where $MR < 0$, the effect of the decreased price outweighs the positive effect of selling more output. When $MR > 0$, the price elasticity of demand is greater than 1 (in absolute value). When MR is negative, demand is inelastic. The monopolist will never produce on the negative part of MR, which is the same as the inelastic portion of the demand curve. There is a neat formula that expresses the relationship between MR and P. With an inverse demand curve, $P(Q)$, we know that $TR = P(Q)Q$. From the TR function we can take the derivative with respect to output to find the MR function. We use the Product Rule: $MR=\frac{dTR}{dQ}=P+\frac{dP}{dQ}Q$ If we factor out P from this expression, then MR can be rewritten as: $MR=P+\frac{dP}{dQ}Q=P(1+\frac{dP}{dQ}\frac{Q}{P})=P(1+\frac{1}{\epsilon})$ The Greek letter epsilon ($\epsilon$) is the price elasticity of demand ($\frac{dQ}{dP}\frac{P}{Q}$). The expression shows that $MR = P$ under perfect competition because an individual firm faces a perfectly elastic demand curve. This means epsilon is infinite and its reciprocal is zero. It also shows that the more inelastic the demand curve (the closer $\epsilon$ is to 0), the greater the separation between MR and the demand curve (P). If $\epsilon = 0$, then MR is undefined: inverse demand is a vertical line and the monopoly would charge an infinite price. Setting Up the Problem There are three parts to every optimization problem. Here is the framework for a monopolist’s output side profit maximization problem. 1. Goal: maximize profits ($\pi$), which equal total revenues (TR) minus total costs (TC). 2. Endogenous variables: output (q) and price (P) 3. Exogenous variables: input prices (the wage rate and the rental rate of capital), demand function coefficients, and technology (parameters in the production function). A monopoly differs from a PC firm only on the revenue side: price is now endogenous. The cost structure is the same. The monopoly has an input cost min problem and it is used to derive a cost function. Increases in input prices shift cost curves up and improvements in technology shift cost curves down. The monopolist has a long and short run, just like a PC firm, and in the short run there is a gap between ATC and AVC that represents fixed costs. Finding the Initial Solution We will show the conventional approach to solving the monopoly problem first, then turn to an alternative formulation based on constrained optimization. The conventional approach is to find optimal q where $MR=MC$, then get optimal P from the demand curve, and then compute optimal $\pi$ as a rectangle. This is the standard approach and there is a canonical graph that goes along with it. Its primary virtue is that it can be easily compared to the perfectly competitive case. The conventional approach can be demonstrated with a concrete problem. Suppose the cost function is $TC = aq^3 + bq^2 + cq + d$. Suppose the market (inverse) demand curve is $P = p_0 - p_1q$. Thus, $TR=Pq=(p_0-p_1q)q$.
With this information, we can form the firm’s profit function and optimization problem, like this: $\begin{gathered} \max\limits_{q} \pi = TR-TC \ \max\limits_{q} \pi = (p_0-p_1q)q - (aq^3 + bq^2 + cq + d)\end{gathered}$ We first solve this problem with numerical methods, then analytically. STEP Proceed to the OptimalChoice sheet and look it over. The profit function has been entered into cell B4. Quantity and price are displayed as endogenous variables, but q is bolded to indicate that it is the primary endogenous variable. In other words, Solver will search for the profit-maximizing output and, having found it, will compute the highest price that can be obtained from the demand curve. The firm is making $245 in profits by producing 10 units of output and charging $34.50 per unit, but this is not the profit-maximizing solution. We know this because the marginal revenue of the 10th unit is $29/unit, whereas the marginal cost of that last unit is only $4/unit. Clearly, the firm should produce more because it is making more in additional revenues from the last unit produced than the additional cost of producing that unit. STEP Run Solver to find the optimal solution. At the optimal solution, the equimarginal condition, $MR = MC$, is met. With positive profits, this is a clear signal that we have found the answer. Before you click the button, try doing the problem on your own. This is a single variable unconstrained maximization because $P = p_0 - p_1q$ has been substituted into the profit function. Take the derivative with respect to q, set it equal to zero, and solve for optimal q. Substituting in the parameter values to make it a concrete problem makes it easier to do the math: $\begin{gathered} \max\limits_{q} \pi = (40-0.55q)q - (0.04q^3 - 0.9q^2 + 10q + 50)\end{gathered}$ You can check your work by clicking the button. You can also confirm that the two approaches, Solver and calculus, agree. STEP Proceed to the OutputSide sheet to see a familiar set of four graphs. As usual, the totals are on the top and the average and marginal curves on the bottom. The cost curves are quite similar to the PC firm’s output profit maximization graphs, but the revenue curves are quite different. The bottom left-hand corner graph in Figure 15.3 is the canonical graph for a monopolist. It can be used to quickly find $q \mbox{*}$, $P \mbox{*}$, and $\pi \mbox{*}$. Here’s how to read and use the conventional monopoly graph: 1. Finding $q \mbox{*}$: Choose q where $MR = MC$. This gives the biggest difference between TR and TC and puts you on top of the profit hill (in the top right graph). 2. At $q \mbox{*}$, travel straight up until you hit the demand curve to get $P \mbox{*}$. This is the highest price that the monopolist can get for the chosen level of output. 3. Create the usual profit rectangle as $(AR - ATC)q \mbox{*}$. It has length $q \mbox{*}$ and height $AR - ATC$ (where $AR = P$). The area of this rectangle equals the length of the line segment between TR and TC at $q \mbox{*}$, which is the height of the profit hill. Play with the slider controls to improve your understanding of the graphs and relationships. STEP Click the Fixed Cost slider to manipulate total fixed costs (d in the cubic cost function). Changes in fixed costs do not affect the monopolist’s optimal quantity and price solution. This is just like the perfectly competitive case. STEP Click the button; explore changes in the price intercept to see how the firm responds.
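As you explore, a short computation can confirm what the sliders show. Here is a minimal sketch, assuming the concrete parameter values used above ($p_0 = 40$, $p_1 = 0.55$, $a = 0.04$, $b = -0.9$, $c = 10$, $d = 50$); it solves the first-order condition with the quadratic formula for several price intercepts:

```python
# Solve d(pi)/dq = (p0 - 2*p1*q) - (3a*q^2 + 2b*q + c) = 0 for various
# demand intercepts. Parameters match the OptimalChoice sheet.
import math

p1, a, b, c, d = 0.55, 0.04, -0.9, 10, 50

def optimum(p0):
    # The FOC is a quadratic in q: -3a*q^2 - (2b + 2*p1)*q + (p0 - c) = 0
    A, B, C = -3 * a, -(2 * b + 2 * p1), p0 - c
    q = (-B - math.sqrt(B**2 - 4 * A * C)) / (2 * A)  # positive root
    P = p0 - p1 * q
    profit = P * q - (a * q**3 + b * q**2 + c * q + d)
    return q, P, profit

for p0 in [40, 30, 20, 15]:
    q, P, profit = optimum(p0)
    print(f"intercept={p0:2d}: q*={q:5.2f}  P*={P:6.2f}  profit={profit:7.2f}")
```

With the opening intercept of 40, the output should match what Solver reports: $q \mbox{*} \approx 19$, $P \mbox{*} \approx 29.55$, and $\pi \mbox{*} \approx 372$. At an intercept of 15, profits turn negative, which leads to the next point.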
At a low enough price intercept, profits become negative and, just like a PC firm, if $P < AVC$, the firm will shut down. You can also control the firm’s monopoly power by manipulating the inverse demand curve’s slope. STEP Set the Price Slope slider to zero. What happens? You stripped the monopoly of its price power and it is a PC firm. No Supply Curve for Monopoly Monopolists do not have a supply curve. This seems like a strange statement since monopolies produce output and so "supply" whatever good or service of which they are the sole seller. But the key lies in the definition of a supply curve: given price, the supply curve gives the quantity that will be produced. Because a PC firm is a price taker, it is possible to shock P and see how the optimal output changes. We can derive $q \mbox{*} = f(P, \textrm{ ceteris paribus})$ and this is called a supply curve. Unlike a perfectly competitive firm, for which price is exogenous, a monopoly chooses the price. Thus, we cannot ask, "Given this price, what is the optimal quantity supplied?" With price as an endogenous variable, it cannot serve as a shock variable in a comparative statics analysis. We can (and you just did) shock a monopolist’s demand curve parameters such as the intercept and slope, but this is not an exogenous change in the price of the product. The experiment of changing the price cannot be applied to a monopolist and, therefore, the monopolist has no supply curve. Measuring Monopoly Power Another common misconception is that monopoly power is all or nothing. In fact, it is a continuum and you can have more or less monopoly power. There are several ways to measure it. STEP Proceed to the Lerner sheet. This sheet demonstrates the point that the more inelastic the demand faced by a monopolist, the greater the monopoly power. In other words, from a profit-maximizing point of view, it is better to have a monopoly over a product that everyone desperately needs (i.e., very inelastic) than to be the sole seller of a product that has a highly elastic market demand curve. Abba Lerner formalized this idea in a mathematical expression that bears his name, the Lerner Index. "If P = price and MC = marginal cost, then the index of the degree of monopoly power is $\frac{P-MC}{P}$." (Lerner, 1934, p. 169). This measure of monopoly power uses the gap between P and MC as a percentage of P. The Lerner Index takes advantage of the fact that a monopolist will choose that quantity where $MR = MC$, then charge the highest price possible for that quantity. The higher the price that can be charged, the more inelastic is demand and the greater the monopoly power. The Lerner sheet compares two monopolies with the exact same cost structure (assumed for simplicity to have a constant $MC = AC$). They both produce the same profit-maximizing quantity, but Firm 2 faces a more inelastic demand curve than Firm 1 and, therefore, it has a bigger gap between price and marginal cost. STEP Click on cells B16 and I16 to see the simple formulas for the Lerner Index. The idea is that the bigger the divergence between price and marginal cost, the greater the monopoly power. Firm 2 has more monopoly power than Firm 1 and more monopoly profits. The Lerner Index for each firm reflects this. Notice that a perfectly competitive firm that sets $MC = P$ will have a Lerner Index of zero. As the index approaches one, monopoly power rises. STEP Change Firm 2’s demand parameters to 130 for the intercept and 20 for the slope.
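Before clicking through the next few STEPs, a quick computation shows what to expect. This is a sketch, not the sheet’s actual formulas; it assumes a constant $MC = 10$, a value consistent with both firms producing $Q \mbox{*} = 3$ on the Lerner sheet:

```python
# Monopoly solution and Lerner Index for linear inverse demand P = d0 - d1*Q
# with constant marginal cost. MC = 10 is an assumption, consistent with
# Q* = 3 on the Lerner sheet.
MC = 10

def lerner(d0, d1):
    q = (d0 - MC) / (2 * d1)      # MR = d0 - 2*d1*q = MC
    p = d0 - d1 * q
    index = (p - MC) / p          # Lerner Index
    eps = -(1 / d1) * (p / q)     # price elasticity of demand at (q, p)
    return q, p, index, eps

for d0, d1 in [(70, 10), (130, 20), (190, 30), (6010, 1000)]:
    q, p, index, eps = lerner(d0, d1)
    print(f"d0={d0:5d} d1={d1:5d}: Q*={q:.0f}  P*={p:7.1f}  "
          f"Lerner={index:.3f}  -1/elasticity={-1/eps:.3f}")
```

Each row keeps $Q \mbox{*} = 3$ but raises the optimal price and the Lerner Index, with the last row pushing the index close to one. The final column previews a result discussed below: at the profit-maximizing price, the Lerner Index equals $-1/\epsilon$.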
The y axis is locked down so the entire D and MR functions are not displayed. The optimal quantity is still 3, but P and profits are higher, as is the Lerner Index. STEP Make the demand curve more inelastic at $Q=3$ by setting the demand parameters to 190 and 30. Optimal P has increased again, along with profits. The Lerner Index reflects the greater monopoly power. STEP One last time, change the demand parameters to 6010 and 1000. The graph is hard to read because only MR is shown; D is literally off the chart. Firm 2 continues to produce the same output as Firm 1, but has a much, much higher optimal price and maximum profits. Its Lerner Index is close to one. It cannot rise above one, but the closer it gets, the greater the divergence of P and MC and, therefore, the greater the monopoly power. The Lerner sheet also shows that the Lerner Index can be expressed as the reciprocal of the price elasticity of demand at the profit-maximizing price. The few algebra steps needed to connect the Lerner Index to the price elasticity start in row 25. STEP Set Firm 2’s demand parameters back to 70 and 10, and then click the button. The price elasticity of demand for the two firms is displayed. If you click in the cells, you can see the formula. Notice that the reciprocal of the inverse demand curve’s slope is used to compute the price elasticity of demand correctly. Firm 2’s price elasticity of demand at the profit-maximizing price is lower (in absolute value) than Firm 1’s. The lower the price elasticity and the higher the Lerner Index, the greater the firm’s monopoly power. STEP Proceed to the Herfindahl sheet for a quick look at another way to measure monopoly power. Instead of measuring the markup of price over marginal cost, we can see how big the firms are in an industry. Strictly speaking, a monopoly is one firm so it would have a 100% market share, but in practice, firms have monopoly power even though they are not technically monopolies. Any firm that faces a downward sloping demand curve and has the ability to set its price is said to have monopoly power. If a market has many firms, each with the same share of total sales, we have a competitive market structure. If, on the other hand, only a few firms exist, the market is monopolized. The question is how to measure the degree of monopolization. We can sort the firms in an industry from highest to lowest share and then add the shares of the four biggest firms. This gives the four firm concentration ratio in cell D5. It turns out this is not a very good way to distinguish between concentrated and unconcentrated industries. The problem is that the four firm concentration ratio tells you nothing about how sales are distributed among the top four firms or across the rest of the market. The four firm concentration ratio is 70%, which seems pretty highly concentrated. The biggest firm’s share, 30%, is almost one-third of the entire industry. STEP Click on the button. The four firm concentration ratio is the same as before (70%), but this industry is clearly much more concentrated. Firm A is even bigger and the others are tiny. STEP Click on the button. The four firm concentration ratio is the same as before (70%), but this industry is clearly less concentrated. The four top firms are equal so no one firm really dominates. The primary virtue of the four firm concentration ratio is that it is easy to compute and understand.
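The three scenarios are easy to mimic in code. The shares below are hypothetical (the sheet’s exact values may differ), but they are chosen to be consistent with the numbers reported in the text: every distribution has a four firm concentration ratio of 70%, yet the Herfindahl Index H, defined formally just below, tells them apart.

```python
# Three hypothetical share distributions, each with the same four firm
# concentration ratio of 70%. The shares reproduce the H values reported
# on the Herfindahl sheet, but the sheet's actual shares may differ.
dists = {
    "Opening": [0.30, 0.20, 0.10, 0.10] + [0.05] * 6,
    "A":       [0.55, 0.05, 0.05, 0.05] + [0.05] * 6,
    "B":       [0.175, 0.175, 0.175, 0.175] + [0.05] * 6,
}

for name, shares in dists.items():
    cr4 = sum(sorted(shares, reverse=True)[:4])  # four firm concentration ratio
    h = sum(s**2 for s in shares)                # Herfindahl Index (defined below)
    print(f"{name:8s} CR4 = {cr4:.2f}   H = {h:.4f}")
```

The CR4 column never changes, but H ranges from 0.1375 up to 0.325.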
However, because we have three scenarios with wildly different shares for the top four firms yielding the same four firm concentration ratio, we can conclude that this ratio is a poor way to determine whether firms in a market are in a competitive or monopolistic environment. The four firm concentration ratio might be easy to compute and understand, but it is incapable of picking up differences in the distribution of shares. A better way to judge concentration is via the Herfindahl Index. Unlike the Lerner Index, there is confusion about who invented this one. Hirschman concludes, “The net result is that my index is named either after Gini who did not invent it at all or after Herfindahl who reinvented it. Well, it’s a cruel world” (Hirschman, 1964, p. 761). It is sometimes called the Herfindahl-Hirschman Index (HHI). Fortunately, its computation is simpler than its paternity. The idea is to square each share and sum, like this: $H=\sum_{i=1}^{n}S_i^2$ The index ranges from 1/n to 1 (when using decimal values of shares). The higher the index, the greater the concentration. By squaring the shares, it gives more weight to bigger firms: for example, $0.1^2 = 0.01$, while $0.3^2=0.09$. The Herfindahl sheet shows the computation. Notice how each value in column B is squared in column G. The sum of the squares is in cell G15 and it is the value of the Herfindahl Index. STEP Click on the three buttons one after the other to cycle through them. Notice how the Herfindahl Index changes (but the four firm concentration ratio does not). For Distribution A, the H value is 0.325. This is quite high. The 0.1375 value with Distribution B means there is more competition in this scenario than the other two. The Herfindahl Index is not perfect because no single number can completely describe an entire distribution. It is, however, better than the four firm concentration ratio and is often used to measure the degree of market competition. The United States Department of Justice is charged with regulating the conduct and organization of businesses. The mission of its Antitrust Division is to promote economic competition. The division uses the Herfindahl Index as part of its Horizontal Merger Guidelines (www.justice.gov/atr/horizontal-merger-guidelines-08192010). Markets with a Herfindahl Index less than 0.15 are "unconcentrated," values between 0.15 and 0.25 are "moderately concentrated," and anything over 0.25 is "highly concentrated." The Department of Justice deems any proposed merger that increases the Herfindahl Index by more than 0.01 (100 points in the scale they use) in concentrated markets as warranting scrutiny. It can go to court to block mergers to prevent too much concentration. It can also break up companies that have too much monopoly power. This is known as antitrust law and is part of the Industrial Organization field of economics. An Unconventional Approach The monopolist’s profit maximization problem can also be solved by choosing P and q simultaneously subject to the constraint of the demand curve. While this is not the usual way of framing the monopoly’s optimization problem, it enables practice with the Lagrangean method of solving constrained optimization problems and reading isoprofit curves. The analytical solution is based on rewriting the constraint so it is equal to zero ($P-(p_0-p_1q)=0$), forming the Lagrangean, setting derivatives equal to zero, and solving the system of equations for the optimal solution.
Writing the Lagrangean as $L = Pq - (aq^3 + bq^2 + cq + d) + \lambda (P-(p_0-p_1q))$, the first-order conditions are: $\frac{\partial L}{\partial P} = q + \lambda = 0$ $\frac{\partial L}{\partial q} = P - 3aq^2 - 2bq - c + \lambda p_1 = 0$ $\frac{\partial L}{\partial \lambda} = P - (p_0 - p_1q) = 0$ Set each derivative equal to zero and solve the three first-order conditions for $q \mbox{*}$, $P \mbox{*}$, and $\lambda \mbox{*}$. From the first equation, $\lambda = -q$, substitute into the second equation: $P - 3aq^2 - 2bq - c + [-q] p_1 = 0$ From the third first-order condition, $P = p_0 - p_1q$, so $(p_0 - p_1q) - 3aq^2 - 2bq - c - qp_1 = 0$ Rearrange the terms to prepare for using the quadratic formula: $3aq^2 + (2b + 2p_1)q + (c - p_0) = 0$ STEP Proceed to the ConOpt sheet to see formulas based on the Lagrangean solution starting in cell F24. Naturally, we get the same, correct answer as the unconstrained version. The ConOpt sheet shows that monopoly as a constrained optimization problem can be depicted with a graph. The pink curves are isoprofit curves and the black line is inverse demand. The MR curve is not drawn because it is not used. The firm is trying to get to the highest isoprofit curve without violating the demand curve constraint. Clearly, the opening values are not optimal. STEP Run Solver and get a Sensitivity Report to confirm that the value of lambda star is minus the optimal quantity. Notice how the Solver dialog box is set up so Solver chooses cells B8 and B9 subject to the constraint. After running Solver, the graph, reproduced in Figure 15.4, shows the usual tangency result. The point of tangency provides the optimal q and P solution, while the value of the isoprofit curve at that point is the level of profits. Do not be confused. The constrained version is rarely used. The conventional approach is the canonical output profit maximization graph (bottom left in Figure 15.3). This graph shows the optimal q where $MR=MC$ and easily displays $P \mbox{*}$ from the demand curve and $\pi \mbox{*}$ as a rectangle. Figure 15.4 gives the same optimal solution, but presents the problem in a different way. Understanding that the demand curve serves as a constraint on monopoly is helpful. Monopoly power is not infinite. A monopolist cannot choose a ridiculously high price and a high quantity. As price rises, quantity sold must fall. Monopoly Basics A monopoly differs from a perfectly competitive firm in that a monopolist can choose the quantity and price, whereas a perfect competitor is a price taker. In addition, a monopolist has a barrier to entry that enables it to maintain positive economic profits even in the long run. The two are the same, however, in the cost structure (like a perfect competitor, the monopolist derives its cost function from the input cost minimization problem) and the fact that it seeks to maximize profits (where $MR = MC$ as long as $P>AVC$). We depict the monopolist’s optimal solution with a graph that superimposes D and MR over the family of cost curves (MC, ATC, and AVC). Like a PC firm, a monopolist can suffer negative profits in the short run and it will shut down when $P < AVC$. Monopoly’s canonical graph (the bottom left chart in Figure 15.3) belongs in the pantheon of fundamental graphs in economics. Like the indifference curves with a budget constraint or supply and demand, a linear inverse demand with its associated marginal revenue showing optimal q (at the intersection of MR and MC, of course) and optimal P is a truly classic graph. One way to measure monopoly power is by the Lerner Index. The greater the gap between price and marginal cost, the greater the monopoly power. The greater the price elasticity of demand, the lower the Lerner Index and the weaker the monopoly power. The Herfindahl Index is another way to measure the strength of monopolization in a market.
It measures industry concentration. Unlike the four firm concentration ratio, it uses the market shares of every firm to create a single number that reflects the concentration of an industry. Mergers that boost the Herfindahl Index by more than 0.01 (100 points) in concentrated markets are carefully scrutinized by the Department of Justice because it is presumed that the market will not be competitive. We concluded this chapter with an unconventional analysis. The monopoly’s profit maximization problem can be cast as a constrained optimization problem. In addition to providing practice with the Lagrangean method, this way of looking at monopoly makes quite clear that the monopolist must obey the demand curve. Exercises 1. De Beers is an internationally famous company that had a monopoly over diamonds. Google “synthetic diamonds” to learn more. Include web citations with supporting evidence in your answers to these two questions. 1. What was their barrier to entry when they had a monopoly? 2. What happened to their monopoly? 2. Use Word’s Drawing Tools to depict a monopoly shutting down in the short run. Explain the graph. 3. In the ConOpt sheet, set the demand intercept (cell B13) to 9 and the fixed cost (B18) to 180. Run Solver. Why is Solver generating a miserable result? What is the correct answer? 4. Use Word’s Drawing Tools to depict the effect of monopoly from the input side profit maximization perspective. Explain the graph. Hint: With perfect competition, $L \mbox{*}$ is found where $w = MRP$ (where MRP is based on the given, constant price, $P \times MP$). With monopoly, however, P and MR diverge. 5. Is the effect of monopoly on the input side consistent with the effect of monopoly on the output side? Explain.
textbooks/socialsci/Economics/Intermediate_Microeconomics_with_Excel_(Barreto)/15%3A_Monopoly.txt
In perfect competition, firms are price takers with no power to affect the market price. Each firm optimizes by choosing q to equalize MC and P. In monopoly, the sole seller of a product with no close substitutes optimizes by choosing q to equalize MC and MR and then charges the highest price that clears the market (given by the demand curve). In both market structures, the profits of the individual firm are not affected by what anyone else does. In perfect competition, there are so many other firms that Firm i does not care about what Firm j is doing. In monopoly, there is no other firm to worry about. What about market structures between the extremes of perfect competition and monopoly? Oligopoly is a market dominated by a few firms. Their decisions are interdependent. In other words, what each individual firm chooses does affect the sales and profits of the other firms. To optimize, each firm must anticipate what its rivals will do and then choose its best option. This is clearly a more realistic model than that of perfect competition and monopoly, which rely on idealized, abstract descriptions of firms that have no real-world counterparts. How do oligopolies behave? We know that, like other firms, they optimize given the economic environment, but because of interdependence, it is much more difficult to analyze. This chapter opens the door to the analysis of strategic behavior. It presents a few basic ideas from the fields of Game Theory and Industrial Organization. Interdependence and Nash Equilibrium It seems obvious when we say that firms are interdependent, but exactly what does this mean? Consider two power companies that generate and sell electricity. This is a good example of a homogeneous product. We assume consumers do not care at all which of the two firms provides electricity to their homes. To keep it simple, suppose that each power company can choose either a high level of output or a low level of output. Market price is a function of the output decisions of the two firms. Each power company’s profits are a function of its own output decision and the market price. Figure 16.1 displays a payoff matrix, which shows the possible choices and outcomes. You read the entries in the payoff matrix like coordinate pairs on a graph: the first number is for Firm 1 and the second for Firm 2. The $300, $300 pair in the top left of the four entries says that Firm 1 chose high output and Firm 2 chose high output. Each firm ends up with low profits. If Firm 2 had chosen low output (top right), Firm 1 profits would be much higher, $1,000, because it made a lot of output and price rose when Firm 2 decided to cut back. This particular game is a one-shot, simultaneous-move game known as the Prisoner’s Dilemma. You have probably seen it before. Two criminals are arrested and questioned separately. If both stay silent, they get 1 year in jail. If both confess, they get 3 years. But if one confesses and the other does not, the one who talks gets no jail time and the silent one gets 10 years. You can match those outcomes to the payoff matrix in Figure 16.1. The outcome that is best for both firms together is $1,600 total, with $800 for each company. But, like the criminals’ version of the game, that is going to be an unlikely outcome. Suppose that both agree beforehand that they are going to collude and both choose low output.
Unless they can write a binding agreement that is enforceable (so a cheater can be punished), there is an incentive for each firm to change its decision and choose high output if it thinks that the other firm will stick with low output. As a result, both firms end up with low profits (and both criminals confess). If you think the other firm is going to cheat, your best move is to also cheat. If you think the other firm is going to honor the agreement, your best move, in the sense of profit maximization, is to cheat and produce a high output (assuming this is a one-time game and you never have to see your opponent again). It looks like cheating, producing high output (or confessing), is the best move no matter what the other firm does. We say that this game has a dominant strategy: produce high output (confess). This result illustrates the reason why cartels (groups of firms that get together to charge the monopoly price and split the monopoly profits) are unstable. It is difficult for oligopolistic firms to get together and act like a monopoly because there is an incentive for individual firms to cheat on the agreement and produce more to take advantage of high prices. Because of the interdependence of firms’ decision making, competition among firms in an oligopoly may resemble military operations involving tactics, strategies, moves, and countermoves. Economists model these sophisticated decision making processes using game theory, a branch of mathematics and economics that was developed by John von Neumann (pronounced noy-man) and Oskar Morgenstern in the 1930s and 1940s. One of the most important contributors to game theory is John Nash, a mathematician who shared the Nobel Prize in Economics. A game-theoretic analysis of oligopoly is based on the assumption that each firm assumes that its rivals are optimizing agents. That is, managers act as though their opponents or rivals will always adopt the most profitable countermove to any move they make. The manager’s job is to find the optimal response. Nash’s most important and enduring contribution is the concept named after him, the Nash equilibrium. Once we are in a world where firms are interdependent and one firm’s profits depend on what other firms do, we are out of the world of exogenously given price that we used for perfect competition and out of the isolated world of the monopolist. John Nash invented an equilibrium concept that describes a state of rest in this new world of interdependence. A Nash equilibrium exists when each player, observing what her rivals have chosen, would not choose to alter the move she herself chose. In other words, this is a no regrets equilibrium: After observing the outcome, the player does not wish she would have done something else instead. We will explore in detail a concrete example of a duopoly (a market with two firms) with a single Nash equilibrium. Remember, however, that this is simply one example. Some games have one Nash equilibrium, some have many, and some have none. There are many, many games and scenarios in game theory and we will look at just one simple example. The Cournot Model Augustin Cournot (pronounced coor-no) was a remarkably creative 19th-century French economist (see the References in section 12.2). Cournot originally set up a model of duopolists who produce the same good and optimize by choosing their own output levels based on assumptions about what the rival will do. Here is the setup: • Two firms. • Each produces the exact same product. • Constant unit cost.
• Firms choose output levels at the same time. • Both know the market demand for the product. The profit of each firm depends on how much it produces and how much its rival produces. If the rival produces a lot, the market price falls. The interdependence is that one firm’s decision about how much to produce affects the price and, thus, the rival’s profit. What strategy should each firm use to choose its output level? The answer depends on its beliefs regarding its rival’s behavior. STEP Open the Excel workbook GameTheory.xls and read the Intro sheet, then go to the Parameters sheet. Market demand is given by the linear inverse demand curve and, for simplicity, we assume a linear total cost function. This means that $MC = AC$ is a horizontal line. STEP Proceed to the PerfectCompetition sheet. With many small PC firms, the industry as a whole will produce where demand intersects supply (which is the sum of the individual firms’ MCs). The graph shows that a perfectly competitive market will produce 15,000 kwh at a price of 5/kwh. What happens if a single firm takes over the entire market? STEP Proceed to the Monopoly sheet. Use the Choose Q slider control to determine the profit-maximizing quantity. Keep your eye on cell B18 as you adjust output. The optimal output is found where $MR = MC$. The monopolist will produce 7,500 kwh and charge a price of 12.5/kwh. This solution nets a maximum profit of 56,250 cents. Not surprisingly, compared to the perfectly competitive results, monopoly results in lower output and higher prices. Cournot was the first to ask the question, "What happens if the industry is shared by two firms?" To understand the answer, the concept of residual demand is crucial because it enables us to solve the firm’s optimization problem. Residual demand is the demand curve facing the firm after the sales from the other firm are subtracted. From there, the reaction function for each firm is derived from a comparative statics analysis. The two reaction functions are then combined to yield the Nash equilibrium, which is the answer to Cournot’s question. That is confusing. We turn to Excel to see each step and how it all works. Residual Demand To figure out the quantity and price combination with two competing firms, we need to understand how the firms will behave. STEP Proceed to the ResidualDemand sheet. This sheet shows how Firm 1 decides what to do, given Firm 2’s output decision. Think of the chart as belonging to Firm 1. It will use this chart to decide what to do, given different scenarios. Conjectured Q2, in cell B14, is the key variable. A conjecture is an educated guess, based on incomplete information. Firm 1 does not know and cannot control what Firm 2 is going to do. Firm 1 must act, however, so it treats Firm 2’s output decision as a conjecture and proceeds based on that projected value. Conjectured Q2 is, therefore, an exogenous variable for Firm 1, and the conjectured output of Firm 2 may be different from Firm 2’s actual output. Firm 1 can, however, examine how it would react to different possible values of Firm 2 output. The ResidualDemand sheet opens with Conjectured Q2 = 0. In this scenario, Firm 2 produces nothing and Firm 1 behaves as a monopolist, producing 7,500 kwh and charging a price of 12.5/kwh. STEP Click five times on the scroll bar in cell C14. With each click, Conjectured Q2 rises by 1,000 units and the red lines in the graph shift left. The red lines are the critical factor for Firm 1.
They represent residual demand and residual marginal revenue. The idea behind residual demand is that Firm 2’s output will be sold first, leaving Firm 1 with the rest of the market. The residual in the name refers to the fact that Firm 2 will supply a given amount of the market and then Firm 1 is free to decide what to do with the demand that is left over. With each click, Firm 2 was producing more and so the demand left over for Firm 1 was falling. This is why the residual demand shifts left when Firm 2 produces more. As the Parameters sheet shows, the inverse demand curve for the entire market is given by the function $P = 20 - 0.001Q$. If Conjectured Q2 = 5,000, then the residual inverse demand curve is $P = 20 - 0.001Q - 0.001(5000)$. In other words, we subtract the amount supplied by Firm 2. Thus, the residual inverse demand curve is $P = 15 - 0.001Q$. Figure 16.2 shows how the residual demand is shifted left by 5,000 kwh when Conjectured Q2 is 5,000. The key idea is that Firm 2’s output is subtracted from the demand curve and what is left over, the residual, is the demand faced by Firm 1. Once we have residual demand for Firm 1, we can find the profit-maximizing solution. Firm 1 derives residual MR from its residual demand curve and uses this to maximize profits by setting residual $MR = MC$. In Figure 16.2, Firm 1 is not maximizing profits by producing 7,500 units and charging 7.5/kwh. Notice that the price is read from the residual demand curve, not the full market demand curve. STEP Use the scroll bar (below the chart) to find Firm 1’s optimal solution when Conjectured Q2 is 5,000. You should have found that optimal Q is 5,000 kwh, optimal P is 10/kwh, and maximum $\pi$ is 25,000 cents. The Reaction Function Now that we know how the duopolist uses residual demand to choose the quantity (and price) that maximizes profits, we can proceed to the next step in answering Cournot’s question: "What happens if the industry is shared by two firms?" We track each duopolist’s optimal output as a function of Conjectured Q2. This gives the reaction (or best response) function. The reaction function is a comparative statics analysis based on shocking Conjectured Q2. STEP Fill in the table in the ResidualDemand sheet. You are picking points off of Firm 1’s reaction function. You already have two of the rows. In addition to the optimal solution at Conjectured Q2 = 5,000, which we just found, when Conjectured Q2 = 0, optimal output is 7,500 and optimal price is 12.5/kwh. Fill in the rest of the table. STEP Check your work by clicking the button. The filled-in table is giving us Firm 1’s reaction function. It is similar to the output of the CSWiz: the leftmost column is the exogenous variable and the other columns are endogenous responses. Deriving Firm 1’s reaction function is an important step in figuring out how two firms will interact. The reaction function gives us Firm 1’s optimal response to Firm 2’s output decision. We do not know, however, what Firm 2 will actually do. It has a reaction function just like Firm 1. The two firms must interact to determine what will happen in the market. Finding the Nash Equilibrium Residual demand enabled us to understand the reaction function. We are now ready for the third and final step so we can answer Cournot’s question concerning the results of a duopoly. Remember, perfect competition gives 15,000 kwh of output and monopoly gives only 7,500 (and at a higher price). Presumably, duopoly is between them, but where? STEP Proceed to the Duopoly sheet.
The display is new, but easy to understand. Instead of working with just Firm 1, both are shown. They have the same costs. The sheet has buttons that make it a snap to see what each firm will do. The analytical solution is used so you do not have to run Solver every time Conjectured Q2 changes. STEP Notice that Conjectured Q2 (in cell B13) is zero. To find the optimal solution, click the button. Not surprisingly (given our earlier work with the residual demand graph), since Conjectured Q2 is zero, Firm 1 chooses to produce 7,500 kwh. But look at cell G13: Firm 1 has optimized, but now we need to ask what Firm 2 would do if Firm 1 made 7,500 kwh. Firm 2 wants to maximize profits just like Firm 1. STEP Click the button. Firm 2’s solution makes sense. If Firm 1 makes 7,500 kwh, then Firm 2 maximizes profits by taking the residual demand and producing 3,750 kwh. Their combined output means $P=8.75$. This is not, however, an equilibrium solution because Firm 1 is not going to produce 7,500 kwh. Why not? STEP Look at cell B13. Click on cell B13. B13’s formula, =G20, makes clear how Firm 1’s decision is connected to its rival. If Firm 2 says it wants to produce 3,750, then Firm 1 regrets its previous choice and will change it. We need to find the optimal output for Firm 1 given Firm 2’s new level of output. STEP Click the button. Firm 1 chooses to make 5,625 kwh (based on Firm 2’s output of 3,750 kwh), but now we return to Firm 2. Will it produce 3,750 kwh? No. When Firm 1 changed its output, cell G13 updated. Like B13, G13 connects Firm 2’s optimal decision to Firm 1’s output choice. It is Firm 2’s turn to regret its previous decision. Firm 2 can make higher profits by changing its output when Firm 1 makes 5,625 kwh. How much will Firm 2 want to produce? Let’s find out. STEP Click the button. Firm 2 is set, but what about Firm 1? Does it regret making 5,625? Yes, it does, because it can make higher profits by changing its decision. We will not be in equilibrium until both firms are happy with their output choices and do not wish to change them. Since Firm 2 changed its output, Firm 1 will want to change its output. STEP Click the button. You might be thinking that this will never end. That is incorrect. It will end. You can actually see it end. STEP Repeatedly move back and forth, clicking the two buttons, one after the other. What happens? After repeatedly clicking, you are looking at convergence. Clearly, the two optimal output levels closed in on 5,000: this is the Nash equilibrium solution to this problem and the answer to Cournot’s question. The duopoly will produce a combined total of 10,000 kwh with a price of 10/kwh. This makes sense since it is in between the perfectly competitive (15,000 kwh) and monopoly (7,500 kwh) outcomes. Manually optimizing for each firm in turn, back and forth, until the equilibrium solution comes into focus is a great way to understand the concept of a Nash equilibrium. It is a position of rest where neither firm regrets its previous decision. In fact, a Nash equilibrium is often referred to as a "no regrets" point. There is, however, a faster way to find the position of rest. STEP Click the button. This button does all of the hard work for you. It alternately solves one firm’s problem given the other firm’s output many times. It continues to maximize firm profits until there is less than a 0.001 difference between a firm’s optimal output and its optimal output based on the conjectured output of its rival. STEP To see this, click on cells B20 and G20.
They are close to 5,000, but not exactly 5,000. The button also displays the individual firms’ reaction functions (scroll down if needed). In this case, the two reaction functions are identical. Finally, the button shows the two reaction functions on the same chart and the intersection instantly reveals the Nash equilibrium. Figure 16.3 shows the Nash equilibrium chart with additional elements to help explain it. Point 1 in Figure 16.3 represents the first time Firm 1 maximized profits, with Conjectured Q2 of zero. Point 2 shows Firm 2’s optimization based on Firm 1 making 7,500 kwh. You can see, by following the arrows, how this would lead to the intersection as the Nash equilibrium. You might wonder why the reaction functions are not the same in Figure 16.3 since they are identical when graphed by themselves (as shown below the buttons in the Duopoly sheet). The answer lies in the axes: to plot them both on the same graph, we use the reaction function for Firm 2 and the inverse reaction function for Firm 1. Scroll down to see the inverse reaction function starting in row 63. Remember: A Nash equilibrium exists when each player, observing what her rivals have chosen, would not choose to alter the move she herself chose. Nash equilibrium is a no regrets point for all players. Figure 16.3 shows that the Nash equilibrium is at the intersection of the two reaction functions. Only there will both firms decline the offer to change their optimal decisions. This is a position of rest. Evaluating Duopoly’s Nash Equilibrium We know the answer to Cournot’s question. Duopoly, at its Nash equilibrium, leaves us in between perfect competition and monopoly. But we can say more about the duopoly outcome. We focus on profits. STEP In cell D16 in the Duopoly sheet, enter a formula that adds the profits of the two firms at the Nash equilibrium. What are industry profits? You might recall monopoly had maximum profits of 56,250 cents. That is better than the 50,000 cents you just computed with your formula in cell D16 of = B16 + G16. Can duopolists increase their profits to 56,250 like a monopolist? Yes, they can, but they will not be able to honor their commitments. STEP Set quantities for both firms (in cells B20 and G20) to 3,750. What happens to profits? Amazingly, they go up. If the two rivals can agree to simply split the monopoly output of 7,500 kwh, each will make 28,125 cents and match the monopoly outcome. But this will not last. Why not? Why don’t the two firms get together and produce 3,750 units each and make greater joint profits than the Nash equilibrium solution? A single click reveals the answer. STEP Click the button or the button. If the rival makes 3,750 kwh, the firm maximizes profits at 5,625 kwh. In other words, they have an incentive to cheat, just like in the Prisoner’s Dilemma game. As soon as one takes advantage, the other fires back and they spin back to the Nash equilibrium. You might suggest writing a contract, but that is illegal and unenforceable in the United States. There are other options and strategies, but they would take us too far from Intermediate Microeconomics. One strong attraction that is easy to see is merger. If the two firms combine into a single entity, they will be a monopoly and enjoy monopoly profits. Presumably, the Department of Justice would object. Interdependence Game theory is an exciting, growing area of economics. Its primary appeal lies in the realistic modeling of agents as strategic decision makers playing against each other, moving and countering.
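The Duopoly sheet’s back-and-forth convergence is easy to replicate outside Excel. Here is a minimal sketch in Python, using the market parameters from GameTheory.xls (inverse demand $P = 20 - 0.001Q$ and constant $MC = AC = 5$):

```python
# Best-response iteration for the Cournot duopoly in GameTheory.xls:
# inverse demand P = 20 - 0.001*(q1 + q2), constant MC = AC = 5.
d0, d1, mc = 20, 0.001, 5

def best_response(q_rival):
    # Residual MR = (d0 - d1*q_rival) - 2*d1*q = mc, solved for q
    return (d0 - mc - d1 * q_rival) / (2 * d1)

q1, q2 = 0.0, 0.0
for _ in range(30):          # mimic clicking the two buttons in turn
    q1 = best_response(q2)
    q2 = best_response(q1)

price = d0 - d1 * (q1 + q2)
profit_each = (price - mc) * q1
print(f"q1 = {q1:.1f}, q2 = {q2:.1f}, P = {price:.2f}, "
      f"profit per firm = {profit_each:.0f} cents")
```

After a handful of rounds the outputs settle at 5,000 kwh per firm, with P = 10/kwh and profits of 25,000 cents each, exactly the Nash equilibrium found on the sheet.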
This kind of move and countermove is obviously what real-world firms do. The Cournot model is a simple game matching two firms against each other. It illustrates nicely the notion of interdependence and how one firm moves, and then the other responds, and so on. Whereas some games do not have a Nash equilibrium, the Cournot duopolists do settle down to a position of rest. The Summary sheet has the outcomes from perfect competition, duopoly, and monopoly. It is clear that monopoly maximizes firm profits, but perfect competition offers the consumer the lowest price and most output. We will return to this comparison in the third and final part of this book. We have just scratched the surface of game theory. There are many, many more games. The workbook RockPaperScissors.xls lets you play this child’s game in Excel. Section 17.7 on Cartels and Deadweight Loss has another application of game theory. For an entertaining version of the Prisoner’s Dilemma in a game show, see this Golden Balls episode finale: tiny.cc/splitsteal. And for a really clever twist, watch this one: tiny.cc/ibrahim. Nick’s strategy has been outlawed from the show. The Cornell game theory blog has an entry explaining it: tiny.cc/splitstealanalysis. Exercises These exercises are based on $c_1=5$. If you did the Q&A questions and changed this parameter, change it back to its original value. 1. If Conjectured Q2 is 15,000, why does Firm 1 decide to produce nothing? Use the ResidualDemand sheet to support your explanation. 2. Suppose Firm 1 produces 4,500 kwh and Firm 2 produces 6,000 kwh. Does Firm 1 have any regrets? Does Firm 2 have any regrets? Enter these two values in the Duopoly sheet and click the buttons. Which firm changed its mind? Why? 3. Click the button in the Duopoly sheet. Explore the effect of changing Firm 1’s cost function so that $c_2$ (cell B10) is 0.001 (with B11 = 5). How does this affect the Nash equilibrium?
textbooks/socialsci/Economics/Intermediate_Microeconomics_with_Excel_(Barreto)/16%3A_Game_Theory.txt
We begin our analysis of the market system by making an obvious, but necessary point: A market demand (or supply) curve is the sum of individual demand (or supply) curves. STEP Open the Excel workbook SupplyDemand.xls, read the Intro sheet, then go to the SummingD sheet. The sheet has three consumers, with three different utility functions and different incomes. We assume the consumers face the same prices for goods 1 and 2. We set $p_2 = 10$, but leave $p_1$ as a variable to derive the individual demand curve for each consumer. STEP Confirm, by clicking on a few cells in the range B18:D22, that the formulas in these cells represent the individual demand curves for each consumer. Notice that the graphs below the data represent the individual demand ($x_1 \mbox{*} = f(p_1)$) and inverse demand ($p_1 = f(x_1 \mbox{*})$) curves. Given individual demands, market demand can be found by simply summing the optimal quantity demanded at each price. STEP Confirm, by examining the formula in cell E18, that market demand has been computed by adding the individual demands at $p_1 = 1$. The same, of course, holds true for the other points on the market demand curve. Because we often display demand schedules as inverse demand curves, with price on the y axis, the red arrow (see your screen and Figure 17.1) shows that market demand is the result of a horizontal summation. At $p_1 = 5$, we read off each of the individual quantities demanded and add them together to obtain the market quantity demanded of 24.3 units. Supply works just like demand. We add individual supply curves (horizontally if we are working with inverse supply curves) to get the market supply curve. Because individual supply curves are MC curves above minimum AVC, we know that the market supply curve is simply the sum of the marginal costs above minimum AVC of all the firms producing the particular good or service sold in this market. So the way it works is that each of the individual buyers and sellers optimizes to decide how much to buy or sell at any given price. The Theory of Consumer Behavior and the Theory of the Firm are the sources of individual demand and supply. Once we have the many individual demand and supply curves, we add them up. So market demand and supply are composed of the sum of many individual pieces. Some consumers want a lot of the product at a given price, while others want less (or maybe none at all), but they all get added together to form market demand. The same is true for supply. Initial Solution The next step is obvious: market supply and demand are combined to generate an equilibrium solution that determines the quantity produced and consumed. This equilibrium solution is the market’s answer to society’s resource allocation problem. The simple story is that price adjusts, responding to surpluses and shortages, until it settles down at its equilibrium level, where quantity demanded equals quantity supplied. This is the intersection of the two curves. It is confusing, but true, that in the supply and demand model, price and quantity are endogenous variables. How can price be endogenous? Don’t consumers and PC firms take the price as given? Yes, they do, and for individual buyers and sellers, price is exogenous, but, for the system as a whole, price is endogenous. At the individual agent level, price is given and cannot be controlled by the agent, so it is exogenous. But we are now at a different level. We are allowing forces of supply and demand to move the price until it settles down.
Thus, at the level of the market, we say price is endogenous because it is determined by forces within the system. It is worth repeating that equilibrium means no tendency to change. When applied to the model of supply and demand, equilibrium means that price (and therefore quantity demanded and supplied) has no tendency to change. A price that does have a tendency to change (because there is a surplus or shortage) is a disequilibrium price. We can put these ideas in the same framework that we used to solve optimization problems. There are two ways to find the equilibrium solution and they yield the same answer: 1. Analytical methods using algebra: conventional paper and pencil. 2. Numerical methods using a computer: for example, Excel’s Solver. STEP Proceed to the EquilibriumSolution sheet to see how the supply and demand model has been implemented in Excel. The information has been organized into three main areas: endogenous variables, exogenous variables, and an equilibrium condition. Excel’s Solver will be used to find the values of the endogenous variables that meet the equilibrium condition. As usual, green represents exogenous variables, the coefficients on the demand and supply curves. Although price and quantity are both endogenous variables, price is bolded to indicate that the model will be solved by finding the equilibrium price; the equilibrium quantity (demanded and supplied) is then determined. This is similar to the approach we took with monopoly, where we maximized profits by choosing q, then found P from the demand curve. Finally, the equilibrium condition is represented by the difference between quantity demanded and supplied. On opening, the price is too high. At $P=125$, quantity demanded ($Q_d$) is 112.5 and $Q_s$ is about 173. Thus, we have a surplus ($Q_d < Q_s$) and, therefore, price is pushed down (as firms seek to unload unsold inventory). STEP Use the scroll bar next to the price cell to set the price below the intersection of supply and demand. The dashed line (representing the current price) responds to changes in the price cell (B12). Notice how the quantity demanded and supplied cells also change as you manipulate the price, which makes the equilibrium condition cell (B17) change. With P below the intersection, the market experiences a shortage ($Q_d > Q_s$) and price is pushed up. The force in the market model is the pressure generated by surpluses (excess supply) or shortages (excess demand). Obviously, the equilibrium price is found where supply and demand intersect. At this price, there is no tendency to change. The forces of supply and demand are balanced. We can find this price by adjusting the price manually and keeping our eye on the chart or by using Excel’s Solver. STEP Open Solver. The Solver dialog box appears, as shown in Figure 17.2. Notice that the objective is not to Max or Min, but to set an equilibrium condition equal to zero. Notice also that P, price, is being used to drive the market to equilibrium and there are no constraints. STEP Click Solve to find the equilibrium solution. The chart makes it easy to see that Solver is correct. At $P = 100$, $Q_d = Q_s = 125$. Without a surplus or shortage, there is no tendency for the price to change and we have found the equilibrium resting point. The equilibrium quantity, 125 units, is the market’s answer to society’s resource allocation problem. It says that we should send enough resources from the scarce, finite amount of inputs available to produce 125 units of this product.
We envision a supply and demand diagram for every product and the equilibrium quantity, in each market, is the market’s answer to how much we should have of each commodity. The analytical approach is easier than the math we applied for optimization problems because there is no derivative or Lagrangean. All we need to do is find the intersection of supply and demand. Given either market supply and demand curves $Q = f(P)$ or inverse supply and demand functions, $P = f(Q)$, we find the equilibrium solution by setting supply and demand equal to each other. The inverse functions in the Excel workbook are: $P=350-2Q_d$ $P=35+0.52Q_s$ Setting the inverse functions equal to each other, we replace the $Q_d$ and $Q_s$ with $Q_e$ because we are finding the value that lies on both of the curves: $350-2Q_e=35+0.52Q_e$ $315=2.52Q_e$ $Q_e=\frac{315}{2.52}=125$ Substituting this solution into either inverse function yields $P_e = 100$. We can also easily flip the inverse functions, solving for Q in terms of P, to obtain the demand and supply functions: $P=350-2Q_d \rightarrow 2Q_d=350 - P \rightarrow Q_d = 175 - \frac{1}{2}P$ $P=35+0.52Q_s \rightarrow 0.52Q_s=P - 35 \rightarrow Q_s=\frac{1}{0.52}P -\frac{35}{0.52}$ If we set demand equal to supply, using $P_e$ to denote the common value we seek, we find the equilibrium price: $175 - \frac{1}{2}P_e = \frac{1}{0.52}P_e -\frac{35}{0.52}$ $175 +\frac{35}{0.52} = \frac{1.26}{0.52}P_e$ $P_e=\frac{175 +\frac{35}{0.52}}{\frac{1.26}{0.52}}=100$ Plugging this equilibrium price into either function gives $Q_e = 125$. This work shows something obvious, but worth making clear: we can use $P=f(Q)$ functions to find $Q_e$, then $P_e$, or we can use $Q=f(P)$ functions to find $P_e$, then $Q_e$. We get the same result either way since we are merely flipping the axes. If you think using supply and demand functions ($Q=f(P)$) to get $P_e$ and then $Q_e$ is more faithful to what is going on in the market, you are a Marshallian, for that is exactly how he saw markets functioning. And that is why P is on the y axis: so the reader sees it fluctuate up and down until it settles down to its equilibrium value. We finish our work on the initial solution by pointing out that it is not surprising that numerical methods, using Solver, agree with the analytical approach. Given supply and demand for this product, we know that the market equilibrium solution would call for producing 125 units. The market system would, therefore, allocate the labor and capital needed to make this amount. Elasticity We can compute the price elasticity of demand and supply at the equilibrium price (the point elasticity) by applying our usual formula, $\frac{dQ}{dP}\frac{P}{Q}$. This time, we must use the demand and supply curves, $Q=f(P)$. STEP Click the button to see the calculation. Although it has text wrapped around it, the number displayed for the price elasticity of demand is based on this part of the formula: $(-1/d1)*(P/Qd)$. With $Q_d = \frac{d_0}{d_1} - \frac{1}{d_1}P$, it is easy to see that $\frac{dQ}{dP}=-\frac{1}{d_1}$ and then we multiply by $\frac{P}{Q}$. Likewise, the price elasticity of supply is the slope of the supply function times the $\frac{P}{Q}$ ratio. At the equilibrium price and quantity, demand is much more price inelastic than supply. This does not matter right now, but it will in future work. STEP With $P = 100$, click on the price scroll bar and watch the price elasticities. Keep clicking until you set $P=125$. As you increase price, the elasticities change.
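You can reproduce both the equilibrium and the elasticities you just watched with a short computation. Here is a minimal sketch using the coefficients from the Excel workbook:

```python
# Equilibrium and point elasticities for inverse demand P = 350 - 2*Qd
# and inverse supply P = 35 + 0.52*Qs.
d0, d1 = 350, 2
s0, s1 = 35, 0.52

def qd(p):  # demand function, Q = f(P)
    return d0 / d1 - p / d1

def qs(p):  # supply function, Q = f(P)
    return (p - s0) / s1

q_e = (d0 - s0) / (d1 + s1)  # 315/2.52 = 125
p_e = d0 - d1 * q_e          # 100
print(f"Pe = {p_e:.0f}, Qe = {q_e:.0f}")

for p in [p_e, 125]:
    eps_d = (-1 / d1) * (p / qd(p))  # point elasticity of demand
    eps_s = (1 / s1) * (p / qs(p))   # point elasticity of supply
    print(f"P = {p:5.1f}: Qd = {qd(p):6.1f}, Qs = {qs(p):6.1f}, "
          f"eps_d = {eps_d:.2f}, eps_s = {eps_s:.2f}")
```

At $P_e = 100$, the demand elasticity is $-0.40$ against a supply elasticity of about 1.54, and pushing the price to 125 moves the demand elasticity to about $-0.56$, the value discussed next.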
Even though the slopes are constant, the supply and demand elasticities change because the $\frac{P}{Q}$ ratio is changing. Multiplying the slope by a price-quantity coordinate produces a percentage change measure of responsiveness. The price elasticity of demand at $P=125$ of $-0.56$ means that a 1% increase in the price leads to only a 0.56% decrease in the quantity demanded. Demand is not very responsive since the percentage change in quantity is less than the percentage change in the price. Notice, however, that demand is more responsive at $P=125$ than it was at $P_e=100$. We will see in future applications of the supply and demand model that the price elasticities play crucial roles. For now, remember that slope and elasticity are not the same and that the price elasticity tells us how responsive quantity demanded or supplied is to a change in the price. Long Run Equilibrium Another concept at play in the model of supply and demand is that of long run equilibrium. In the long run (when there are no fixed factors of production), a competitive market has another adjustment to make. In addition to responding to pressure from surpluses and shortages, the market will respond to the presence of non-zero profits. The story is simple. Excess profits (economic profits greater than zero) will lead to the entry of more firms. This will shift the inverse supply curve right, lowering the price until all excess profits are competed away. If the long run price is too low, firms suffering negative profits will exit, shifting the inverse supply curve left and raising prices. Thus, a long run competitive equilibrium has to look like Figure 17.3. The left panel in Figure 17.3 shows supply and demand in the market as a whole, while the right panel depicts a single firm that is just one of the many firms in this perfectly competitive industry. The two graphs have the same y axis, but the scale of the x axis is different. A single firm can only produce a few units (q), but "millions" (an arbitrary number chosen just as an example) are bought and sold in the market (uppercase Q for emphasis). The idea is that there are many firms, each producing small amounts of the same output. In the aggregate, they make "millions" of units, but one individual firm produces only a tiny amount of the total. Notice how the market demand curve is downward sloping, but the firm’s demand curve is horizontal. This is the classic price taking environment in which a PC firm operates. Notice also that the market supply curve is the sum of the individual firm MC curves because individual firm supply is MC where $P>AVC$. We could chop off the bottom of the market supply curve (below $P_e$), but that would be confusing. In essence, the long run adjustment process endogenizes the number of firms. This means that forces within the model determine the number of firms in an industry. This is not true in the short run, where the number of firms is assumed fixed (although they can shut down if $P<AVC$) and the only adjustment is that market surpluses and shortages are eliminated by price movements. Notice that the long run equilibrium price meets two equilibrium conditions: 1. Quantity demanded equals quantity supplied so there is no surplus or shortage in the market. 2. Economic profits are zero so there is no incentive for entry or desire to exit. Long run equilibrium is even more fanciful and unrealistic than our abstract models of the consumer and firm. There has never been and never will be a market in long run equilibrium.
The long run equilibrium model's primary purpose is to serve as an indicator of where a market is heading. It tells us that even though we are at an equilibrium with no surplus or shortage (such as with $P_e=100$ in the Excel workbook), further adjustments will be made depending on the profit position of the firms. If profits are positive, entry will increase supply and lower the price; if profits are negative, exit will decrease supply and raise the price. In the Excel workbook, we do not know if the market is in long run equilibrium when $P_e=100$ because we do not have a representative firm with its cost curves, so we cannot determine its profits.

A key takeaway is that, like price, the number of firms is endogenous in the long run because there are forces in the model that determine its value. No one sets the number of firms. The interaction of buyers and sellers is generating the number of firms as an equilibrium outcome.

Comparative Statics

Comparative statics analysis with the supply and demand equilibrium model is familiar. Most introductory economics courses emphasize shifts in supply and demand. Here is a quick review, with special emphasis on equilibrium as an answer to society’s resource allocation problem.

A change in any variable that affects supply or demand, other than price, causes a shift in the inverse supply or demand curve. A change in price causes a movement along stationary supply and demand curves. An increase in demand or supply means a rightward shift in inverse demand or supply. For demand, the shift factors are income, prices of other goods related in consumption (i.e., complements and substitutes), tastes, consumers’ expectations about future prices, and the number of buyers. The usual shift factors for supply include input prices, technology, firms’ expectations, and the number of sellers.

As usual, comparative statics analysis consists of finding the initial solution, applying the shock, determining the new solution, and comparing the initial to the new solution. In the case of supply and demand, we want to make statements about the changes in equilibrium price and quantity. $P_e$ and $Q_e$ are the endogenous variables in the equilibrium model and we track how they respond to shocks.

For example, suppose new technology lowered costs. What would that do to equilibrium price and quantity? We can use the EquilibriumModel sheet to see what happens.

STEP Make sure $P=100$ so the market is in equilibrium, then click on the s0 slider to lower the inverse supply curve intercept to 15.

The graph updates as you change s0 and a new, red inverse supply curve appears. The original, black line remains as a benchmark, but there is only one demand and supply at any point in time. At $P=100$, there is a surplus. We need to find the new equilibrium solution.

STEP Run Solver to find the new $P_e$ and $Q_e$.

Figure 17.4 shows the result. The equilibrium price falls (from \$100/unit to roughly \$84/unit) and the equilibrium quantity rises (from 125 to about 133 units). The decentralized market system has generated a new answer to society’s resource allocation problem. Ceteris paribus, if a product enjoys a productivity increase from a new technology, making it cheaper to produce, the system will produce more of it. This response makes common sense, but it is absolutely critical to understand that the increase in output is not decreed from on high. It is bubbling up from below: output rises because supply shifts and market forces lower prices and raise output.
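Because both curves are linear, the equilibrium can also be computed analytically, which is a handy check on Solver. Setting inverse demand $P = d_0 - d_1 Q$ equal to inverse supply $P = s_0 + s_1 Q$ gives $Q_e = \frac{d_0 - s_0}{d_1 + s_1}$. Below is a minimal sketch in Python; the coefficient values are an assumption about the workbook, chosen because they reproduce $P_e=100$ and $Q_e=125$ before the shock and roughly \$84 and 133 units after it.

```python
# Comparative statics for linear inverse demand and supply:
#   demand: P = d0 - d1*Q    supply: P = s0 + s1*Q
# Setting them equal gives Qe = (d0 - s0) / (d1 + s1).
# Coefficients (350, 2, 35, 0.52) are assumed values that match
# the sheet's reported numbers.

def equilibrium(d0, d1, s0, s1):
    Qe = (d0 - s0) / (d1 + s1)
    Pe = d0 - d1 * Qe
    return round(Pe, 1), round(Qe, 1)

print(equilibrium(350, 2, 35, 0.52))  # (100.0, 125.0): the initial solution
print(equilibrium(350, 2, 15, 0.52))  # (84.1, 132.9): after s0 falls to 15
```

Solver finds the same intersection numerically; the formula simply makes the shock's effect on $P_e$ and $Q_e$ transparent.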
We do not examine the equilibration process from the initial to the new solution when doing comparative statics analysis. We might directly converge to the new equilibrium, with price falling gradually until $Q_d=Q_s$. Or, price might collapse, falling below the equilibrium price, then rising above it, and so on. This would be oscillatory convergence. With comparative statics, however, the focus is entirely on comparing the new to the initial solution. We may, in fact, be interested in the path to the new equilibrium, but that would take us into comparative dynamics, a topic for advanced microeconomics.

Applying Supply and Demand

To escape the usual trap of thinking of supply and demand in purely graphical terms, we apply the model to a real world example. We avoid graphs completely and focus on the mechanics and logic of supply and demand. The market system uses supply and demand for outputs and inputs. This example focuses on labor, but there are many applications of supply and demand for capital; perhaps the stock market is the most prominent.

Consider that most fans of American football would not know the second highest paid position in the NFL. Everyone knows quarterbacks are the highest paid, but what position is second? Are star running backs, wide receivers, or maybe linebackers the next highest paid? No, the answer is left tackles (www.spotrac.com/nfl/positional/).

In The Blind Side: Evolution of a Game (2006), Michael Lewis explains that free agency, allowing players to sell their services to the highest bidder, radically altered the pay structure of the NFL. How did this happen? Supply and demand. First, Lewis (p. 33) explains, there is little supply for the left tackle position.

The ideal left tackle was big, but a lot of people were big. What set him apart were his more subtle specifications. He was wide in the ass and massive in the thighs: the girth of his lower body lessened the likelihood that Lawrence Taylor, or his successors, would run right over him. He had long arms: pass rushers tried to get in tight to the blocker’s body, then spin off it, and long arms helped to keep them at bay. He had giant hands, so that when he grabbed ahold of you, it meant something. But size alone couldn’t cope with the threat to the quarterback’s blind side, because that threat was also fast. The ideal left tackle also had great feet. Incredibly nimble and quick feet. Quick enough feet, ideally, that the idea of racing him in a five-yard dash made the team’s running backs uneasy. He had the body control of a ballerina and the agility of a basketball player. The combination was just incredibly rare. And so, ultimately, very expensive.

In addition to low supply, there is high demand. The left tackle is charged with protecting the quarterback’s blind side, the direction from which defensive ends and blitzing linebackers come shooting in, causing sacks, fumbles, and worst of all, injuries. Because the quarterback is the team’s most prized asset, the left tackle position is a highly sought-after bodyguard.

But even more surprising than the fact that blind side tackles are the second highest paid players in the NFL is that this was not always the case. Lewis reports that for many years, linemen were low paid, as shown in Figure 17.5. So, why do blind side tackles make so much money today? NFL players did not enjoy free agency until the 1993 season. Up to that time, players were drafted or signed by teams and could move only by being traded.
Then the players’ union and team owners signed a contract that enabled free agency for players so they could move wherever they wanted. In return, the players agreed to a salary cap that was a percentage of league-wide team revenue. Free agency meant that a player could sell himself to the highest bidder; in other words, the market would operate to establish player salaries.

At first, everyone was shocked. Teams spent unheard-of amounts on unknown linemen. Players that most fans had never heard of made millions. Then a starting left tackle for the Bills, Will Wolford, announced his deal: \$7.65 million over three years to play for the Colts. No one had ever paid so much money for a mere lineman. Not only that, his contract stipulated that Wolford was guaranteed to be the highest paid player on offense for as long as he was on the team. The NFL threatened to invalidate the outlandish contract. In the end, the contract was allowed, but the commissioner decreed that such terms in a contract could not be used again. Lewis, pp. 227–228 (emphasis added), explains what had happened:

The curious thing about this market revaluation is that nothing had changed in the game to make the left tackle position more valuable. Lawrence Taylor had been around since 1981. Bill Walsh’s passing game had long since swept across the league. Passing attempts per game reached a new peak and remained there. There had been no meaningful change in strategy, or rules, or the threat posed by the defense to quarterbacks’ health in ten years. There was no new data to enable NFL front offices to value left tackles, or any offensive linemen, more precisely. The only thing that happened is that the market was allowed to function. And the market assigned a radically higher value to the left tackle than had the old pre-market football culture.

Economics students around the world study supply and demand, but they think it is a graph. It is so much more than an X. It is a model that explains how pressures from buyers and sellers are balanced. This example shows that markets value commodities by reflecting the underlying demand and supply conditions. Blind side tackles are worth a lot of money in the NFL. Before markets were used, they were grossly underpaid. There were no statistics for linemen, like yards rushing or field goal percentage, so they could not differentiate themselves. The market system, however, expressing the desires of general managers and reflecting the true importance of the blind side tackle, correctly values the position.

Markets are neither moral nor caring. They are a way to consolidate information from disparate sources. Prices are high when everyone wants something or there is very little of it available. For blind side tackles, with both forces at work, the market system was a bonanza.

Supply and Demand and Resource Allocation

This introduction to the market system via partial equilibrium showed how an individual market settles down to its equilibrium solution. Much of this material is familiar because most introductory economics courses emphasize supply and demand analysis. There are two fundamental concepts, however, that are critical in gaining a deep understanding of supply and demand.

1. Supply and demand curves do not materialize out of thin air. They are the result of comparative statics analyses on consumer and firm optimization problems. In other words, supply and demand must be interpreted as the reduced-form solutions from utility- and profit-maximizing agents.
Figure 17.6 drives this point home by adding representative consumer and firm graphs to supply and demand. The notation in Figure 17.6 is awkward because we are combining consumer and firm theories, which have their own individual histories. Thus, X in the left panel is the number of units of the same good that is produced by the firm in the right panel with label "q (units)." Likewise, P in the middle and right panels equals $P_x$ in the left panel. Notational awkwardness notwithstanding, it is true that consumers generate demand for every good and service and the sum of individual demands is market demand. The same holds for supply and firms. Figure 17.6 is a great way to put it all together.

2. Supply and demand is a resource allocation mechanism. It is the equilibrium quantity that is of greatest importance in the supply and demand model because this is the market’s answer to society’s resource allocation problem. The price is the variable that drives a market to equilibrium, but it is $Q_e$ that represents how much of society’s scarce resources are to be allocated to the production of each commodity, according to the market system. A picture of this is in the Intro sheet. Now that you have finished this section, take another look at it and walk through it carefully.

Introductory economics students are taught supply and demand, but they do not understand that the market demand and supply curves are reduced forms from individual optimization problems. Deriving demand and supply is a bright line separating introductory from intermediate courses. In addition, introductory courses stress price and equilibration (surpluses and shortages) as students learn the basics of supply and demand. Unfortunately, this means students miss the fundamental point: the equilibrium quantity is the decentralized, market system’s answer to how much of society’s scarce resources should be devoted to this particular commodity. There are graphs like Figure 17.6 for every good or service allocated by the market.

While the graphics in the Intro sheet emphasize the importance of $Q_e$, Figure 17.7 offers another way to explain what supply and demand is really all about. Filling in the mountain of society’s finite resources with a checkerboard pattern conveys that the factors of production are individually owned and controlled. Each square represents the resources controlled by each person. Every person owns a tiny piece of the mountain and decides what to do with that labor and capital. Every product allocated by the market system has a supply and demand that attracts individual resource owners. Out of this cacophony of interactions, an equilibrium is found and resources flow to the production of an amazing variety of goods and services.

This is the truly fascinating aspect of supply and demand. Each agent is self-interested and thinking only of their own gain, but the outcome of the market system establishes a pattern that answers the question of how to use scarce resources. Of course, the checkerboard pattern in Figure 17.7 makes it seem like everyone controls equal shares, yet there is no question that some people own more resources than others. Inequality in the distribution of resources can be a serious obstacle facing the market system. It will not work well if resources are grossly unequally distributed. This leads to another common misconception regarding equilibrium and desirability.
Can we conclude, by virtue of the fact that the market is in equilibrium, that the market system has correctly solved society’s optimization problem? Absolutely not. Equilibrium does not automatically equal optimal. The next section tackles this issue.

Exercises

STEP Click the button in the EquilibriumSolution sheet to set the coefficients to their initial values.

1. Use the scroll bar in cell C7 of the EquilibriumSolution sheet to set the intercept of the inverse demand curve to 375. Use Excel’s Solver to find the equilibrium solution. Take a picture of the answer and paste it in your Word document.
2. Solve the equilibrium model with $d_0 = 375$ via analytical methods. Show your work, using Word’s Equation Editor as needed.
3. Because the intercept increased compared with the initial values of the parameters, we know there has been an increase in demand. How has the market responded to this shock? Is the market’s response reasonable?
Society’s resource allocation problem is an especially important optimization problem. It is an easy problem to envision. Pile up all of society’s factors of production and then ask, "How should we use these resources? What should we make? How much of each product should be produced? How should we distribute the output?" These are questions about resource allocation. An important idea is that of a constraint. Needs and wants by consumers far outstrip available resources. More of one good means less of other goods and services.

The previous section showed how supply and demand establishes an equilibrium price and output. The latter is the market system’s answer to the resource allocation questions. Although we are not studying alternative resource allocation methods, it is worth pointing out that if supply and demand is not used, that does not make difficult choices go away. Scarcity means there is not enough to go around. We may decide we do not want to use markets to allocate scarce organs, but we will still need a mechanism to decide whose lives are saved.

This section changes the focus from how supply and demand works to an evaluation of the market system’s solution. The approach is clear: We first consider what an optimal allocation would look like, and then check to see whether the market’s allocation conforms to the optimal solution.

Finding the Optimal Quantity in a Single Market

To find the optimal solution, we conduct a fanciful analysis. Like the imaginary budget line we used to find income and substitution effects, we work out a thought experiment that can never actually be carried out. Suppose you had special powers and could allocate resources any way you wanted. Your official title might be Omniscient, Omnipotent Social Planner, or OOSP, for short. You are omniscient, or all knowing, so you know every consumer’s desires and every firm’s costs of production. Because you are omnipotent, or all powerful, you can decide how much to produce of each good and service and how it is produced and distributed.

Because this is partial equilibrium analysis, we focus on just one good or service. The question for you, OOSP, is, “How much should be produced of this particular commodity?” One way for you to answer this question is to measure the total gain obtained by the consumers and producers of the good. When we compute the gain, we subtract the cost of acquiring the product for consumers and, for firms, the costs of production. The plan is to compute the total net gain for different quantities and pick the quantity at which the total gain is maximized.

The notion of net gain, something above the cost that is captured by consumers and firms, is the fundamental idea behind consumers’ and producers’ surplus. Consumers’ surplus is the gain from consumption after accounting for the costs of purchasing the product. Producer’s surplus is the difference between total revenues and total variable costs. In the long run, it is profit. We begin with producers’ surplus because it is uncontroversial. We will see that consumers’ surplus is problematic.

Producers’ Surplus

At any given price, if sellers get that price for all of the units sold, they get a surplus from the sale of each unit except the last one. The sum of these surpluses is the producer’s surplus. The sum of all of the producer’s surpluses in the market is the producers’ surplus, PS. The location of the apostrophe matters. Producer’s surplus is the surplus obtained by one firm.
If the focus is on all of the firms, we use producers’ surplus.

STEP Open the Excel workbook CSPS.xls, read the Intro sheet, then go to the PS sheet.

The sheet displays an inverse supply curve given by \(P = 35 + 0.52Q_s\). The area of the green triangle is PS. To see why, consider the situation when output is 75 units and the price is \$74/unit. The very last unit sold added \$74 to total cost (given that we know that the supply curve is the marginal cost curve). Thus, the 75th unit sold yielded no surplus. In general, the marginal unit yields no surplus.

But what about the other units? All of the other units are inframarginal units. In other words, these are units below the marginal (last) unit and, in general, the inframarginal units generate surplus. The firm is receiving a price in excess of marginal cost for these units, from 1 to 74, and, therefore, it is reaping a surplus on each of those units. We can add them up to get producer’s surplus. Consider the 50th unit. The marginal cost of the 50th unit is given by \(35 + 0.52 * 50 = \$61\). The firm would have been willing to sell the 50th unit for \$61, but it was paid \$74 for that 50th unit. So, it made \$13 on the 50th unit.

STEP Look at cell Q68. It reports the surplus generated by the 50th unit, \$13, as we computed above. Look at cell Q28. It reports the surplus generated by the 10th unit, \$33.80.

Cell R19 adds the surpluses from all of the inframarginal units. Notice how the surplus on each unit steadily falls from the first to the last unit. The key to PS is that all quantities are sold at the same price, but marginal cost starts low and rises. The firm makes a surplus above MC on all output except the last unit. Cell R19 differs from cells B19 and B21 because cell R19 is based on an integer interpretation of output. If output is continuous, then we can compute the PS as the area of the triangle created by the horizontal price and the supply curve.

Notice that cell B19 offers another way to understand PS. If supply is marginal cost, then the area under the marginal cost curve is total variable cost. Because marginal cost is linear, the computation is easy. If MC were a curve, we would have to integrate. Total revenue is simply price times quantity. Cell B19 computes \(TR - TVC\), the excess over variable cost, which is the producers’ surplus.

STEP If \(Q_s = 95\), what is PS? Use the scroll bar in cell C12 to set quantity equal to 95.

At 95 units of output, MC is \$84.40. At that price, the 95th unit has no surplus. But all of the other, inframarginal units generate surplus, adding up to \$2,346.50.

STEP Explore other quantities and confirm that as output rises, so does producers’ surplus.

Consumers’ Surplus

The idea is the same. At any given price, if a buyer pays that price for all of the units bought, she gets a surplus from the purchase of each unit except the last one. The sum of these surpluses is the consumer’s surplus. The sum of all of the consumer’s surpluses is the consumers’ surplus, CS.

STEP Proceed to the CS sheet.

Given the inverse demand curve, \(P = 350 - 2Q_d\), we can easily compute CS for a given quantity as the area of the pink triangle. At \(Q_d = 95\), the price at which consumers will buy 95 units is \$160/unit. The last unit purchased provides no surplus, but the inframarginal units generate CS. The area under the demand curve, but above the price, is a measure of the net satisfaction enjoyed by consumers.
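Both triangle areas, and the unit-by-unit sums the sheets build, are easy to reproduce. Here is a minimal sketch in Python, again assuming the sheet's curves (supply \(P = 35 + 0.52Q_s\), demand \(P = 350 - 2Q_d\)); the last line illustrates why the integer interpretation in cell R19 can differ slightly from the continuous triangle area, depending on the convention used for each unit.

```python
# Surplus as triangle areas for the CSPS.xls curves (assumed):
#   inverse supply: P = 35 + 0.52*Q    inverse demand: P = 350 - 2*Q
s0, s1 = 35, 0.52
d0, d1 = 350, 2

def PS(Q):
    """Producers' surplus: area between the price line and MC."""
    P = s0 + s1 * Q
    return 0.5 * (P - s0) * Q

def CS(Q):
    """Consumers' surplus: area between demand and the price line."""
    P = d0 - d1 * Q
    return 0.5 * (d0 - P) * Q

print(PS(95))   # 2346.5, the PS triangle at Q = 95
print(CS(95))   # 9025.0, the CS triangle at Q = 95

# Integer, unit-by-unit version (the logic of column Q in the PS sheet):
# unit q's surplus is price minus MC(q), e.g., 74 - 61 = 13 for unit 50.
P95 = s0 + s1 * 95
print(sum(P95 - (s0 + s1 * q) for q in range(1, 96)))  # 2321.8, near the triangle area
```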
Consumers’ surplus comes from the fact that consumers would have paid more for each inframarginal unit than the price they actually paid, so they get a surplus on each of those units.

STEP Use the quantity scroll bar to confirm that as output rises, so does consumers’ surplus.

As mentioned earlier, there is a problem with consumers’ surplus. We will finish showing how OOSP could use CS and PS before explaining the problem.

Maximizing CS and PS

Producers’ surplus is the amount by which the total revenue exceeds variable costs and measures gain for the firm. Consumers’ surplus also measures gain because it is the amount by which the total satisfaction provided by the commodity exceeds the total costs of purchasing the commodity. Both parties, consumers and producers, gain from trade. This is why a trade is made: both buyer and seller are better off. When you buy something, you part with some money in exchange for the good or service. If the purchase is voluntary, you must value what you are getting more than what you paid for it or else you would not have bought it. Similarly, the seller values the money you pay more than the good or service or else she would refuse to sell at that price. The gains from voluntary trade are captured in the terms consumers’ and producers’ surplus.

Casting the problem in terms of surplus received by buyers and sellers leads naturally to this question: What is the level of output that maximizes the total surplus? After all, it is clear that as quantity changes the CS and PS also change. Thus, OOSP is faced with the following optimization problem: \[\max\limits_{q} CS(q)+PS(q)\] The idea is to maximize the gains from trade for all buyers and sellers. This problem can be solved analytically and numerically. We focus on the latter.

STEP Proceed to the CSandPS sheet.

This sheet combines the surpluses enjoyed by producers and consumers into a single chart, shown in Figure 17.8. Understanding this chart is fundamental. We proceed slowly. The vertical dashed line represents the quantity, which OOSP controls and will choose so that CS + PS is maximized. There are two prices on the chart, one for the firm and the other for the consumer. The idea is that OOSP uses the quantity to determine the prices needed for firms to be willing to produce that output level and for consumers to want to buy that amount of output. This is not an equilibrium model of supply and demand. OOSP cares only about choosing the optimal output. Price for consumers and firms is used only to compute surplus.

In Figure 17.8 (and on your computer screen), producers receive a price of \$84.40 for each of the 95 units, yet consumers pay \$160.00 per unit. Remember that OOSP, our benevolent dictator, has magical powers, so she can charge one price to consumers and give a different price to producers. By adding the values in cells E18 and B21, we get the value in cell J20. It is highlighted in yellow and maximizing it is the goal.

STEP Click on the slider control (over cell C12) to increase output in increments of five units. As output increases, CS and PS both rise.

STEP Continue clicking on the slider control so that output rises above 125 units.

Now the sum of CS and PS is falling. That is confusing because the two triangles are getting bigger. But once the price to consumers falls below the price to the firms, we have to pay the difference. This is explained below in more detail. For now, let’s work on finding the optimal Q.

STEP Launch Solver and use it to find \(Q \mbox{*}\).
With an empty Solver dialog box, you have to provide the objective (J20) and changing cell (B12). We find that \(CS + PS\) is maximized at \(Q \mbox{*} = 125\) units. In other words, OOSP should order the manufacture of 125 units of this product, allocating the inputs needed from society’s scarce resource endowment. This level of output maximizes the sum of CS and PS.

We have seen this number before. In the previous section, we found that the equilibrium solution was \(Q_e=125\) units. This means that the market’s solution is the optimal solution. This is a remarkable result. No one intended this. No one chose this. No one directed this. Supply and demand established an equilibrium output which answered the question of how much to produce, and we now see that it is the same solution we would have chosen if our goal was to maximize consumers’ and producers’ surplus. This is truly amazing.

Deadweight Loss

If OOSP chooses an output level below 125 and charges a price to consumers based on the inverse demand curve and pays producers a price based on the inverse supply curve, it will generate a smaller value of \(CS + PS\). How much smaller? The amount of surplus not captured is given by the trapezoid between the consumers’ and producers’ surpluses. This area is called deadweight loss, DWL. It is a fundamental concept in economics and merits careful attention.

STEP Enter 95 in cell B12, then click the button.

Not only do data appear below the button, but the chart has been modified to include a red trapezoid. The area of the trapezoid is displayed in cell D30.

STEP Click on cell D26.

The formula is simply the solution of the intersection of the supply and demand curves. We know this quantity is the solution to the problem of maximizing CS and PS.

STEP Click on cell D28.

This seemingly complicated formula is not really that hard. It displays the maximum possible total surplus. Two things are being added, CS and PS. The first part of the formula is PS: 0.5*(((s0_ + s1_*D26) - s0_)*D26). It is half the height of the PS triangle times the length (or quantity produced). The second part of the formula uses the same triangle-area approach to compute the CS: 0.5*((d0_ - (d0_ - d1_*D26))*D26).

STEP Click on cell D30.

The formula, =D28 - J20, makes crystal clear that deadweight loss is the maximum total surplus minus the sum of CS and PS at any value of output. In other words, deadweight loss is a measure of the inefficiency of producing the wrong level of output in a particular market. Deadweight loss vaporizes surplus so that it disappears into thin air. Deadweight loss is pure waste.

STEP Click on the slider control (over cell C12) to increase output in increments of five units.

As you increase output, note that the deadweight loss falls as the output approaches the optimal quantity. There is no deadweight loss when the output is at 125 because this is the optimal level of output. Another way of expressing the efficiency of the equilibrium solution is to say that it has no deadweight loss, that is, no inefficiency in allocating resources. As Q approaches \(Q \mbox{*}\) we reach the maximum possible \(CS + PS\) and DWL goes to zero. As Q keeps rising past \(Q \mbox{*}\), we get less total \(CS+PS\) and deadweight loss rises. We get deadweight loss on either side of \(Q \mbox{*}\). The explanation for deadweight loss when \(Q>Q \mbox{*}\) is more complicated. Let’s look at some concrete numbers.

STEP Set output above the optimal level, for example, Q = 150.
Your screen should look like Figure 17.9. It is true that the CS and PS triangles are large, but with a higher price to firms than consumers, society has to pay for the difference. Once we account for this, the total gain is less than that at \(Q=125\) and we suffer deadweight loss, as shown by the red triangle.

Figure 17.9 shows that it is possible to have sellers receive \$113 per unit sold yet have buyers pay only \$50 per unit sold, but someone is going to have to make up that \$63 per unit difference. The total value of the subsidy, \$63/unit times 150 units, is \$9,450. This amount (rectangle ABCD in Figure 17.9) must be subtracted from the sum of CS and PS. When we add everything up, we get a total surplus of \$18,900 at \(Q=150\), which is lower than the maximum total surplus. Cell J20 uses an IF statement to get the calculation right. The deadweight loss from producing 150 units is \$787.50 (cell D30).

The deadweight loss at \(Q = 150\) is given by the area of the red triangle in Figure 17.9. The geometry is easy. We must subtract a rectangle with height 63 and length 150 from the sum of the pink CS and green PS triangles. This leaves the red triangle as the DWL caused by producing too much output.

There is one optimal output and at that value, deadweight loss is zero. Outputs above and below \(Q \mbox{*}\) produce inefficiency in the allocation of resources because we fail to maximize \(CS + PS\). This is called deadweight loss.

Price Controls

Price controls are legally mandated limits on prices. A price ceiling sets the highest price at which the good can be legally sold. A price floor does the opposite: the good cannot be sold any lower than the given amount. To be effective, a price ceiling has to be set below, and a price floor above, the equilibrium price.

Most introductory economics students are taught that price ceilings generate shortages and price floors lead to surpluses. For most students, the take-home message is that market forces cannot push the price above the ceiling or below the floor, so the market cannot clear, and this is why price controls are undesirable. It turns out that this is not exactly right. Although it is true that ceilings lead to persistent excess demand and floors prevent the market from eliminating excess supply, the real reason behind the unpopularity (among economists) of price controls is the fact that they cause a misallocation of resources.

STEP Proceed to the PriceCeiling sheet.

Suppose there is a price ceiling on this good at \$84.40. At this price, there is a shortage of the good because quantity demanded at \$84.40 is 132.8 units (cell B13) while quantity supplied is only 95 (cell B12). The price cannot be bid up because \$84.40 is the highest price at which the good can be legally sold. Thus, with this price ceiling, the output level is 95.

We know this is an inefficient result because we know \(Q \mbox{*} = 125\). This is the real reason why this price ceiling is a poor policy, not because it causes a shortage. The price ceiling fails to maximize total surplus. To be clear, with this price ceiling, too few resources are allocated to the production of this good or service. There will be only 95 units of it produced, not the optimal 125 units. The fact that there is a shortage is true, but it is the misallocation of resources that is the problem.

While the misallocation of resources is easy to see since the quantity is wrong, deadweight loss is more complicated. It depends on the story about the price control and how agents react.
Suppose, for example, that market players are all honest, so there is no illegal selling of the good above the maximum price. In other words, producers do not violate the law. Suppose further that the good is allocated via lottery, so there are no lines of buyers or resources spent waiting. This means that consumers’ surplus is now a trapezoid instead of a triangle.

STEP Click the button.

As shown in Figure 17.10 (and on your screen), a rectangle has been removed from deadweight loss, so it is now just the red triangle. In addition to the usual CS triangle in Figure 17.10, consumers enjoy the area of the rectangle computed by multiplying a price of \$160 (which is the price consumers are willing to pay for 95 units of the good) minus \$84.40 (the price consumers actually pay) times 95 units.

The good news behind this price ceiling with no cheating story is that the deadweight loss is much smaller than in the CSandPS sheet with \(Q=95\) because the lucky consumers who can purchase the good do not have to pay \$160/unit. The bad news is that there is still a deadweight loss of \$1,134. This is a measure of the inefficiency of the price ceiling with no illegal market.

Suppose instead that there are unlawful sales of the product at the illegal market price, \$160/unit (this is the most buyers are willing to pay for 95 units). Suppose, in addition, that somehow there are no wasted resources associated with this illegal market. No police investigations, court cases, or any other resources are spent on stopping criminal sales. Then the producers get the rectangle. With this idealized illegal market, the rectangle is transferred from consumers to producers, but the deadweight loss stays the same. The Q&A sheet asks you to demonstrate this. If, as is almost surely true, illegal selling results in more resources being spent, then the deadweight loss is larger than the red triangle. Illegal activity often leads to violence (think of illegal drugs, a market with a price ceiling of zero) and we would subtract that from \(CS+PS\) and thereby increase DWL.

Consider two other stories about the price ceiling. A limited set of buyers are given coupons to buy the product. To buy the good (at the legal price), you must have a coupon. If a rationing coupon scheme is used, the sellers of the coupons get the rectangle. The deadweight loss remains the same.

Suppose, finally, that a price ceiling is set and the good is allocated on a first-come-first-served basis. In other words, buyers have to wait in line. With this story, the resources buyers waste standing in line (or paying others to stand in line for them) must be subtracted from the total surplus. The deadweight loss rises. If the entire rectangle is lost, then the deadweight loss is the same as that in the CSandPS sheet when 95 units of output are produced.

Price controls are a popular way to modify market results. Unfortunately, from a resource allocation standpoint, price controls suffer from the fact that they fail to maximize total surplus. It is this property, and not that they produce shortages, that earns price ceilings criticism among economists. We want the allocation mechanism to give the optimal Q. It is confusing that correctly measuring deadweight loss depends on the story, but do not be distracted by the many ways price controls are implemented. The take-home message is that any deviation from \(Q \mbox{*}\) means that the allocation scheme has failed.
Deadweight loss, which gives a measure of the inefficiency in monetary units, depends on the specific implementation of the price control, but the fact that it is not zero is evidence that the scheme has failed.

Caveat Emptor

"Let the buyer beware" is the meaning of the Latin phrase caveat emptor. This idea from contract law is a warning to the buyer that they are responsible for what they are buying. The consumer needs to be careful so they are not tricked or left with a poor-quality, unsuitable product.

Caveat emptor applies to deadweight loss. On the one hand, deadweight loss is a common way that economists measure inefficiency. It is based on the idea that the maximum total surplus is not attained from a particular output level. But users need to know what they are getting themselves into: deadweight loss has two glaring weaknesses.

The first has to do with our calculation of consumers’ surplus. For technical reasons, restrictive assumptions about the utility function must be imposed. For example, a Cobb-Douglas utility function for individual consumers will not work because it has an income effect. A quasilinear utility function will work (no income effect), but it is unlikely that all consumers have quasilinear utility. Consumers’ surplus violates the rule that we should not make interpersonal utility comparisons. We are using the demand curve to add up dollar measures of the extra satisfaction that different people get from consuming a product. That is unsound and breaks a basic tenet of modern utility theory.

The second weakness stems from the use of partial equilibrium analysis. We are calculating deadweight loss based on the impact in a single market of a deviation in output from its optimal level. The focus on one market is too limited. If we apply too many or too few resources to the production of one good, we will cause deviations from optimal output for other goods and services. So, the deadweight loss computation based on one market is a lower bound. To get it exactly right, we would have to analyze effects on other markets and do a general equilibrium analysis.

Regarding deadweight loss, it is caveat emptor. Remember that deadweight loss measures inefficiency and is commonly used in applied work, but it is not exactly right. The best way to think of deadweight loss is as an approximation. Some economists, usually the more theoretically oriented, are appalled at the thought of using it. Economists who do empirical work are more likely to argue that deadweight loss is imperfect but, practically speaking, a useful way of measuring inefficiency.

Optimal Allocation of Resources

This is an important section. It introduced producers’ and consumers’ surpluses, which are key elements in the omnipotent, omniscient social planner’s objective function. The idea that there is an optimal level of output for each good and service is fundamental. From this idea we get the procedure for evaluating any allocation scheme or government policy: We compare an observed result to the optimal answer.

It is obvious that quantities below the intersection of supply and demand cannot be optimal because both CS and PS rise as Q increases. The situation with quantity above the intersection of supply and demand is more subtle. To get the calculation right, whenever quantity is above the intersection point, we must subtract from the sum of CS and PS a rectangle that is the difference between prices multiplied by quantity.
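As a numerical check of this accounting, here is a minimal sketch in Python. It follows the logic described above (the rectangle is subtracted only when quantity is above the intersection, mirroring the IF statement in cell J20); the curve coefficients remain an assumption about the workbook, chosen because they reproduce the figures quoted in this section.

```python
# Total surplus and DWL under OOSP's two-price scheme, assuming
#   inverse demand P = 350 - 2*Q and inverse supply P = 35 + 0.52*Q.
s0, s1, d0, d1 = 35, 0.52, 350, 2
Qstar = (d0 - s0) / (d1 + s1)          # 125, the surplus-maximizing output

def total_surplus(Q):
    Pcons = d0 - d1 * Q                # price consumers pay (from demand)
    Pfirm = s0 + s1 * Q                # price firms receive (from supply)
    CS = 0.5 * (d0 - Pcons) * Q        # consumers' surplus triangle
    PS = 0.5 * (Pfirm - s0) * Q        # producers' surplus triangle
    wedge = (Pfirm - Pcons) * Q        # subsidy rectangle, positive only when Q > Q*
    return CS + PS - max(wedge, 0)     # below Q*, the wedge is simply not counted

max_TS = total_surplus(Qstar)          # 19687.5, the maximum total surplus
for Q in (95, 125, 150):
    print(Q, round(max_TS - total_surplus(Q), 2))  # DWL: 8316.0, 0.0, 787.5

# Price-ceiling-with-lottery story at Q = 95: the wedge rectangle is a
# transfer to consumers, so only the triangle between the curves is lost.
DWL_ceiling = max_TS - (total_surplus(95) + (160 - 84.40) * 95)
print(round(DWL_ceiling, 2))           # 1134.0, the PriceCeiling sheet's figure
```

The deadweight losses of \$787.50 at \(Q=150\) and \$1,134 under the lottery story match the numbers reported earlier, and the gap between \$8,316 and \$1,134 at \(Q=95\) is exactly the rectangle that the ceiling story transfers to consumers rather than destroys.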
The most important and remarkable result from this section is that \(Q_e = Q \mbox{*}\). This says that in a properly functioning market, the equilibrium quantity (which is the market system’s answer to society’s resource allocation problem) yields the socially optimal level of output.

Price controls lead to an inefficient allocation of resources. The output generated does not match the optimal output. The deadweight loss associated with a price control depends on the story of how the particular implementation of the price control is enforced and responded to by buyers and sellers.

There is no question that deadweight loss is a linchpin of policy analysis. Countless estimates of deadweight loss and cost–benefit studies have been conducted. It is, however, flawed. Measuring consumers’ surplus in money terms from a market demand curve in a partial equilibrium setting leaves us on very thin ice. Applications and estimates of deadweight loss should be seen as an approximation to the exact measure of the loss from the misallocation of resources (if such a measure exists).

While deadweight loss is flawed, the notion of a misallocation of resources is not. The idea that there is an optimal solution to society’s resource allocation problem is perfectly valid. So is defining an allocation that deviates from optimal as a misallocation of resources. These are bedrock ideas in microeconomic theory. This should mark the end of this section, but because there is so much confusion about equilibrium and optimal resource allocation, what follows is an attempt to provide some clarity.

Equilibrium and Optimal Resource Allocation

The material below is repeated for emphasis. The Theory of Consumer Behavior and the Theory of the Firm are stepping stones to the \(Q_e = Q \mbox{*}\) result. Let’s put things in perspective and explain why this is so fundamental.

It is absolutely true that philosophers and deep thinkers of the day were baffled by the market system. There was active debate about how and why Europe and, within Europe, England was getting so rich. How could the unplanned, individual decisions of many buyers and sellers produce a pattern, much less a good result? It seemed obvious that a leaderless, fragmented system would produce chaos.

In the previous section, we saw that the equilibrium quantity, \(Q_e\), generated by a properly functioning market is located at the intersection of supply and demand. The market uses a good’s price to send signals to buyers and sellers. Prices above equilibrium are pushed down, whereas prices below equilibrium are pushed up. At the equilibrium solution, the price has no tendency to change and output is also at rest. The equilibrium level of output is the market’s answer to how much of society’s resources will be devoted to producing this particular good.

Our work in this section on consumers’ and producers’ surplus takes a much different perspective on the resource allocation problem. Instead of examining how the market works, we have created a thought experiment, giving an imaginary social planner incredible powers. Given the goal of maximizing total surplus, OOSP would choose an optimal quantity, \(Q \mbox{*}\), that should be produced. If we produce less or more than this socially optimal amount, society forgoes surpluses that would make producers and consumers better off. This is called deadweight loss. If we compare the market’s equilibrium quantity to the socially optimal quantity, we are struck by an amazing result: \(Q_e = Q \mbox{*}\).
This critical equivalence means that we do not need a dictator, benevolent or otherwise, to optimally allocate resources. The market, using prices, can settle down to a position of rest where all gains from trade are completely exploited and the sum of producers’ and consumers’ surplus is maximized. There is no guarantee, however, that \(Q_e=Q \mbox{*}\); there are conditions under which the invisible hand does not lead the market to optimality. We will see examples where the equality does not hold and the market is said to fail.

As you work on this section and this part of the book, do not lose sight of the main point: The market’s ability to generate an equilibrium quantity that is socially optimal is nothing short of amazing and unbelievable. It is equivalent to geese flying a V. A pattern is generated by the interactions of individuals with no awareness or intent to make the pattern.

Consider this hypothetical: we learn that broccoli cures cancer. Would we need a president, prime minister, or king to tell farmers to grow more broccoli? Of course not. Broccoli would fly off the shelves, its price would rocket, and farmers would automatically start producing more broccoli.

Analogies from biology are many, but this one might be so shocking and different from anything you have seen before that it will convey why supply and demand is so fascinating to economists.

STEP Visit http://tiny.cc/siphonophore to learn about this creature and see it in action.

Exercises

1. From the CSandPS sheet, click the button, then set \(d_0=375\) and use Solver to find the optimal quantity. Take a picture of the cells that contain your answer and paste it in a Word doc.
2. Click the button. Suppose there was a price ceiling of \$84.40. What is the story about price ceilings assumed by the chart and DWL computations on the sheet?
3. Suppose the government implemented a price support scheme (this is a type of price floor that is used frequently for agricultural products) where they only allowed 95 units to be produced. Cell E16 shows that the market price would be \$185. Compute the deadweight loss and explain it.